WorldWideScience

Sample records for synthetic vision systems

  1. Database Integrity Monitoring for Synthetic Vision Systems Using Machine Vision and SHADE

    Science.gov (United States)

    Cooper, Eric G.; Young, Steven D.

    2005-01-01

    In an effort to increase situational awareness, the aviation industry is investigating technologies that allow pilots to visualize what is outside of the aircraft during periods of low visibility. One of these technologies, referred to as Synthetic Vision Systems (SVS), provides the pilot with real-time computer-generated images of obstacles, terrain features, runways, and other aircraft regardless of weather conditions. To help ensure the integrity of such systems, methods of verifying the accuracy of synthetically derived display elements using onboard remote sensing technologies are under investigation. One such method is based on a shadow detection and extraction (SHADE) algorithm that transforms computer-generated digital elevation data into a reference domain that enables direct comparison with radar measurements. This paper describes machine vision techniques for making this comparison and discusses preliminary results from application to actual flight data.
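    The SHADE step above, transforming digital elevation data into a domain directly comparable with radar measurements, can be illustrated with a toy radar-shadow (viewshed) computation over a DEM. This is a sketch under simplifying assumptions (flat-earth geometry, one look direction per row, hypothetical name `dem_shadow_mask`), not the actual SHADE algorithm:

```python
import numpy as np

def dem_shadow_mask(dem, sensor_alt_m, cell_size_m):
    """Flag DEM cells shadowed along one radar look direction.

    A cell is shadowed when the elevation angle to some closer cell
    exceeds its own elevation angle from the sensor (a simple 1-D
    occlusion test applied row by row).
    """
    n_rows, n_cols = dem.shape
    shadow = np.zeros_like(dem, dtype=bool)
    for r in range(n_rows):
        max_angle = -np.inf
        for c in range(n_cols):  # march away from the sensor
            ground_range = (c + 1) * cell_size_m
            angle = np.arctan2(dem[r, c] - sensor_alt_m, ground_range)
            if angle <= max_angle:
                shadow[r, c] = True  # occluded by closer terrain
            else:
                max_angle = angle
    return shadow
```

For example, with a 100 m sensor altitude and a single ridge in the profile, the cells behind the ridge come back flagged as shadowed; a map like this is what a sensed radar shadow would be compared against.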

  2. System for synthetic vision and augmented reality in future flight decks

    Science.gov (United States)

    Behringer, Reinhold; Tam, Clement K.; McGee, Joshua H.; Sundareswaran, Venkataraman; Vassiliou, Marius S.

    2000-06-01

    Rockwell Science Center is investigating novel human-computer interface techniques for enhancing situational awareness in future flight decks. One aspect is to provide intuitive displays that convey the vital information and the spatial awareness by augmenting the real world with an overlay of relevant information registered to the real world. Such Augmented Reality (AR) techniques can be employed during bad weather scenarios to permit flying under Visual Flight Rules (VFR) in conditions which would normally require Instrument Flight Rules (IFR). These systems could easily be implemented on heads-up displays (HUD). The advantage of AR systems vs. purely synthetic vision (SV) systems is that the pilot can relate the information overlay to real objects in the world, whereas SV systems provide a constant virtual view, where inconsistencies can hardly be detected. The development of components for such a system led to a demonstrator implemented on a PC. A camera grabs video images which are overlaid with registered information. Orientation of the camera is obtained from an inclinometer and a magnetometer; position is acquired from GPS. In a possible implementation in an airplane, the on-board attitude information can be used for obtaining correct registration. If visibility is sufficient, computer vision modules can be used to fine-tune the registration by matching visual cues with database features. Such technology would be especially useful for landing approaches. The current demonstrator provides a frame rate of 15 fps, using a live video feed as background and an overlay of avionics symbology in the foreground. In addition, terrain rendering from a 1 arc sec. digital elevation model database can be overlaid to provide synthetic vision in case of limited visibility. For true outdoor testing (on ground level), the system has been implemented on a wearable computer.
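    The registration chain described above (GPS position plus inclinometer/magnetometer attitude driving a conformal overlay) boils down to a pose transform followed by a pinhole projection. A minimal sketch, assuming a local north-east-down (NED) frame, aerospace ZYX Euler angles, and hypothetical names (`overlay_pixel`, `f_px`):

```python
import numpy as np

def overlay_pixel(point_ned, cam_ned, yaw, pitch, roll, f_px, cx, cy):
    """Project a world point (local NED frame, meters) into the image
    of a camera whose attitude comes from heading/inclination sensors."""
    # Rotation world -> body (ZYX Euler, aerospace convention)
    cy_, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    R = np.array([
        [cp * cy_,                cp * sy,                -sp],
        [sr * sp * cy_ - cr * sy, sr * sp * sy + cr * cy_, sr * cp],
        [cr * sp * cy_ + sr * sy, cr * sp * sy - sr * cy_, cr * cp]])
    body = R @ (np.asarray(point_ned) - np.asarray(cam_ned))
    x_fwd, y_right, z_down = body
    if x_fwd <= 0:
        return None                       # point is behind the camera
    u = cx + f_px * y_right / x_fwd       # image column
    v_px = cy + f_px * z_down / x_fwd     # image row
    return u, v_px
```

With zero attitude and a runway threshold 100 m ahead and 10 m to the right, the symbol lands right of the image center, which is exactly the conformal behavior the demonstrator needs.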

  3. Design of a perspective flight guidance display for a synthetic vision system

    Science.gov (United States)

    Gross, Martin; Mayer, Udo; Kaufhold, Rainer

    1998-07-01

    Adverse weather conditions affect flight safety as well as the productivity of the air traffic industry. The problem becomes evident in the airport area (taxiing, takeoff, approach and landing). The productivity of the air traffic industry goes down because the resources of the airport cannot be used optimally. Canceled and delayed flights lead directly to additional costs for the airlines. Against the background of problems aggravated by a predicted increase in air traffic, the European Union launched the project AWARD (All Weather ARrival and Departure) in June 1996. Eleven European aerospace companies and research institutions are participating. The project will be finished by the end of 1999. The subject of AWARD is the development of a Synthetic Vision System (based on database and navigation) and an Enhanced Vision System (based on sensors like FLIR and MMWR). Darmstadt University of Technology is responsible for the development of the SVS prototype. The SVS application depends on precise navigation, databases for terrain and flight-relevant information, and a flight guidance display. The objective is to allow landings under CAT III a/b conditions independently of CAT III ILS airport installations. One goal of SVS is to enhance the situation awareness of pilots during all airport area operations by designing an appropriate man-machine interface for the display. This paper describes the current state of the research and development of the Synthetic Vision System being developed in AWARD. The paper describes the methodology used to identify the information that should be displayed. Human factors which influenced the basic design of the SVS are portrayed and some of the planned activities for the flight simulation tests are summarized.

  4. Transition of Attention in Terminal Area NextGen Operations Using Synthetic Vision Systems

    Science.gov (United States)

    Ellis, Kyle K. E.; Kramer, Lynda J.; Shelton, Kevin J.; Arthur, Jarvis J., III; Prinzel, Lance J., III; Norman, Robert M.

    2011-01-01

    This experiment investigates the capability of Synthetic Vision Systems (SVS) to provide improved situation awareness in terminal area operations, specifically in low-visibility conditions. The use of a Head-Up Display (HUD) and Head-Down Displays (HDD) with SVS is contrasted with baseline head-down displays in terms of induced workload and pilot behavior at 1400 RVR visibility levels. Variations in performance and pilot behavior were reviewed for acceptability when using HUD or HDD with SVS under reduced minimums to acquire the necessary visual components to continue to land. The data suggest superior performance for HUD implementations. Improved attentional behavior is also suggested for HDD implementations of SVS for low-visibility approach and landing operations.

  5. Synthetic and Enhanced Vision Systems for NextGen (SEVS) Simulation and Flight Test Performance Evaluation

    Science.gov (United States)

    Shelton, Kevin J.; Kramer, Lynda J.; Ellis, Kyle K.; Rehfeld, Sherri A.

    2012-01-01

    The Synthetic and Enhanced Vision Systems for NextGen (SEVS) simulation and flight tests are jointly sponsored by NASA's Aviation Safety Program, Vehicle Systems Safety Technology project and the Federal Aviation Administration (FAA). The flight tests were conducted by a team of Honeywell, Gulfstream Aerospace Corporation and NASA personnel with the goal of obtaining pilot-in-the-loop test data for flight validation, verification, and demonstration of selected SEVS operational and system-level performance capabilities. Nine test flights (38 flight hours) were conducted over the summer and fall of 2011. The evaluations were flown in Gulfstream's G450 flight test aircraft outfitted with the SEVS technology under very low visibility instrument meteorological conditions. Evaluation pilots flew 108 approaches in low visibility weather conditions (600 ft to 2400 ft visibility) into various airports from Louisiana to Maine. In-situ flight performance and subjective workload and acceptability data were collected in collaboration with ground simulation studies at LaRC's Research Flight Deck simulator.

  6. Augmentation of Cognition and Perception Through Advanced Synthetic Vision Technology

    Science.gov (United States)

    Prinzel, Lawrence J., III; Kramer, Lynda J.; Bailey, Randall E.; Arthur, Jarvis J.; Williams, Steve P.; McNabb, Jennifer

    2005-01-01

    Synthetic Vision System technology augments reality and creates a virtual visual meteorological condition that extends a pilot's cognitive and perceptual capabilities during flight operations when outside visibility is restricted. The paper describes the NASA Synthetic Vision System for commercial aviation with an emphasis on how the technology achieves Augmented Cognition objectives.

  7. Synthetic vision and memory for autonomous virtual humans

    OpenAIRE

    PETERS, CHRISTOPHER; O'SULLIVAN, CAROL ANN

    2002-01-01

    A memory model based on 'stage theory', an influential concept of memory from the field of cognitive psychology, is presented for application to autonomous virtual humans. The virtual human senses external stimuli through a synthetic vision system. The vision system incorporates multiple modes of vision in order to accommodate a perceptual attention approach. The memory model is used to store perceived and attended object information at different stages in a filtering...

  8. Using X-band Weather Radar Measurements to Monitor the Integrity of Digital Elevation Models for Synthetic Vision Systems

    Science.gov (United States)

    Young, Steve; UijtdeHaag, Maarten; Sayre, Jonathon

    2003-01-01

    Synthetic Vision Systems (SVS) provide pilots with displays of stored geo-spatial data representing terrain, obstacles, and cultural features. As comprehensive validation is impractical, these databases typically have no quantifiable level of integrity. Further, updates to the databases may not be provided as changes occur. These issues limit the certification level and constrain the operational context of SVS for civil aviation. Previous work demonstrated the feasibility of using a real-time monitor to bound the integrity of Digital Elevation Models (DEMs) by using radar altimeter measurements during flight. This paper describes an extension of this concept to include X-band Weather Radar (WxR) measurements. This enables the monitor to detect additional classes of DEM errors and to reduce the exposure time associated with integrity threats. Feature extraction techniques are used along with a statistical assessment of similarity measures between the sensed and stored features that are detected. Recent flight testing in the area around Juneau Airport (JNU) in Alaska has resulted in a comprehensive set of sensor data that is being used to assess the feasibility of the proposed monitor technology. Initial results of this assessment are presented.
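    The monitor concept, comparing sensed terrain against the stored DEM and testing their agreement statistically, can be caricatured with a simple disparity check. The function name and tolerance values below are illustrative assumptions, not the paper's feature-extraction and similarity measures:

```python
import numpy as np

def dem_integrity_check(sensed, stored, bias_tol_m=15.0, spike_tol_m=60.0):
    """Disparity-based integrity test between a sensed terrain profile
    (e.g. from radar) and the stored DEM profile beneath the track.

    Returns (consistent?, mean disparity, worst disparity). A large
    mean suggests a systematic bias; a single large spike suggests a
    localized DEM error such as a missing obstacle.
    """
    d = np.abs(np.asarray(sensed, float) - np.asarray(stored, float))
    mean_d, worst_d = d.mean(), d.max()
    ok = (mean_d <= bias_tol_m) and (worst_d <= spike_tol_m)
    return ok, mean_d, worst_d
```

A real monitor would of course work on extracted features rather than raw profiles, but the pass/fail structure against bounded disparity is the same.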

  9. Biomimetic machine vision system.

    Science.gov (United States)

    Harman, William M; Barrett, Steven F; Wright, Cameron H G; Wilcox, Michael

    2005-01-01

    Real-time application of digital imaging for use in machine vision systems has proven to be prohibitive when used within control systems that employ low-power single processors without compromising the scope of vision or resolution of captured images. Development of a real-time analog machine vision system is the focus of research taking place at the University of Wyoming. This new vision system is based upon the biological vision system of the common house fly. Development of a single sensor is accomplished, representing a single facet of the fly's eye. This new sensor is then incorporated into an array of sensors capable of detecting objects and tracking motion in 2-D space. This system "preprocesses" incoming image data, resulting in minimal data processing to determine the location of a target object. Due to the nature of the sensors in the array, hyperacuity is achieved, thereby eliminating resolution issues found in digital vision systems. In this paper, we will discuss the biological traits of the fly eye and the specific traits that led to the development of this machine vision system. We will also discuss the process of developing an analog-based sensor that mimics the characteristics of interest in the biological vision system. This paper will conclude with a discussion of how an array of these sensors can be applied toward solving real-world machine vision issues.
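    The hyperacuity claim rests on the fly's overlapping, roughly Gaussian photoreceptor acceptance profiles: a target between two sensors still produces a graded pair of responses from which its position can be recovered more finely than the sensor spacing. A digital caricature of that analog principle (the names and the centroid decoder are assumptions, not the Wyoming design):

```python
import numpy as np

def sensor_responses(target_pos, centers, sigma):
    """Overlapping Gaussian acceptance profiles, one per facet."""
    return np.exp(-(target_pos - centers) ** 2 / (2 * sigma ** 2))

def localize(responses, centers):
    """Response-weighted centroid: recovers target position at a
    resolution finer than the sensor spacing (hyperacuity)."""
    w = responses / responses.sum()
    return float(np.dot(w, centers))
```

With sensors spaced one unit apart, a target at 1.5 (exactly between two facets) is still localized correctly because the overlapping responses encode the sub-spacing offset.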

  10. Synthetic vision and memory model for virtual human - biomed 2010.

    Science.gov (United States)

    Zhao, Yue; Kang, Jinsheng; Wright, David

    2010-01-01

    This paper describes the methods and case studies of a novel synthetic vision and memory model for virtual humans. The synthetic vision module simulates the biological/optical abilities and limitations of human vision. The module is based on a series of collision detections between the boundary of the virtual human's field of vision (FOV) volume and the surface of objects in a recreated 3D environment. The memory module simulates a short-term memory capability by employing a simplified memory structure (a first-in-first-out buffer). The synthetic vision and memory model has been integrated into a virtual human modelling project, Intelligent Virtual Modelling. The project aimed to improve the realism and autonomy of virtual humans.
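    A first-in-first-out short-term memory of the kind described is a few lines with a bounded deque; the class name and default capacity below are illustrative assumptions:

```python
from collections import deque

class ShortTermMemory:
    """First-in-first-out store of recently perceived objects,
    mimicking a limited-capacity short-term memory."""

    def __init__(self, capacity=7):
        self.items = deque(maxlen=capacity)  # oldest entries drop off

    def perceive(self, obj):
        """Record an object noticed by the synthetic vision module."""
        self.items.append(obj)

    def recalls(self, obj):
        """True if the object is still within the memory window."""
        return obj in self.items
```

Once capacity is reached, the oldest percepts fall off the end automatically, giving the virtual human a naturally fading memory.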

  11. The Effect of Synthetic Vision Enhancements on Landing Flare Performance

    NARCIS (Netherlands)

    Le Ngoc, L.; Borst, C.; Mulder, M.; Van Paassen, M.M.

    2010-01-01

    The use of head-down, non-conformal synthetic vision displays for landings below minimums has inherent problems during the flare due to minification effects. The literature shows that pilots can use four visual cues to perform a manual flare maneuver. Amongst their strategies, the Jacobson flare

  12. [Quality system Vision 2000].

    Science.gov (United States)

    Pasini, Evasio; Pitocchi, Oreste; de Luca, Italo; Ferrari, Roberto

    2002-12-01

    A recent document of the Italian Ministry of Health points out that all structures which provide services to the National Health System should implement a Quality System according to the ISO 9000 standards. Vision 2000 is the new version of the ISO standard and is less bureaucratic than the old version. The specific requirements of Vision 2000 are: a) to identify, monitor and analyze the processes of the structure; b) to measure the results of the processes so as to ensure that they are effective; c) to implement actions necessary to achieve the planned results and the continual improvement of these processes; d) to identify customer requests and to measure customer satisfaction. Specific attention should also be dedicated to the competence and training of the personnel involved in the processes. The principles of Vision 2000 agree with the principles of total quality management. The present article illustrates the Vision 2000 standard and provides practical examples of the implementation of this standard in cardiological departments.

  13. FMS flight plans in synthetic vision primary flight displays

    Science.gov (United States)

    He, Gang; Feyereisen, Thea; Wyatt, Sandy

    2009-05-01

    This paper describes display concepts and flight test evaluations of flight management system (FMS) flight plan integration into Honeywell's synthetic vision (SV) integrated primary flight display systems (IPFD). The prototype IPFD displays consist of primary flight symbology, flight path information, and flight director guidance cues overlaid on SV external 3D background scenes. The IPFD conformal perspective-view background displays include terrain and obstacle scenes generated with Honeywell's enhanced ground proximity warning system (EGPWS) databases, runway displays generated with commercial FMS databases, and 3D flight plan information coming directly from on-board FMS systems. The flight plan display concepts include 3D waypoint representations with altitude constraints, terrain tracing curves and vectors based on airframe performance, and required navigation performance (RNP) data. The considerations for providing flight crews with intuitive views of complex approach procedures with minimal display clutter are discussed. The flight test data from Honeywell's Citation Sovereign aircraft and pilot feedback are summarized with the emphasis on the test results involving approaches into terrain-challenged airfields with complex FMS approach procedures.

  14. Industrial robot's vision systems

    Science.gov (United States)

    Iureva, Radda A.; Raskin, Evgeni O.; Komarov, Igor I.; Maltseva, Nadezhda K.; Fedosovsky, Michael E.

    2016-03-01

    Due to the improved economic situation in the high-technology sectors, work on the creation of industrial robots and special mobile robotic systems has resumed. Despite this, robotic control systems have mostly remained unchanged, with all the advantages and disadvantages that entails: capabilities that could greatly facilitate the work of the operator, or in some cases completely replace it, are often lacking. The paper is concerned with the complex machine vision of a robotic system for monitoring underground pipelines, which collects and analyzes up to 90% of the necessary information. Vision systems are used to identify obstacles in the path of movement along a trajectory and to determine their origin, dimensions and character. The object is illuminated with structured light, and a TV camera records the projected pattern. Distortions of the pattern uniquely determine the shape of the object in the camera's view. The reference illumination is synchronized with the camera. The main parameters of the system are the baseline distance between the light projector and the camera and the parallax angle (the angle between the optical axes of the projection unit and the camera).
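    The baseline distance and parallax angle mentioned above determine range by triangulation: the projector ray and the camera ray, anchored at the two ends of the baseline, intersect at the illuminated point. A minimal law-of-sines sketch (function name assumed; both angles measured from the baseline):

```python
import math

def depth_from_structured_light(baseline_m, proj_angle, cam_angle):
    """Triangulate the perpendicular distance of an illuminated point
    from the projector-camera baseline, given the two ray angles."""
    # The baseline and the two rays form a triangle; its third angle
    # (at the object) fixes the scale via the law of sines.
    apex = math.pi - proj_angle - cam_angle
    cam_ray = baseline_m * math.sin(proj_angle) / math.sin(apex)
    return cam_ray * math.sin(cam_angle)  # perpendicular depth
```

With a 1 m baseline and both rays at 45 degrees, the point sits 0.5 m from the baseline; shallower ray angles push the intersection, and hence the measured depth, further away.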

  15. Coherent laser vision system

    International Nuclear Information System (INIS)

    Sebastion, R.L.

    1995-01-01

    The Coherent Laser Vision System (CLVS) is being developed to provide precision real-time 3D world views to support site characterization and robotic operations during facilities Decontamination and Decommissioning. Autonomous or semiautonomous robotic operations require an accurate, up-to-date 3D world view. Existing technologies for real-time 3D imaging, such as AM laser radar, have limited accuracy at significant ranges and have variability in range estimates caused by lighting or surface shading. Recent advances in fiber optic component technology and digital processing components have enabled the development of a new 3D vision system based upon a fiber optic FMCW coherent laser radar. The approach includes a compact scanner with no moving parts capable of randomly addressing all pixels. The system maintains the immunity to lighting and surface shading conditions which is characteristic of coherent laser radar. The random pixel addressability allows concentration of scanning and processing on the active areas of a scene, as is done by the human eye-brain system.
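    In an FMCW coherent laser radar, range falls out of the beat frequency between the transmitted chirp and the delayed echo: the round-trip delay 2R/c multiplied by the chirp slope gives the beat. A toy version of that range equation (names assumed; idealized single stationary target, no Doppler):

```python
def fmcw_range(beat_freq_hz, sweep_bw_hz, sweep_time_s):
    """Range from the FMCW beat frequency.

    The echo arrives 2R/c after transmission, so it lags the current
    chirp by f_b = slope * (2R/c); inverting gives R.
    """
    c = 3.0e8                              # speed of light, m/s
    slope = sweep_bw_hz / sweep_time_s     # chirp slope, Hz/s
    return c * beat_freq_hz / (2.0 * slope)
```

For a 150 MHz sweep over 1 ms, a 10 kHz beat corresponds to a 10 m range, which is why sweep bandwidth directly sets the system's range resolution.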

  16. INVIS : Integrated night vision surveillance and observation system

    NARCIS (Netherlands)

    Toet, A.; Hogervorst, M.A.; Dijk, J.; Son, R. van

    2010-01-01

    We present the design and first field trial results of the all-day all-weather INVIS Integrated Night Vision surveillance and observation System. The INVIS augments a dynamic three-band false-color night-vision image with synthetic 3D imagery in a real-time display. The night vision sensor suite

  17. Real-time vision systems

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, R.; Hernandez, J.E.; Lu, Shin-yee [Lawrence Livermore National Lab., CA (United States)]

    1994-11-15

    Many industrial and defence applications require an ability to make instantaneous decisions based on sensor input of a time varying process. Such systems are referred to as 'real-time systems' because they process and act on data as it occurs in time. When a vision sensor is used in a real-time system, the processing demands can be quite substantial, with typical data rates of 10-20 million samples per second. A real-time Machine Vision Laboratory (MVL) was established in FY94 to extend our years of experience in developing computer vision algorithms to include the development and implementation of real-time vision systems. The laboratory is equipped with a variety of hardware components, including Datacube image acquisition and processing boards, a Sun workstation, and several different types of CCD cameras, including monochrome and color area cameras and analog and digital line-scan cameras. The equipment is reconfigurable for prototyping different applications. This facility has been used to support several programs at LLNL, including O Division's Peacemaker and Deadeye Projects as well as the CRADA with the U.S. Textile Industry, CAFE (Computer Aided Fabric Inspection). To date, we have successfully demonstrated several real-time applications: bullet tracking, stereo tracking and ranging, and web inspection. This work has been documented in the ongoing development of a real-time software library.

  18. Dynamical Systems and Motion Vision.

    Science.gov (United States)

    1988-04-01

    Massachusetts Institute of Technology, Artificial Intelligence Laboratory, A.I. Memo No. 1037, April 1988: Dynamical Systems and Motion Vision, by Joachim Heel. This work was carried out at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology.

  19. General Aviation Flight Test of Advanced Operations Enabled by Synthetic Vision

    Science.gov (United States)

    Glaab, Louis J.; Hughes, Monica F.; Parrish, Russell V.; Takallu, Mohammad A.

    2014-01-01

    A flight test was performed to compare the use of three advanced primary flight and navigation display concepts to a baseline, round-dial concept to assess the potential for advanced operations. The displays were evaluated during visual and instrument approach procedures including an advanced instrument approach resembling a visual airport traffic pattern. Nineteen pilots from three pilot groups, reflecting the diverse piloting skills of the General Aviation pilot population, served as evaluation subjects. The experiment had two thrusts: 1) an examination of the capabilities of low-time (i.e., <400 hours), non-instrument-rated pilots to perform nominal instrument approaches, and 2) an exploration of potential advanced Visual Meteorological Conditions (VMC)-like approaches in Instrument Meteorological Conditions (IMC). Within this context, advanced display concepts are considered to include integrated navigation and primary flight displays with either aircraft attitude flight directors or Highway In The Sky (HITS) guidance with and without a synthetic depiction of the external visuals (i.e., synthetic vision). Relative to the first thrust, the results indicate that using an advanced display concept, as tested herein, low-time, non-instrument-rated pilots can exhibit flight-technical performance, subjective workload and situation awareness ratings as good as or better than high-time Instrument Flight Rules (IFR)-rated pilots using Baseline Round Dials for a nominal IMC approach. For the second thrust, the results indicate advanced VMC-like approaches are feasible in IMC for all pilot groups tested, but only with the Synthetic Vision System (SVS) advanced display concept.

  20. Computer vision in control systems

    CERN Document Server

    Jain, Lakhmi

    2015-01-01

    Volume 1: This book is focused on the recent advances in computer vision methodologies and technical solutions using conventional and intelligent paradigms. The contributions include: Morphological Image Analysis for Computer Vision Applications; Methods for Detecting Structural Changes in Computer Vision Systems; Hierarchical Adaptive KL-based Transform: Algorithms and Applications; Automatic Estimation of Parameters of Image Projective Transforms Based on Object-invariant Cores; A Way of Energy Analysis for Image and Video Sequence Processing; Optimal Measurement of Visual Motion Across Spatial and Temporal Scales; Scene Analysis Using Morphological Mathematics and Fuzzy Logic; Digital Video Stabilization in Static and Dynamic Scenes; Implementation of Hadamard Matrices for Image Processing; A Generalized Criterion ...

  1. Basic design principles of colorimetric vision systems

    Science.gov (United States)

    Mumzhiu, Alex M.

    1998-10-01

    Color measurement is an important part of overall production quality control in the textile, coating, plastics, food, paper and other industries. The color measurement instruments used for production quality control, such as colorimeters and spectrophotometers, have many limitations. In many applications they cannot be used for a variety of reasons and have to be replaced with human operators. Machine vision has great potential for color measurement. The components for color machine vision systems, such as broadcast-quality 3-CCD cameras, fast and inexpensive PCI frame grabbers, and sophisticated image processing software packages, are available. However, the machine vision industry has only started to approach the color domain. The few color machine vision systems on the market, produced by the largest machine vision manufacturers, have very limited capabilities. A lack of understanding that a vision-based color measurement system can fail if it ignores the basic principles of colorimetry is the main reason for the slow progress of color vision systems. The purpose of this paper is to clarify how color measurement principles have to be applied to vision systems and how the electro-optical design features of colorimeters have to be modified in order to implement them for vision systems. The subject of this presentation far exceeds the limitations of a journal paper, so only the most important aspects will be discussed. An overview of the major areas of application for colorimetric vision systems will be given. Finally, the reasons why some customers are happy with their vision systems and some are not will be analyzed.
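    One concrete example of the colorimetric principles the author argues vision systems must respect is that camera RGB is device- and gamma-encoded, and must be linearized and transformed into a standard space such as CIE XYZ before any color measurement. A sketch assuming sRGB-encoded input (the matrix is the standard sRGB/D65 one):

```python
import numpy as np

# Linear sRGB -> CIE XYZ (D65 white point), standard matrix
M_SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                          [0.2126, 0.7152, 0.0722],
                          [0.0193, 0.1192, 0.9505]])

def srgb_to_xyz(rgb):
    """Undo the sRGB gamma encoding, then apply the colorimetric
    matrix; input channels are in [0, 1]."""
    rgb = np.asarray(rgb, float)
    lin = np.where(rgb <= 0.04045,
                   rgb / 12.92,
                   ((rgb + 0.055) / 1.055) ** 2.4)
    return M_SRGB_TO_XYZ @ lin
```

Skipping the linearization step (measuring directly on gamma-encoded pixel values) is exactly the kind of colorimetry violation that makes a vision-based color system disagree with a colorimeter.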

  2. VISION 21 SYSTEMS ANALYSIS METHODOLOGIES

    Energy Technology Data Exchange (ETDEWEB)

    G.S. Samuelsen; A. Rao; F. Robson; B. Washom

    2003-08-11

    Under the sponsorship of the U.S. Department of Energy/National Energy Technology Laboratory, a multi-disciplinary team led by the Advanced Power and Energy Program of the University of California at Irvine is defining the system engineering issues associated with the integration of key components and subsystems into power plant systems that meet performance and emission goals of the Vision 21 program. The study efforts have narrowed down the myriad of fuel processing, power generation, and emission control technologies to selected scenarios that identify those combinations having the potential to achieve the Vision 21 program goals of high efficiency and minimized environmental impact while using fossil fuels. The technology levels considered are based on projected technical and manufacturing advances being made in industry and on advances identified in current and future government supported research. Included in these advanced systems are solid oxide fuel cells and advanced cycle gas turbines. The results of this investigation will serve as a guide for the U. S. Department of Energy in identifying the research areas and technologies that warrant further support.

  3. Evaluation of Synthetic Vision Display Concepts for Improved Awareness in Unusual Attitude Recovery Scenarios

    Science.gov (United States)

    Nicholas, Stephanie

    2016-01-01

    A recent study conducted by the Commercial Aviation Safety Team (CAST) determined that 40 percent of all fixed-wing fatal accidents between 2001 and 2011 were caused by Loss-of-Control (LOC) in flight (National Transportation Safety Board, 2015). Based on their findings, CAST recommended manufacturers develop and implement virtual day-visual meteorological conditions (VMC) display systems, such as synthetic vision or equivalent systems (CAST, 2016). In a 2015 simulation study conducted at NASA Langley Research Center (LaRC), researchers tested and evaluated virtual day-VMC displays under realistic flight operation scenarios capable of inducing reduced attention states in pilots. Each display concept was evaluated to determine its efficacy to improve attitude awareness. During the experiment, Evaluation Pilots (EPs) were shown the following three display concepts on the Primary Flight Display (PFD): Baseline, Synthetic Vision (SV) with color gradient, and SV with texture. The baseline configuration was a standard, conventional 'blue over brown' display. Experiment scenarios were simulated over water to evaluate Unusual Attitude (UA) recovery over 'featureless terrain' environments. Thus, the SV with color gradient configuration presented a 'blue over blue' display with a linear blue color progression, to differentiate attitude changes between sky and ocean. The SV with texture configuration presented a 'blue over blue' display with a black checkerboard texture atop a synthetic ocean. These displays were paired with a Background Attitude Indicator (BAI) concept. The BAI was presented across all four Head-Down Displays (HDDs), displaying a wide field-of-view blue-over-blue attitude indicator. The BAI aligned with the PFD and showed through the background of the navigation displays with opaque transparency.
Each EP participated in a two-part experiment series with a total of seventy-five trial runs: Part I included a set of twenty-five Unusual Attitude Recovery (UAR

  4. Vision based systems for UAV applications

    CERN Document Server

    Kuś, Zygmunt

    2013-01-01

    This monograph is motivated by a significant number of vision-based algorithms for Unmanned Aerial Vehicles (UAVs) that were developed during research and development projects. Vision information is utilized in various applications like visual surveillance, aiming systems, recognition systems, collision-avoidance systems and navigation. This book presents practical applications, examples and recent challenges in these application fields. The aim of the book is to create a valuable source of information for researchers and constructors of solutions utilizing vision from UAVs. Scientists, researchers and graduate students involved in computer vision, image processing, data fusion, control algorithms, mechanics, data mining, navigation and IC can find many valuable, useful and practical suggestions and solutions. The latest challenges for vision-based systems are also presented.

  5. COHERENT LASER VISION SYSTEM (CLVS) OPTION PHASE

    Energy Technology Data Exchange (ETDEWEB)

    Robert Clark

    1999-11-18

    The purpose of this research project was to develop a prototype fiber-optic based Coherent Laser Vision System (CLVS) suitable for DOE's EM Robotic program. The system provides three-dimensional (3D) vision for monitoring situations in which it is necessary to update the dimensional spatial data on the order of once per second. The system has total immunity to ambient lighting conditions.

  6. Artificial vision support system (AVS(2)) for improved prosthetic vision.

    Science.gov (United States)

    Fink, Wolfgang; Tarbell, Mark A

    2014-11-01

    State-of-the-art and upcoming camera-driven, implanted artificial vision systems provide only tens to hundreds of electrodes, affording only limited visual perception for blind subjects. Therefore, real-time image processing is crucial to enhance and optimize this limited perception. Since tens or hundreds of pixels/electrodes allow only for a very crude approximation of the typically megapixel optical resolution of the external camera image feed, the preservation and enhancement of contrast differences and transitions, such as edges, are especially important compared to picture details such as object texture. An Artificial Vision Support System (AVS(2)) is devised that displays the captured video stream in a pixelation conforming to the dimensions of the epi-retinal implant electrode array. AVS(2), using efficient image processing modules, modifies the captured video stream in real time, enhancing 'present but hidden' objects to overcome inadequacies or extremes in the camera imagery. As a result, visual prosthesis carriers may now be able to discern such objects in their 'field of view', thus enabling mobility in environments that would otherwise be too hazardous to navigate. The image processing modules can be engaged repeatedly in a user-defined order, which is a unique capability. AVS(2) is directly applicable to any artificial vision system that is based on an imaging modality (video, infrared, sound, ultrasound, microwave, radar, etc.) as the first step in the stimulation/processing cascade, such as: retinal implants (i.e. epi-retinal, sub-retinal, suprachoroidal), optic nerve implants, cortical implants, electric tongue stimulators, or tactile stimulators.
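    The pixelation-plus-modules architecture described for AVS(2) can be sketched as running a user-ordered chain of processing modules and then block-averaging the frame down to the electrode grid. The function names and the simple contrast-stretch module below are illustrative assumptions, not the actual AVS(2) modules:

```python
import numpy as np

def pixelate(frame, rows, cols):
    """Downsample a grayscale frame to the electrode-array resolution
    by block averaging (cropping any remainder pixels)."""
    h, w = frame.shape
    cropped = frame[:h - h % rows, :w - w % cols]
    return cropped.reshape(rows, h // rows, cols, w // cols).mean(axis=(1, 3))

def enhance_contrast(frame):
    """Stretch the frame's intensity range to [0, 1] so faint objects
    survive the coarse pixelation."""
    lo, hi = frame.min(), frame.max()
    return (frame - lo) / (hi - lo) if hi > lo else frame

def process(frame, rows, cols, modules):
    """Apply processing modules in a user-defined order (modules may
    repeat), then pixelate to the implant's electrode grid."""
    for m in modules:
        frame = m(frame)
    return pixelate(frame, rows, cols)
```

The key design point the abstract highlights, that modules can be chained repeatedly in any user-defined order, is just the `modules` list here; the same module can appear several times.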

  7. Synthetic sustained gene delivery systems.

    Science.gov (United States)

    Agarwal, Ankit; Mallapragada, Surya K

    2008-01-01

    Gene therapy today is hampered by the need for a safe and efficient gene delivery system that can provide a sustained therapeutic effect without cytotoxicity or unwanted immune responses. Bolus gene delivery in solution results in the loss of delivered factors via the lymphatic system and may cause undesired effects by the escape of bioactive molecules to distant sites. Controlled gene delivery systems, acting as localized depots of genes, provide extended sustained release of genes, giving prolonged maintenance of the therapeutic level of encoded proteins. They also limit DNA degradation in the nuclease-rich extracellular environment. While attempts have been made to adapt existing controlled drug delivery technologies, more novel approaches are being investigated for controlled gene delivery. DNA encapsulated in nano/microspheres of polymers has been administered systemically/orally to be taken up by the targeted tissues and provide sustained release once internalized. Alternatively, DNA entrapped in hydrogels or scaffolds has been injected/implanted in tissues/cavities as platforms for gene delivery. The present review examines these different modalities for sustained delivery of viral and non-viral gene-delivery vectors. Design parameters and release mechanisms of different systems made with synthetic or natural polymers are presented along with their prospective applications and opportunities for continuous development.

  8. Vision systems for scientific and engineering applications

    International Nuclear Information System (INIS)

    Chadda, V.K.

    2009-01-01

    Human performance can get degraded due to boredom, distraction and fatigue in vision-related tasks such as measurement, counting etc. Vision based techniques are increasingly being employed in many scientific and engineering applications. Notable advances in this field are emerging from continuing improvements in the fields of sensors and related technologies, and advances in computer hardware and software. Automation utilizing vision-based systems can perform repetitive tasks faster and more accurately, with greater consistency over time than humans. Electronics and Instrumentation Services Division has developed vision-based systems for several applications to perform tasks such as precision alignment, biometric access control, measurement, counting etc. This paper describes in brief four such applications. (author)
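A minimal example of the kind of vision-based counting task mentioned above, assuming the image has already been thresholded into a binary mask (all names here are illustrative, not from the paper):

```python
def count_objects(mask):
    """Count 4-connected foreground blobs in a binary mask
    (list of rows of 0/1) via iterative flood fill."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                count += 1  # new blob found; flood-fill to mark it
                stack = [(y, x)]
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
    return count
```

Production systems would typically use a vision library's connected-components routine instead; the logic is the same.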

  9. Robot vision system for remote plutonium disposition

    International Nuclear Information System (INIS)

    Kriikku, E.

    2000-01-01

    Tons of weapons-usable plutonium have been declared surplus to the national security needs of the United States. The Plutonium Immobilization Program (PIP) is a US Department of Energy sponsored program to place excess plutonium in a stable form and make it unattractive for reuse. A vision system was developed as part of PIP robotic and remote systems development. This vision system provides visual feedback to a can-loading robot that places plutonium/ceramic pucks in stainless steel cans. Inexpensive grayscale CCD cameras were used in conjunction with an off-the-shelf video capture card and computer to build an effective two-camera vision system. Testing demonstrates the viability of this technology for use in the Plutonium Immobilization Project facility, which is scheduled to begin operations in 2008.

  10. Philosophy of Systems and Synthetic Biology

    DEFF Research Database (Denmark)

    Green, Sara

    2017-01-01

    This entry aims to clarify how systems and synthetic biology contribute to and extend discussions within philosophy of science. Unlike fields such as developmental biology or molecular biology, systems and synthetic biology are not easily demarcated by a focus on a specific subject area or level of organization. Rather, they are characterized by the development and application of mathematical, computational, and synthetic modeling strategies in response to complex problems and challenges within the life sciences. Proponents of systems and synthetic biology often stress the necessity of a perspective … computational approaches, about the relation between living and artificial systems, and about the implications of interdisciplinary research for science and society. The entry can be openly accessed at the webpage of the Stanford Encyclopaedia of Philosophy: https://plato.stanford.edu/entries/systems-synthetic-biology/

  11. Geosynthetic-reinforced pavement systems

    International Nuclear Information System (INIS)

    Zornberg, J. G.

    2014-01-01

    Geosynthetics have been used as reinforcement inclusions to improve pavement performance. While there is clear field evidence of the benefit of using geosynthetic reinforcement, the specific conditions or mechanisms that govern the reinforcement of pavements are, at best, unclear and have remained largely unmeasured. Significant research has recently been conducted with the objectives of: (i) determining the relevant properties of geosynthetics that contribute to the enhanced performance of pavement systems, (ii) developing appropriate analytical, laboratory and field methods capable of quantifying the pavement performance, and (iii) enabling the prediction of pavement performance as a function of the properties of the various types of geosynthetics. (Author)

  12. The Effects of Synthetic and Enhanced Vision Technologies for Lunar Landings

    Science.gov (United States)

    Kramer, Lynda J.; Norman, Robert M.; Prinzel, Lawrence J., III; Bailey, Randall E.; Arthur, Jarvis J., III; Shelton, Kevin J.; Williams, Steven P.

    2009-01-01

    Eight pilots participated as test subjects in a fixed-based simulation experiment to evaluate advanced vision display technologies such as Enhanced Vision (EV) and Synthetic Vision (SV) for providing terrain imagery on flight displays in a Lunar Lander Vehicle. Subjects were asked to fly 20 approaches to the Apollo 15 lunar landing site with four different display concepts - Baseline (symbology only with no terrain imagery), EV only (terrain imagery from Forward Looking Infra Red, or FLIR, and LIght Detection and Ranging, or LIDAR, sensors), SV only (terrain imagery from onboard database), and Fused EV and SV concepts. As expected, manual landing performance was excellent (within a meter of landing site center) and not affected by the inclusion of EV or SV terrain imagery on the Lunar Lander flight displays. Subjective ratings revealed significant situation awareness improvements with the concepts employing EV and/or SV terrain imagery compared to the Baseline condition that had no terrain imagery. In addition, display concepts employing EV imagery (compared to the SV and Baseline concepts which had none) were significantly better for pilot detection of intentional but unannounced navigation failures since this imagery provided an intuitive and obvious visual methodology to monitor the validity of the navigation solution.

  13. AN INVESTIGATION OF VISION PROBLEMS AND THE VISION CARE SYSTEM IN RURAL CHINA.

    Science.gov (United States)

    Bai, Yunli; Yi, Hongmei; Zhang, Linxiu; Shi, Yaojiang; Ma, Xiaochen; Congdon, Nathan; Zhou, Zhongqiang; Boswell, Matthew; Rozelle, Scott

    2014-11-01

    This paper examines the prevalence of vision problems and the accessibility to and quality of vision care in rural China. We obtained data from 4 sources: 1) the National Rural Vision Care Survey; 2) the Private Optometrists Survey; 3) the County Hospital Eye Care Survey; and 4) the Rural School Vision Care Survey. The data from each of the surveys were collected by the authors during 2012. Thirty-three percent of the rural population surveyed self-reported vision problems. Twenty-two percent of subjects surveyed had ever had a vision exam. Among those who self-reported having vision problems, 34% did not wear eyeglasses. Fifty-four percent of those with vision problems who had eyeglasses did not have a vision exam prior to receiving glasses. However, having a vision exam did not always guarantee access to quality vision care. Four channels of vision care service were assessed. The school vision examination program did not increase the usage rate of eyeglasses. Each county hospital was staffed with three eye doctors having one year of education beyond high school, serving more than 400,000 residents. Private optometrists often had low levels of education and professional certification. In conclusion, our findings show that the vision care system in rural China is inadequate and ineffective in meeting the needs of the rural population sampled.

  14. Lumber Grading With A Computer Vision System

    Science.gov (United States)

    Richard W. Conners; Tai-Hoon Cho; Philip A. Araman

    1989-01-01

    Over the past few years significant progress has been made in developing a computer vision system for locating and identifying defects on surfaced hardwood lumber. Unfortunately, until September of 1988 little research had gone into developing methods for analyzing rough lumber. This task is arguably more complex than the analysis of surfaced lumber. The prime...

  15. Vision based flight procedure stereo display system

    Science.gov (United States)

    Shen, Xiaoyun; Wan, Di; Ma, Lan; He, Yuncheng

    2008-03-01

    A virtual reality flight procedure vision system is introduced in this paper. The digital flight map database is established based on a Geographic Information System (GIS) and high-definition satellite remote sensing photos. The flight approach area database is established through a computer 3D modeling system and GIS. The area texture is generated from the remote sensing photos and aerial photographs at various levels of detail. According to the flight approach procedure, the flight navigation information is linked to the database. The flight approach area view can be dynamically displayed according to the designed flight procedure. The flight approach area images are rendered in two channels, one for left-eye images and the other for right-eye images. Through the polarized stereoscopic projection system, the pilots and aircrew can get a vivid 3D view of the flight destination approach area. Used in the pilots' preflight preparation, this system gives the aircrew more vivid information about the flight destination approach area. It improves the aviator's self-confidence before carrying out the flight mission and, accordingly, improves flight safety. The system is also useful in validating visual flight procedure designs and aids flight procedure design.

  16. Missileborne artificial vision system (MAVIS)

    Science.gov (United States)

    Andes, David K.; Witham, James C.; Miles, Michael D.

    1994-03-01

    The Naval Air Warfare Center, China Lake has developed a real time, hardware and software system designed to implement and evaluate biologically inspired retinal and cortical models. The hardware is based on the Adaptive Solutions Inc. massively parallel CNAPS system COHO boards. Each COHO board is a standard size 6U VME card featuring 256 fixed point, RISC processors running at 20 MHz in a SIMD configuration. Each COHO board has a Companion board built to support a real time VSB interface to an imaging seeker, a NTSC camera and to other COHO boards. The system is designed to have multiple SIMD machines each performing different Corticomorphic functions. The system level software has been developed which allows a high level description of Corticomorphic structures to be translated into the native microcode of the CNAPS chips. Corticomorphic structures are those neural structures with a form similar to that of the retina, the lateral geniculate nucleus or the visual cortex. This real time hardware system is designed to be shrunk into a volume compatible with air launched tactical missiles. Initial versions of the software and hardware have been completed and are in the early stages of integration with a missile seeker.

  17. Missileborne Artificial Vision System (MAVIS)

    Science.gov (United States)

    Andes, David K.; Witham, James C.; Miles, Michael D.

    1994-01-01

    Several years ago when INTEL and China Lake designed the ETANN chip, analog VLSI appeared to be the only way to do high density neural computing. In the last five years, however, digital parallel processing chips capable of performing neural computation functions have evolved to the point of rough equality with analog chips in system level computational density. The Naval Air Warfare Center, China Lake, has developed a real time, hardware and software system designed to implement and evaluate biologically inspired retinal and cortical models. The hardware is based on the Adaptive Solutions Inc. massively parallel CNAPS system COHO boards. Each COHO board is a standard size 6U VME card featuring 256 fixed point, RISC processors running at 20 MHz in a SIMD configuration. Each COHO board has a companion board built to support a real time VSB interface to an imaging seeker, a NTSC camera, and to other COHO boards. The system is designed to have multiple SIMD machines each performing different corticomorphic functions. The system level software has been developed which allows a high level description of corticomorphic structures to be translated into the native microcode of the CNAPS chips. Corticomorphic structures are those neural structures with a form similar to that of the retina, the lateral geniculate nucleus, or the visual cortex. This real time hardware system is designed to be shrunk into a volume compatible with air launched tactical missiles. Initial versions of the software and hardware have been completed and are in the early stages of integration with a missile seeker.

  18. Vision enhanced navigation for unmanned systems

    Science.gov (United States)

    Wampler, Brandon Loy

    A vision based simultaneous localization and mapping (SLAM) algorithm is evaluated for use on unmanned systems. SLAM is a technique used by a vehicle to build a map of an environment while concurrently keeping track of its location within the map, without a priori knowledge. The work in this thesis is focused on using SLAM as a navigation solution when global positioning system (GPS) service is degraded or temporarily unavailable. Previous work on unmanned systems that led up to the determination that a better navigation solution than GPS alone is needed is first presented. This previous work includes control of unmanned systems, simulation, and unmanned vehicle hardware testing. The proposed SLAM algorithm follows the work originally developed by Davidson et al. in which they dub their algorithm MonoSLAM [1-4]. A new approach using the Pyramidal Lucas-Kanade feature tracking algorithm from Intel's OpenCV (open computer vision) library is presented as a means of keeping correct landmark correspondences as the vehicle moves through the scene. Though this landmark tracking method is unusable for long term SLAM due to its inability to recognize revisited landmarks, as opposed to the Scale Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF), its computational efficiency makes it a good candidate for short term navigation between GPS position updates. Additional sensor information is then considered by fusing INS and GPS information into the SLAM filter. The SLAM system, in its vision only and vision/IMU form, is tested on a table top, in an open room, and finally in an outdoor environment. For the outdoor environment, a form of the SLAM algorithm that fuses vision, IMU, and GPS information is tested. The proposed SLAM algorithm, and its several forms, are implemented in C++ using an Extended Kalman Filter (EKF). Experiments utilizing a live video feed from a webcam are performed. The different forms of the filter are compared and conclusions are made on
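The filter at the core of such a system can be illustrated in scalar form. This toy predict/update step (function name, gains, and noise values are illustrative, not from the thesis) shows the blend of dead-reckoned prediction and measurement that the full EKF generalizes to vector states and nonlinear models:

```python
def kf_step(x, P, u, z, q=0.01, r=0.25):
    """One predict/update cycle of a scalar Kalman filter.
    x: state estimate, P: estimate variance,
    u: motion increment (e.g. odometry), z: measurement,
    q: process noise variance, r: measurement noise variance."""
    # Predict: propagate the state with the motion model; uncertainty grows
    x_pred = x + u
    P_pred = P + q
    # Update: Kalman gain weights the measurement against the prediction
    K = P_pred / (P_pred + r)
    x_new = x_pred + K * (z - x_pred)
    P_new = (1.0 - K) * P_pred
    return x_new, P_new
```

After each step the estimate lies between the prediction and the measurement, and the variance shrinks whenever a measurement is fused.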

  19. Exploring the potential of energy and terrain awareness information in a Synthetic Vision display for UAV control

    NARCIS (Netherlands)

    Tadema, J.; Theunissen, E.; Lambregts, T.

    2009-01-01

    This paper addresses the design and implementation of a conceptual Enhanced/Synthetic Vision Primary Flight Display format. The goal of this work is to explore the means to provide the operator of a UAV with an integrated view of the constraints for the velocity vector, resulting in an explicit

  20. Egg weight detection on machine vision system

    Science.gov (United States)

    Cen, Yike; Ying, Yibin; Rao, Xiuqin

    2006-10-01

    A machine vision system for egg weight detection was developed. The egg image was grabbed by a CCD camera and a frame grabber. An indicator composed of R, G, B intensities was used for image segmentation. A series of algorithms was developed to evaluate the egg's vertical diameter, maximal horizontal diameter, upper horizontal diameter and nether horizontal diameter. Based on these four extracted size features, a regression model between egg weight and size was established using SAS and used to detect egg weight. The experimental results indicated that, for egg weight detection on the machine vision system, the correlation coefficient of the regression model was 0.9781 and the absolute error was no more than +/-3 g, which would lower the workload on human graders and increase flexibility in the egg quality control process in the egg industry.
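The regression step can be sketched with a single diameter feature; the study fit four diameter features in SAS, so this single-feature ordinary-least-squares toy (names illustrative) only shows the idea:

```python
def fit_weight_model(diameters, weights):
    """Ordinary least squares fit of weight = a * diameter + b,
    a one-feature stand-in for the paper's four-diameter model."""
    n = len(diameters)
    mx = sum(diameters) / n
    my = sum(weights) / n
    # Slope from centered cross- and auto-covariance sums
    a = sum((x - mx) * (y - my) for x, y in zip(diameters, weights)) / \
        sum((x - mx) ** 2 for x in diameters)
    b = my - a * mx  # intercept so the line passes through the mean point
    return a, b
```

With the model fitted offline, predicting a weight at grading time is a single multiply-add per egg.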

  1. Tunable promoters in synthetic and systems biology

    DEFF Research Database (Denmark)

    Dehli, Tore; Solem, Christian; Jensen, Peter Ruhdal

    2012-01-01

    Synthetic and systems biologists need standardized, modular and orthogonal tools yielding predictable functions in vivo. In systems biology such tools are needed to quantitatively analyze the behavior of biological systems while the efficient engineering of artificial gene networks is central in synthetic biology. A number of tools exist to manipulate the steps in between gene sequence and functional protein in living cells, but out of these the most straight-forward approach is to alter the gene expression level by manipulating the promoter sequence. Some of the promoter tuning tools available …

  2. An Event-Driven Classifier for Spiking Neural Networks Fed with Synthetic or Dynamic Vision Sensor Data

    Directory of Open Access Journals (Sweden)

    Evangelos Stromatias

    2017-06-01

    Full Text Available This paper introduces a novel methodology for training an event-driven classifier within a Spiking Neural Network (SNN) system capable of yielding good classification results when using both synthetic input data and real data captured from Dynamic Vision Sensor (DVS) chips. The proposed supervised method uses the spiking activity provided by an arbitrary topology of prior SNN layers to build histograms and train the classifier in the frame domain using the stochastic gradient descent algorithm. In addition, this approach can cope with leaky integrate-and-fire neuron models within the SNN, a desirable feature for real-world SNN applications, where neural activation must fade away after some time in the absence of inputs. Consequently, this way of building histograms captures the dynamics of spikes immediately before the classifier. We tested our method on the MNIST data set using different synthetic encodings and real DVS sensory data sets such as N-MNIST, MNIST-DVS, and Poker-DVS using the same network topology and feature maps. We demonstrate the effectiveness of our approach by achieving the highest classification accuracy reported on the N-MNIST (97.77%) and Poker-DVS (100%) real DVS data sets to date with a spiking convolutional network. Moreover, by using the proposed method we were able to retrain the output layer of a previously reported spiking neural network and increase its performance by 2%, suggesting that the proposed classifier can be used as the output layer in works where features are extracted using unsupervised spike-based learning methods. In addition, we also analyze SNN performance figures such as total event activity and network latencies, which are relevant for eventual hardware implementations. In summary, the paper aggregates unsupervised-trained SNNs with a supervised-trained SNN classifier, combining and applying them to heterogeneous sets of benchmarks, both synthetic and from real DVS chips.
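The histogram-building stage can be sketched as follows. Note the classifier shown is a nearest-template stand-in, not the SGD-trained classifier the paper actually uses, and all names are illustrative:

```python
def spike_histogram(events, n_neurons):
    """Accumulate a stream of spike events (neuron indices) into a
    per-neuron count vector, the 'frame-domain' feature the paper trains on."""
    hist = [0] * n_neurons
    for neuron_idx in events:
        hist[neuron_idx] += 1
    return hist

def classify(hist, class_templates):
    """Nearest-template classification by L1 distance; a simple stand-in
    for the paper's stochastic-gradient-descent-trained classifier."""
    def l1(a, b):
        return sum(abs(x - y) for x, y in zip(a, b))
    return min(class_templates, key=lambda label: l1(hist, class_templates[label]))
```

In the paper's setup, the histogram comes from the spiking activity of the last SNN layer just before the classifier.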

  3. Multi-channel automotive night vision system

    Science.gov (United States)

    Lu, Gang; Wang, Li-jun; Zhang, Yi

    2013-09-01

    A four-channel automotive night vision system is designed and developed. It consists of four active near-infrared cameras and a multi-channel image processing and display unit; the cameras are placed at the front, left, right and rear of the automobile. The system uses a near-infrared laser light source whose beam is collimated; the light source contains a thermoelectric cooler (TEC). It can be synchronized with the camera focusing and also has automatic light intensity adjustment, which ensures image quality. The composition principle of the system is described in detail; on this basis, beam collimation, LD driving and LD temperature control of the near-infrared laser light source, and four-channel image processing and display are discussed. The system can be used for driver assistance, car BLIS, parking assistance and car alarm systems, day and night.
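LD temperature control via a TEC is typically done with a proportional-integral regulator; a minimal sketch under that assumption (the gains, sign convention, and function name are illustrative, not from the paper):

```python
def pi_step(setpoint, measured, integral, kp=2.0, ki=0.5, dt=0.1):
    """One proportional-integral control step for a TEC drive.
    Sign convention: positive drive = more cooling when the laser diode
    runs warmer than the setpoint. Returns (drive, updated integral)."""
    error = measured - setpoint       # degrees above target
    integral += error * dt            # accumulate error over time
    drive = kp * error + ki * integral
    return drive, integral
```

A real controller would also clamp the integral term and the drive current to the TEC's limits.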

  4. Synthetic observations of protostellar multiple systems

    Science.gov (United States)

    Lomax, O.; Whitworth, A. P.

    2018-04-01

    Observations of protostars are often compared with synthetic observations of models in order to infer the underlying physical properties of the protostars. The majority of these models have a single protostar, attended by a disc and an envelope. However, observational and numerical evidence suggests that a large fraction of protostars form as multiple systems. This means that fitting models of single protostars to observations may be inappropriate. We produce synthetic observations of protostellar multiple systems undergoing realistic, non-continuous accretion. These systems consist of multiple protostars with episodic luminosities, embedded self-consistently in discs and envelopes. We model the gas dynamics of these systems using smoothed particle hydrodynamics and we generate synthetic observations by post-processing the snapshots using the SPAMCART Monte Carlo radiative transfer code. We present simulation results of three model protostellar multiple systems. For each of these, we generate 4 × 10⁴ synthetic spectra at different points in time and from different viewing angles. We propose a Bayesian method, using similar calculations to those presented here, but in greater numbers, to infer the physical properties of protostellar multiple systems from observations.
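The proposed Bayesian comparison can be sketched in a minimal grid form: score each model spectrum against the observed fluxes under Gaussian errors and normalize into posterior weights. This assumes a flat prior and independent errors; all names are illustrative and this is not the authors' actual pipeline:

```python
import math

def posterior_weights(obs, models, sigma=1.0):
    """Grid-Bayes sketch: Gaussian likelihood of each model spectrum
    given observed fluxes, normalized into posterior weights (flat prior)."""
    log_likelihoods = []
    for model in models:
        chi2 = sum((o - p) ** 2 for o, p in zip(obs, model)) / sigma ** 2
        log_likelihoods.append(-0.5 * chi2)
    # Subtract the max log-likelihood before exponentiating, for stability
    mx = max(log_likelihoods)
    weights = [math.exp(l - mx) for l in log_likelihoods]
    total = sum(weights)
    return [w / total for w in weights]
```

With a large library of synthetic spectra (the 4 × 10⁴ per system mentioned above), marginalizing these weights over viewing angle and time yields parameter constraints.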

  5. Using Vision System Technologies for Offset Approaches in Low Visibility Operations

    Science.gov (United States)

    Kramer, Lynda J.; Bailey, Randall E.; Ellis, Kyle K.

    2015-01-01

    Flight deck-based vision systems, such as Synthetic Vision Systems (SVS) and Enhanced Flight Vision Systems (EFVS), have the potential to provide additional margins of safety for aircrew performance and enable the implementation of operational improvements for low visibility surface, arrival, and departure operations in the terminal environment with equivalent efficiency to visual operations. Twelve air transport-rated crews participated in a motion-base simulation experiment to evaluate the use of SVS/EFVS in Next Generation Air Transportation System low visibility approach and landing operations at Chicago O'Hare airport. Three monochromatic, collimated head-up display (HUD) concepts (conventional HUD, SVS HUD, and EFVS HUD) and three instrument approach types (straight-in, 3-degree offset, 15-degree offset) were experimentally varied to test the efficacy of the SVS/EFVS HUD concepts for offset approach operations. The findings suggest making offset approaches in low visibility conditions with an EFVS HUD or SVS HUD appear feasible. Regardless of offset approach angle or HUD concept being flown, all approaches had comparable ILS tracking during the instrument segment and were within the lateral confines of the runway with acceptable sink rates during the visual segment of the approach. Keywords: Enhanced Flight Vision Systems; Synthetic Vision Systems; Head-up Display; NextGen

  6. Adaptive LIDAR Vision System for Advanced Robotics, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Advanced robotic systems demand an enhanced vision system and image processing algorithms to reduce the percentage of manual operation required. Unstructured...

  7. Tunable promoters in synthetic and systems biology

    DEFF Research Database (Denmark)

    Dehli, Tore; Solem, Christian; Jensen, Peter Ruhdal

    2012-01-01

    for accomplishing such altered gene expression levels are discussed here along with examples of their use, and ideas for new tools are described. The road ahead looks very promising for synthetic and systems biologists as tools to achieve just about anything in terms of tuning and timing multiple gene expression...

  8. Knowledge-based machine vision systems for space station automation

    Science.gov (United States)

    Ranganath, Heggere S.; Chipman, Laure J.

    1989-01-01

    Computer vision techniques which have the potential for use on the space station and related applications are assessed. A knowledge-based vision system (expert vision system) and the development of a demonstration system for it are described. This system implements some of the capabilities that would be necessary in a machine vision system for the robot arm of the laboratory module in the space station. A Perceptics 9200e image processor, on a host VAXstation, was used to develop the demonstration system. In order to use realistic test images, photographs of actual space shuttle simulator panels were used. The system's capabilities of scene identification and scene matching are discussed.

  9. DLP™-based dichoptic vision test system

    Science.gov (United States)

    Woods, Russell L.; Apfelbaum, Henry L.; Peli, Eli

    2010-01-01

    It can be useful to present a different image to each of the two eyes while they cooperatively view the world. Such dichoptic presentation can occur in investigations of stereoscopic and binocular vision (e.g., strabismus, amblyopia) and vision rehabilitation in clinical and research settings. Various techniques have been used to construct dichoptic displays. The most common and most flexible modern technique uses liquid-crystal (LC) shutters. When used in combination with cathode ray tube (CRT) displays, there is often leakage of light from the image intended for one eye into the view of the other eye. Such interocular crosstalk is 14% even in our state-of-the-art CRT-based dichoptic system. While such crosstalk may have minimal impact on stereo movie or video game experiences, it can defeat clinical and research investigations. We use micromirror digital light processing (DLP™) technology to create a novel dichoptic visual display system with substantially lower interocular crosstalk (0.3%; the remaining crosstalk comes from the LC shutters). The DLP system normally uses a color wheel to display color images. Our approach is to disable the color wheel, synchronize the display directly to the computer's sync signal, allocate each of the three (former) color presentations to one or both eyes, and open and close the LC shutters in synchrony with those color events.
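Crosstalk figures like the 14% and 0.3% quoted above are commonly computed as leakage luminance relative to the intended-eye signal, both corrected for the display's black level. A sketch under that common definition (not necessarily the authors' exact formula; names are illustrative):

```python
def crosstalk_percent(leak_lum, signal_lum, black_lum=0.0):
    """Interocular crosstalk as a percentage: luminance leaking into the
    blocked eye's view relative to the intended eye's signal, both
    measured above the display black level (cd/m^2)."""
    return 100.0 * (leak_lum - black_lum) / (signal_lum - black_lum)
```

For example, 14 cd/m² leaking through against a 100 cd/m² signal (zero black level) gives 14% crosstalk.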

  10. 75 FR 71146 - In the Matter of Certain Machine Vision Software, Machine Vision Systems, and Products Containing...

    Science.gov (United States)

    2010-11-22

    ... COMMISSION In the Matter of Certain Machine Vision Software, Machine Vision Systems, and Products Containing..., and the sale within the United States after importation of certain machine vision software, machine vision systems, or products containing same by reason of infringement of certain claims of U.S. Patent...

  11. 75 FR 60478 - In the Matter of Certain Machine Vision Software, Machine Vision Systems, and Products Containing...

    Science.gov (United States)

    2010-09-30

    ... COMMISSION In the Matter of Certain Machine Vision Software, Machine Vision Systems, and Products Containing... importation of certain machine vision software, machine vision systems, or products containing same by reason... Soft'') of Japan; Fuji Machine Manufacturing Co., Ltd. of Japan and Fuji America Corporation of Vernon...

  12. Embedded Active Vision System Based on an FPGA Architecture

    Directory of Open Access Journals (Sweden)

    Pierre Chalimbaud

    2006-12-01

    Full Text Available In computer vision and more particularly in vision processing, the impressive evolution of algorithms and the emergence of new techniques dramatically increase algorithm complexity. In this paper, a novel FPGA-based architecture dedicated to active vision (and more precisely early vision is proposed. Active vision appears as an alternative approach to deal with artificial vision problems. The central idea is to take into account the perceptual aspects of visual tasks, inspired by biological vision systems. For this reason, we propose an original approach based on a system on programmable chip implemented in an FPGA connected to a CMOS imager and an inertial set. With such a structure based on reprogrammable devices, this system admits a high degree of versatility and allows the implementation of parallel image processing algorithms.

  13. Embedded Active Vision System Based on an FPGA Architecture

    Directory of Open Access Journals (Sweden)

    Chalimbaud Pierre

    2007-01-01

    Full Text Available In computer vision and more particularly in vision processing, the impressive evolution of algorithms and the emergence of new techniques dramatically increase algorithm complexity. In this paper, a novel FPGA-based architecture dedicated to active vision (and more precisely early vision is proposed. Active vision appears as an alternative approach to deal with artificial vision problems. The central idea is to take into account the perceptual aspects of visual tasks, inspired by biological vision systems. For this reason, we propose an original approach based on a system on programmable chip implemented in an FPGA connected to a CMOS imager and an inertial set. With such a structure based on reprogrammable devices, this system admits a high degree of versatility and allows the implementation of parallel image processing algorithms.

  14. Miniature synthetic-aperture radar system

    Science.gov (United States)

    Stockton, Wayne; Stromfors, Richard D.

    1990-11-01

    Loral Defense Systems-Arizona has developed a high-performance synthetic-aperture radar (SAR) for small aircraft and unmanned aerial vehicle (UAV) reconnaissance applications. This miniature radar, called Miniature Synthetic-Aperture Radar (MSAR), is packaged in a small volume and has low weight. It retains key features of large SAR systems, including high-resolution imaging and all-weather operation. The operating frequency of MSAR can optionally be selected to provide foliage penetration capability. Many imaging radar configurations can be derived using this baseline system. MSAR with a data link provides an attractive UAV sensor. MSAR with a real-time image formation processor is well suited to installations where onboard processing and immediate image analysis are required. The MSAR system provides high-resolution imaging for short-to-medium range reconnaissance applications.
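Range compression, a core step in any SAR image formation chain including miniature systems like this one, correlates the received echo with a replica of the transmitted pulse; the correlation peak marks the round-trip delay. This toy real-valued sketch (names illustrative, not MSAR's actual processing) shows the matched-filter idea:

```python
def matched_filter(rx, pulse):
    """Slide the transmitted pulse along the received signal and
    correlate; returns (index of peak response, full correlation)."""
    n = len(rx) - len(pulse) + 1
    out = [sum(rx[i + k] * pulse[k] for k in range(len(pulse)))
           for i in range(n)]
    return out.index(max(out)), out
```

Real SAR processors do this with complex chirps via FFTs, then follow with azimuth compression to synthesize the aperture.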

  15. Vision system in quality control automation

    Directory of Open Access Journals (Sweden)

    Kiran Ravi

    2018-01-01

Full Text Available Measurement of surface roughness is one of the quality control processes, usually carried out off line. The contact type of surface roughness measurement is commonly used in quality control, but the process consumes a lot of time and requires human interaction. In order to reduce or eliminate this non-value-added time, an effective quality inspection tool and automation of the process have to be utilized. An attempt has been made to automate the process by integrating a vision camera to capture an image of the component surface. The image processing technique has the advantage of analyzing a single captured image for multiple-area measurement. Hence, in-line quality control of each component's surface roughness measurement is ensured. The automation process involves component movement, image capturing, image processing, and decision making, using sensors, actuators and a microcontroller. The proposed in-line quality control of surface roughness with a vision system has been successfully developed. The designed automated system has fulfilled the objectives within the scope of the present work.
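The record's single-image, multiple-area roughness measurement can be illustrated with a toy sketch. This is not the paper's calibrated method: it treats grayscale intensity as a stand-in for surface height and computes an Ra-style (arithmetic-average) deviation per image zone; all names and values below are hypothetical.

```python
import numpy as np

def profile_ra(profile):
    """Arithmetic-average roughness Ra of a 1-D height profile:
    mean absolute deviation from the profile's mean line."""
    profile = np.asarray(profile, dtype=float)
    return float(np.mean(np.abs(profile - profile.mean())))

def image_roughness_map(gray, rows_per_zone=4):
    """Split a grayscale image into horizontal zones and report one
    Ra-like value per zone, illustrating the multiple-area measurement
    from a single captured image. Intensity is used as a height proxy."""
    gray = np.asarray(gray, dtype=float)
    zones = gray.shape[0] // rows_per_zone
    return [profile_ra(gray[i * rows_per_zone:(i + 1) * rows_per_zone].ravel())
            for i in range(zones)]

# Toy example: a perfectly smooth and a noisy synthetic surface patch.
smooth = np.full((8, 16), 128.0)
rng = np.random.default_rng(0)
rough = 128.0 + 20.0 * rng.standard_normal((8, 16))
print(image_roughness_map(smooth))  # all zones report zero deviation
```

A real system would first calibrate intensity against a contact-type reference before any accept/reject decision.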

  16. The system architecture for renewable synthetic fuels

    DEFF Research Database (Denmark)

    Ridjan, Iva

    To overcome and eventually eliminate the existing heavy fossil fuels in the transport sector, there is a need for new renewable fuels. This transition could lead to large capital costs for implementing the new solutions and a long time frame for establishing the new infrastructure unless a suitable...... infrastructure is present. The system integration of synthetic fuels will therefore depend on the existing infrastructure and the possibility of continuing its exploitation to minimize the costs and maximize the use of the current infrastructure in place. The production process includes different steps...... and production plants, so it is important to implement it in the best manner possible to ensure an efficient and flexible system. The poster will provide an overview of the steps involved in the production of synthetic fuel and possible solutions for the system architecture based on the current literature...

  17. Tools for designing industrial vision systems

    Science.gov (United States)

    Batchelor, Bruce G.

    1991-09-01

The cost of commissioning and installing a machine vision system is almost always dominated by that of designing it. Indeed, the cost of design and the shortage of skilled vision systems engineers are together likely to be two of the most important factors limiting the future adoption of this technology by manufacturing industry. The article describes several software tools that have been developed to make the design process easier, cheaper and faster. These include: (a) An extension of Prolog, called Prolog+, intended for prototyping intelligent image processing as well as for programming future target systems. (b) A knowledge-based program intended to assist an engineer in selecting a suitable lighting and image acquisition sub-system, called a Lighting Advisor. (c) A knowledge-based program which advises an engineer on how to select a suitable lens, called a Lens Advisor. (d) A knowledge-based program which assists an engineer in choosing a suitable camera, called a Camera Advisor. Ideally, items (b) to (d) should be integrated with Prolog+, so that a programmer has access to all of them in one unified working environment. Prolog+ is able to accept simple natural-language descriptions (i.e., in a simple sub-set of English) of the objects/scenes that are to be inspected and is able to generate a recognition program automatically. A range of inspection tasks is described in which Automated Visual Inspection has, to date, made no real impact. Amongst these is the inspection of products that are made in very small quantities. An electro-mechanical arrangement, called a Flexible Inspection Cell, is described. This is intended to provide a "general purpose" inspection facility for small-batch artifacts. Such a cell is controlled using Prolog+.

  18. Application of binocular vision system in nuclear power plant

    International Nuclear Information System (INIS)

    Chen Yulong; He Xuhong; Zhao Bingquan

    2002-01-01

Based on stereo disparity, a vision system for locating three-dimensional positions is described. The input device of the vision system is a digital camera, and special targets are used to improve the efficiency and accuracy of computer analysis. It provides a reliable and practical computer locating system for equipment maintenance in nuclear power plants
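The stereo-disparity localisation the record describes rests on the standard rectified-stereo relation Z = f·B/d. A minimal sketch, with hypothetical calibration values rather than anything from the paper:

```python
def stereo_point(xl, xr, y, f, baseline):
    """Recover (X, Y, Z) in the left-camera frame from a rectified
    stereo pair: xl, xr are the horizontal pixel coordinates of the
    same target in the left/right images, y the (shared) vertical one,
    f the focal length in pixels, baseline the camera separation."""
    d = xl - xr                       # disparity in pixels
    if d <= 0:
        raise ValueError("disparity must be positive for a point in front")
    Z = f * baseline / d              # depth falls off as 1/disparity
    X = xl * Z / f
    Y = y * Z / f
    return X, Y, Z

# Hypothetical calibration: 800 px focal length, 0.25 m baseline.
X, Y, Z = stereo_point(xl=420.0, xr=380.0, y=100.0, f=800.0, baseline=0.25)
print(round(Z, 3))  # a 40 px disparity maps to 5.0 m depth here
```

The special high-contrast targets mentioned in the abstract serve to make the `xl`/`xr` correspondences easy to extract reliably.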

  19. A Machine Vision System for Automatically Grading Hardwood Lumber - (Proceedings)

    Science.gov (United States)

    Richard W. Conners; Tai-Hoon Cho; Chong T. Ng; Thomas H. Drayer; Joe G. Tront; Philip A. Araman; Robert L. Brisbon

    1990-01-01

    Any automatic system for grading hardwood lumber can conceptually be divided into two components. One of these is a machine vision system for locating and identifying grading defects. The other is an automatic grading program that accepts as input the output of the machine vision system and, based on these data, determines the grade of a board. The progress that has...

  20. Hybrid Collaborative Stereo Vision System for Mobile Robots Formation

    OpenAIRE

    Flavio Roberti; Juan Marcos Toibero; Carlos Soria; Raquel Frizera Vassallo; Ricardo Carelli

    2009-01-01

This paper presents the use of a hybrid collaborative stereo vision system (3D-distributed visual sensing using different kinds of vision cameras) for the autonomous navigation of a wheeled robot team. A triangulation-based method is proposed for computing the 3D posture of an unknown object using the collaborative hybrid stereo vision system, in order to steer the robot team to a desired position relative to that object while maintaining a desired robot formation. Experimen...

  1. Hybrid Collaborative Stereo Vision System for Mobile Robots Formation

    Directory of Open Access Journals (Sweden)

    Flavio Roberti

    2010-02-01

Full Text Available This paper presents the use of a hybrid collaborative stereo vision system (3D-distributed visual sensing using different kinds of vision cameras) for the autonomous navigation of a wheeled robot team. A triangulation-based method is proposed for computing the 3D posture of an unknown object using the collaborative hybrid stereo vision system, in order to steer the robot team to a desired position relative to that object while maintaining a desired robot formation. Experimental results with real mobile robots are included to validate the proposed vision system.
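The triangulation-based 3D-posture computation mentioned in the abstract can be sketched, for a single point, as linear (DLT) triangulation from two calibrated views. The camera poses and the target point below are hypothetical; the paper's multi-camera formation control is not reproduced:

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one 3-D point seen by two
    calibrated cameras with 3x4 projection matrices P1, P2."""
    u1, v1 = uv1
    u2, v2 = uv2
    A = np.vstack([u1 * P1[2] - P1[0],
                   v1 * P1[2] - P1[1],
                   u2 * P2[2] - P2[0],
                   v2 * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # null vector of A in homogeneous coords
    return X[:3] / X[3]

def project(P, X):
    """Pinhole projection of a 3-D point to pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two hypothetical cameras: identity pose and a 1 m sideways shift.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
point = np.array([0.5, 0.2, 4.0])
rec = triangulate(P1, P2, project(P1, point), project(P2, point))
print(np.round(rec, 3))  # recovers the original point in the noise-free case
```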

  3. Nuclear bimodal new vision solar system missions

    International Nuclear Information System (INIS)

    Mondt, J.F.; Zubrin, R.M.

    1996-01-01

This paper presents an analysis of the potential mission capability using space reactor bimodal systems for planetary missions. Missions of interest include the main belt asteroids, Jupiter, Saturn, Neptune, and Pluto. The space reactor bimodal system, defined by an Air Force study for Earth orbital missions, provides 10 kWe power, 1000 N thrust, 850 s Isp, with a 1500 kg system mass. Trajectories to the planetary destinations were examined and optimal direct and gravity-assisted trajectories were selected. A conceptual design is defined for a spacecraft that uses the space reactor bimodal system for propulsion and power and is capable of performing the missions of interest. End-to-end mission conceptual designs for bimodal orbiter missions to Jupiter and Saturn are described. All missions considered use the Delta 3 class or Atlas 2AS launch vehicles. The space reactor bimodal power and propulsion system offers both new-vision "constellation"-type missions, in which the space reactor bimodal spacecraft acts as a carrier and communication spacecraft for a fleet of microspacecraft deployed at different scientific targets, and conventional missions with only a space reactor bimodal spacecraft and its science payload. copyright 1996 American Institute of Physics

  4. Studying, Teaching and Applying Sustainability Visions Using Systems Modeling

    Directory of Open Access Journals (Sweden)

    David M. Iwaniec

    2014-07-01

Full Text Available The objective of articulating sustainability visions through modeling is to enhance the outcomes and process of visioning in order to successfully move the system toward a desired state. Models emphasize approaches to develop visions that are viable and resilient and are crafted to adhere to sustainability principles. This approach is largely assembled from visioning processes (resulting in descriptions of desirable future states generated from stakeholder values and preferences) and participatory modeling processes (resulting in systems-based representations of future states co-produced by experts and stakeholders). Vision modeling is distinct from normative scenarios and backcasting processes in that the structure and function of the future desirable state is explicitly articulated as a systems model. Crafting, representing and evaluating the future desirable state as a systems model in participatory settings is intended to support compliance with sustainability visioning quality criteria (visionary, sustainable, systemic, coherent, plausible, tangible, relevant, nuanced, motivational and shared) in order to develop rigorous and operationalizable visions. We provide two empirical examples to demonstrate the incorporation of vision modeling in research practice and education settings. In both settings, vision modeling was used to develop, represent, simulate and evaluate future desirable states. This allowed participants to better identify, explore and scrutinize sustainability solutions.

  5. Intelligent Computer Vision System for Automated Classification

    International Nuclear Information System (INIS)

    Jordanov, Ivan; Georgieva, Antoniya

    2010-01-01

    In this paper we investigate an Intelligent Computer Vision System applied for recognition and classification of commercially available cork tiles. The system is capable of acquiring and processing gray images using several feature generation and analysis techniques. Its functionality includes image acquisition, feature extraction and preprocessing, and feature classification with neural networks (NN). We also discuss system test and validation results from the recognition and classification tasks. The system investigation also includes statistical feature processing (features number and dimensionality reduction techniques) and classifier design (NN architecture, target coding, learning complexity and performance, and training with our own metaheuristic optimization method). The NNs trained with our genetic low-discrepancy search method (GLPτS) for global optimisation demonstrated very good generalisation abilities. In our view, the reported testing success rate of up to 95% is due to several factors: combination of feature generation techniques; application of Analysis of Variance (ANOVA) and Principal Component Analysis (PCA), which appeared to be very efficient for preprocessing the data; and use of suitable NN design and learning method.
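The preprocessing step the record describes (dimensionality reduction before classification) can be sketched as PCA via SVD. For brevity this sketch substitutes a nearest-centroid classifier for the paper's neural network trained with GLPτS; the "texture features" are synthetic, hypothetical data:

```python
import numpy as np

def pca_fit(X, k):
    """Fit PCA via SVD: returns (mean, top-k principal axes as rows)."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def pca_transform(X, mu, axes):
    """Project centered data onto the retained principal axes."""
    return (X - mu) @ axes.T

def nearest_centroid(train_z, labels, query_z):
    """Classify each query by the closest class centroid in PCA space
    (a simple stand-in for the paper's neural-network classifier)."""
    classes = sorted(set(labels))
    cents = {c: train_z[np.array(labels) == c].mean(axis=0) for c in classes}
    return [min(classes, key=lambda c: np.linalg.norm(q - cents[c]))
            for q in query_z]

# Toy "texture feature" data for two hypothetical cork-tile classes.
rng = np.random.default_rng(1)
a = rng.normal([0, 0, 0, 0], 0.2, size=(20, 4))
b = rng.normal([2, 2, 2, 2], 0.2, size=(20, 4))
X = np.vstack([a, b])
y = ["A"] * 20 + ["B"] * 20
mu, axes = pca_fit(X, 2)                       # 4-D features -> 2-D
Z = pca_transform(X, mu, axes)
query = pca_transform(np.array([[1.9, 2.1, 2.0, 1.8]]), mu, axes)
pred = nearest_centroid(Z, y, query)
print(pred)  # ['B']
```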

  6. Technological process supervising using vision systems cooperating with the LabVIEW vision builder

    Science.gov (United States)

    Hryniewicz, P.; Banaś, W.; Gwiazda, A.; Foit, K.; Sękala, A.; Kost, G.

    2015-11-01

One of the most important tasks in the production process is to supervise its proper functioning. Lack of the required supervision over the production process can lead to incorrect manufacturing of the final element, to production line downtime, and hence to financial losses. The worst result is damage to the equipment involved in the manufacturing process. Engineers supervising the correctness of the production flow use a great range of sensors to support the supervision of a manufactured element. Vision systems are one of these sensor families. In recent years, thanks to the accelerated development of electronics as well as easier access to electronic products and attractive prices, they have become a cheap and universal type of sensor. These sensors detect practically all objects, regardless of their shape or even their state of matter; the only problems arise with transparent or mirror-like objects viewed from the wrong angle. By integrating the vision system with LabVIEW Vision and the LabVIEW Vision Builder, it is possible to determine not only the position of a given element but also its orientation relative to any point in the analyzed space. The paper presents an example of automated inspection of the manufacturing process in a production workcell using the vision supervising system. The aim of the work is to elaborate a vision system that could integrate different applications and devices used in different production systems to control the manufacturing process.

  7. Multivariate Analysis Techniques for Optimal Vision System Design

    DEFF Research Database (Denmark)

    Sharifzadeh, Sara

The present thesis considers optimization of the spectral vision systems used for quality inspection of food items. The relationship between food quality, vision-based techniques and spectral signatures is described. The vision instruments for food analysis as well as datasets of the food items...... and simplification of the design of practical vision systems....... used in this thesis are described. The methodological strategies are outlined, including sparse regression and pre-processing based on feature selection and extraction methods, supervised versus unsupervised analysis, and linear versus non-linear approaches. One supervised feature selection algorithm

  8. Vision Systems with the Human in the Loop

    Directory of Open Access Journals (Sweden)

    Bauckhage Christian

    2005-01-01

Full Text Available The emerging cognitive vision paradigm deals with vision systems that apply machine learning and automatic reasoning in order to learn from what they perceive. Cognitive vision systems can rate the relevance and consistency of newly acquired knowledge, they can adapt to their environment, and thus will exhibit high robustness. This contribution presents vision systems that aim at flexibility and robustness. One is tailored for content-based image retrieval; the others are cognitive vision systems that constitute prototypes of visual active memories which evaluate, gather, and integrate contextual knowledge for visual analysis. All three systems are designed to interact with human users. After discussing adaptive content-based image retrieval and object and action recognition in an office environment, the issue of assessing cognitive systems will be raised. Experiences from psychologically evaluated human-machine interactions will be reported and the promising potential of psychologically-based usability experiments will be stressed.

  9. 3-D Signal Processing in a Computer Vision System

    Science.gov (United States)

    Dongping Zhu; Richard W. Conners; Philip A. Araman

    1991-01-01

This paper discusses the problem of 3-dimensional image filtering in a computer vision system that would locate and identify internal structural failures. In particular, a 2-dimensional adaptive filter proposed by Unser has been extended to three dimensions. In conjunction with segmentation and labeling, the new filter has been used in the computer vision system to...

  10. Machine Vision Systems for Processing Hardwood Lumber and Logs

    Science.gov (United States)

    Philip A. Araman; Daniel L. Schmoldt; Tai-Hoon Cho; Dongping Zhu; Richard W. Conners; D. Earl Kline

    1992-01-01

    Machine vision and automated processing systems are under development at Virginia Tech University with support and cooperation from the USDA Forest Service. Our goals are to help U.S. hardwood producers automate, reduce costs, increase product volume and value recovery, and market higher value, more accurately graded and described products. Any vision system is...

  11. Early Cognitive Vision as a Frontend for Cognitive Systems

    DEFF Research Database (Denmark)

    Krüger, Norbert; Pugeault, Nicolas; Baseski, Emre

We discuss the need for an elaborated in-between stage bridging early vision and cognitive vision, which we call `Early Cognitive Vision' (ECV). This stage provides semantically rich, disambiguated and largely task-independent scene representations which can be used in many contexts. In addition......, the ECV stage is important for generalization processes across objects and actions. We exemplify this with a concrete realisation of an ECV system that has already been used in a variety of application domains.

  12. Physics Based Vision Systems for Robotic Manipulation

    Data.gov (United States)

    National Aeronautics and Space Administration — With the increase of robotic manipulation tasks (TA4.3), specifically dexterous manipulation tasks (TA4.3.2), more advanced computer vision algorithms will be...

  13. Mammalian Synthetic Biology: Engineering Biological Systems.

    Science.gov (United States)

    Black, Joshua B; Perez-Pinera, Pablo; Gersbach, Charles A

    2017-06-21

    The programming of new functions into mammalian cells has tremendous application in research and medicine. Continued improvements in the capacity to sequence and synthesize DNA have rapidly increased our understanding of mechanisms of gene function and regulation on a genome-wide scale and have expanded the set of genetic components available for programming cell biology. The invention of new research tools, including targetable DNA-binding systems such as CRISPR/Cas9 and sensor-actuator devices that can recognize and respond to diverse chemical, mechanical, and optical inputs, has enabled precise control of complex cellular behaviors at unprecedented spatial and temporal resolution. These tools have been critical for the expansion of synthetic biology techniques from prokaryotic and lower eukaryotic hosts to mammalian systems. Recent progress in the development of genome and epigenome editing tools and in the engineering of designer cells with programmable genetic circuits is expanding approaches to prevent, diagnose, and treat disease and to establish personalized theranostic strategies for next-generation medicines. This review summarizes the development of these enabling technologies and their application to transforming mammalian synthetic biology into a distinct field in research and medicine.

  14. Using Vision System Technologies to Enable Operational Improvements for Low Visibility Approach and Landing Operations

    Science.gov (United States)

    Kramer, Lynda J.; Ellis, Kyle K. E.; Bailey, Randall E.; Williams, Steven P.; Severance, Kurt; Le Vie, Lisa R.; Comstock, James R.

    2014-01-01

Flight deck-based vision systems, such as Synthetic and Enhanced Vision System (SEVS) technologies, have the potential to provide additional margins of safety for aircrew performance and enable the implementation of operational improvements for low visibility surface, arrival, and departure operations in the terminal environment with equivalent efficiency to visual operations. To achieve this potential, research is required for effective technology development and implementation based upon human factors design and regulatory guidance. This research supports the introduction and use of Synthetic Vision Systems and Enhanced Flight Vision Systems (SVS/EFVS) as advanced cockpit vision technologies in Next Generation Air Transportation System (NextGen) operations. Twelve air transport-rated crews participated in a motion-base simulation experiment to evaluate the use of SVS/EFVS in NextGen low visibility approach and landing operations. Three monochromatic, collimated head-up display (HUD) concepts (conventional HUD, SVS HUD, and EFVS HUD) and two color head-down primary flight display (PFD) concepts (conventional PFD, SVS PFD) were evaluated in a simulated NextGen Chicago O'Hare terminal environment. Additionally, the instrument approach type (no offset, 3 degree offset, 15 degree offset) was experimentally varied to test the efficacy of the HUD concepts for offset approach operations. The data showed that touchdown landing performance was excellent regardless of the SEVS concept or the type of offset instrument approach being flown. Subjective assessments of mental workload and situation awareness indicated that making offset approaches in low visibility conditions with an EFVS HUD or SVS HUD may be feasible.

  15. Airborne Use of Night Vision Systems

    Science.gov (United States)

    Mepham, S.

    1990-04-01

The Mission Management Department of the Royal Aerospace Establishment has won a Queen's Award for Technology, jointly with GEC Sensors, in recognition of innovation and success in the development and application of night vision technology for fixed-wing aircraft. This work has been carried out to satisfy the operational needs of the Royal Air Force. These are seen to be: - Operations in the NATO Central Region - To have a night as well as a day capability - To carry out low-level, high-speed penetration - To attack battlefield targets, especially groups of tanks - To meet these objectives at minimum cost. The most effective way to penetrate enemy defences is at low level, and survivability would be greatly enhanced by a first-pass attack. It is therefore most important that the pilot not only be able to fly at low level to the target but also be able to detect it in sufficient time to complete a successful attack. An analysis of the average operating conditions in Central Europe during winter clearly shows that high-speed, low-level attacks can only be made for about 20 per cent of the 24 hours. Extending this into good night conditions raises the figure to 60 per cent. Whilst it is true that this is for winter conditions and that in summer the situation is better, the overall advantage to be gained is clear. If our aircraft do not have this capability, the potential for the enemy to advance his troops and armour without hindrance for considerable periods is all too obvious. There are several solutions to providing such a capability. The one chosen for Tornado GR1 is to use Terrain Following Radar (TFR). This system provides a complete 24-hour capability, but it has two main disadvantages. First, it is an active system, which means it can be jammed or homed onto, and it is chiefly useful in attacking pre-planned targets. Second, it is an expensive system, which precludes fitting it to more than a small number of aircraft.

  16. Building Artificial Vision Systems with Machine Learning

    Energy Technology Data Exchange (ETDEWEB)

    LeCun, Yann [New York University

    2011-02-23

    Three questions pose the next challenge for Artificial Intelligence (AI), robotics, and neuroscience. How do we learn perception (e.g. vision)? How do we learn representations of the perceptual world? How do we learn visual categories from just a few examples?

  17. Parallel Architectures and Parallel Algorithms for Integrated Vision Systems. Ph.D. Thesis

    Science.gov (United States)

    Choudhary, Alok Nidhi

    1989-01-01

Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is a system that uses vision algorithms from all levels of processing for a high-level application (e.g., object recognition). An IVS normally involves algorithms from low-level, intermediate-level, and high-level vision. Designing parallel architectures for vision systems is of tremendous interest to researchers. Several issues in parallel architectures and parallel algorithms for integrated vision systems are addressed.

  18. Grasping Unknown Objects in an Early Cognitive Vision System

    DEFF Research Database (Denmark)

    Popovic, Mila

    2011-01-01

Grasping of unknown objects presents an important and challenging part of robot manipulation. The growing area of service robotics depends upon the ability of robots to autonomously grasp and manipulate a wide range of objects in everyday environments. Simple, non task-specific grasps of unknown ...... and comparing vision-based grasping methods, and the creation of algorithms for bootstrapping a process of acquiring world understanding for artificial cognitive agents....... presents a system for robotic grasping of unknown objects using stereo vision. Grasps are defined based on contour and surface information provided by the Early Cognitive Vision System, that organizes visual information into a biologically motivated hierarchical representation. The contributions...... of the thesis are: the extension of the Early Cognitive Vision representation with a new type of feature hierarchy in the texture domain, the definition and evaluation of contour based grasping methods, the definition and evaluation of surface based grasping methods, the definition of a benchmark for testing

  19. Eye Vision Testing System and Eyewear Using Micromachines

    Directory of Open Access Journals (Sweden)

    Nabeel A. Riza

    2015-11-01

Full Text Available Proposed is a novel eye vision testing system based on micromachines that uses micro-optic, micromechanic, and microelectronic technologies. The micromachines include a programmable micro-optic lens and aperture control devices, pico-projectors, Radio Frequency (RF), optical wireless communication and control links, and energy harvesting and storage devices with remote wireless energy transfer capabilities. The portable lightweight system can measure eye refractive powers, optimize light conditions for the eye under testing, conduct color-blindness tests, and implement eye strain relief and eye muscle exercises via time-sequenced imaging. A basic eye vision test system is built in the laboratory for near-sighted (myopic) vision spherical lens refractive error correction. Refractive error corrections from zero up to −5.0 Diopters and −2.0 Diopters are experimentally demonstrated using the Electronic-Lens (E-Lens) and aperture control methods, respectively. The proposed portable eye vision test system is suited for children's eye tests and developing-world eye centers where technical expertise may be limited. The design of a novel low-cost human vision corrective eyewear is also presented based on the proposed aperture control concept. Given its simple and economical design, significant impact can be created for humans with vision problems in the under-developed world.

  20. Latency in Visionic Systems: Test Methods and Requirements

    Science.gov (United States)

    Bailey, Randall E.; Arthur, J. J., III; Williams, Steven P.; Kramer, Lynda J.

    2005-01-01

    A visionics device creates a pictorial representation of the external scene for the pilot. The ultimate objective of these systems may be to electronically generate a form of Visual Meteorological Conditions (VMC) to eliminate weather or time-of-day as an operational constraint and provide enhancement over actual visual conditions where eye-limiting resolution may be a limiting factor. Empirical evidence has shown that the total system delays or latencies including the imaging sensors and display systems, can critically degrade their utility, usability, and acceptability. Definitions and measurement techniques are offered herein as common test and evaluation methods for latency testing in visionics device applications. Based upon available data, very different latency requirements are indicated based upon the piloting task, the role in which the visionics device is used in this task, and the characteristics of the visionics cockpit display device including its resolution, field-of-regard, and field-of-view. The least stringent latency requirements will involve Head-Up Display (HUD) applications, where the visionics imagery provides situational information as a supplement to symbology guidance and command information. Conversely, the visionics system latency requirement for a large field-of-view Head-Worn Display application, providing a Virtual-VMC capability from which the pilot will derive visual guidance, will be the most stringent, having a value as low as 20 msec.
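A timestamp-based harness is one common way to measure the end-to-end delays this record discusses. The sketch below is illustrative only: `fake_pipeline` stands in for the sensor-to-display chain, and a real test would timestamp against the physical display (e.g., with a photodiode), not a function return:

```python
import time
from statistics import mean

def measure_latency(pipeline, n_frames=50):
    """Timestamp each synthetic frame at 'capture' using a monotonic
    clock, run it through the imaging/display pipeline, and record the
    elapsed time at 'display'. Returns per-frame latencies in ms."""
    latencies = []
    for frame_id in range(n_frames):
        t_capture = time.monotonic()
        pipeline(frame_id)            # stands in for sensor->render->display
        latencies.append((time.monotonic() - t_capture) * 1000.0)
    return latencies

def fake_pipeline(frame_id):
    """Hypothetical pipeline stage with ~2 ms of processing delay."""
    time.sleep(0.002)

lat = measure_latency(fake_pipeline, n_frames=10)
print(f"mean latency {mean(lat):.1f} ms, worst {max(lat):.1f} ms")
```

Reporting worst-case as well as mean latency matters here, since the abstract's requirements (down to 20 ms for head-worn displays) are bounds, not averages.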

  1. A SYSTEMIC VISION OF BIOLOGY: OVERCOMING LINEARITY

    Directory of Open Access Journals (Sweden)

    M. Mayer

    2005-07-01

Full Text Available Many authors have proposed that contextualization of reality is necessary to teach Biology, emphasizing students' social and economic realities. However, contextualization means more than this; it is related to working with different kinds of phenomena and/or objects which enable the expression of scientific concepts. Thus, contextualization allows the integration of different contents. Under this perspective, the objectives of this work were to articulate different biology concepts in order to develop a systemic vision of biology; to establish relationships with other areas of knowledge; and to make concrete the cell molecular structure and organization as well as their implications for living beings' environment, using contextualization. The methodology adopted in this work was based on three aspects: interdisciplinarity, contextualization and development of competences, using energy, its flux and transformations, as a thematic axis and an approach which allowed the interconnection between different situations involving these concepts. The activities developed were: 1. a dialectic exercise, involving a movement between micro- and macroscopic aspects, using questions and activities supported by alternative materials (such as springs and candles) on energy, its forms, transformations and implications in the biological domain (microscopic concepts); 2. construction of molecular models, approaching the concepts of atom, chemical bonds and bond energy in molecules; 3. observations carried out in the "Manguezal" (mangrove swamp) ecosystem (Itapissuma, PE), used to work on macroscopic concepts (such as diversity and classification of plants and animals) concerning energy flow through food chains and webs. A photographic record of all activities along the course plus texts

  2. Machine vision systems using machine learning for industrial product inspection

    Science.gov (United States)

    Lu, Yi; Chen, Tie Q.; Chen, Jie; Zhang, Jian; Tisler, Anthony

    2002-02-01

Machine vision inspection requires efficient processing time and accurate results. In this paper, we present a machine vision inspection architecture, SMV (Smart Machine Vision). SMV decomposes a machine vision inspection problem into two stages, Learning Inspection Features (LIF) and On-Line Inspection (OLI). The LIF stage is designed to learn visual inspection features from design data and/or from inspection products. During the OLI stage, the inspection system uses the knowledge learnt by the LIF component to inspect the visual features of products. In this paper we present two machine vision inspection systems developed under the SMV architecture for two different types of products: Printed Circuit Board (PCB) and Vacuum Fluorescent Display (VFD) boards. In the VFD board inspection system, the LIF component learns inspection features from a VFD board and its display patterns. In the PCB inspection system, the LIF learns the inspection features from the CAD file of a PCB board. In both systems, the LIF component also incorporates interactive learning to make the inspection system more powerful and efficient. The VFD system has been deployed successfully in three different manufacturing companies, and the PCB inspection system is in the process of being deployed in a manufacturing plant.
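The two-stage learn-then-inspect split described above can be caricatured in a few lines: a learning stage estimates per-pixel tolerances from known-good samples, and an on-line stage flags deviations. This is a hypothetical toy, not the SMV implementation; the tolerance rule and all names are invented:

```python
import numpy as np

class SmartInspector:
    """Toy two-stage inspector in the spirit of the LIF/OLI split:
    learn_features() plays the learning stage, inspect() the on-line stage."""

    def learn_features(self, good_samples):
        """Learning stage: per-pixel mean and a hypothetical tolerance
        band estimated from known-good images."""
        stack = np.stack(good_samples).astype(float)
        self.mean = stack.mean(axis=0)
        self.tol = 4.0 * stack.std(axis=0) + 2.0

    def inspect(self, image):
        """On-line stage: pass iff every pixel lies inside the learned
        band; also report how many pixels violated it."""
        defects = np.abs(np.asarray(image, dtype=float) - self.mean) > self.tol
        return not defects.any(), int(defects.sum())

# Synthetic data: 30 known-good 8x8 "boards", then one good, one defective.
rng = np.random.default_rng(2)
good = [100 + rng.normal(0, 1, (8, 8)) for _ in range(30)]
insp = SmartInspector()
insp.learn_features(good)
ok_board = 100 + rng.normal(0, 1, (8, 8))
bad_board = ok_board.copy()
bad_board[2, 3] += 50                      # injected bright defect
print(insp.inspect(ok_board)[0], insp.inspect(bad_board)[0])
```

The interactive-learning idea in the abstract would correspond to refining `mean`/`tol` from operator feedback rather than from good samples alone.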

  3. Visual Peoplemeter: A Vision-based Television Audience Measurement System

    Directory of Open Access Journals (Sweden)

    SKELIN, A. K.

    2014-11-01

    Full Text Available Visual peoplemeter is a vision-based measurement system that objectively evaluates the attentive behavior for TV audience rating, thus offering solution to some of drawbacks of current manual logging peoplemeters. In this paper, some limitations of current audience measurement system are reviewed and a novel vision-based system aiming at passive metering of viewers is prototyped. The system uses camera mounted on a television as a sensing modality and applies advanced computer vision algorithms to detect and track a person, and to recognize attentional states. Feasibility of the system is evaluated on a secondary dataset. The results show that the proposed system can analyze viewer's attentive behavior, therefore enabling passive estimates of relevant audience measurement categories.

  4. Exploration of a Vision for Actor Database Systems

    DEFF Research Database (Denmark)

    Shah, Vivek

    of the outlined vision, a new programming model named Reactors has been designed to enrich classic relational database programming models with logical actor programming constructs. To support the reactor programming model, a high-performance in-memory multi-core OLTP database system named REACTDB has been built...... of these services. Existing popular approaches to building these services either use an in-memory database system or an actor runtime. We observe that these approaches have complementary strengths and weaknesses. In this dissertation, we propose the integration of actor programming models in database systems....... In doing so, we lay down a vision for a new class of systems called actor database systems. To explore this vision, this dissertation crystallizes the notion of an actor database system by defining its feature set in light of current application and hardware trends. In order to explore the viability...

  5. Machine vision system for online wholesomeness inspection of poultry carcasses

    Science.gov (United States)

    A line-scan machine vision system and multispectral inspection algorithm were developed and evaluated for differentiation of wholesome and systemically diseased chickens on a high-speed processing line. The inspection system acquires line-scan images of chicken carcasses on a 140 bird-per-minute pro...

  6. Machine vision system for online inspection of freshly slaughtered chickens

    Science.gov (United States)

    A machine vision system was developed and evaluated for the automation of online inspection to differentiate freshly slaughtered wholesome chickens from systemically diseased chickens. The system consisted of an electron-multiplying charge-coupled-device camera used with an imaging spectrograph and ...

  7. Genome modularity and synthetic biology: Engineering systems.

    Science.gov (United States)

    Mol, Milsee; Kabra, Ritika; Singh, Shailza

    2018-01-01

    Whole genome sequencing projects running in various laboratories around the world have generated immense data. A systematic phylogenetic analysis of these data shows that genome complexity decreases as genomes evolve, due to their modular nature. This modularity can be harnessed to minimize the genome further, reducing it to the bare minimum of essential genes. A reduced modular genome can fuel progress in the area of synthetic biology by providing a ready-to-use plug-and-play chassis. Advances in gene editing technology, such as the use of tailor-made synthetic transcription factors, will further enhance the availability of synthetic devices to be applied in the fields of environment, agriculture and health. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. A robotic vision system to measure tree traits

    Science.gov (United States)

    The autonomous measurement of tree traits, such as branching structure, branch diameters, branch lengths, and branch angles, is required for tasks such as robotic pruning of trees as well as structural phenotyping. We propose a robotic vision system called the Robotic System for Tree Shape Estimati...

  9. A Layered Active Memory Architecture for Cognitive Vision Systems

    OpenAIRE

    Kolonias, Ilias; Christmas, William; Kittler, Josef

    2007-01-01

    Recognising actions and objects from video material has attracted growing research attention and given rise to important applications. However, injecting cognitive capabilities into computer vision systems requires an architecture more elaborate than the traditional signal processing paradigm for information processing. Inspired by biological cognitive systems, we present a memory architecture enabling cognitive processes (such as selecting the processes required for scene understanding, laye...

  10. Future Automated Rough Mills Hinge on Vision Systems

    Science.gov (United States)

    Philip A. Araman

    1996-01-01

    The backbone behind major changes to present and future rough mills in dimension, furniture, cabinet or millwork facilities will be computer vision systems. Because of the wide variety of products and the quality of parts produced, the scanning systems and rough mills will vary greatly. The scanners will vary in type. For many complicated applications, multiple scanner...

  11. Advanced robot vision system for nuclear power plants

    International Nuclear Information System (INIS)

    Onoguchi, Kazunori; Kawamura, Atsuro; Nakayama, Ryoichi.

    1991-01-01

    We have developed a robot vision system for advanced robots used in nuclear power plants, under a contract with the Agency of Industrial Science and Technology of the Ministry of International Trade and Industry. This work is part of the large-scale 'advanced robot technology' project. The robot vision system consists of self-location measurement, obstacle detection, and object recognition subsystems, which are activated by a total control subsystem. This paper presents details of these subsystems and the experimental results obtained. (author)

  12. Machine-Vision Systems Selection for Agricultural Vehicles: A Guide

    Directory of Open Access Journals (Sweden)

    Gonzalo Pajares

    2016-11-01

    Full Text Available Machine vision systems are becoming increasingly common onboard agricultural vehicles (autonomous and non-autonomous) for different tasks. This paper provides guidelines for selecting machine-vision systems for optimum performance, considering the adverse conditions of these outdoor environments, with high variability in illumination, irregular terrain conditions or different plant growth states, among others. In this regard, three main topics have been addressed for the best selection: (a) spectral bands (visible and infrared); (b) imaging sensors and optical systems (including intrinsic parameters); and (c) geometric visual system arrangement (considering extrinsic parameters and stereovision systems). A general overview, with detailed description and technical support, is provided for each topic, with illustrative examples focused on specific applications in agriculture, although they could be applied in contexts other than agricultural. A case study is provided as a result of research in the RHEA (Robot Fleets for Highly Effective Agriculture and Forestry Management) project for effective weed control in maize fields (wide-row crops), funded by the European Union, in which the machine vision system onboard the autonomous vehicles was the most relevant part of the full perception system. Details and results about crop row detection, weed patch identification, autonomous vehicle guidance and obstacle detection are provided together with a review of methods and approaches on these topics.

  13. From biokinematics to a robotic active vision system.

    Science.gov (United States)

    Barzilay, Ouriel; Zelnik-Manor, Lihi; Gutfreund, Yoram; Wagner, Hermann; Wolf, Alon

    2017-09-21

    Barn owls move their heads in very particular motions, compensating for the quasi-immovability of their eyes. These efficient predators often perform peering side-to-side head motions when scanning their surroundings and seeking prey. In this work, we use the head movements of barn owls as a model to bridge between biological active vision and machine vision. The biomotions are measured and used to actuate a specially built robot equipped with a depth camera for scanning. We hypothesize that the biomotions improve scan accuracy of static objects. Our experiments show that barn owl biomotion-based trajectories consistently improve scan accuracy when compared to intuitive scanning motions. This constitutes proof-of-concept evidence that the vision of robotic systems can be enhanced by bio-inspired viewpoint manipulation. Such biomimetic scanning systems can have many applications, e.g. manufacturing inspection or in autonomous robots.

  14. A lightweight, inexpensive robotic system for insect vision.

    Science.gov (United States)

    Sabo, Chelsea; Chisholm, Robert; Petterson, Adam; Cope, Alex

    2017-09-01

    Designing hardware for miniaturized robotics which mimics the capabilities of flying insects is of interest, because they share similar constraints (i.e. small size, low weight, and low energy consumption). Research in this area aims to enable robots with similarly efficient flight and cognitive abilities. Visual processing is important to flying insects' impressive flight capabilities, but currently, embodiment of insect-like visual systems is limited by the hardware systems available. Suitable hardware is either prohibitively expensive, difficult to reproduce, unable to accurately simulate insect vision characteristics, and/or too heavy for small robotic platforms. These limitations hamper the development of platforms for embodiment, which in turn hampers progress in understanding how biological systems fundamentally work. To address this gap, this paper proposes an inexpensive, lightweight robotic system for modelling insect vision. The system is mounted and tested on a robotic platform for mobile applications, and then the camera and insect vision models are evaluated. We analyse the potential of the system for use in embodiment of higher-level visual processes (i.e. motion detection) and also for development of vision-based navigation for robotics in general. Optic flow from sample camera data is calculated and compared to a perfect, simulated bee world, showing an excellent resemblance. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  15. Synthetic Biology: Engineering Living Systems from Biophysical Principles.

    Science.gov (United States)

    Bartley, Bryan A; Kim, Kyung; Medley, J Kyle; Sauro, Herbert M

    2017-03-28

    Synthetic biology was founded as a biophysical discipline that sought explanations for the origins of life from chemical and physical first principles. Modern synthetic biology has been reinvented as an engineering discipline to design new organisms as well as to better understand fundamental biological mechanisms. However, success is still largely limited to the laboratory and transformative applications of synthetic biology are still in their infancy. Here, we review six principles of living systems and how they compare and contrast with engineered systems. We cite specific examples from the synthetic biology literature that illustrate these principles and speculate on their implications for further study. To fully realize the promise of synthetic biology, we must be aware of life's unique properties. Copyright © 2017 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  16. Vision systems for manned and robotic ground vehicles

    Science.gov (United States)

    Sanders-Reed, John N.; Koon, Phillip L.

    2010-04-01

    A Distributed Aperture Vision System for ground vehicles is described. An overview of the hardware including sensor pod, processor, video compression, and displays is provided. This includes a discussion of the choice between an integrated sensor pod and individually mounted sensors, open architecture design, and latency issues as well as flat panel versus head mounted displays. This technology is applied to various ground vehicle scenarios, including closed-hatch operations (operator in the vehicle), remote operator tele-operation, and supervised autonomy for multi-vehicle unmanned convoys. In addition, remote vision for automatic perimeter surveillance using autonomous vehicles and automatic detection algorithms is demonstrated.

  17. Vision Aided State Estimation for Helicopter Slung Load System

    DEFF Research Database (Denmark)

    Bisgaard, Morten; Bendtsen, Jan Dimon; la Cour-Harbo, Anders

    2007-01-01

    This paper presents the design and verification of a state estimator for a helicopter-based slung load system. The estimator is designed to augment the IMU-driven estimator found in many helicopter UAVs and uses vision-based updates only. The process model used for the estimator is a simple 4 st...

  18. Computer Vision Systems for Hardwood Logs and Lumber

    Science.gov (United States)

    Philip A. Araman; Tai-Hoon Cho; D. Zhu; R. Conners

    1991-01-01

    Computer vision systems being developed at Virginia Tech University with the support and cooperation from the U.S. Forest Service are presented. Researchers at Michigan State University, West Virginia University, and Mississippi State University are also members of the research team working on various parts of this research. Our goals are to help U.S. hardwood...

  19. A vision based row detection system for sugar beet

    NARCIS (Netherlands)

    Bakker, T.; Wouters, H.; Asselt, van C.J.; Bontsema, J.; Tang, L.; Müller, J.; Straten, van G.

    2008-01-01

    One way of guiding autonomous vehicles through the field is using a vision based row detection system. A new approach for row recognition is presented which is based on grey-scale Hough transform on intelligently merged images resulting in a considerable improvement of the speed of image processing.
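The abstract names a grey-scale Hough transform for row recognition but gives no implementation detail. Below is a minimal, illustrative sketch of such an accumulator, in which every pixel votes with a weight equal to its intensity so that bright crop-row pixels dominate without a prior binarisation step; the function name and parameters are our own assumptions, not from the paper.

```python
import numpy as np

def grey_hough_lines(img, n_theta=180, n_rho=200):
    """Grey-scale Hough transform: each pixel votes for all (rho, theta)
    lines through it, weighted by its intensity, so bright row pixels
    dominate the accumulator without a prior thresholding/binarisation."""
    h, w = img.shape
    diag = np.hypot(h, w)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rhos = np.linspace(-diag, diag, n_rho)
    acc = np.zeros((n_rho, n_theta))
    ys, xs = np.nonzero(img > 0)          # only non-black pixels vote
    weights = img[ys, xs]
    for t_idx, theta in enumerate(thetas):
        rho_vals = xs * np.cos(theta) + ys * np.sin(theta)
        r_idx = np.clip(np.searchsorted(rhos, rho_vals), 0, n_rho - 1)
        np.add.at(acc, (r_idx, t_idx), weights)  # weighted votes
    return acc, rhos, thetas
```

The peak of the accumulator then gives the dominant row direction; a real row-detection system would extract several peaks, one per crop row.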

  20. A Novel Binocular Vision System for Wearable Devices.

    Science.gov (United States)

    Zhai, Haitian; Li, Hui; Bai, Yicheng; Jia, Wenyan; Sun, Mingui

    2014-04-25

    We present a novel binocular imaging system for wearable devices that incorporates biological knowledge of the human eyes. Unlike the camera systems in smartphones, two fish-eye lenses with a larger angle of view are used, so the visual field of the new system is larger and the central resolution of output images is higher. This design leads to more effective image acquisition, facilitating computer vision tasks such as target recognition, navigation and object tracking.

  1. Development of a machine vision guidance system for automated assembly of space structures

    Science.gov (United States)

    Cooper, Eric G.; Sydow, P. Daniel

    1992-01-01

    The topics are presented in viewgraph form and include: automated structural assembly robot vision; machine vision requirements; vision targets and hardware; reflective efficiency; target identification; pose estimation algorithms; triangle constraints; truss node with joint receptacle targets; end-effector mounted camera and light assembly; vision system results from optical bench tests; and future work.

  2. Programming Morphogenesis through Systems and Synthetic Biology.

    Science.gov (United States)

    Velazquez, Jeremy J; Su, Emily; Cahan, Patrick; Ebrahimkhani, Mo R

    2018-04-01

    Mammalian tissue development is an intricate, spatiotemporal process of self-organization that emerges from gene regulatory networks of differentiating stem cells. A major goal in stem cell biology is to gain a sufficient understanding of gene regulatory networks and cell-cell interactions to enable the reliable and robust engineering of morphogenesis. Here, we review advances in synthetic biology, single cell genomics, and multiscale modeling, which, when synthesized, provide a framework to achieve the ambitious goal of programming morphogenesis in complex tissues and organoids. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Accurate Localization of Communicant Vehicles using GPS and Vision Systems

    Directory of Open Access Journals (Sweden)

    Georges CHALLITA

    2009-07-01

    Full Text Available The new generation of ADAS systems based on cooperation between vehicles can offer serious perspectives for road security. Inter-vehicle cooperation is made possible thanks to the revolution in wireless mobile ad hoc networks. In this paper, we develop a system that minimizes the imprecision of the GPS used for car tracking, based on the data given by the GPS (the coordinates and speed) in addition to vision data collected from the onboard system in the vehicle (camera and processor). Localization information can be exchanged between the vehicles through a wireless communication device. The system adopts the Monte Carlo method, i.e. a particle filter, for the treatment of the GPS data and vision data. An experimental study of this system is performed on our fleet of experimental communicating vehicles.
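The abstract invokes the Monte Carlo method (particle filter) without implementation detail. The following is a minimal 1-D sketch of one predict/update/resample cycle, treating the GPS reading as a Gaussian observation; all names, the motion model, and the noise parameters are illustrative assumptions, not taken from the paper.

```python
import math
import random

def particle_filter_step(particles, gps_meas, gps_sigma, motion, motion_sigma):
    """One predict/update/resample cycle of a 1-D particle filter.

    particles: list of position hypotheses; gps_meas: noisy GPS reading;
    motion: odometry displacement since the last step.
    """
    # Predict: propagate each particle through the motion model plus noise.
    predicted = [p + motion + random.gauss(0.0, motion_sigma) for p in particles]
    # Update: weight each particle by the Gaussian likelihood of the GPS reading.
    weights = [math.exp(-0.5 * ((p - gps_meas) / gps_sigma) ** 2) for p in predicted]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Resample: draw a new particle set in proportion to the weights.
    return random.choices(predicted, weights=weights, k=len(particles))
```

In a vehicle-localization setting like the one described, the vision data would enter as a second likelihood term in the update step; here only the GPS term is shown for brevity.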

  4. Visual Advantage of Enhanced Flight Vision System During NextGen Flight Test Evaluation

    Science.gov (United States)

    Kramer, Lynda J.; Harrison, Stephanie J.; Bailey, Randall E.; Shelton, Kevin J.; Ellis, Kyle K.

    2014-01-01

    Synthetic Vision Systems and Enhanced Flight Vision System (SVS/EFVS) technologies have the potential to provide additional margins of safety for aircrew performance and enable operational improvements for low visibility operations in the terminal area environment. Simulation and flight tests were jointly sponsored by NASA's Aviation Safety Program, Vehicle Systems Safety Technology project and the Federal Aviation Administration (FAA) to evaluate potential safety and operational benefits of SVS/EFVS technologies in low visibility Next Generation Air Transportation System (NextGen) operations. The flight tests were conducted by a team of Honeywell, Gulfstream Aerospace Corporation and NASA personnel with the goal of obtaining pilot-in-the-loop test data for flight validation, verification, and demonstration of selected SVS/EFVS operational and system-level performance capabilities. Nine test flights were flown in Gulfstream's G450 flight test aircraft outfitted with the SVS/EFVS technologies under low visibility instrument meteorological conditions. Evaluation pilots flew 108 approaches in low visibility weather conditions (600 feet to 3600 feet reported visibility) under different obscurants (mist, fog, drizzle fog, frozen fog) and sky cover (broken, overcast). Flight test videos were evaluated at three different altitudes (decision altitude, 100 feet radar altitude, and touchdown) to determine the visual advantage afforded to the pilot using the EFVS/Forward-Looking InfraRed (FLIR) imagery compared to natural vision. Results indicate the EFVS provided a visual advantage of two to three times over that of the out-the-window (OTW) view. The EFVS allowed pilots to view the runway environment, specifically runway lights, before they would be able to see it OTW with natural vision.

  5. Synthetic

    Directory of Open Access Journals (Sweden)

    Anna Maria Manferdini

    2010-06-01

    Full Text Available Traditionally materials have been associated with a series of physical properties that can be used as inputs to production and manufacturing. Recently we witnessed an interest in materials considered not only as ‘true matter’, but also as new breeds where geometry, texture, tooling and finish are able to provoke new sensations when they are applied to a substance. These artificial materials can be described as synthetic because they are the outcome of various qualities that are not necessarily true to the original matter, but they are the combination of two or more parts, whether by design or by natural processes. The aim of this paper is to investigate the potential of architectural surfaces to produce effects through the invention of new breeds of artificial matter, using micro-scale details derived from Nature as an inspiration.

  6. Adaptive fuzzy system for 3-D vision

    Science.gov (United States)

    Mitra, Sunanda

    1993-01-01

    An adaptive fuzzy system using the concept of the Adaptive Resonance Theory (ART) type neural network architecture and incorporating fuzzy c-means (FCM) system equations for reclassification of cluster centers was developed. The Adaptive Fuzzy Leader Clustering (AFLC) architecture is a hybrid neural-fuzzy system which learns on-line in a stable and efficient manner. The system uses a control structure similar to that found in the Adaptive Resonance Theory (ART-1) network to identify the cluster centers initially. The initial classification of an input takes place in a two stage process; a simple competitive stage and a distance metric comparison stage. The cluster prototypes are then incrementally updated by relocating the centroid positions from Fuzzy c-Means (FCM) system equations for the centroids and the membership values. The operational characteristics of AFLC and the critical parameters involved in its operation are discussed. The performance of the AFLC algorithm is presented through application of the algorithm to the Anderson Iris data, and laser-luminescent fingerprint image data. The AFLC algorithm successfully classifies features extracted from real data, discrete or continuous, indicating the potential strength of this new clustering algorithm in analyzing complex data sets. The hybrid neuro-fuzzy AFLC algorithm will enhance analysis of a number of difficult recognition and control problems involved with Tethered Satellite Systems and on-orbit space shuttle attitude controller.
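The AFLC centroid relocation described above is based on the standard fuzzy c-means equations. As a minimal sketch, one FCM iteration (membership update followed by centroid relocation) can be written as below, assuming the usual fuzzifier m and Euclidean distances; this illustrates only the FCM step, not the full ART-like AFLC control structure.

```python
import numpy as np

def fcm_step(X, centers, m=2.0):
    """One fuzzy c-means iteration: update memberships, then relocate
    centroids as membership-weighted means (the relocation step AFLC reuses).

    X: (n, d) data; centers: (c, d) current prototypes; m: fuzzifier.
    """
    # Distances from every point to every center, shape (n, c).
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    d = np.fmax(d, 1e-12)                         # guard division by zero
    # Membership update from the standard FCM equations.
    inv = d ** (-2.0 / (m - 1.0))
    u = inv / inv.sum(axis=1, keepdims=True)      # memberships sum to 1 per point
    # Centroid relocation as membership-weighted means.
    um = u ** m
    new_centers = (um.T @ X) / um.sum(axis=0)[:, None]
    return u, new_centers
```

Iterating this step until the centers stop moving yields the converged fuzzy partition.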

  7. Robust adaptive optics systems for vision science

    Science.gov (United States)

    Burns, S. A.; de Castro, A.; Sawides, L.; Luo, T.; Sapoznik, K.

    2018-02-01

    Adaptive Optics (AO) is of growing importance for understanding the impact of retinal and systemic diseases on the retina. While AO retinal imaging in healthy eyes is now routine, AO imaging in older eyes and eyes with optical changes to the anterior eye can be difficult and requires a control and imaging system that is resilient when there is scattering and occlusion from the cornea and lens, as well as in the presence of irregular and small pupils. Our AO retinal imaging system combines evaluation of local image quality across the pupil with spatially programmable detection. The wavefront control system uses a woofer-tweeter approach, combining an electromagnetic mirror and a MEMS mirror with a single Shack-Hartmann (SH) sensor. The SH sensor samples an 8 mm exit pupil, and the subject is aligned to a region within this larger system pupil using a chin and forehead rest. A spot quality metric is calculated in real time for each lenslet. Individual lenslets that do not meet the quality metric are eliminated from the processing. Mirror shapes are smoothed outside the region of wavefront control when pupils are small. The system allows imaging even with smaller irregular pupils; however, because the depth of field increases under these conditions, sectioning performance decreases. A retinal conjugate micromirror array selectively directs mid-range scatter to additional detectors. This improves detection of retinal capillaries even when the confocal image, which includes both photoreceptors and blood vessels, has poorer image quality.

  8. Vision System for Relative Motion Estimation from Optical Flow

    Directory of Open Access Journals (Sweden)

    Sergey M. Sokolov

    2010-08-01

    Full Text Available In recent years there has been increasing interest in different methods of motion analysis based on visual data acquisition. Vision systems intended to obtain quantitative data regarding motion in real time are especially in demand. This paper discusses vision systems that provide information on relative object motion in real time. It is shown that algorithms solving a wide range of practical problems involving relative movement can be built on the basis of known algorithms for optical flow calculation. One of the system's goals is the creation of an economically efficient intelligent sensor prototype to estimate relative object motion based on optical flow. The results of experiments with a prototype system model are shown.
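The abstract refers to "known algorithms" for optical flow without naming one. As one such known algorithm, a global Lucas-Kanade least-squares estimate can be sketched as below; this is our choice of illustration, not necessarily the method the authors used.

```python
import numpy as np

def lucas_kanade_global(I0, I1):
    """Estimate one global (dx, dy) displacement between two frames by
    solving the optical-flow constraint Ix*dx + Iy*dy + It = 0 in least
    squares over all pixels, assuming small uniform motion."""
    Iy, Ix = np.gradient(I0.astype(float))       # spatial gradients
    It = I1.astype(float) - I0.astype(float)     # temporal difference
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    (dx, dy), *_ = np.linalg.lstsq(A, b, rcond=None)
    return dx, dy
```

A practical relative-motion sensor would apply this per window rather than globally, yielding a dense flow field from which object motion is segmented.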

  9. A machine vision system for the calibration of digital thermometers

    International Nuclear Information System (INIS)

    Vázquez-Fernández, Esteban; Dacal-Nieto, Angel; González-Jorge, Higinio; Alvarez-Valado, Victor; Martín, Fernando; Formella, Arno

    2009-01-01

    Automation is a key point in many industrial tasks such as calibration and metrology. In this context, machine vision has shown to be a useful tool for automation support, especially when there is no other option available. A system for the calibration of portable measurement devices has been developed. The system uses machine vision to obtain the numerical values shown by displays. A new approach based on human perception of digits, which works in parallel with other more classical classifiers, has been created. The results show the benefits of the system in terms of its usability and robustness, obtaining a success rate higher than 99% in display recognition. The system saves time and effort, and offers the possibility of scheduling calibration tasks without excessive attention by the laboratory technicians
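The paper's perception-inspired digit classifier is not described in the abstract. As a purely illustrative stand-in, the sketch below decodes a seven-segment display reading from segment on/off flags, a common final step once the display segments have been located; the table and function are our own, not from the paper.

```python
# Segment order: (a top, b upper-right, c lower-right, d bottom,
#                 e lower-left, f upper-left, g middle).
SEGMENT_DIGITS = {
    (1, 1, 1, 1, 1, 1, 0): 0, (0, 1, 1, 0, 0, 0, 0): 1,
    (1, 1, 0, 1, 1, 0, 1): 2, (1, 1, 1, 1, 0, 0, 1): 3,
    (0, 1, 1, 0, 0, 1, 1): 4, (1, 0, 1, 1, 0, 1, 1): 5,
    (1, 0, 1, 1, 1, 1, 1): 6, (1, 1, 1, 0, 0, 0, 0): 7,
    (1, 1, 1, 1, 1, 1, 1): 8, (1, 1, 1, 1, 0, 1, 1): 9,
}

def decode_digit(segments_on):
    """Map seven on/off segment flags (a..g) to the displayed digit."""
    return SEGMENT_DIGITS[tuple(int(s) for s in segments_on)]
```

In a full calibration system this lookup would follow image-processing steps that locate the display and threshold each segment region.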

  10. Machine vision system for quality inspection of bulk rice seeds

    Science.gov (United States)

    Cheng, F.; Ying, YB

    2005-11-01

    A machine vision system for quality inspection of bulk rice seeds has been developed in this research. The system is designed to inspect rice seeds on a rotating disk with a CCD camera. The seed scattering and positioning device of this system, under continuous feeding conditions, reaches an 85% fill ratio of the holes on the disk. Combining morphological and color characteristics gave a highly acceptable classification. The high classification accuracies obtained using a small number of features indicate the potential of the algorithm for on-line inspection of germinated rice seeds in an industrial environment. The overall average classification accuracy among the four categories was above 90%. This paper presents the significant elements of the computer vision system and emphasizes the important aspects of the image processing technique.

  11. Nanomedical device and systems design challenges, possibilities, visions

    CERN Document Server

    2014-01-01

    Nanomedical Device and Systems Design: Challenges, Possibilities, Visions serves as a preliminary guide toward the inspiration of specific investigative pathways that may lead to meaningful discourse and significant advances in nanomedicine/nanotechnology. This volume considers the potential of future innovations that will involve nanomedical devices and systems. It endeavors to explore remarkable possibilities spanning medical diagnostics, therapeutics, and other advancements that may be enabled within this discipline. In particular, this book investigates just how nanomedical diagnostic and

  12. Vision system for auto-detection of cashmere pigmented fibers

    Science.gov (United States)

    Su, Zhenwei; Dehghani, Abbas A.; Zhang, Liwei; King, Tim; Greenwood, Barry

    2003-05-01

    The traditional method for the evaluation of cashmere quality is subjective and low in accuracy. In this paper, a computer vision system is presented for the objective identification and classification of pigmented fibres, which consists of a web maker, an image acquisition system and a computer for image processing. The techniques of fibre preparation and image acquisition, and the development of a suitable algorithm and software for removing background fibres and counting pigmented fibres, are described in detail.

  13. Machine vision system for remote inspection in hazardous environments

    International Nuclear Information System (INIS)

    Mukherjee, J.K.; Krishna, K.Y.V.; Wadnerkar, A.

    2011-01-01

    Visual inspection of radioactive components needs remote inspection systems for human safety and for protection of equipment (CCD imagers) from radiation. Elaborate view transport optics is required to deliver images to safe areas while maintaining the fidelity of image data. Automation of the system requires robots to operate such equipment. A robotized periscope has been developed to meet the challenge of remote safe viewing and vision-based inspection. (author)

  14. NOVEL CORROSION SENSOR FOR VISION 21 SYSTEMS

    Energy Technology Data Exchange (ETDEWEB)

    Heng Ban

    2004-12-01

    Advanced sensor technology is identified as a key component for advanced power systems for future energy plants that would have virtually no environmental impact. This project intends to develop a novel high temperature corrosion sensor and subsequent measurement system for advanced power systems. Fireside corrosion is the metal loss caused by chemical reactions on surfaces exposed to the combustion environment. Such corrosion is the leading mechanism for boiler tube failures and has emerged to be a significant concern for current and future energy plants due to the introduction of technologies targeting emissions reduction, efficiency improvement, or fuel/oxidant flexibility. Corrosion damage can lead to catastrophic equipment failure, explosions, and forced outages. Proper management of corrosion requires real-time indication of corrosion rate. However, short-term, on-line corrosion monitoring systems for fireside corrosion remain a technical challenge to date due to the extremely harsh combustion environment. The overall objective of this proposed project is to develop a technology for on-line corrosion monitoring based on a new concept. This report describes the initial results from the first-year effort of the three-year study that include laboratory development and experiment, and pilot combustor testing.

  15. Early light vision isomorphic singular (ELVIS) system

    Science.gov (United States)

    Jannson, Tomasz P.; Ternovskiy, Igor V.; DeBacker, Theodore A.; Caulfield, H. John

    2000-07-01

    In shallow water military scenarios, UUVs (Unmanned Underwater Vehicles) are required to protect assets against mines, swimmers, and other underwater military objects. It would be desirable if such UUVs could autonomously see in a similar way to humans, at least at the level of the primary visual cortex. In this paper, an approach to the development of such a UUV system is proposed.

  16. Vision development test bed: The cradle of the MSS artificial vision system

    Science.gov (United States)

    Zucherman, Leon; Stovman, John

    This paper presents the concept of the Vision Development Test-Bed (VDTB) developed at Spar Aerospace Ltd. in order to assist development work on the Artificial Vision System (AVS) for the Mobile Servicing System (MSS) of Space Station Freedom in providing reliable and robust target auto acquisition and robotic auto-tracking capabilities when operating in the extremely contrasty illumination of the space environment. The paper illustrates how the VDTB will be used to understand the problems and to evaluate the methods of solving them. The VDTB is based on the use of conventional but high speed image processing hardware and software. Auxiliary equipment, such as TV cameras, illumination sources, monitors, will be added to provide completeness and flexibility. A special feature will be the use of solar simulation so that the impact of the harsh illumination conditions in space on image quality can be evaluated. The VDTB will be used to assess the required techniques, algorithms, hardware and software characteristics, and to utilize this information in overcoming the target-recognition and false-target rejection problems. The problems associated with NTSC video processing and the use of color will also be investigated. The paper concludes with a review of applications for the VDTB work, such as AVS real-time simulations, application software development, evaluations, and trade-offs studies.

  17. Development of a machine vision system for automated structural assembly

    Science.gov (United States)

    Sydow, P. Daniel; Cooper, Eric G.

    1992-01-01

Research is being conducted at LaRC to develop a telerobotic assembly system designed to construct large space truss structures. This research program was initiated within the past several years, and a ground-based test-bed was developed to evaluate and expand the state of the art. Test-bed operations currently use predetermined ('taught') points for truss structural assembly. Total dependence on taught points for joint receptacle capture and strut installation is neither robust nor reliable enough for space operations. Therefore, a machine vision sensor guidance system is being developed to locate and guide the robot to a passive target mounted on the truss joint receptacle. The vision system hardware includes a miniature video camera, passive targets mounted on the joint receptacles, target illumination hardware, and an image processing system. Discrimination of the target from background clutter is accomplished through standard digital processing techniques. Once the target is identified, a pose estimation algorithm is invoked to determine the location, in three-dimensional space, of the target relative to the robot's end-effector. Preliminary test results of the vision system in the Automated Structural Assembly Laboratory with a range of lighting and background conditions indicate that it is fully capable of successfully identifying joint receptacle targets throughout the required operational range. Controlled optical bench test results indicate that the system can also provide the pose estimation accuracy needed to define the target position.

  18. Study on flexible calibration method for binocular stereo vision system

    Science.gov (United States)

    Wang, Peng; Sun, Huashu; Sun, Changku

    2008-12-01

Using a binocular stereo vision system for 3D coordinate measurement, system calibration is an important factor for measurement precision. In this paper we present a flexible calibration method for binocular stereo systems that estimates the intrinsic and extrinsic parameters of each camera as well as the exterior orientation of the axis of a turntable installed in front of the binocular stereo vision system to increase the measurement range. Using a new flexible planar pattern with four large circles and an array of small circles as calibration reference points, binocular stereo calibration is carried out with Zhang's plane-based calibration method, without requiring specialized knowledge of 3D geometry. By placing a standard ball in front of the binocular stereo vision system, a sequence of pictures is taken simultaneously by both cameras at several different rotation angles of the turntable. The reference points for axis calibration, i.e. the ball centers at each turntable rotation angle, are computed by the space intersection of two straight lines. Because of the rotation of the turntable, the trace of the ball is a circle whose center lies on the turntable's axis, and all rotated ball centers lie in a plane perpendicular to the axis. The exterior orientation of the turntable axis is calibrated according to this model. A measurement of a column bearing is performed in the experiment, with a final measurement precision better than 0.02 mm.
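The axis-calibration geometry described above (ball centers tracing a circle whose center lies on the turntable axis) can be sketched with elementary 3D geometry. The following is an illustrative sketch, not the authors' code: from three measured ball-center positions, the normal of their common plane gives the axis direction, and their circumcenter gives a point on the axis.

```python
import math

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def norm(a):
    return math.sqrt(sum(x * x for x in a))

def turntable_axis(p0, p1, p2):
    """Recover the turntable rotation axis from three (non-collinear)
    ball-center positions measured at different turntable angles.
    Returns (point_on_axis, unit_axis_direction)."""
    # Axis direction: unit normal of the plane through the three points.
    n = cross(sub(p1, p0), sub(p2, p0))
    n = tuple(x / norm(n) for x in n)
    # Circumcenter via the barycentric formula (valid in 3D as well):
    # weights depend only on the squared side lengths of the triangle.
    a2 = sum(x * x for x in sub(p1, p2))
    b2 = sum(x * x for x in sub(p2, p0))
    c2 = sum(x * x for x in sub(p0, p1))
    wa = a2 * (b2 + c2 - a2)
    wb = b2 * (c2 + a2 - b2)
    wc = c2 * (a2 + b2 - c2)
    s = wa + wb + wc
    center = tuple((wa * u + wb * v + wc * w) / s
                   for u, v, w in zip(p0, p1, p2))
    return center, n
```

In practice more than three ball positions would be measured and the circle fitted in a least-squares sense; three points are the minimal case shown here.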

  19. Information obtaining and fusion of color night vision system

    Science.gov (United States)

    Bai, Lianfa; Gu, Guohua; Chen, Qian; Zhang, Baomin

    2001-09-01

Color night vision is a new kind of night vision technology. In this paper, based on a study of two-color false-color low-light-level (CLLL) TV technology, the principle of a single-channel false CLLL TV system is presented and experimental studies are carried out. The disadvantages of the dual-channel false CLLL TV system are pointed out, and LLL image geometric segmentation and spectral gray-scale compensation techniques are investigated. A single-channel CLLL TV system is established. Experimental results show that, through the fusion of two spectral LLL images, image resolution and the recognition capability of the human eye can be increased significantly, and the high sensitivity and resolution of the single-channel as well as the dual-channel technology are realized successfully.

  20. HMD digital night vision system for fixed wing fighters

    Science.gov (United States)

    Foote, Bobby D.

    2013-05-01

Digital night sensor technology offers both advantages and disadvantages over standard analog systems. As digital night sensor technology matures and its disadvantages are overcome, the transition away from analog sensors will accelerate with new programs. In response to this growing need, RCEVS is actively investing in digital night vision systems that will provide the performance needed for the future. Rockwell Collins and Elbit Systems of America continue to invest in digital night technology and have completed laboratory, ground and preliminary flight testing to evaluate the key factors for night vision. These evaluations have led to a summary of the maturity of digital night capability and of the status of the key performance gap between analog and digital systems. Digital night vision systems appear in the roadmaps of future fixed-wing and rotorcraft programs beginning in 2015. This will bring a new set of capabilities to pilots, enhancing their ability to perform night operations with no loss of performance.

  1. A smart sensor-based vision system: implementation and evaluation

    International Nuclear Information System (INIS)

    Elouardi, A; Bouaziz, S; Dupret, A; Lacassagne, L; Klein, J O; Reynaud, R

    2006-01-01

One of the methods of reducing the computational complexity of image processing is to perform some low-level computations on the sensor focal plane. This paper presents a vision system based on a smart sensor. PARIS1 (Programmable Analog Retina-like Image Sensor 1) is the first prototype used to evaluate the architecture of an on-chip vision system based on such a sensor coupled with a microcontroller. The smart sensor integrates a set of analog and digital computing units. This architecture paves the way for a more compact vision system and increases performance by reducing the data flow exchanged with the controlling microprocessor. A system has been implemented as a proof of concept and has enabled us to evaluate the performance requirements for a possible integration of a microcontroller on the same chip. The approach is compared with two architectures implementing CMOS active pixel sensors (APS) interfaced to the same microcontroller. The comparison considers image processing computation time, processing reliability, programmability, precision, bandwidth and subsequent stages of computation.

  2. Sensory systems II senses other than vision

    CERN Document Server

    Wolfe, Jeremy M

    1988-01-01

This series of books, "Readings from the Encyclopedia of Neuroscience," consists of collections of subject-clustered articles taken from the Encyclopedia of Neuroscience. The Encyclopedia of Neuroscience is a reference source and compendium of more than 700 articles written by world authorities and covering all of neuroscience. We define neuroscience broadly as including all those fields that have as a primary goal the understanding of how the brain and nervous system work to mediate/control behavior, including the mental behavior of humans. Those interested in specific aspects of the neurosciences, particular subject areas or specialties, can of course browse through the alphabetically arranged articles of the Encyclopedia or use its index to find the topics they wish to read. However, for those readers-students, specialists, or others-who will find it useful to have collections of subject-clustered articles from the Encyclopedia, we issue this series of "Readings" in paperback. Students in neuroscienc...

  3. Pose Self-Calibration of Stereo Vision Systems for Autonomous Vehicle Applications.

    Science.gov (United States)

    Musleh, Basam; Martín, David; Armingol, José María; de la Escalera, Arturo

    2016-09-14

    Nowadays, intelligent systems applied to vehicles have grown very rapidly; their goal is not only the improvement of safety, but also making autonomous driving possible. Many of these intelligent systems are based on making use of computer vision in order to know the environment and act accordingly. It is of great importance to be able to estimate the pose of the vision system because the measurement matching between the perception system (pixels) and the vehicle environment (meters) depends on the relative position between the perception system and the environment. A new method of camera pose estimation for stereo systems is presented in this paper, whose main contribution regarding the state of the art on the subject is the estimation of the pitch angle without being affected by the roll angle. The validation of the self-calibration method is accomplished by comparing it with relevant methods of camera pose estimation, where a synthetic sequence is used in order to measure the continuous error with a ground truth. This validation is enriched by the experimental results of the method in real traffic environments.

  4. Pose Self-Calibration of Stereo Vision Systems for Autonomous Vehicle Applications

    Directory of Open Access Journals (Sweden)

    Basam Musleh

    2016-09-01

Full Text Available Nowadays, intelligent systems applied to vehicles have grown very rapidly; their goal is not only the improvement of safety, but also making autonomous driving possible. Many of these intelligent systems are based on making use of computer vision in order to know the environment and act accordingly. It is of great importance to be able to estimate the pose of the vision system because the measurement matching between the perception system (pixels) and the vehicle environment (meters) depends on the relative position between the perception system and the environment. A new method of camera pose estimation for stereo systems is presented in this paper, whose main contribution regarding the state of the art on the subject is the estimation of the pitch angle without being affected by the roll angle. The validation of the self-calibration method is accomplished by comparing it with relevant methods of camera pose estimation, where a synthetic sequence is used in order to measure the continuous error with a ground truth. This validation is enriched by the experimental results of the method in real traffic environments.

  5. Vision-based pedestrian protection systems for intelligent vehicles

    CERN Document Server

    Geronimo, David

    2013-01-01

Pedestrian Protection Systems (PPSs) are on-board systems aimed at detecting and tracking people in the surroundings of a vehicle in order to avoid potentially dangerous situations. These systems, together with other Advanced Driver Assistance Systems (ADAS) such as lane departure warning or adaptive cruise control, are one of the most promising ways to improve traffic safety. Through the use of computer vision, cameras working either in the visible or infra-red spectra have been demonstrated to be a reliable sensor for performing this task. Nevertheless, the variability of human appearance, not only in

  6. Artificial intelligence, expert systems, computer vision, and natural language processing

    Science.gov (United States)

    Gevarter, W. B.

    1984-01-01

    An overview of artificial intelligence (AI), its core ingredients, and its applications is presented. The knowledge representation, logic, problem solving approaches, languages, and computers pertaining to AI are examined, and the state of the art in AI is reviewed. The use of AI in expert systems, computer vision, natural language processing, speech recognition and understanding, speech synthesis, problem solving, and planning is examined. Basic AI topics, including automation, search-oriented problem solving, knowledge representation, and computational logic, are discussed.

  7. International Border Management Systems (IBMS) Program : visions and strategies.

    Energy Technology Data Exchange (ETDEWEB)

    McDaniel, Michael; Mohagheghi, Amir Hossein

    2011-02-01

    Sandia National Laboratories (SNL), International Border Management Systems (IBMS) Program is working to establish a long-term border security strategy with United States Central Command (CENTCOM). Efforts are being made to synthesize border security capabilities and technologies maintained at the Laboratories, and coordinate with subject matter expertise from both the New Mexico and California offices. The vision for SNL is to provide science and technology support for international projects and engagements on border security.

  8. Vector Disparity Sensor with Vergence Control for Active Vision Systems

    Science.gov (United States)

    Barranco, Francisco; Diaz, Javier; Gibaldi, Agostino; Sabatini, Silvio P.; Ros, Eduardo

    2012-01-01

This paper presents an architecture for computing vector disparity for active vision systems as used in robotic applications. The control of the vergence angle of a binocular system allows us to efficiently explore dynamic environments, but requires a generalization of the disparity computation with respect to a static camera setup, where the disparity is strictly 1-D after image rectification. The interaction between vision and motor control allows us to develop an active sensor that achieves high accuracy of the disparity computation around the fixation point, and fast reaction time for the vergence control. In this contribution, we address the development of a real-time architecture for vector disparity computation using an FPGA device. We implement the disparity unit and the control module for vergence, version, and tilt to determine the fixation point. In addition, two different on-chip alternatives for the vector disparity engines are discussed, based on the luminance (gradient-based) and phase information of the binocular images. The multiscale versions of these engines are able to estimate the vector disparity at up to 32 fps on VGA-resolution images with very good accuracy, as shown using benchmark sequences with known ground truth. The performance of the presented approaches in terms of frame rate, resource utilization, and accuracy is discussed. On the basis of these results, our study indicates that the gradient-based approach offers the best trade-off for integration with the active vision system. PMID:22438737
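The gradient-based (luminance) disparity idea mentioned above rests on the brightness-constancy relation: for a small shift d between views, It + d·Ix ≈ 0, so d can be solved by least squares over a window. A minimal 1-D sketch of this principle (an illustration, not the paper's FPGA implementation):

```python
def gradient_disparity(left_row, right_row):
    """Estimate a constant 1-D disparity between two rectified scanlines
    using the spatial-gradient relation It + d * Ix = 0, solved in a
    least-squares sense: d = -sum(It*Ix) / sum(Ix*Ix)."""
    num = 0.0
    den = 0.0
    for x in range(1, len(left_row) - 1):
        ix = (left_row[x + 1] - left_row[x - 1]) / 2.0  # spatial gradient
        it = right_row[x] - left_row[x]                  # inter-view difference
        num += it * ix
        den += ix * ix
    return -num / den if den else 0.0
```

Real implementations apply this per-pixel over small windows, in 2-D (vector disparity), and across scales to handle large shifts; the scanline version above only conveys the core least-squares step.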

  9. A Taxonomy of Vision Systems for Ground Mobile Robots

    Directory of Open Access Journals (Sweden)

    Jesus Martinez-Gomez

    2014-07-01

    Full Text Available This paper introduces a taxonomy of vision systems for ground mobile robots. In the last five years, a significant number of relevant papers have contributed to this subject. Firstly, a thorough review of the papers is proposed to discuss and classify both past and the most current approaches in the field. As a result, a global picture of the state of the art of the last five years is obtained. Moreover, the study of the articles is used to put forward a comprehensive taxonomy based on the most up-to-date research in ground mobile robotics. In this sense, the paper aims at being especially helpful to both budding and experienced researchers in the areas of vision systems and mobile ground robots. The taxonomy described is devised from a novel perspective, namely in order to respond to the main questions posed when designing robotic vision systems: why?, what for?, what with?, how?, and where? The answers are derived from the most relevant techniques described in the recent literature, leading in a natural way to a series of classifications that are discussed and contextualized. The article offers a global picture of the state of the art in the area and discovers some promising research lines.

  10. A robust embedded vision system feasible white balance algorithm

    Science.gov (United States)

    Wang, Yuan; Yu, Feihong

    2018-01-01

White balance is a very important part of the color image processing pipeline. In order to meet the needs of efficiency and accuracy in embedded machine vision processing systems, an efficient and robust white balance algorithm combining several classical ones is proposed. The proposed algorithm has three main parts. Firstly, to guarantee higher efficiency, an initial parameter calculated from the statistics of the R, G and B components of the raw data is used to initialize the subsequent iterative method. After that, a bilinear interpolation algorithm is used for demosaicing. Finally, an adaptive step-adjustment scheme is introduced to ensure the controllability and robustness of the algorithm. To verify the proposed algorithm's performance on an embedded vision system, a smart camera based on the IMX6 DualLite, IMX291 and XC6130 was designed. Extensive experiments on a large number of images under different color temperatures and exposure conditions show that the proposed white balance algorithm effectively avoids color deviation, achieves a good balance between efficiency and quality, and is suitable for embedded machine vision processing systems.
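The channel statistics mentioned above are the basis of the classical gray-world method, one of the simple algorithms such a combined scheme typically builds on. A hedged sketch of gray-world white balance (illustrative only; the paper's iterative, step-adjusted variant is not reproduced here):

```python
def gray_world_gains(pixels):
    """Gray-world white balance: compute per-channel gains so that the
    average R, G and B values become equal (the scene averages to gray).
    `pixels` is a list of (R, G, B) tuples."""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    gray = sum(means) / 3.0
    return [gray / m for m in means]

def apply_gains(pixels, gains):
    """Scale each channel by its gain, clipping to the 8-bit range."""
    return [tuple(min(255.0, p[c] * gains[c]) for c in range(3))
            for p in pixels]
```

Gray-world fails on scenes dominated by one color, which is one motivation for combining several classical estimators as the abstract describes.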

  11. Automatic gear sorting system based on monocular vision

    Directory of Open Access Journals (Sweden)

    Wenqi Wu

    2015-11-01

Full Text Available An automatic gear sorting system based on monocular vision is proposed in this paper. A CCD camera fixed on the top of the sorting system is used to obtain images of the gears on the conveyor belt. The gears' features, including the number of holes, number of teeth and color, are extracted and used to categorize the gears. Photoelectric sensors are used to locate the gears' positions and produce the trigger signals for pneumatic cylinders. Automatic gear sorting is achieved by using pneumatic actuators to push different gears into their corresponding storage boxes. The experimental results verify the validity and reliability of the proposed method and system.
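Counting the holes in a segmented gear image is typically done with connected-component analysis: background regions fully enclosed by the gear silhouette are holes, while background touching the image border is not. A minimal flood-fill sketch of this idea (an illustration, not the paper's implementation):

```python
from collections import deque

def count_holes(mask):
    """Count background regions fully enclosed by the gear silhouette.
    `mask` is a 2-D list: 1 = gear pixel, 0 = background."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]

    def flood(sy, sx):
        """Flood-fill one background region; report whether it touches
        the image border (border regions are not holes)."""
        q = deque([(sy, sx)])
        seen[sy][sx] = True
        touches_border = False
        while q:
            y, x = q.popleft()
            if y in (0, h - 1) or x in (0, w - 1):
                touches_border = True
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < h and 0 <= nx < w
                        and not seen[ny][nx] and mask[ny][nx] == 0):
                    seen[ny][nx] = True
                    q.append((ny, nx))
        return touches_border

    holes = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] == 0 and not seen[y][x]:
                if not flood(y, x):
                    holes += 1
    return holes
```

Tooth counting would similarly analyze the outer contour (e.g. radial profile peaks), and color classification the mean hue inside the silhouette.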

  12. Fiber optic coherent laser radar 3d vision system

    International Nuclear Information System (INIS)

    Sebastian, R.L.; Clark, R.B.; Simonson, D.L.

    1994-01-01

    Recent advances in fiber optic component technology and digital processing components have enabled the development of a new 3D vision system based upon a fiber optic FMCW coherent laser radar. The approach includes a compact scanner with no moving parts capable of randomly addressing all pixels. The system maintains the immunity to lighting and surface shading conditions which is characteristic of coherent laser radar. The random pixel addressability allows concentration of scanning and processing on the active areas of a scene, as is done by the human eye-brain system
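In a linear-FMCW system of the kind described, range is recovered from the beat frequency between the transmitted chirp and the return. The standard relation (a textbook formula for a stationary target, not taken from this record) is R = c·f_b·T / (2B):

```python
def fmcw_range(beat_freq_hz, sweep_bw_hz, sweep_time_s, c=3.0e8):
    """Range from the beat frequency of a linear-FMCW radar/ladar:
    R = c * f_b * T / (2 * B) for a stationary target, where B is the
    chirp bandwidth and T the sweep duration."""
    return c * beat_freq_hz * sweep_time_s / (2.0 * sweep_bw_hz)
```

For example, with a 1.5 GHz sweep over 1 ms, a 100 kHz beat corresponds to a 10 m range; moving targets add a Doppler term that up/down sweeps are used to separate.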

  13. Informing biological design by integration of systems and synthetic biology.

    Science.gov (United States)

    Smolke, Christina D; Silver, Pamela A

    2011-03-18

    Synthetic biology aims to make the engineering of biology faster and more predictable. In contrast, systems biology focuses on the interaction of myriad components and how these give rise to the dynamic and complex behavior of biological systems. Here, we examine the synergies between these two fields. Copyright © 2011 Elsevier Inc. All rights reserved.

  14. Intelligent Vision System for Door Sensing Mobile Robot

    Directory of Open Access Journals (Sweden)

    Jharna Majumdar

    2012-08-01

Full Text Available Wheeled mobile robots find numerous applications in indoor, man-made structured environments. In order to operate effectively, a robot must be capable of sensing its surroundings. Computer vision is one of the prime research areas directed towards achieving these sensing capabilities. In this paper, we present a door-sensing mobile robot capable of navigating in an indoor environment. A robust and inexpensive approach for recognition and classification of doors, based on a monocular vision system, assists the mobile robot in decision making. To prove the efficacy of the algorithm we have designed and developed a differentially driven mobile robot. A wall-following behavior using ultrasonic range sensors is employed by the mobile robot for navigation in corridors. Field Programmable Gate Arrays (FPGAs) have been used to implement the PD controller for wall following and the PID controller that regulates the speed of the geared DC motor.
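The PD wall-following loop mentioned above can be illustrated with a toy simulation. This is a hedged sketch under simplifying assumptions (the robot's lateral motion is idealized as a pure integrator, and the gains are illustrative, not from the paper):

```python
def simulate_wall_following(kp=1.2, kd=0.6, setpoint=0.5, d0=1.0,
                            dt=0.05, steps=200):
    """PD control of the robot's distance to the wall, measured by an
    ultrasonic sensor. Idealized lateral dynamics: d' = u, where u is
    the commanded lateral speed. Returns the final wall distance."""
    d = d0
    prev_err = setpoint - d
    for _ in range(steps):
        err = setpoint - d                       # distance error
        u = kp * err + kd * (err - prev_err) / dt  # PD control law
        prev_err = err
        d += u * dt                              # integrate commanded speed
    return d
```

With these gains the discrete-time error dynamics are stable, so the robot settles at the 0.5 m setpoint; the derivative term damps the approach, which is why a PD (rather than pure P) controller is used for wall following.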

  15. A stereo vision-based obstacle detection system in vehicles

    Science.gov (United States)

    Huh, Kunsoo; Park, Jaehak; Hwang, Junyeon; Hong, Daegun

    2008-02-01

Obstacle detection is a crucial issue for driver assistance systems as well as for autonomous vehicle guidance, and it has to be performed with high reliability to avoid any potential collision with the front vehicle. Vision-based obstacle detection systems are regarded as promising for this purpose because they require little infrastructure on a highway. However, the feasibility of these systems in passenger cars requires accurate and robust sensing performance. In this paper, an obstacle detection system using stereo vision sensors is developed. The system utilizes feature matching, the epipolar constraint and feature aggregation in order to robustly detect the initial corresponding pairs. After the initial detection, the system executes a tracking algorithm for the obstacles. The proposed system can detect a front obstacle, a leading vehicle and a vehicle cutting into the lane, and the position parameters of the obstacles and leading vehicles can then be obtained. The proposed obstacle detection system is implemented on a passenger car and its performance is verified experimentally.
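Two standard relations underpin such a stereo pipeline: in a rectified pair the epipolar constraint reduces to matched features lying on (nearly) the same image row, and the range to a matched feature follows Z = f·B/d. A minimal sketch of both (textbook stereo geometry, not the authors' code):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth of a matched feature in a rectified stereo pair: Z = f*B/d,
    with focal length in pixels, baseline in meters, disparity in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

def epipolar_ok(y_left, y_right, tol_px=1.0):
    """Rectified epipolar check: candidate matches must lie on nearly
    the same image row; used to prune false correspondences."""
    return abs(y_left - y_right) <= tol_px
```

For example, a 700 px focal length, 12 cm baseline and 14 px disparity place the feature 6 m ahead; the inverse relation Z ∝ 1/d is why disparity resolution limits long-range accuracy.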

  16. Computer-vision-based inspecting system for needle roller bearing

    Science.gov (United States)

    Li, Wei; He, Tao; Zhong, Fei; Wu, Qinhua; Zhong, Yuning; Shi, Teiling

    2006-11-01

A computer-vision-based inspection system for needle roller bearings (CVISNRB) is proposed in this paper, and its key technologies, main functions and operating principle are introduced. CVISNRB comprises a mechanical transmission and automatic feeding system, an imaging system, processing algorithms, an automatic sorting system for inspected bearings, a human-computer interface, a pneumatic control system, and an electric control system. Introducing computer vision into needle roller bearing inspection solves the problem of inspecting small needle roller bearings in bearing production enterprises, increases inspection speed, and realizes automatic, non-contact, on-line examination. CVISNRB can effectively detect missing needles and give an accurate count. The accuracy reaches 99.5%, and the inspection speed reaches 15 needle roller bearings per minute. CVISNRB has run without malfunction in actual operation over the past half year and can meet practical needs.

  17. Machine vision system for measuring conifer seedling morphology

    Science.gov (United States)

    Rigney, Michael P.; Kranzler, Glenn A.

    1995-01-01

A PC-based machine vision system providing rapid measurement of bare-root tree seedling morphological features has been designed. The system uses backlighting and a 2048-pixel line-scan camera to acquire images with transverse resolutions as high as 0.05 mm for precise measurement of stem diameter. Individual seedlings are manually loaded on a conveyor belt and inspected by the vision system in less than 0.25 seconds. Designed for quality control and morphological data acquisition by nursery personnel, the system provides a user-friendly, menu-driven graphical interface. The system automatically locates the seedling root collar and measures stem diameter, shoot height, sturdiness ratio, root mass length, projected shoot and root area, shoot-root area ratio, and percent fine roots. Sample statistics are computed for each measured feature. Measurements for each seedling may be stored for later analysis. Feature measurements may be compared with multi-class quality criteria to determine sample quality or to perform multi-class sorting. Statistical summary and classification reports may be printed to facilitate the communication of quality concerns with grading personnel. Tests were conducted at a commercial forest nursery to evaluate measurement precision. Four quality control personnel measured root collar diameter, stem height, and root mass length on each of 200 conifer seedlings. The same seedlings were inspected four times by the machine vision system. Machine stem diameter measurement precision was four times greater than that of manual measurements. Machine and manual measurements had comparable precision for shoot height and root mass length.
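Two of the derived features listed above are simple ratios of the primary measurements. A sketch of how they follow from the measured quantities (the sturdiness-ratio convention of height in cm over root-collar diameter in mm is the common forestry definition, assumed here rather than stated in the record):

```python
def seedling_features(height_cm, diameter_mm, shoot_area, root_area):
    """Derived seedling morphology features from primary measurements.
    Sturdiness ratio: shoot height (cm) / root-collar diameter (mm);
    lower values indicate sturdier stock."""
    return {
        "sturdiness_ratio": height_cm / diameter_mm,
        "shoot_root_area_ratio": shoot_area / root_area,
    }
```

For a 30 cm seedling with a 5 mm root collar, the sturdiness ratio is 6; grading rules then compare such values against per-class quality thresholds.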

  18. Control system for solar tracking based on artificial vision; Sistema de control para seguimiento solar basado en vision artificial

    Energy Technology Data Exchange (ETDEWEB)

    Pacheco Ramirez, Jesus Horacio; Anaya Perez, Maria Elena; Benitez Baltazar, Victor Hugo [Universidad de onora, Hermosillo, Sonora (Mexico)]. E-mail: jpacheco@industrial.uson.mx; meanaya@industrial.uson.mx; vbenitez@industrial.uson.mx

    2010-11-15

This work shows how artificial vision feedback can be applied to control systems. The control is applied to a solar panel in order to track the sun's position throughout the day. The algorithms to calculate the position of the sun and to process the image are developed in LabVIEW. The responses obtained from the control show that it is possible to use vision in a closed-loop control scheme.
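A common way to extract the sun's image position for such a vision feedback loop is to take the centroid of the brightest pixels. This is a generic sketch of that step (illustrative; the paper's LabVIEW processing is not described in detail here):

```python
def sun_centroid(image, threshold=200):
    """Locate the sun in a grayscale image as the centroid of all pixels
    at or above a brightness threshold. `image` is a 2-D list of
    intensities; returns (row, col) or None if nothing is bright enough."""
    total = sy = sx = 0
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            if v >= threshold:
                total += 1
                sy += y
                sx += x
    if total == 0:
        return None
    return (sy / total, sx / total)
```

The offset between this centroid and the image center then serves as the error signal driving the panel's tracking motors in closed loop.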

  19. Computational vision systems for the detection of malignant melanoma.

    Science.gov (United States)

    Maglogiannis, Ilias; Kosmopoulos, Dimitrios I

    2006-01-01

In recent years, computational vision-based diagnostic systems for dermatology have demonstrated significant progress. We review these systems by first presenting their installation, the visual features utilized for skin lesion classification, and the methods for defining them. We also describe how to extract these features through digital image processing methods, i.e. segmentation, registration, border detection, and color and texture processing, and present how to use the extracted features for skin lesion classification by employing artificial intelligence methods, i.e. discriminant analysis, neural networks, and support vector machines. Finally, we compare these techniques in discriminating malignant melanoma tumors versus dysplastic naevi lesions.

  20. Bionic Vision-Based Intelligent Power Line Inspection System.

    Science.gov (United States)

    Li, Qingwu; Ma, Yunpeng; He, Feijia; Xi, Shuya; Xu, Jinxin

    2017-01-01

Detecting threats posed by external obstacles to power lines can help ensure the stability of the power system. Inspired by the attention mechanism and binocular vision of the human visual system, an intelligent power line inspection system is presented in this paper. The human visual attention mechanism is used to detect and track power lines in image sequences according to their shape, and the binocular vision model is used to calculate the 3D coordinates of obstacles and power lines. In order to improve the real-time performance and accuracy of the system, we propose a new matching strategy based on the traditional SURF algorithm. The experimental results show that the system is able to automatically and accurately locate obstacles around power lines, and that the designed power line inspection system is effective in complex backgrounds, with no missed detections under different conditions.

  1. Low Cost Vision Based Personal Mobile Mapping System

    Science.gov (United States)

    Amami, M. M.; Smith, M. J.; Kokkas, N.

    2014-03-01

Mobile mapping systems (MMS) can be used for several purposes, such as transportation, highway infrastructure mapping and GIS data collection. However, the acceptance of these systems is not widespread, and their use is still limited by high cost and by dependency on the Global Navigation Satellite System (GNSS). A low-cost vision-based personal MMS has been produced with the aim of overcoming these limitations. The system has been designed to depend mainly on cameras, using low-cost GNSS and inertial sensors to provide a bundle adjustment solution with initial values. The system has the potential to be used both indoors and outdoors. It has been tested indoors and outdoors with different GPS coverage, surrounding features, and narrow and curvy paths. Tests show that the system is able to work in such environments, providing 3D coordinates with better than 10 cm accuracy.

  2. Low Cost Vision Based Personal Mobile Mapping System

    Directory of Open Access Journals (Sweden)

    M. M. Amami

    2014-03-01

Full Text Available Mobile mapping systems (MMS) can be used for several purposes, such as transportation, highway infrastructure mapping and GIS data collection. However, the acceptance of these systems is not widespread, and their use is still limited by high cost and by dependency on the Global Navigation Satellite System (GNSS). A low-cost vision-based personal MMS has been produced with the aim of overcoming these limitations. The system has been designed to depend mainly on cameras, using low-cost GNSS and inertial sensors to provide a bundle adjustment solution with initial values. The system has the potential to be used both indoors and outdoors. It has been tested indoors and outdoors with different GPS coverage, surrounding features, and narrow and curvy paths. Tests show that the system is able to work in such environments, providing 3D coordinates with better than 10 cm accuracy.

  3. IMPROVING CAR NAVIGATION WITH A VISION-BASED SYSTEM

    Directory of Open Access Journals (Sweden)

    H. Kim

    2015-08-01

Full Text Available The real-time acquisition of accurate positions is very important for the proper operation of driver assistance systems and autonomous vehicles. Since current systems mostly depend on GPS and map-matching techniques, they show poor and unreliable performance where GPS signals are blocked or weak. In this study, we propose a vision-oriented car navigation method based on sensor fusion of a GPS and in-vehicle sensors. We employ a single-photo resection process to derive the position and attitude of the camera, and thus those of the car. These image georeferencing results are combined with other sensory data in a sensor fusion framework for more accurate position estimation using an extended Kalman filter. The proposed system estimated positions with an accuracy of 15 m even though GPS signals were unavailable during the entire 15-minute test drive. The proposed vision-based system can be effectively utilized for the low-cost but highly accurate and reliable navigation systems required for intelligent or autonomous vehicles.

  4. Improving Car Navigation with a Vision-Based System

    Science.gov (United States)

    Kim, H.; Choi, K.; Lee, I.

    2015-08-01

    The real-time acquisition of accurate positions is very important for the proper operation of driver assistance systems and autonomous vehicles. Since current systems mostly depend on GPS and map-matching techniques, they show poor and unreliable performance where GPS signals are blocked or weak. In this study, we propose a vision-oriented car navigation method based on sensor fusion of GPS and in-vehicle sensors. We employ a single photo resection process to derive the position and attitude of the camera, and thus those of the car. These image georeferencing results are combined with the other sensory data in a sensor fusion framework, using an extended Kalman filter for more accurate position estimation. The proposed system estimated positions with an accuracy of 15 m even though GPS signals were unavailable throughout the entire 15-minute test drive. The proposed vision-based system can be effectively utilized in the low-cost but highly accurate and reliable navigation systems required for intelligent or autonomous vehicles.
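
    The kind of GPS/vision fusion described above can be illustrated with a deliberately simplified filter. The sketch below is a 1-D constant-velocity Kalman filter over position and speed; the paper's actual filter is an extended Kalman filter over the full vehicle pose, and all states, noise levels, and measurement values here are hypothetical.

    ```python
    def kf_step(x, v, P, dt, z, R, q=0.1):
        """One predict/update cycle of a 1-D constant-velocity Kalman filter.

        x, v : current position/velocity estimate
        P    : 2x2 covariance matrix (list of lists)
        z    : position measurement (e.g., a GPS fix or an image-resection fix)
        R    : measurement noise variance; q : process noise variance
        """
        # Predict with the constant-velocity motion model x' = x + v*dt.
        x_p = x + v * dt
        P_p = [[P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + q,
                P[0][1] + dt * P[1][1]],
               [P[1][0] + dt * P[1][1],
                P[1][1] + q]]
        # Update with the position measurement (H = [1, 0]).
        S = P_p[0][0] + R
        K = [P_p[0][0] / S, P_p[1][0] / S]
        innov = z - x_p
        x_n = x_p + K[0] * innov
        v_n = v + K[1] * innov
        P_n = [[(1 - K[0]) * P_p[0][0], (1 - K[0]) * P_p[0][1]],
               [P_p[1][0] - K[1] * P_p[0][0], P_p[1][1] - K[1] * P_p[0][1]]]
        return x_n, v_n, P_n

    # Feed noisy position fixes of a car moving at 2 m/s (synthetic offsets).
    x, v, P = 0.0, 0.0, [[10.0, 0.0], [0.0, 10.0]]
    offsets = [0.3, -0.2, 0.1, -0.1, 0.2, -0.3, 0.1, 0.0, -0.1, 0.2]
    for t, e in enumerate(offsets, start=1):
        x, v, P = kf_step(x, v, P, dt=1.0, z=2.0 * t + e, R=0.5)
    ```

    After a few fixes the estimate converges toward the true position and speed; in the paper's setting the measurement stream would switch between GPS fixes and image-resection fixes depending on availability.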

  5. Vision-Based People Detection System for Heavy Machine Applications

    Directory of Open Access Journals (Sweden)

    Vincent Fremont

    2016-01-01

    Full Text Available This paper presents a vision-based people detection system for improving safety in heavy machines. We propose a perception system composed of a monocular fisheye camera and a LiDAR. Fisheye cameras have the advantage of a wide field-of-view, but the strong distortions that they create must be handled at the detection stage. Since people detection in fisheye images has not been well studied, we focus on investigating and quantifying the impact that strong radial distortions have on the appearance of people, and we propose approaches for handling this specificity, adapted from state-of-the-art people detection approaches. These adaptive approaches nevertheless have the drawback of high computational cost and complexity. Consequently, we also present a framework for harnessing the LiDAR modality in order to enhance the detection algorithm for different camera positions. A sequential LiDAR-based fusion architecture is used, which addresses directly the problem of reducing false detections and computational cost in an exclusively vision-based system. A heavy machine dataset was built, and different experiments were carried out to evaluate the performance of the system. The results are promising, in terms of both processing speed and performance.

  6. Vision-Based People Detection System for Heavy Machine Applications.

    Science.gov (United States)

    Fremont, Vincent; Bui, Manh Tuan; Boukerroui, Djamal; Letort, Pierrick

    2016-01-20

    This paper presents a vision-based people detection system for improving safety in heavy machines. We propose a perception system composed of a monocular fisheye camera and a LiDAR. Fisheye cameras have the advantage of a wide field-of-view, but the strong distortions that they create must be handled at the detection stage. Since people detection in fisheye images has not been well studied, we focus on investigating and quantifying the impact that strong radial distortions have on the appearance of people, and we propose approaches for handling this specificity, adapted from state-of-the-art people detection approaches. These adaptive approaches nevertheless have the drawback of high computational cost and complexity. Consequently, we also present a framework for harnessing the LiDAR modality in order to enhance the detection algorithm for different camera positions. A sequential LiDAR-based fusion architecture is used, which addresses directly the problem of reducing false detections and computational cost in an exclusively vision-based system. A heavy machine dataset was built, and different experiments were carried out to evaluate the performance of the system. The results are promising, in terms of both processing speed and performance.

  7. Vision and dual IMU integrated attitude measurement system

    Science.gov (United States)

    Guo, Xiaoting; Sun, Changku; Wang, Peng; Lu, Huang

    2018-01-01

    To determine the relative attitude between two space objects on a rocking base, an integrated system based on vision and a dual IMU (inertial measurement unit) is built. The system fuses the attitude information from vision with the angular rate measurements of the dual IMU through an extended Kalman filter (EKF) to obtain the relative attitude. One IMU (master) is attached to the measured motion object and the other (slave) to the rocking base. Because the output of an inertial sensor is relative to the inertial frame, the angular rate of the master IMU includes not only the motion of the measured object relative to the inertial frame but also that of the rocking base relative to the inertial frame; the latter is redundant, harmful movement information for the relative attitude measurement between the measured object and the rocking base. The slave IMU assists in removing the motion of the rocking base relative to the inertial frame from the master IMU's measurements. The proposed integrated attitude measurement system is tested on a practical experimental platform, and experimental results with superior precision and reliability show its feasibility and effectiveness.
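
    The core of the dual-IMU scheme described above is subtracting the base motion sensed by the slave IMU from the master IMU's measurement. A hypothetical numerical sketch, assuming both angular rates are already expressed in a common frame (the actual system additionally fuses vision attitude through an EKF):

    ```python
    def relative_rate(omega_master, omega_slave):
        """Angular rate of the object relative to the rocking base.

        Both inputs are 3-vectors (rad/s) assumed expressed in the same frame:
        the master senses object + base motion, the slave senses base motion.
        """
        return [m - s for m, s in zip(omega_master, omega_slave)]

    def integrate_angle(rates, dt):
        """Euler-integrate a sequence of scalar relative rates into an angle."""
        angle = 0.0
        for r in rates:
            angle += r * dt
        return angle
    ```

    For example, if the master senses 0.5 rad/s about one axis while the base rocks at 0.2 rad/s about the same axis, the relative rate is 0.3 rad/s; integrating it over time yields the relative attitude angle about that axis.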

  8. ACCURACY OF A 3D VISION SYSTEM FOR INSPECTION

    DEFF Research Database (Denmark)

    Carmignato, Simone; Savio, Enrico; De Chiffre, Leonardo

    2003-01-01

    ABSTRACT. This paper illustrates an experimental method to assess the accuracy of a three-dimensional (3D) vision system for the inspection of complex geometry. The aim is to provide a procedure to evaluate task-related measurement uncertainty for virtually any measurement task. The key element of the method is the use of a coordinate measuring machine (CMM) to supply reference measurements as the basis for realistic statements of measurement uncertainty. Since robust techniques to establish traceability in CMM measurements of complex geometry are available, a CMM-based approach is suitable...

  9. Systems Biology as an Integrated Platform for Bioinformatics, Systems Synthetic Biology, and Systems Metabolic Engineering

    Science.gov (United States)

    Chen, Bor-Sen; Wu, Chia-Chou

    2013-01-01

    Systems biology aims at achieving a system-level understanding of living organisms and applying this knowledge to various fields such as synthetic biology, metabolic engineering, and medicine. System-level understanding of living organisms can be derived from insight into: (i) system structure and the mechanism of biological networks such as gene regulation, protein interactions, signaling, and metabolic pathways; (ii) system dynamics of biological networks, which provides an understanding of stability, robustness, and transduction ability through system identification, and through system analysis methods; (iii) system control methods at different levels of biological networks, which provide an understanding of systematic mechanisms to robustly control system states, minimize malfunctions, and provide potential therapeutic targets in disease treatment; (iv) systematic design methods for the modification and construction of biological networks with desired behaviors, which provide system design principles and system simulations for synthetic biology designs and systems metabolic engineering. This review describes current developments in systems biology, systems synthetic biology, and systems metabolic engineering for engineering and biology researchers. We also discuss challenges and future prospects for systems biology and the concept of systems biology as an integrated platform for bioinformatics, systems synthetic biology, and systems metabolic engineering. PMID:24709875

  10. Systems Biology as an Integrated Platform for Bioinformatics, Systems Synthetic Biology, and Systems Metabolic Engineering

    Directory of Open Access Journals (Sweden)

    Bor-Sen Chen

    2013-10-01

    Full Text Available Systems biology aims at achieving a system-level understanding of living organisms and applying this knowledge to various fields such as synthetic biology, metabolic engineering, and medicine. System-level understanding of living organisms can be derived from insight into: (i) system structure and the mechanism of biological networks such as gene regulation, protein interactions, signaling, and metabolic pathways; (ii) system dynamics of biological networks, which provides an understanding of stability, robustness, and transduction ability through system identification, and through system analysis methods; (iii) system control methods at different levels of biological networks, which provide an understanding of systematic mechanisms to robustly control system states, minimize malfunctions, and provide potential therapeutic targets in disease treatment; (iv) systematic design methods for the modification and construction of biological networks with desired behaviors, which provide system design principles and system simulations for synthetic biology designs and systems metabolic engineering. This review describes current developments in systems biology, systems synthetic biology, and systems metabolic engineering for engineering and biology researchers. We also discuss challenges and future prospects for systems biology and the concept of systems biology as an integrated platform for bioinformatics, systems synthetic biology, and systems metabolic engineering.

  11. Systems biology as an integrated platform for bioinformatics, systems synthetic biology, and systems metabolic engineering.

    Science.gov (United States)

    Chen, Bor-Sen; Wu, Chia-Chou

    2013-10-11

    Systems biology aims at achieving a system-level understanding of living organisms and applying this knowledge to various fields such as synthetic biology, metabolic engineering, and medicine. System-level understanding of living organisms can be derived from insight into: (i) system structure and the mechanism of biological networks such as gene regulation, protein interactions, signaling, and metabolic pathways; (ii) system dynamics of biological networks, which provides an understanding of stability, robustness, and transduction ability through system identification, and through system analysis methods; (iii) system control methods at different levels of biological networks, which provide an understanding of systematic mechanisms to robustly control system states, minimize malfunctions, and provide potential therapeutic targets in disease treatment; (iv) systematic design methods for the modification and construction of biological networks with desired behaviors, which provide system design principles and system simulations for synthetic biology designs and systems metabolic engineering. This review describes current developments in systems biology, systems synthetic biology, and systems metabolic engineering for engineering and biology researchers. We also discuss challenges and future prospects for systems biology and the concept of systems biology as an integrated platform for bioinformatics, systems synthetic biology, and systems metabolic engineering.

  12. TECHNICAL VISION SYSTEM FOR THE ROBOTIC MODEL OF SURFACE VESSEL

    Directory of Open Access Journals (Sweden)

    V. S. Gromov

    2016-07-01

    Full Text Available The paper presents results of work on the creation of a technical vision system within a training complex for the verification of control systems on a model of a surface vessel. The developed system allows determination of the coordinates and orientation angle of the controlled object by means of an external video camera and a single reference mark, without the need to install additional equipment on the controlled object itself. Testing of the method was carried out on a robotic complex with a surface vessel model 430 mm in length; the coordinates of the controlled object were determined with an accuracy of 2 mm. The method can be applied as a coordinate-acquisition subsystem in automatic control systems for surface vessels during tests on scale models.

  13. Highly Scalable Monitoring System on Chip for Multi-Stream Auto-Adaptable Vision System

    OpenAIRE

    Isavudeen, Ali; Ngan, Nicolas; Dokladalova, Eva; Akil, Mohamed

    2017-01-01

    International audience; The integration of multiple, technologically heterogeneous sensors (infrared, color, etc.) in vision systems is becoming widespread. The objective is to benefit from multi-modal perception, improving the quality and robustness of challenging applications such as advanced driver assistance, 3-D vision, inspection systems or military observation equipment. However, the multiplication of heterogeneous processing pipelines makes the design of efficient com...

  14. Research into the Architecture of CAD Based Robot Vision Systems

    Science.gov (United States)

    1988-02-09

    Vision" and "Automatic Generation of Recognition Features for Computer Vision," Mudge, Turney and Volz, published in Robotica (1987). All of the...Occluded Parts," (T.N. Mudge, J.L. Turney, and R.A. Volz), Robotica, vol. 5, 1987, pp. 117-127. 5. "Vision Algorithms for Hypercube Machines," (T.N. Mudge

  15. Automatic Parking Based on a Bird's Eye View Vision System

    Directory of Open Access Journals (Sweden)

    Chunxiang Wang

    2014-03-01

    Full Text Available This paper aims at realizing an automatic parking method through a bird's eye view vision system. With this method, vehicles can perform robust, real-time detection and recognition of parking spaces. During the parking process, omnidirectional information about the environment is obtained using four on-board fisheye cameras around the vehicle, which are the main part of the bird's eye view vision system. To achieve this, a polynomial fisheye distortion model is first used for camera calibration. An image mosaicking method based on the Levenberg-Marquardt algorithm then combines the four individual fisheye camera images into one omnidirectional bird's eye view image. Secondly, features of the parking spaces are extracted with a Radon-transform-based method. Finally, double circular trajectory planning and a preview control strategy are utilized to realize autonomous parking. Experimental analysis shows that the proposed method achieves effective and robust real-time results in both parking space recognition and automatic parking.
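
    Polynomial fisheye models of the kind mentioned above map image points through a radial polynomial in the point's distance from the image center. The sketch below uses a generic form, r' = r(1 + k1·r² + k2·r⁴); the coefficients and the exact model order are hypothetical and not taken from the paper.

    ```python
    def apply_radial_model(x, y, k1, k2):
        """Map a normalized image point through a polynomial radial model.

        r' = r * (1 + k1*r^2 + k2*r^4): a generic radial distortion form;
        negative coefficients pull points inward (barrel-like distortion).
        """
        r2 = x * x + y * y
        s = 1.0 + k1 * r2 + k2 * r2 * r2
        return x * s, y * s
    ```

    Calibration fits k1 and k2 (plus intrinsics) so that, after applying the inverse mapping, straight lines in the scene appear straight in the rectified bird's eye view image.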

  16. Navigation system based on machine vision of multiple reference markers

    Science.gov (United States)

    Su, Xiaopeng; Dong, Wenbo; Wang, Zhenyu; Zhou, Yuanyuan

    2017-11-01

    The position and attitude measurement of a space object is a key problem in the fields of real-time navigation, modern control and motion tracking. As a non-contact method, machine-vision-based position and attitude estimation has the advantages of a simple structure and convenient measurement. This paper presents a vision positioning system and method based on multiple reference markers. A camera moving along the object continuously collects images containing the reference markers within its field of view. The spatial position information of each reference marker is determined in advance, and the position and direction of the moving target are calculated by a location and attitude algorithm. The main contributions of this paper are threefold: first, multiple reference markers are arranged within the range of the moving object so as to enlarge the range of visual positioning; second, when more than one reference marker appears in the field of view, positioning accuracy can be improved by selecting the marker with the larger contour area or the one closer to the principal point of the imaging plane; third, a decoder transforms each reference marker into a digital number. This method improves the robustness of the system.
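
    The marker-selection heuristic described above (prefer the larger contour area, break ties by distance to the principal point) can be sketched in a few lines. The marker record format here is hypothetical, not the paper's data structure:

    ```python
    def select_marker(markers, cx, cy):
        """Pick the best reference marker among several detections.

        markers : list of dicts with contour 'area' and centre 'x', 'y'
                  (hypothetical format)
        cx, cy  : principal point of the image

        Largest area wins; distance to the principal point breaks ties.
        """
        def rank(m):
            d2 = (m['x'] - cx) ** 2 + (m['y'] - cy) ** 2
            return (-m['area'], d2)
        return min(markers, key=rank)
    ```

    Sorting by the tuple (-area, distance²) encodes the two-level preference without any explicit branching.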

  17. Vision-Based SLAM System for Unmanned Aerial Vehicles

    Directory of Open Access Journals (Sweden)

    Rodrigo Munguía

    2016-03-01

    Full Text Available The present paper describes a vision-based simultaneous localization and mapping system to be applied to Unmanned Aerial Vehicles (UAVs). The main contribution of this work is to propose a novel estimator relying on an Extended Kalman Filter. The estimator is designed in order to fuse the measurements obtained from: (i) an orientation sensor (AHRS); (ii) a position sensor (GPS); and (iii) a monocular camera. The estimated state consists of the full state of the vehicle: position and orientation and their first derivatives, as well as the location of the landmarks observed by the camera. The position sensor will be used only during the initialization period in order to recover the metric scale of the world. Afterwards, the estimated map of landmarks will be used to perform a fully vision-based navigation when the position sensor is not available. Experimental results obtained with simulations and real data show the benefits of the inclusion of camera measurements into the system. In this sense the estimation of the trajectory of the vehicle is considerably improved, compared with the estimates obtained using only the measurements from the position sensor, which are commonly low-rated and highly noisy.

  18. Vision-Based SLAM System for Unmanned Aerial Vehicles.

    Science.gov (United States)

    Munguía, Rodrigo; Urzua, Sarquis; Bolea, Yolanda; Grau, Antoni

    2016-03-15

    The present paper describes a vision-based simultaneous localization and mapping system to be applied to Unmanned Aerial Vehicles (UAVs). The main contribution of this work is to propose a novel estimator relying on an Extended Kalman Filter. The estimator is designed in order to fuse the measurements obtained from: (i) an orientation sensor (AHRS); (ii) a position sensor (GPS); and (iii) a monocular camera. The estimated state consists of the full state of the vehicle: position and orientation and their first derivatives, as well as the location of the landmarks observed by the camera. The position sensor will be used only during the initialization period in order to recover the metric scale of the world. Afterwards, the estimated map of landmarks will be used to perform a fully vision-based navigation when the position sensor is not available. Experimental results obtained with simulations and real data show the benefits of the inclusion of camera measurements into the system. In this sense the estimation of the trajectory of the vehicle is considerably improved, compared with the estimates obtained using only the measurements from the position sensor, which are commonly low-rated and highly noisy.

  19. Design of Gear Defect Detection System Based on Machine Vision

    Science.gov (United States)

    Wang, Yu; Wu, Zhiheng; Duan, Xianyun; Tong, Jigang; Li, Ping; Chen, min; Lin, Qinglin

    2018-01-01

    In order to solve such problems as the low efficiency, low quality and instability of gear surface defect detection, we designed a detection system based on machine vision and sensor coupling. Through multi-sensor coupling, images of gear products are collected by a CCD camera and then analyzed and processed using VS2010 together with the Halcon library. Finally, the results are fed back to the control end, and rejected gears are removed to the collecting box. The system has successfully identified defective gears. The test results show that the system can identify and eliminate defective gears quickly and efficiently, meeting the automation requirements of a gear defect detection line, and has certain application value.
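
    A typical step in such an image-processing pipeline is binary object (blob) analysis: after thresholding, connected foreground regions are counted and measured to flag defects. The pure-Python connected-component count below is a minimal stand-in for what a library like Halcon provides, not the authors' code:

    ```python
    def count_defect_blobs(mask):
        """Count 4-connected foreground blobs in a binary image.

        mask is a list of lists of 0/1 values (hypothetical input format);
        each connected group of 1s is counted as one candidate defect.
        """
        rows, cols = len(mask), len(mask[0])
        seen = [[False] * cols for _ in range(rows)]
        blobs = 0
        for r in range(rows):
            for c in range(cols):
                if mask[r][c] and not seen[r][c]:
                    blobs += 1
                    stack = [(r, c)]          # iterative flood fill
                    while stack:
                        y, x = stack.pop()
                        if 0 <= y < rows and 0 <= x < cols \
                                and mask[y][x] and not seen[y][x]:
                            seen[y][x] = True
                            stack.extend([(y + 1, x), (y - 1, x),
                                          (y, x + 1), (y, x - 1)])
        return blobs
    ```

    In a real inspection line the blobs would additionally be filtered by area and shape before a gear is rejected.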

  20. SARUS: A Synthetic Aperture Real-time Ultrasound System.

    Science.gov (United States)

    Jensen, Jørgen Arendt; Holten-Lund, Hans; Nilsson, Ronnie Thorup; Hansen, Martin; Larsen, Ulrik Darling; Domsten, Rune Petter; Tomov, Borislav Gueorguiev; Stuart, Matthias Bo; Nikolov, Svetoslav Ivanov; Pihl, Michael Johannes; Du, Yigang; Rasmussen, Joachim Hee; Rasmussen, Morten Fischer

    2013-09-01

    The Synthetic Aperture Real-time Ultrasound System (SARUS) for acquiring and processing synthetic aperture (SA) data for research purposes is described. The specifications and design of the system are detailed, along with its performance for SA, nonlinear, and 3-D flow estimation imaging. SARUS acquires individual channel data simultaneously for up to 1024 transducer elements for a couple of heartbeats, and is capable of transmitting any kind of excitation. The 64 boards in the system house 16 transmit and 16 receive channels each, where sampled channel data can be stored in 2 GB of RAM and processed using five field-programmable gate arrays (FPGAs). The fully parametric focusing unit calculates delays and apodization values in real time in 3-D space and can produce 350 million complex samples per channel per second for full non-recursive synthetic aperture B-mode imaging at roughly 30 high-resolution images/s. Both RF element data and beamformed data can be stored in the system for later processing. The stored data can be transferred in parallel using the system's sixty-four 1-Gbit Ethernet interfaces at a theoretical rate of 3.2 GB/s to a 144-core Linux cluster.

  1. Vision-aided inertial navigation system for robotic mobile mapping

    Science.gov (United States)

    Bayoud, Fadi; Skaloud, Jan

    2008-04-01

    A mapping system by vision-aided inertial navigation was developed for areas where GNSS signals are unreachable. In this framework, a methodology on the integration of vision and inertial sensors is presented, analysed and tested. The system employs the method of "SLAM: Simultaneous Localisation And Mapping", where the only external input available to the system at the beginning of the mapping mission is a number of features with known coordinates. SLAM is a term used in the robotics community to describe the problem of mapping the environment and at the same time using this map to determine the location of the mapping device. Differing from the robotics approach, the presented development stems from the frameworks of photogrammetry and kinematic geodesy, which are merged in two filters that run in parallel: the Least-Squares Adjustment (LSA) for feature coordinate determination and the Kalman filter (KF) for navigation correction. To test this approach, a mapping system prototype comprising two CCD cameras and one Inertial Measurement Unit (IMU) is introduced. Conceptually, the outputs of the LSA photogrammetric resection are used as the external measurements for the KF that corrects the inertial navigation. The filtered position and orientation are subsequently employed in the photogrammetric intersection to map the surrounding features, which are used as control points for the resection in the next epoch. We confirm empirically the dependency of navigation performance on the quality of the images and the number of tracked features, as well as on the geometry of the stereo pair. Due to its autonomous nature, the SLAM performance is further affected by the quality of IMU initialisation and the a priori assumptions on error distribution. Using the example of the presented system we show that centimetre accuracy can be achieved in both navigation and mapping when the image geometry is optimal.

  2. Research situation and development trend of the binocular stereo vision system

    Science.gov (United States)

    Wang, Tonghao; Liu, Bingqi; Wang, Ying; Chen, Yichao

    2017-05-01

    Since the beginning of the 21st century, the development of computer and signal processing technology has given rise to a new comprehensive discipline called computer vision. Computer vision draws on a wide range of knowledge, including physics, mathematics, biology, computer technology and other subjects. It has become more and more powerful: it can not only replicate the "seeing" function of the human eye, but also accomplish tasks the human eye cannot. In recent years, binocular stereo vision, a main branch of computer vision, has become a focus of research in the field. In this paper, the present state of development and the applications of binocular stereo vision systems at home and abroad are summarized, current problems of such systems are discussed together with the authors' own opinions, and a prospective view of the future application and development of this technology is offered.

  3. Active vision in marmosets: a model system for visual neuroscience.

    Science.gov (United States)

    Mitchell, Jude F; Reynolds, John H; Miller, Cory T

    2014-01-22

    The common marmoset (Callithrix jacchus), a small-bodied New World primate, offers several advantages to complement vision research in larger primates. Studies in the anesthetized marmoset have detailed the anatomy and physiology of their visual system (Rosa et al., 2009) while studies of auditory and vocal processing have established their utility for awake and behaving neurophysiological investigations (Lu et al., 2001a,b; Eliades and Wang, 2008a,b; Osmanski and Wang, 2011; Remington et al., 2012). However, a critical unknown is whether marmosets can perform visual tasks under head restraint. This has been essential for studies in macaques, enabling both accurate eye tracking and head stabilization for neurophysiology. In one set of experiments we compared the free viewing behavior of head-fixed marmosets to that of macaques, and found that their saccadic behavior is comparable across a number of saccade metrics and that saccades target similar regions of interest including faces. In a second set of experiments we applied behavioral conditioning techniques to determine whether the marmoset could control fixation for liquid reward. Two marmosets could fixate a central point and ignore peripheral flashing stimuli, as needed for receptive field mapping. Both marmosets also performed an orientation discrimination task, exhibiting a saturating psychometric function with reliable performance and shorter reaction times for easier discriminations. These data suggest that the marmoset is a viable model for studies of active vision and its underlying neural mechanisms.

  4. Specifications of Standards in Systems and Synthetic Biology.

    Science.gov (United States)

    Schreiber, Falk; Bader, Gary D; Golebiewski, Martin; Hucka, Michael; Kormeier, Benjamin; Le Novère, Nicolas; Myers, Chris; Nickerson, David; Sommer, Björn; Waltemath, Dagmar; Weise, Stephan

    2015-09-04

    Standards shape our everyday life. From nuts and bolts to electronic devices and technological processes, standardised products and processes are all around us. Standards have technological and economic benefits, such as making information exchange, production, and services more efficient. However, novel, innovative areas often either lack proper standards, or documents about standards in these areas are not available from a centralised platform or formal body (such as the International Standardisation Organisation). Systems and synthetic biology is a relatively novel area, and it is only in the last decade that the standardisation of data, information, and models related to systems and synthetic biology has become a community-wide effort. Several open standards have been established and are under continuous development as a community initiative. COMBINE, the 'COmputational Modeling in BIology' NEtwork, has been established as an umbrella initiative to coordinate and promote the development of the various community standards and formats for computational models. There are two yearly meetings: HARMONY (Hackathons on Resources for Modeling in Biology), hackathon-type meetings with a focus on developing support for the standards, and COMBINE forums, workshop-style events with oral presentations, discussions, posters, and breakout sessions for further developing the standards. For more information see http://co.mbine.org/. So far the different standards have been published and made accessible through the standards' webpages or preprint services. The aim of this special issue is to provide a single, easily accessible and citable platform for the publication of standards in systems and synthetic biology. This special issue is intended to serve as a central access point to standards and related initiatives in systems and synthetic biology; it will be published annually to provide an opportunity for standard development groups to communicate updated specifications.

  5. An FPGA Implementation of a Robot Control System with an Integrated 3D Vision System

    Directory of Open Access Journals (Sweden)

    Yi-Ting Chen

    2015-05-01

    Full Text Available Robot decision making and motion control are commonly based on visual information in various applications. Position-based visual servoing is a technique for vision-based robot control which operates in the 3D workspace, uses real-time image processing to perform feature extraction, and returns the pose of the object for positioning control. In order to handle the computational burden of the vision sensor feedback, we design an FPGA-based motion-vision integrated system that employs dedicated hardware circuits for vision processing and motion control functions. This research conducts a preliminary study exploring the integration of 3D vision and robot motion control system design on a single field programmable gate array (FPGA) chip. The implemented motion-vision embedded system performs the following functions: filtering, image statistics, binary morphology, binary object analysis, object 3D position calculation, robot inverse kinematics, velocity profile generation, feedback counting, and multiple-axis position feedback control.
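
    Of the functions listed above, velocity profile generation is commonly implemented as a trapezoidal profile: accelerate, cruise at a velocity limit, then decelerate symmetrically. The sketch below illustrates that general technique in software; the paper implements its profile generator in FPGA hardware, and all parameters here are hypothetical.

    ```python
    def trapezoidal_profile(distance, v_max, accel, dt=0.01):
        """Sample velocities for a symmetric trapezoidal point-to-point move.

        distance : total travel (m), v_max : velocity limit (m/s),
        accel    : accel/decel magnitude (m/s^2), dt : sample period (s)
        """
        t_acc = v_max / accel
        d_acc = 0.5 * accel * t_acc ** 2
        if 2 * d_acc > distance:
            # Triangular case: the move is too short to ever reach v_max.
            t_acc = (distance / accel) ** 0.5
            v_peak = accel * t_acc
            t_flat = 0.0
        else:
            v_peak = v_max
            t_flat = (distance - 2 * d_acc) / v_max
        t_total = 2 * t_acc + t_flat
        samples, t = [], 0.0
        while t <= t_total:
            if t < t_acc:                       # acceleration ramp
                samples.append(accel * t)
            elif t < t_acc + t_flat:            # constant-velocity cruise
                samples.append(v_peak)
            else:                               # deceleration ramp
                samples.append(max(0.0, v_peak - accel * (t - t_acc - t_flat)))
            t += dt
        return samples
    ```

    The sampled velocities would then feed the position feedback loop as setpoints, one per control period.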

  6. Stereoscopic Machine-Vision System Using Projected Circles

    Science.gov (United States)

    Mackey, Jeffrey R.

    2010-01-01

    A machine-vision system capable of detecting obstacles large enough to damage or trap a robotic vehicle is undergoing development. The system includes (1) a pattern generator that projects concentric circles of laser light forward onto the terrain, (2) a stereoscopic pair of cameras that are aimed forward to acquire images of the circles, (3) a frame grabber and digitizer for acquiring image data from the cameras, and (4) a single-board computer that processes the data. The system is being developed as a prototype of machine-vision systems to enable robotic vehicles ("rovers") on remote planets to avoid craters, large rocks, and other terrain features that could capture or damage the vehicles. Potential terrestrial applications of systems like this one could include terrain mapping, collision avoidance, navigation of robotic vehicles, mining, and robotic rescue. This system is based partly on the same principles as those of a prior stereoscopic machine-vision system in which the cameras acquire images of a single stripe of laser light that is swept forward across the terrain. However, this system is designed to afford improvements over some of the undesirable features of the prior system, including the need for a pan-and-tilt mechanism to aim the laser to generate the swept stripe, ambiguities in interpretation of the single-stripe image, the time needed to sweep the stripe across the terrain and process the data from many images acquired during that time, and difficulty of calibration because of the narrowness of the stripe. In this system, the pattern generator does not contain any moving parts and need not be mounted on a pan-and-tilt mechanism: the pattern of concentric circles is projected steadily in the forward direction. The system calibrates itself by use of data acquired during projection of the concentric-circle pattern onto a known target representing flat ground. The calibration-target image data are stored in the computer memory for use as a
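
    Depth recovery in such a stereoscopic system ultimately rests on the standard rectified-stereo relation Z = f·B/d, where f is the focal length in pixels, B the camera baseline, and d the disparity between the two views of the same circle point. A minimal sketch of that relation (the numeric values are hypothetical, not the system's parameters):

    ```python
    def depth_from_disparity(focal_px, baseline_m, disparity_px):
        """Depth (m) from the rectified-stereo relation Z = f * B / d.

        focal_px     : focal length in pixels
        baseline_m   : distance between the two camera centres (m)
        disparity_px : horizontal offset of the matched point (pixels)
        """
        if disparity_px <= 0:
            raise ValueError("disparity must be positive")
        return focal_px * baseline_m / disparity_px
    ```

    Nearer terrain produces larger disparities, so where a projected circle crosses an obstacle its disparity, and hence its recovered depth, departs from the flat-ground calibration.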

  7. A Vision-Based Wireless Charging System for Robot Trophallaxis

    Directory of Open Access Journals (Sweden)

    Jae-O Kim

    2015-12-01

    Full Text Available The need to recharge the batteries of a mobile robot has long presented an important challenge. In this paper, a vision-based wireless charging method for energy trophallaxis between two robots is presented. Even though wireless power transmission tolerates more positional error between the receiver and transmitter coils than a contact-type charging system, both coils have to be aligned as accurately as possible for efficient power transfer. To align the coils, a transmitter robot recognizes the coarse pose of a receiver robot via a camera image, and the ambiguity of the estimated pose is removed with a Bayesian estimator. The precise pose of the receiver coil is calculated using a marker image attached to the receiver robot. Experiments with several types of receiver robots have been conducted to verify the proposed method.

  8. THE PHENOMENON OF EUROPEAN MUSICAL ROMANTICISM IN SYSTEMIC RESEARCH VISION

    Directory of Open Access Journals (Sweden)

    FLOREA AUGUSTINA

    2015-09-01

    Full Text Available Romanticism – the European cultural-artistic phenomenon of the 19th century, which developed in various fields of philosophy, literature and the arts and, in terms of its amplitude and universality, marked that century as a Romantic Era – is promoted in the most pointed manner in musical art. The research of musical Romanticism – in its conceptual, aesthetic and musical aspects – can be achieved only on the basis of a systemic vision, which entails the necessity of a study of synthesis. Such a study will integrate the investigation of all the above-mentioned aspects into a single process and will take place at the intersection of different scientific domains: aesthetics and musical aesthetics, historical and theoretical musicology, and the history and theory of interpretative art.

  9. Visual tracking in stereo. [by computer vision system

    Science.gov (United States)

    Saund, E.

    1981-01-01

    A method is described for visual object tracking by a computer vision system using TV cameras and special low-level image processing hardware. The tracker maintains an internal model of the location, orientation, and velocity of the object in three-dimensional space. This model is used to predict where features of the object will lie on the two-dimensional images produced by stereo TV cameras. The differences in the locations of features in the two-dimensional images as predicted by the internal model and as actually seen create an error signal in the two-dimensional representation. This is multiplied by a generalized inverse Jacobian matrix to deduce the error in the internal model. The procedure repeats to update the internal model of the object's location, orientation and velocity continuously.
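    The correction step described above (the 2-D image-plane error mapped through a generalized inverse Jacobian into a model-state update) can be sketched minimally. Here the state is reduced to two variables so that the generalized inverse is an ordinary 2×2 inverse; the Jacobian entries and error values are illustrative, not from the paper.

```python
def solve_2x2(J, err):
    """Solve J * dx = err for a 2x2 Jacobian via Cramer's rule.
    For a square, non-singular J the generalized inverse reduces
    to the ordinary inverse applied here."""
    (a, b), (c, d) = J
    det = a * d - b * c
    assert abs(det) > 1e-12, "Jacobian is singular"
    ex, ey = err
    return ((d * ex - b * ey) / det, (a * ey - c * ex) / det)

J = ((2.0, 0.5),   # d(image)/d(state), assumed known from the model
     (0.0, 1.0))
image_error = (1.0, 0.4)                 # predicted minus observed, pixels
state_correction = solve_2x2(J, image_error)
print(state_correction)                  # approximately (0.4, 0.4)
```

    The tracker would subtract this correction from its internal model and repeat the predict-compare-correct cycle each frame.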

  10. Critical mm-wave components for synthetic automatic test systems

    CERN Document Server

    Hrobak, Michael

    2015-01-01

    Michael Hrobak studied hybrid integrated front end modules for high frequency measurement equipment and especially for synthetic automatic test systems. Recent developments of innovative, critical millimeter-wave components like frequency multipliers, directional couplers, filters, triple balanced mixers and power detectors are illustrated by the author separately and in combination. Contents: Synthetic Instruments; Resistive Diode Frequency Multipliers; Planar Directional Couplers and Filters; Triple Balanced Mixers; Zero Bias Schottky Power Detectors; Integrated Front End Assemblies. Target Groups: Scientists and students in the field of electrical engineering with main emphasis on high frequency technology; engineers and practitioners dealing with the development of micro- and millimeter-wave measurement instruments. About the Author: Dr. Michael Hrobak is with the Microwave Department of the Ferdinand-Braun-Institut (FBH), Berlin, Germany, where he is involved in the development and measurement of monolithic i...

  11. Expert System Architecture for Rocket Engine Numerical Simulators: A Vision

    Science.gov (United States)

    Mitra, D.; Babu, U.; Earla, A. K.; Hemminger, Joseph A.

    1998-01-01

    Simulation of any complex physical system, like a rocket engine, involves modeling the behavior of its different components, mostly using numerical equations. Typically a simulation package contains a set of subroutines for these modeling purposes and others for supporting tasks. A user creates an input file configuring a system (part or whole of a rocket engine to be simulated) in a format understandable by the package and runs it to create an executable module corresponding to the simulated system. This module is then run on a given set of input parameters in another file. Simulation jobs are mostly done for performance measurement of a designed system, but can also be used for failure analysis or for design tasks such as inverse problems. To use any such package, the user needs to learn a great deal about the software architecture of the package, apart from being knowledgeable in the target domain. We are currently involved in a project to design an intelligent executive module for rocket engine simulation packages that would free users from this burden of acquiring knowledge of a particular software system. The extended abstract presented here describes the vision, methodology, and problems encountered in the project. We are employing object-oriented technology in designing the executive module. The problem is connected to areas like reverse engineering of simulation software and intelligent systems for simulation.

  12. A Novel Vision Sensing System for Tomato Quality Detection.

    Science.gov (United States)

    Srivastava, Satyam; Boyat, Sachin; Sadistap, Shashikant

    2014-01-01

    Producing tomatoes is a daunting task, as the crop is exposed to attacks from various microorganisms. The symptoms of these attacks usually include changes in color, bacterial spots, specks, and sunken areas with concentric rings of different colors on the tomato's outer surface. This paper addresses a vision-sensing-based system for tomato quality inspection. A novel approach has been developed for tomato fruit detection and disease detection. The developed system consists of a 12.0-megapixel USB camera module interfaced with an ARM-9 processor. A Zigbee module has been interfaced with the system for wireless transmission from the host system to a PC-based server for further processing. Algorithm development consists of three major steps: preprocessing (noise rejection, segmentation, and scaling), classification and recognition, and automatic disease detection and classification. Tomato samples were collected from a local market, and data acquisition was performed to prepare a database for the various processing steps. The developed system can detect as well as classify various diseases in tomato samples. Various pattern recognition and soft computing techniques have been implemented for data analysis and for prediction of parameters such as tomato shelf life, a quality index based on disease detection and classification, freshness, maturity index, and suggestions for the detected diseases. Results were validated against an aroma-sensing technique using a commercial Alpha Mos 3000 system. The accuracy calculated from the extracted results is around 92%.
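    The segmentation step of such a pipeline can be sketched at its simplest as colour-distance thresholding against a healthy reference colour; the reference RGB value and threshold below are placeholders, not values from the paper.

```python
def segment_defects(pixels, reference=(200, 40, 30), threshold=60.0):
    """Return a boolean mask: True where a pixel's colour deviates from
    the healthy reference by more than the threshold (Euclidean RGB
    distance). Reference colour and threshold are invented examples."""
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    return [dist(p, reference) > threshold for p in pixels]

sample = [(198, 42, 33),    # healthy red surface
          (90, 80, 25),     # dark sunken spot
          (205, 45, 28)]
print(segment_defects(sample))  # [False, True, False]
```

    A real system would follow this with connected-component analysis and a trained classifier to label each flagged region as a specific disease.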

  13. A Machine Vision System for Automatically Grading Hardwood Lumber - (Industrial Metrology)

    Science.gov (United States)

    Richard W. Conners; Tai-Hoon Cho; Chong T. Ng; Thomas T. Drayer; Philip A. Araman; Robert L. Brisbon

    1992-01-01

    Any automatic system for grading hardwood lumber can conceptually be divided into two components. One of these is a machine vision system for locating and identifying grading defects. The other is an automatic grading program that accepts as input the output of the machine vision system and, based on these data, determines the grade of a board. The progress that has...

  14. X-Eye: a novel wearable vision system

    Science.gov (United States)

    Wang, Yuan-Kai; Fan, Ching-Tang; Chen, Shao-Ang; Chen, Hou-Ye

    2011-03-01

    This paper proposes a smart portable device, named the X-Eye, which provides a gesture interface of small size but with a large display, for the application of photo capture and management. The wearable vision system is implemented on embedded hardware and achieves real-time performance. The hardware includes an asymmetric dual-core processor with an ARM core and a DSP core. The display device is a pico projector, small in volume but able to project a large screen. A triple-buffering mechanism is designed for efficient memory management, and software functions are partitioned and pipelined for effective parallel execution. Gesture recognition is achieved first by color classification based on the expectation-maximization algorithm and a Gaussian mixture model (GMM). To improve the performance of the GMM, a LUT (look-up table) technique is devised. Fingertips are then extracted, and geometrical features of the fingertip shape are matched to recognize the user's gesture commands. To verify the accuracy of the gesture recognition module, experiments were conducted in eight scenes with 400 test videos, including the challenges of colorful backgrounds, low illumination, and flickering. The whole system, including gesture recognition, runs at a frame rate of 22.9 fps, and the experiments give a 99% recognition rate. These results demonstrate that this small-size, large-screen wearable system provides an effective gesture interface with real-time performance.
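    The LUT speed-up described for the GMM colour classifier can be sketched as follows: likelihoods are precomputed once for every quantized colour, so per-pixel classification becomes a table lookup instead of an exponential evaluation. A single 1-D Gaussian on one channel stands in for the full colour GMM here, and the mean, variance, and threshold are invented.

```python
import math

def gaussian(x, mean, var):
    """1-D Gaussian density; a stand-in for a full GMM component."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

# Build a 32-entry LUT over a quantized channel (8 intensity levels per bin).
MEAN, VAR, DENSITY_THRESH = 180.0, 400.0, 1e-3   # invented skin-colour model
lut = [gaussian(bin_idx * 8 + 4, MEAN, VAR) > DENSITY_THRESH
       for bin_idx in range(32)]

def is_skin(value):
    """O(1) per-pixel classification: one shift-like divide and a lookup."""
    return lut[value // 8]

print(is_skin(184), is_skin(30))  # True False
```

    The same idea extends to a 3-D RGB LUT indexed by all three quantized channels, trading memory for a large per-frame speed gain.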

  15. Vision for an Open, Global Greenhouse Gas Information System (GHGIS)

    Science.gov (United States)

    Duren, R. M.; Butler, J. H.; Rotman, D.; Ciais, P.; Greenhouse Gas Information System Team

    2010-12-01

    Over the next few years, an increasing number of entities ranging from international, national, and regional governments, to businesses and private land-owners, are likely to become more involved in efforts to limit atmospheric concentrations of greenhouse gases. In such a world, geospatially resolved information about the location, amount, and rate of greenhouse gas (GHG) emissions will be needed, as well as the stocks and flows of all forms of carbon through the earth system. The ability to implement policies that limit GHG concentrations would be enhanced by a global, open, and transparent greenhouse gas information system (GHGIS). An operational and scientifically robust GHGIS would combine ground-based and space-based observations, carbon-cycle modeling, GHG inventories, synthesis analysis, and an extensive data integration and distribution system, to provide information about anthropogenic and natural sources, sinks, and fluxes of greenhouse gases at temporal and spatial scales relevant to decision making. The GHGIS effort was initiated in 2008 as a grassroots inter-agency collaboration intended to identify the needs for such a system, assess the capabilities of current assets, and suggest priorities for future research and development. We will present a vision for an open, global GHGIS including latest analysis of system requirements, critical gaps, and relationship to related efforts at various agencies, the Group on Earth Observations, and the Intergovernmental Panel on Climate Change.

  16. A future vision of nuclear material information systems

    International Nuclear Information System (INIS)

    Suski, N.; Wimple, C.

    1999-01-01

    To address the current and future needs for nuclear materials management and safeguards information, Lawrence Livermore National Laboratory envisions an integrated nuclear information system that will support several functions. The vision is to link distributed information systems via a common communications infrastructure designed to address the information interdependencies between two major elements: Domestic, with information about specific nuclear materials and their properties, and International, with information pertaining to foreign nuclear materials, facility design and operations. The communication infrastructure will enable data consistency, validation and reconciliation, as well as provide a common access point and user interface for a broad range of nuclear materials information. Information may be transmitted to, from, and within the system by a variety of linkage mechanisms, including the Internet. Strict access control will be employed as well as data encryption and user authentication to provide the necessary information assurance. The system can provide a mechanism not only for data storage and retrieval, but will eventually provide the analytical tools necessary to support the U.S. government's nuclear materials management needs and non-proliferation policy goals

  17. CT Image Sequence Analysis for Object Recognition - A Rule-Based 3-D Computer Vision System

    Science.gov (United States)

    Dongping Zhu; Richard W. Conners; Daniel L. Schmoldt; Philip A. Araman

    1991-01-01

    Research is now underway to create a vision system for hardwood log inspection using a knowledge-based approach. In this paper, we present a rule-based, 3-D vision system for locating and identifying wood defects using topological, geometric, and statistical attributes. A number of different features can be derived from the 3-D input scenes. These features and evidence...

  18. New vision solar system mission study. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Mondt, J.F.; Zubrin, R.M.

    1996-03-01

    The vision for the future of the planetary exploration program includes the capability to deliver "constellations" or "fleets" of microspacecraft to a planetary destination. These fleets will act in a coordinated manner to gather science data from a variety of locations on or around the target body, thus providing detailed, global coverage without requiring development of a single large, complex and costly spacecraft. Such constellations of spacecraft, coupled with advanced information processing and visualization techniques and high-rate communications, could provide the basis for development of a "virtual presence" in the solar system. A goal could be the near real-time delivery of planetary images and video to a wide variety of users in the general public and the science community. This will be a major step in making the solar system accessible to the public and will help make solar system exploration a part of the human experience on Earth.

  19. Molten salt parabolic trough system with synthetic oil preheating

    Science.gov (United States)

    Yuasa, Minoru; Hino, Koichi

    2017-06-01

    A molten salt parabolic trough system (MSPT), which can heat the heat transfer fluid (HTF) to 550 °C, performs better than a synthetic oil parabolic trough system (SOPT), which can heat the HTF to 400 °C or less. Using the HTF at a higher temperature in a parabolic trough system allows a smaller storage tank and higher heat-to-electricity conversion efficiency. However, an MSPT loses a great amount of heat at night, so the HTF must be circulated at a high temperature of about 290 °C to prevent solidification. A new MSPT concept with SOPT preheating (MSSOPT) has been developed to reduce the heat loss at night. In this paper, the MSSOPT system, its performance by steady-state analysis, and its annual performance analysis are introduced.

  20. Vision-Based Cooperative Pose Estimation for Localization in Multi-Robot Systems Equipped with RGB-D Cameras

    Directory of Open Access Journals (Sweden)

    Xiaoqin Wang

    2014-12-01

    Full Text Available We present a new vision-based cooperative pose estimation scheme for systems of mobile robots equipped with RGB-D cameras. We first model a multi-robot system as an edge-weighted graph. Then, based on this model and using real-time color and depth data, robots with shared fields of view estimate their relative poses pairwise. The system does not require a single common view shared by all robots, and it works in 3D scenes without any specific calibration pattern or landmark. The proposed scheme distributes working loads evenly in the system, so it is scalable and the computing power of the participating robots is efficiently used. The performance and robustness were analyzed on both synthetic and experimental data in different environments over a range of system configurations with varying numbers of robots and poses.
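    The graph model can be sketched as pairwise relative poses composed along edges to express every robot in a common frame. The 2-D poses (x, y, theta) and the edge values below are invented for illustration; the paper works with full 3-D poses.

```python
import math
from collections import deque

def compose(p, q):
    """Compose 2-D rigid poses: pose of q expressed in p's parent frame."""
    x, y, t = p
    qx, qy, qt = q
    return (x + qx * math.cos(t) - qy * math.sin(t),
            y + qx * math.sin(t) + qy * math.cos(t),
            t + qt)

def localize(edges, root):
    """edges: {(a, b): relative pose of b in a's frame}. Breadth-first
    composition along graph edges localizes all reachable robots."""
    adj = {}
    for (a, b), rel in edges.items():
        adj.setdefault(a, []).append((b, rel))
    poses, queue = {root: (0.0, 0.0, 0.0)}, deque([root])
    while queue:
        a = queue.popleft()
        for b, rel in adj.get(a, []):
            if b not in poses:
                poses[b] = compose(poses[a], rel)
                queue.append(b)
    return poses

edges = {("r0", "r1"): (1.0, 0.0, math.pi / 2),   # invented pairwise estimates
         ("r1", "r2"): (2.0, 0.0, 0.0)}
poses = localize(edges, "r0")
print(poses["r2"])   # r2 sits near (1, 2), facing +90 degrees
```

    No single robot needs to see all the others: any connected chain of shared views suffices, which is the property the abstract highlights.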

  1. Honey characterization using computer vision system and artificial neural networks.

    Science.gov (United States)

    Shafiee, Sahameh; Minaei, Saeid; Moghaddam-Charkari, Nasrollah; Barzegar, Mohsen

    2014-09-15

    This paper reports the development of a computer vision system (CVS) for non-destructive characterization of honey based on colour and its correlated chemical attributes including ash content (AC), antioxidant activity (AA), and total phenolic content (TPC). Artificial neural network (ANN) models were applied to transform RGB values of images to CIE L*a*b* colourimetric measurements and to predict AC, TPC and AA from colour features of images. The developed ANN models were able to convert RGB values to CIE L*a*b* colourimetric parameters with low generalization error of 1.01±0.99. In addition, the developed models for prediction of AC, TPC and AA showed high performance based on colour parameters of honey images, as the R² values for prediction were 0.99, 0.98, and 0.87, for AC, AA and TPC, respectively. The experimental results show the effectiveness and possibility of applying CVS for non-destructive honey characterization by the industry. Copyright © 2014 Elsevier Ltd. All rights reserved.
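    The RGB-to-CIE L*a*b* mapping that the ANN models learn has a standard closed-form reference (sRGB primaries, D65 white point), useful as a sanity check for such a pipeline:

```python
def srgb_to_lab(r, g, b):
    """Reference sRGB (D65) -> CIE L*a*b* conversion; this closed-form
    mapping is what a camera-specific ANN would approximate from data."""
    def lin(c):  # undo sRGB gamma
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    rl, gl, bl = lin(r), lin(g), lin(b)
    # linear RGB -> CIE XYZ (sRGB matrix, D65)
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    def f(t):
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

L, a, b = srgb_to_lab(255, 255, 255)   # white -> L* ~ 100, a* and b* ~ 0
print(round(L, 1))  # 100.0
```

    An ANN is still needed in practice because real camera responses deviate from the ideal sRGB model under uncontrolled illumination.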

  2. Range-Image Acquisition for Discriminated Objects in a Range-gated Robot Vision System

    Energy Technology Data Exchange (ETDEWEB)

    Park, Seung-Kyu; Ahn, Yong-Jin; Park, Nak-Kyu; Baik, Sung-Hoon; Choi, Young-Soo; Jeong, Kyung-Min [KAERI, Daejeon (Korea, Republic of)

    2015-05-15

    The ability of a surveillance vision system to capture images in harsh, low-visibility environments such as fire and detonation areas is a key function for monitoring the safety of facilities. 2D and range image data acquired from low-visibility environments are important for assessing safety and preparing appropriate countermeasures. Passive vision systems, such as conventional cameras and binocular stereo vision systems, usually cannot acquire image information when the reflected light is highly scattered and absorbed by airborne particles such as fog. In addition, the resolution of images captured through low-density airborne particles is decreased because the image is blurred and dimmed by scattering, emission, and absorption. Active vision systems, such as structured-light vision and projected stereo vision, are usually more robust in harsh environments than passive vision systems; however, their performance decreases considerably in proportion to the density of the particles. The RGI system provides 2D and range image data from several RGI images and, moreover, provides clear images of low-visibility fog and smoke environments by using the sum of time-sliced images. Nowadays, range-gated (RG) imaging is an emerging technology in the field of surveillance for security applications, especially in the visualization of invisible night and fog environments. Although RGI viewing was discovered in the 1960s, this technology is nowadays becoming more applicable by virtue of the rapid development of optical and sensor technologies. In particular, this system can be adopted in robot vision systems by virtue of its compact, portable configuration. In contrast to passive vision systems, this technology enables operation even in harsh environments like fog and smoke. During the past decades, several applications of this technology have been demonstrated in target recognition and in harsh environments, such as fog and underwater vision. Also, this technology has been

  3. Range-Image Acquisition for Discriminated Objects in a Range-gated Robot Vision System

    International Nuclear Information System (INIS)

    Park, Seung-Kyu; Ahn, Yong-Jin; Park, Nak-Kyu; Baik, Sung-Hoon; Choi, Young-Soo; Jeong, Kyung-Min

    2015-01-01

    The ability of a surveillance vision system to capture images in harsh, low-visibility environments such as fire and detonation areas is a key function for monitoring the safety of facilities. 2D and range image data acquired from low-visibility environments are important for assessing safety and preparing appropriate countermeasures. Passive vision systems, such as conventional cameras and binocular stereo vision systems, usually cannot acquire image information when the reflected light is highly scattered and absorbed by airborne particles such as fog. In addition, the resolution of images captured through low-density airborne particles is decreased because the image is blurred and dimmed by scattering, emission, and absorption. Active vision systems, such as structured-light vision and projected stereo vision, are usually more robust in harsh environments than passive vision systems; however, their performance decreases considerably in proportion to the density of the particles. The RGI system provides 2D and range image data from several RGI images and, moreover, provides clear images of low-visibility fog and smoke environments by using the sum of time-sliced images. Nowadays, range-gated (RG) imaging is an emerging technology in the field of surveillance for security applications, especially in the visualization of invisible night and fog environments. Although RGI viewing was discovered in the 1960s, this technology is nowadays becoming more applicable by virtue of the rapid development of optical and sensor technologies. In particular, this system can be adopted in robot vision systems by virtue of its compact, portable configuration. In contrast to passive vision systems, this technology enables operation even in harsh environments like fog and smoke. During the past decades, several applications of this technology have been demonstrated in target recognition and in harsh environments, such as fog and underwater vision. Also, this technology has been

  4. Experimental ultrasound system for real-time synthetic imaging

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt; Holm, Ole; Jensen, Lars Joost

    1999-01-01

    Digital signal processing is being employed more and more in modern ultrasound scanners. This has made it possible to do dynamic receive focusing for each sample and implement other advanced imaging methods. The processing, however, has to be very fast and cost-effective at the same time. Dedicated … chips are used in order to do real time processing. This often makes it difficult to implement radically different imaging strategies on one platform and makes the scanners less accessible for research purposes. Here flexibility is the prime concern, and the storage of data from all transducer elements … over 5 to 10 seconds is needed to perform clinical evaluation of synthetic and 3D imaging. This paper describes a real-time system specifically designed for research purposes. The purpose of the system is to make it possible to acquire multi-channel data in real-time from clinical multi …

  5. Robot vision system R and D for ITER blanket remote-handling system

    Energy Technology Data Exchange (ETDEWEB)

    Maruyama, Takahito, E-mail: maruyama.takahito@jaea.go.jp [Japan Atomic Energy Agency, Fusion Research and Development Directorate, Naka, Ibaraki-ken 311-0193 (Japan); Aburadani, Atsushi; Takeda, Nobukazu; Kakudate, Satoshi; Nakahira, Masataka [Japan Atomic Energy Agency, Fusion Research and Development Directorate, Naka, Ibaraki-ken 311-0193 (Japan); Tesini, Alessandro [ITER Organization, Route de Vinon sur Verdon, 13115 St Paul Lez Durance (France)

    2014-10-15

    For regular maintenance of the International Thermonuclear Experimental Reactor (ITER), a system called the ITER blanket remote-handling system is necessary to remotely handle the blanket modules because of the high levels of gamma radiation. Modules will be handled by robotic power manipulators and they must have a non-contact-sensing system for installing and grasping to avoid contact with other modules. A robot vision system that uses cameras was adopted for this non-contact-sensing system. Experiments for grasping modules were carried out in a dark room to simulate the environment inside the vacuum vessel and the robot vision system's measurement errors were studied. As a result, the accuracy of the manipulator's movements was within 2.01 mm and 0.31°, which satisfies the system requirements. Therefore, it was concluded that this robot vision system is suitable for the non-contact-sensing system of the ITER blanket remote-handling system.

  6. Robot vision system R and D for ITER blanket remote-handling system

    International Nuclear Information System (INIS)

    Maruyama, Takahito; Aburadani, Atsushi; Takeda, Nobukazu; Kakudate, Satoshi; Nakahira, Masataka; Tesini, Alessandro

    2014-01-01

    For regular maintenance of the International Thermonuclear Experimental Reactor (ITER), a system called the ITER blanket remote-handling system is necessary to remotely handle the blanket modules because of the high levels of gamma radiation. Modules will be handled by robotic power manipulators and they must have a non-contact-sensing system for installing and grasping to avoid contact with other modules. A robot vision system that uses cameras was adopted for this non-contact-sensing system. Experiments for grasping modules were carried out in a dark room to simulate the environment inside the vacuum vessel and the robot vision system's measurement errors were studied. As a result, the accuracy of the manipulator's movements was within 2.01 mm and 0.31°, which satisfies the system requirements. Therefore, it was concluded that this robot vision system is suitable for the non-contact-sensing system of the ITER blanket remote-handling system

  7. Novel directions in molecular systems design: The case of light-transducing synthetic cells

    OpenAIRE

    Stano, Pasquale; Altamura, Emiliano; Mavelli, Fabio

    2017-01-01

    ABSTRACT Important progress has been achieved in the past years in the field of bottom-up synthetic biology, especially aiming at constructing cell-like systems based on lipid vesicles (liposomes) entrapping biomolecules or synthetic compounds. These “synthetic cells” mimic the behaviour of biological cells but are constituted of a minimal number of components. One key aspect of this research is the energetic needs of synthetic cells. Up to now, high-energy compounds have been...

  8. Target detect system in 3D using vision apply on plant reproduction by tissue culture

    Science.gov (United States)

    Vazquez Rueda, Martin G.; Hahn, Federico

    2001-03-01

    This paper presents preliminary results for a three-dimensional system that uses vision to manipulate plants in a tissue-culture process. The system estimates the position of a plant in the work area: it first calculates the position and sends the information to the mechanical system, then recalculates the position and, if necessary, repositions the mechanical system, using a neural network to improve the localization of the plant. The system uses only vision to sense position, closing the control loop with a neural network that detects the target and positions the mechanical system; the results are compared with an open-loop system.

  9. Computer vision and imaging in intelligent transportation systems

    CERN Document Server

    Bala, Raja; Trivedi, Mohan

    2017-01-01

    Acts as a single source reference providing readers with an overview of how computer vision can contribute to the different applications in the field of road transportation. This book presents a survey of computer vision techniques related to three key broad problems in the roadway transportation domain: safety, efficiency, and law enforcement. The individual chapters present significant applications within these problem domains, each presented in a tutorial manner, describing the motivation for and benefits of the application, and a description of the state of the art.

  10. Ping-Pong Robotics with High-Speed Vision System

    DEFF Research Database (Denmark)

    Li, Hailing; Wu, Haiyan; Lou, Lei

    2012-01-01

    The performance of vision-based control is usually limited by the low sampling rate of the visual feedback. We address Ping-Pong robotics as a widely studied example which requires high-speed vision for highly dynamic motion control. In order to detect a flying ball accurately and robustly … of the manipulator are updated iteratively with decreasing error. Experiments are conducted on a 7-degrees-of-freedom humanoid robot arm. Successful Ping-Pong play between the robot arm and a human is achieved with a high success rate of 88%.

  11. A Real-Time Embedded System for Stereo Vision Preprocessing Using an FPGA

    DEFF Research Database (Denmark)

    Kjær-Nielsen, Anders; Jensen, Lars Baunegaard With; Sørensen, Anders Stengaard

    2008-01-01

    In this paper a low level vision processing node for use in existing IEEE 1394 camera setups is presented. The processing node is a small embedded system that utilizes an FPGA to perform stereo vision preprocessing at rates limited by the bandwidth of IEEE 1394a (400 Mbit). The system is used … extraction, and undistortion and rectification. The latency of the system when running at 2×15 fps is 30 ms.
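    A quick feasibility check of the IEEE 1394a (400 Mbit/s) bound quoted in the abstract; the camera resolution below is an assumption, since the abstract does not state it.

```python
# Raw bandwidth of a hypothetical stereo pair of 640x480, 8-bit
# monochrome cameras, each streaming at 15 fps (the rate quoted
# as 2x15 fps in the abstract; the resolution is assumed).
BUS_MBIT = 400
frame_bits = 640 * 480 * 8
stereo_rate_mbit = 2 * 15 * frame_bits / 1e6
print(round(stereo_rate_mbit, 1), stereo_rate_mbit < BUS_MBIT)  # 73.7 True
```

    Under these assumptions the raw stream fits comfortably within the 400 Mbit/s bus, leaving headroom for isochronous packet overhead and the extracted feature data.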

  12. Why Synthetic Fuels Are Necessary in Future Energy Systems

    Directory of Open Access Journals (Sweden)

    I. A. Grant Wilson

    2017-07-01

    Full Text Available We propose a hypothesis that fuels will continue to be critical elements of future energy systems. The reasons behind this are explored, such as the immense benefits conferred by fuels from their low cost of storage, transport, and handling, and especially in the management of the seasonal swing in heating demand for a country with a summer and winter season such as the UK. Empirical time-series data from Great Britain are used to examine the seasonal nature of the demand for liquid fuels, natural gas, and electricity, with the aid of a daily Shared Axis Energy Diagram. The logic of the continued need of fuels is examined, and the advantages and disadvantages of synthetic fuels are considered in comparison to fossil fuels.

  13. USING VISION METROLOGY SYSTEM FOR QUALITY CONTROL IN AUTOMOTIVE INDUSTRIES

    Directory of Open Access Journals (Sweden)

    N. Mostofi

    2012-07-01

    Full Text Available The need for more accurate measurements at different stages of industrial applications, such as design, production, and installation, is the main reason industry has been encouraged to adopt industrial photogrammetry (Vision Metrology Systems). Given the main advantages of photogrammetric methods, such as greater economy, a high level of automation, non-contact measurement capability, more flexibility, and high accuracy, this method competes well with traditional industrial methods. For industries that make objects from a main reference model without any mathematical model of it, the producer's main problem is evaluation of the production line. This problem becomes more complicated when both the reference and the product exist only as physical objects, and comparison is possible only by direct measurement. In such cases, producers make fixtures that fit the reference with limited accuracy; in practical reports, the available precision is sometimes no better than millimetres. We used a non-metric, high-resolution digital camera for this investigation, and the case study in this paper is an automobile chassis. In this research, a stable photogrammetric network was designed for measuring the industrial object (both reference and product), and the differences between the reference and product objects were then obtained using bundle adjustment and self-calibration methods. These differences are useful for the producer to improve the production workflow and deliver more accurate products. The results of this research demonstrate the high potential of the proposed method in industrial fields and prove its efficiency and reliability using the RMSE criterion. The RMSE achieved for this case study is smaller than 200 microns, demonstrating the high capability of the implemented approach.

  14. Using Vision Metrology System for Quality Control in Automotive Industries

    Science.gov (United States)

    Mostofi, N.; Samadzadegan, F.; Roohy, Sh.; Nozari, M.

    2012-07-01

    The need for more accurate measurements at different stages of industrial work, such as design, production, and installation, is the main reason industry has adopted industrial photogrammetry (Vision Metrology Systems). Given the principal advantages of photogrammetric methods, such as greater economy, a high level of automation, non-contact measurement, flexibility, and high accuracy, the method competes well with traditional industrial measurement techniques. In industries that manufacture objects from a master reference model without any mathematical model of it, the producer's main problem is evaluating the production line. The problem is especially difficult when both the reference and the product exist only as physical objects, so they can be compared only by direct measurement. In such cases, producers build fixtures that fit the reference with limited accuracy; in practical reports, the available precision is sometimes no better than millimetres. We used a non-metric, high-resolution digital camera for this investigation, and the case study examined in this paper is an automobile chassis. A stable photogrammetric network was designed for measuring the industrial object (both reference and product), and the differences between the reference and product objects were obtained using bundle adjustment and self-calibration methods. These differences help the producer improve the production workflow and deliver more accurate products. The results demonstrate the high potential of the proposed method in industrial settings and confirm its efficiency and reliability in terms of RMSE. The RMSE achieved in this case study is smaller than 200 microns, which demonstrates the capability of the implemented approach.
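
    The reference-versus-product comparison above reduces to a point-set error metric once the photogrammetric network has produced matched 3D coordinates. A minimal sketch of the RMSE criterion follows; the coordinates are made up for illustration, not taken from the paper:

```python
import math

def rmse(reference, product):
    """Root-mean-square error between matched 3D point sets (same order)."""
    assert len(reference) == len(product)
    sq = [
        (rx - px) ** 2 + (ry - py) ** 2 + (rz - pz) ** 2
        for (rx, ry, rz), (px, py, pz) in zip(reference, product)
    ]
    return math.sqrt(sum(sq) / len(sq))

# Hypothetical target coordinates in millimetres.
ref = [(0.0, 0.0, 0.0), (100.0, 0.0, 0.0), (100.0, 50.0, 10.0)]
prod = [(0.05, -0.02, 0.01), (100.1, 0.03, -0.04), (99.95, 50.06, 10.02)]
print(f"RMSE = {rmse(ref, prod) * 1000:.0f} microns")
```

    A result under 200 microns, as reported, would indicate the product tracks the reference closely at every measured point.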

  15. A Knowledge-Intensive Approach to Computer Vision Systems

    NARCIS (Netherlands)

    Koenderink-Ketelaars, N.J.J.P.

    2010-01-01

    This thesis focusses on the modelling of knowledge-intensive computer vision tasks. Knowledge-intensive tasks are tasks that require a high level of expert knowledge to be performed successfully. Such tasks are generally performed by a task expert. Task experts have a lot of experience in performing

  16. Synthetic and systems biology for microbial production of commodity chemicals.

    Science.gov (United States)

    Chubukov, Victor; Mukhopadhyay, Aindrila; Petzold, Christopher J; Keasling, Jay D; Martín, Héctor García

    2016-01-01

    The combination of synthetic and systems biology is a powerful framework to study fundamental questions in biology and produce chemicals of immediate practical application such as biofuels, polymers, or therapeutics. However, we cannot yet engineer biological systems as easily and precisely as we engineer physical systems. In this review, we describe the path from the choice of target molecule to scaling production up to commercial volumes. We present and explain some of the current challenges and gaps in our knowledge that must be overcome in order to bring our bioengineering capabilities to the level of other engineering disciplines. Challenges start at molecule selection, where a difficult balance between economic potential and biological feasibility must be struck. Pathway design and construction have recently been revolutionized by next-generation sequencing and exponentially improving DNA synthesis capabilities. Although pathway optimization can be significantly aided by enzyme expression characterization through proteomics, choosing optimal relative protein expression levels for maximum production is still the subject of heuristic, non-systematic approaches. Toxic metabolic intermediates and proteins can significantly affect production, and dynamic pathway regulation is emerging as a powerful but still immature tool to prevent this. Host engineering arises as a much-needed complement to pathway engineering for high bioproduct yields, and systems biology approaches such as stoichiometric modeling or growth-coupling strategies are required. A final, and often underestimated, challenge is the successful scale-up of processes to commercial volumes. Sustained efforts in improving reproducibility and predictability are needed for further development of bioengineering.

  17. Design of a dynamic test platform for autonomous robot vision systems

    Science.gov (United States)

    Rich, G. C.

    1980-01-01

    The concept and design of a dynamic test platform for the development and evaluation of a robot vision system are discussed. The platform is to serve as a diagnostic and developmental tool for future work with the RPI Mars Rover's multi laser/multi detector vision system. The platform allows testing of the vision system while its attitude is varied, statically or periodically. The vision system is mounted on the test platform, where it can be subjected to a wide variety of simulated motions and thus examined in a controlled, quantitative fashion. Defining and modeling Rover motions and designing the platform to emulate these motions are also discussed. Individual aspects of the design process are treated separately, such as the structure, drive linkages, and motors and transmissions.

  18. Development of Vision System for Dimensional Measurement for Irradiated Fuel Assembly

    International Nuclear Information System (INIS)

    Shin, Jungcheol; Kwon, Yongbock; Park, Jongyoul; Woo, Sangkyun; Kim, Yonghwan; Jang, Youngki; Choi, Joonhyung; Lee, Kyuseog

    2006-01-01

    In order to develop an advanced nuclear fuel, a series of pool-side examinations (PSE) is performed to confirm the in-pile behavior of the fuel prior to commercial production. For this purpose, a vision system was developed to measure indicators of mechanical integrity, such as assembly bowing, twist, and growth, of the loaded lead test assembly. Using this vision system, PSE was carried out three times at Uljin Unit 3 and Kori Unit 2 for the advanced fuels PLUS7™ and 16ACE7™ developed by KNFC. Among the main characteristics of the vision system are its very simple structure and measuring principle. This feature greatly reduces equipment installation and inspection time, and allows the PSE to be completed without disturbing the fuel loading and unloading activities during utility overhaul periods. Another feature is the high accuracy and repeatability achieved by this vision system.

  19. Intelligent Machine Vision System for Automated Quality Control in Ceramic Tiles Industry

    OpenAIRE

    KESER, Tomislav; HOCENSKI, Željko; HOCENSKI, Verica

    2010-01-01

    An intelligent system for automated visual quality control of ceramic tiles based on machine vision is presented in this paper. The ceramic tile production process is well automated at almost all stages, with the exception of the final quality control stage. Tile quality is checked using visual inspection principles, where the main goal is to successfully replace the human inspector in the production chain with an automated machine vision system to ...

  20. Synthetic Cannabinoids and Their Effects on the Cardiovascular System.

    Science.gov (United States)

    Von Der Haar, Jonathan; Talebi, Soheila; Ghobadi, Farzaneh; Singh, Shailinder; Chirurgi, Roger; Rajeswari, Pingle; Kalantari, Hossein; Hassen, Getaw Worku

    2016-02-01

    In the past couple of years, there has been an outbreak of synthetic cannabinoid (SC) use in major cities in the United States. Patients can present with various symptoms affecting the central nervous and cardiovascular systems. The effects of endocannabinoids on contractility and Ca(2+) signaling have been shown to act through both cannabinoid receptors and a direct effect on ion channels. These effects result in abnormalities in inotropy, chronotropy, and conduction. Here we report on two cases of SC abuse and abnormalities in the cardiovascular system. These cases raise concerns about the adverse effects of SCs and the possibility of QTc prolongation and subsequent complications when using antipsychotic medication in the presence of SC abuse. WHY SHOULD AN EMERGENCY PHYSICIAN BE AWARE OF THIS?: Given the rise in SC use and the potential effect on the cardiovascular system, physicians need to be mindful of potential cardiac complications, such as QTc prolongation and torsade de pointes, especially when administering medications that have the potential to cause QTc prolongation. Copyright © 2016 Elsevier Inc. All rights reserved.

  1. System and method for controlling a vision guided robot assembly

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Yhu-Tin; Daro, Timothy; Abell, Jeffrey A.; Turner, III, Raymond D.; Casoli, Daniel J.

    2017-03-07

    A method includes the following steps: actuating a robotic arm to perform an action at a start position; moving the robotic arm from the start position toward a first position; determining, from a vision process method, whether a first part at the first position will be ready to be subjected to a first action by the robotic arm once the arm reaches the first position; commencing execution of the vision process method to determine the position deviation of a second part from a second position and the readiness of the second part to be subjected to a second action by the robotic arm once the arm reaches the second position; and performing the first action on the first part using the robotic arm, with the position deviation of the first part from the first position predetermined by the vision process method.
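
    The claimed sequence interleaves vision processing with arm motion: while the arm travels to the next station, the vision result (part readiness plus position deviation) is computed so the action can be corrected on arrival. A toy sketch of that control flow; all names and coordinates here are illustrative, not from the patent:

```python
from dataclasses import dataclass

@dataclass
class Station:
    nominal: tuple                  # nominal (x, y) pick position
    deviation: tuple = (0.0, 0.0)   # filled in by the vision process
    ready: bool = False

def vision_process(station, measured):
    """Hypothetical stand-in for the camera pipeline: record the part's
    deviation from its nominal position and flag the station ready."""
    station.deviation = (measured[0] - station.nominal[0],
                         measured[1] - station.nominal[1])
    station.ready = True

def pick(station):
    """Act only once the vision result is in; correct by the deviation."""
    if not station.ready:
        raise RuntimeError("vision result not available yet")
    return (station.nominal[0] + station.deviation[0],
            station.nominal[1] + station.deviation[1])

s1 = Station(nominal=(120.0, 40.0))
vision_process(s1, measured=(120.4, 39.8))  # runs while the arm travels
print(pick(s1))                             # corrected pick point
```

    The point of the overlap is throughput: the deviation is already known when the arm arrives, so no dwell time is spent waiting on the camera.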

  2. A machine reading system for assembling synthetic paleontological databases.

    Directory of Open Access Journals (Sweden)

    Shanan E Peters

    Full Text Available Many aspects of macroevolutionary theory and our understanding of biotic responses to global environmental change derive from literature-based compilations of paleontological data. Existing manually assembled databases are, however, incomplete and difficult to assess and enhance with new data types. Here, we develop and validate the quality of a machine reading system, PaleoDeepDive, that automatically locates and extracts data from heterogeneous text, tables, and figures in publications. PaleoDeepDive performs comparably to humans in several complex data extraction and inference tasks and generates congruent synthetic results that describe the geological history of taxonomic diversity and genus-level rates of origination and extinction. Unlike traditional databases, PaleoDeepDive produces a probabilistic database that systematically improves as information is added. We show that the system can readily accommodate sophisticated data types, such as morphological data in biological illustrations and associated textual descriptions. Our machine reading approach to scientific data integration and synthesis brings within reach many questions that are currently underdetermined and does so in ways that may stimulate entirely new modes of inquiry.

  3. EyeScreen: A Vision-Based Desktop Interaction System

    OpenAIRE

    Xu, Yihua; Lv, Jingjun; Li, Shanqing; Jia, Yunde

    2007-01-01

    EyeScreen provides a natural HCI interface with vision-based hand tracking and gesture recognition techniques. Multi-view video images captured from two cameras facing a computer screen are used to track and recognize finger and hand motions. Finger tracking is achieved by skin color detection and particle filtering, and is greatly enhanced by the proposed screen background subtraction method that removes the screen images in advance. Finger click on the screen can also be detected from multi...
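
    Skin color detection of the kind used here for finger tracking is often a simple per-pixel rule. The sketch below uses one common RGB heuristic; EyeScreen's actual colour model is not described in the abstract, so the thresholds are assumptions:

```python
def skin_mask(image):
    """Naive rule-based skin classifier over rows of (r, g, b) pixels.
    Returns a binary mask of the same shape."""
    mask = []
    for row in image:
        mask.append([
            1 if (r > 95 and g > 40 and b > 20 and
                  max(r, g, b) - min(r, g, b) > 15 and
                  abs(r - g) > 15 and r > g and r > b) else 0
            for (r, g, b) in row
        ])
    return mask

frame = [[(200, 120, 90), (30, 30, 30)],    # skin-like vs. dark background
         [(90, 90, 200), (210, 140, 110)]]  # bluish vs. skin-like
print(skin_mask(frame))
```

    In a real pipeline the mask would seed the particle filter's measurement model, with the screen background subtracted first as the abstract describes.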

  4. Image Acquisition of Robust Vision Systems to Monitor Blurred Objects in Hazy Smoking Environments

    International Nuclear Information System (INIS)

    Ahn, Yongjin; Park, Seungkyu; Baik, Sunghoon; Kim, Donglyul; Nam, Sungmo; Jeong, Kyungmin

    2014-01-01

    Image information from a disaster area or a radiation area of the nuclear industry is important data for safety inspection and for preparing appropriate damage control plans. A robust vision system for structures and facilities in blurred, smoking environments, such as the scene of a fire or detonation, is therefore essential for remote monitoring. Ordinary vision systems cannot acquire an image when the illumination light is blocked by disturbance materials such as smoke, fog, or dust. A vision system based on wavefront correction can be applied to blurred imaging environments, and a range-gated imaging system can be applied to both blurred and low-light environments. Wavefront control is a widely used technique that improves the performance of optical systems by actively correcting wavefront distortions, such as atmospheric turbulence, thermally induced distortions, and laser or laser-device aberrations, which can reduce the peak intensity and smear an acquired image. The principal applications of wavefront control are improving image quality in optical imaging systems such as infrared astronomical telescopes, imaging and tracking rapidly moving space objects, and compensating for laser beam distortion through the atmosphere. A conventional wavefront correction system consists of a wavefront sensor, a deformable mirror, and a control computer. The control computer measures the wavefront distortions using the wavefront sensor and corrects them with the deformable mirror in a closed loop. Range-gated imaging (RGI) is a direct active visualization technique using a highly sensitive image sensor and a high-intensity illuminant. The range-gated imaging technique, which provides 2D and 3D images, is currently one of the emerging active vision technologies. A range-gated imaging system obtains vision information by summing time-sliced vision images. In an RGI system, a high-intensity illuminant fires for an ultra-short time and a highly sensitive image sensor is gated by ultra
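
    The RGI accumulation step described above, summing time-sliced gated frames into one image, can be sketched as follows; the tiny frames stand in for gated exposures at different gate delays (illustrative values only):

```python
def accumulate_gated(slices):
    """Sum time-sliced gated frames (equal-size grids of intensities)
    into one image, as in range-gated imaging accumulation."""
    h, w = len(slices[0]), len(slices[0][0])
    out = [[0] * w for _ in range(h)]
    for frame in slices:
        for y in range(h):
            for x in range(w):
                out[y][x] += frame[y][x]
    return out

# Three gate delays, each capturing returns from a different depth slice.
slices = [
    [[0, 5], [0, 0]],
    [[0, 0], [7, 0]],
    [[2, 0], [0, 3]],
]
print(accumulate_gated(slices))  # -> [[2, 5], [7, 3]]
```

    Because each gate only admits light returning from a narrow depth window, scattering from intervening smoke contributes little to any single slice, which is why the summed image stays usable.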

  5. Vision-based markerless registration using stereo vision and an augmented reality surgical navigation system: a pilot study

    International Nuclear Information System (INIS)

    Suenaga, Hideyuki; Tran, Huy Hoang; Liao, Hongen; Masamune, Ken; Dohi, Takeyoshi; Hoshi, Kazuto; Takato, Tsuyoshi

    2015-01-01

    This study evaluated the use of an augmented reality navigation system that provides a markerless registration system using stereo vision in oral and maxillofacial surgery. A feasibility study was performed on a subject, wherein a stereo camera was used for tracking and markerless registration. The computed tomography data obtained from the volunteer was used to create an integral videography image and a 3-dimensional rapid prototype model of the jaw. The overlay of the subject’s anatomic site and its 3D-IV image were displayed in real space using a 3D-AR display. Extraction of characteristic points and teeth matching were done using parallax images from two stereo cameras for patient-image registration. Accurate registration of the volunteer’s anatomy with IV stereoscopic images via image matching was done using the fully automated markerless system, which recognized the incisal edges of the teeth and captured information pertaining to their position with an average target registration error of < 1 mm. These 3D-CT images were then displayed in real space with high accuracy using AR. Even when the viewing position was changed, the 3D images could be observed as if they were floating in real space without using special glasses. Teeth were successfully used for registration via 3D image (contour) matching. This system, without using references or fiducial markers, displayed 3D-CT images in real space with high accuracy. The system provided real-time markerless registration and 3D image matching via stereo vision, which, combined with AR, could have significant clinical applications. The online version of this article (doi:10.1186/s12880-015-0089-5) contains supplementary material, which is available to authorized users

  6. Robotic vision system for random bin picking with dual-arm robots

    Directory of Open Access Journals (Sweden)

    Kang Sangseung

    2016-01-01

    Full Text Available Random bin picking is one of the most challenging industrial robotics applications. It involves a complicated interaction between the vision system, the robot, and the control system. For a packaging operation requiring a pick-and-place task, the robot system must be able to recognize the applicable target object among randomized objects in a bin. In this paper, we introduce a robotic vision system for bin picking using industrial dual-arm robots. The proposed system recognizes the best object among randomized target candidates based on stereo vision, and estimates the position and orientation of the object. It then sends the result to the robot control system. The system was developed for use in the packaging process of cell phone accessories using dual-arm robots.
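
    Stereo pose estimation of the kind used here ultimately rests on triangulation: for rectified cameras, depth is focal length times baseline divided by disparity. A minimal sketch with assumed camera parameters (none of these numbers are from the paper):

```python
def depth_from_disparity(f_px, baseline_m, xl, xr):
    """Pinhole stereo depth: Z = f * B / (x_left - x_right),
    assuming rectified cameras and pixel coordinates on the same row."""
    d = xl - xr
    if d <= 0:
        raise ValueError("disparity must be positive for a point in front")
    return f_px * baseline_m / d

# 800 px focal length, 6 cm baseline, 16 px disparity -> 3 m away.
print(depth_from_disparity(800.0, 0.06, 412.0, 396.0))  # -> 3.0
```

    Repeating this for several matched features on a candidate part yields the 3D points from which its position and orientation can be fitted.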

  7. Hand gesture recognition system based in computer vision and machine learning

    OpenAIRE

    Trigueiros, Paulo; Ribeiro, António Fernando; Reis, L. P.

    2015-01-01

    "Lecture notes in computational vision and biomechanics series, ISSN 2212-9391, vol. 19" Hand gesture recognition is a natural way of human computer interaction and an area of very active research in computer vision and machine learning. This is an area with many different possible applications, giving users a simpler and more natural way to communicate with robots/systems interfaces, without the need for extra devices. So, the primary goal of gesture recognition research applied to Hum...

  8. The ART of representation: Memory reduction and noise tolerance in a neural network vision system

    Science.gov (United States)

    Langley, Christopher S.

    The Feature Cerebellar Model Arithmetic Computer (FCMAC) is a multiple-input-single-output neural network that can provide three-degree-of-freedom (3-DOF) pose estimation for a robotic vision system. The FCMAC provides sufficient accuracy to enable a manipulator to grasp an object from an arbitrary pose within its workspace. The network learns an appearance-based representation of an object by storing coarsely quantized feature patterns. As all unique patterns are encoded, the network size grows uncontrollably. A new architecture is introduced herein, which combines the FCMAC with an Adaptive Resonance Theory (ART) network. The ART module categorizes patterns observed during training into a set of prototypes that are used to build the FCMAC. As a result, the network no longer grows without bound, but constrains itself to a user-specified size. Pose estimates remain accurate since the ART layer tends to discard the least relevant information first. The smaller network performs recall faster, and in some cases is better for generalization, resulting in a reduction of error at recall time. The ART-Under-Constraint (ART-C) algorithm is extended to include initial filling with randomly selected patterns (referred to as ART-F). In experiments using a real-world data set, the new network performed equally well using less than one tenth the number of coarse patterns as a regular FCMAC. The FCMAC is also extended to include real-valued input activations. As a result, the network can be tuned to reject a variety of types of noise in the image feature detection. A quantitative analysis of noise tolerance was performed using four synthetic noise algorithms, and a qualitative investigation was made using noisy real-world image data. 
In validation experiments, the FCMAC system outperformed Radial Basis Function (RBF) networks for the 3-DOF problem, and had accuracy comparable to that of Principal Component Analysis (PCA) and superior to that of Shape Context Matching (SCM), both
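
    The ART layer's core behaviour, matching a pattern against stored prototypes, admitting it only if the match clears a vigilance threshold, and capping the prototype count as in ART-C, can be sketched roughly as below. This is a simplified stand-in, not the paper's FCMAC/ART-C implementation:

```python
def art_cluster(patterns, vigilance, max_prototypes):
    """Greedy ART-like categorisation of binary feature patterns: a pattern
    joins the best-matching prototype if the match exceeds the vigilance
    threshold, else founds a new prototype (up to a user-set limit)."""
    prototypes = []
    for p in patterns:
        best, best_match = None, -1.0
        for proto in prototypes:
            overlap = sum(a & b for a, b in zip(proto, p))
            match = overlap / max(1, sum(p))
            if match > best_match:
                best, best_match = proto, match
        if best is not None and best_match >= vigilance:
            # Fast-learning update: the prototype becomes the intersection.
            for i in range(len(best)):
                best[i] &= p[i]
        elif len(prototypes) < max_prototypes:
            prototypes.append(list(p))
        # else: discard, so the network size stays bounded as in ART-C
    return prototypes

pats = [[1, 1, 0, 0], [1, 1, 1, 0], [0, 0, 1, 1]]
print(art_cluster(pats, vigilance=0.8, max_prototypes=2))
```

    The capped prototype list plays the role the paper describes: the FCMAC is built from a fixed-size set of prototypes instead of every unique coarse pattern, so memory no longer grows without bound.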

  9. Functional vision and cognition in infants with congenital disorders of the peripheral visual system.

    Science.gov (United States)

    Dale, Naomi; Sakkalou, Elena; O'Reilly, Michelle; Springall, Clare; De Haan, Michelle; Salt, Alison

    2017-07-01

    To investigate how vision relates to early development by studying vision and cognition in a national cohort of 1-year-old infants with congenital disorders of the peripheral visual system and visual impairment. This was a cross-sectional observational investigation of a nationally recruited cohort of infants with 'simple' and 'complex' congenital disorders of the peripheral visual system. Entry age was 8 to 16 months. Vision level (Near Detection Scale) and non-verbal cognition (sensorimotor understanding, Reynell Zinkin Scales) were assessed. Parents completed demographic questionnaires. Of 90 infants (49 males, 41 females; mean age 13mo, standard deviation [SD] 2.5mo; range 7-17mo), 25 (28%) had profound visual impairment (light perception at best) and 65 (72%) had severe visual impairment (basic 'form' vision). The Near Detection Scale correlated significantly with sensorimotor understanding developmental quotients in the 'total', 'simple', and 'complex' groups (all p values significant). Infants with visual impairment, especially those in the 'complex' group with congenital disorders of the peripheral visual system with known brain involvement, showed the greatest cognitive delay. Lack of vision is associated with delayed early object-manipulation abilities and concepts; 'form' vision appeared to support early developmental advance. This paper provides baseline characteristics for cross-sectional and longitudinal follow-up investigations in progress. A methodological strength of the study was the representativeness of the cohort according to national epidemiological and population census data. © 2017 Mac Keith Press.

  10. A machine vision system with CCD cameras for patient positioning in radiotherapy: a preliminary report.

    Science.gov (United States)

    Yoshitake, Tadamasa; Nakamura, Katsumasa; Shioyama, Yoshiyuki; Sasaki, Tomonari; Ohga, Saiji; Yamaguchi, Toshihiro; Toba, Takashi; Anai, Shigeo; Terashima, Hiromi; Honda, Hiroshi

    2005-12-01

    To determine the positioning accuracy of a machine vision system in radiotherapy. The machine vision system was composed of 640 x 480 pixel CCD cameras and computerized control systems. For image acquisition, the phantom was set up at the reference position and a single CCD camera was positioned 1.5 m from the isocenter. The image of a fiducial marker with a 1.5 mm lead pellet on the lateral surface of the phantom was captured onto the CCD, and the position of the marker was then calculated accurately. The phantom was moved 0.25, 0.50, 0.75, 1.00, 2.00, and 3.00 mm from the reference position using a micrometer head. The position of the fiducial marker was analyzed using a kilo-voltage fluoroscopic imaging system and the machine vision system. Using fluoroscopic images, the discrepancy between the actual movement of the phantom by the micrometer heads and the measurement was found to be 0.12 +/- 0.05 mm (mean +/- standard deviation). In contrast, the machine vision system detected the movement with a discrepancy of 0.0067 +/- 0.0048 mm. This study suggests that the machine vision system can be used to measure small changes in patient position with a resolution of less than 0.1 mm.
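
    The reported accuracy figures are the mean ± standard deviation of the absolute discrepancies between the commanded phantom shifts and the detected shifts. A sketch of that computation with hypothetical system readings (the paper's raw measurements are not given):

```python
import statistics

def accuracy_stats(commanded, measured):
    """Mean and sample standard deviation of |measured - commanded| (mm)."""
    errors = [abs(m - c) for c, m in zip(commanded, measured)]
    return statistics.mean(errors), statistics.stdev(errors)

# Commanded micrometer shifts (mm) and hypothetical system readings.
commanded = [0.25, 0.50, 0.75, 1.00, 2.00, 3.00]
measured  = [0.253, 0.495, 0.757, 1.008, 1.996, 3.011]
m, s = accuracy_stats(commanded, measured)
print(f"{m:.4f} +/- {s:.4f} mm")
```

    Run over the actual detections, the same calculation yields the paper's 0.0067 +/- 0.0048 mm figure for the machine vision system.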

  11. Improvement of the image quality of a high-temperature vision system

    International Nuclear Information System (INIS)

    Fabijańska, Anna; Sankowski, Dominik

    2009-01-01

    In this paper, the issues of controlling and improving the image quality of a high-temperature vision system are considered. The image quality improvement is needed to measure the surface properties of metals and alloys. Two levels of image quality control and improvement are defined in the system. The first level, in hardware, aims at adjusting the system configuration to obtain images with the highest contrast and the weakest aura. When the optimal configuration is obtained, the second level, in software, is applied. In this stage, image enhancement algorithms are applied that were developed with consideration of the distortions arising from the vision system components and the specific character of the images acquired during the measurement process. The developed algorithms have been applied to images in the vision system, and their influence on the accuracy of wetting angle and surface tension determination is considered.

  12. Improving Vision-Based Motor Rehabilitation Interactive Systems for Users with Disabilities Using Mirror Feedback

    Science.gov (United States)

    Martínez-Bueso, Pau; Moyà-Alcover, Biel

    2014-01-01

    Observation is recommended in motor rehabilitation. For this reason, the aim of this study was to experimentally test the feasibility and benefit of including mirror feedback in vision-based rehabilitation systems: we projected the user on the screen. We conducted a user study using a previously evaluated system that improved the balance and postural control of adults with cerebral palsy. We used a within-subjects design with the two defined feedback conditions (mirror and no-mirror) and two different groups of users (8 with disabilities and 32 without disabilities), using the usability measures time-to-start (Ts) and time-to-complete (Tc). A two-tailed paired-samples t-test confirmed that, in the case of disabilities, mirror feedback facilitated the interaction in vision-based systems for rehabilitation. The measured times were significantly worse in the absence of the user's own visual feedback (Ts: t = 7.09, P < 0.001; Tc: t = 4.48, P < 0.005). In vision-based interaction systems, the input device is the user's own body; therefore, it makes sense that feedback should be related to the body of the user. In the case of disabilities, the mirror feedback mechanisms facilitated the interaction in vision-based systems for rehabilitation. These results recommend that developers and researchers use this improvement in vision-based motor rehabilitation interactive systems. PMID:25295310
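
    The comparison rests on a two-tailed paired-samples t-test over the two feedback conditions. A stdlib sketch of the t statistic with fabricated timing data (the study's raw times are not published in the abstract); the p-value lookup against the t distribution is omitted:

```python
import math
import statistics

def paired_t(a, b):
    """Paired-samples t statistic and degrees of freedom for two
    equal-length condition vectors measured on the same subjects."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    t = statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))
    return t, n - 1

mirror    = [3.1, 2.8, 3.5, 2.9, 3.2]  # time-to-start (s), with mirror
no_mirror = [4.0, 3.9, 4.6, 3.8, 4.1]  # same users, without mirror
t, df = paired_t(no_mirror, mirror)
print(f"t({df}) = {t:.2f}")
```

    The pairing matters: each subject serves as their own control, so between-subject variability in baseline speed does not inflate the error term.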

  13. Homework system development with the intention of supporting Saudi Arabia's vision 2030

    Science.gov (United States)

    Elgimari, Atifa; Alshahrani, Shafya; Al-shehri, Amal

    2017-10-01

    This paper suggests a web-based homework system. The proposed system serves students aged 7 to 11. With the system, hard copies of homework are replaced by soft copies, and parents are involved in the education process electronically. The system is expected to contribute to the implementation of Saudi Arabia's Vision 2030, especially in the education sector, which regards primary education as its foundation stone, since the success of the Vision depends in large measure on education reforms that generate a better basis for the employment of young Saudis.

  14. Computational vision

    Science.gov (United States)

    Barrow, H. G.; Tenenbaum, J. M.

    1981-01-01

    The range of fundamental computational principles underlying human vision that equally apply to artificial and natural systems is surveyed. There emerges from research a view of the structuring of vision systems as a sequence of levels of representation, with the initial levels being primarily iconic (edges, regions, gradients) and the highest symbolic (surfaces, objects, scenes). Intermediate levels are constrained by information made available by preceding levels and information required by subsequent levels. In particular, it appears that physical and three-dimensional surface characteristics provide a critical transition from iconic to symbolic representations. A plausible vision system design incorporating these principles is outlined, and its key computational processes are elaborated.

  15. A Multiple Sensor Machine Vision System for Automatic Hardwood Feature Detection

    Science.gov (United States)

    D. Earl Kline; Richard W. Conners; Daniel L. Schmoldt; Philip A. Araman; Robert L. Brisbin

    1993-01-01

    A multiple sensor machine vision prototype is being developed to scan full size hardwood lumber at industrial speeds for automatically detecting features such as knots, holes, wane, stain, splits, checks, and color. The prototype integrates a multiple sensor imaging system, a materials handling system, a computer system, and application software. The prototype provides...

  16. Synthetic Cyclic Peptomers as Type III Secretion System Inhibitors.

    Science.gov (United States)

    Lam, Hanh; Schwochert, Joshua; Lao, Yongtong; Lau, Tannia; Lloyd, Cameron; Luu, Justin; Kooner, Olivia; Morgan, Jessica; Lokey, Scott; Auerbuch, Victoria

    2017-09-01

    Antibiotic-resistant bacteria are an emerging threat to global public health. New classes of antibiotics and tools for antimicrobial discovery are urgently needed. Type III secretion systems (T3SS), which are required by dozens of Gram-negative bacteria for virulence but largely absent from nonpathogenic bacteria, are promising virulence blocker targets. The ability of mammalian cells to recognize the presence of a functional T3SS and trigger NF-κB activation provides a rapid and sensitive method for identifying chemical inhibitors of T3SS activity. In this study, we generated a HEK293 stable cell line expressing green fluorescent protein (GFP) driven by a promoter containing NF-κB enhancer elements to serve as a readout of T3SS function. We identified a family of synthetic cyclic peptide-peptoid hybrid molecules (peptomers) that exhibited dose-dependent inhibition of T3SS effector secretion in Yersinia pseudotuberculosis and Pseudomonas aeruginosa without affecting bacterial growth or motility. Among these inhibitors, EpD-3'N, EpD-1,2N, EpD-1,3'N, EpD-1,2,3'N, and EpD-1,2,4'N exhibited strong inhibitory effects on translocation of the Yersinia YopM effector protein into mammalian cells (>40% translocation inhibition at 7.5 μM) and showed no toxicity to mammalian cells at 240 μM. In addition, EpD-3'N and EpD-1,2,4'N reduced the rounding of HeLa cells caused by the activity of Yersinia effector proteins that target the actin cytoskeleton. In summary, we have discovered a family of novel cyclic peptomers that inhibit the injectisome T3SS but not the flagellar T3SS. Copyright © 2017 American Society for Microbiology.

  17. Geosynthetic-Reinforced Pavement Systems; Sistemas de pavimentos reforzados con geosinteticos

    Energy Technology Data Exchange (ETDEWEB)

    Zornberg, J. G.

    2014-02-01

    Geosynthetics have been used as reinforcement inclusions to improve pavement performance. While there is clear field evidence of the benefit of using geosynthetic reinforcement, the specific conditions or mechanisms that govern the reinforcement of pavements are, at best, unclear and have remained largely unmeasured. Significant research has recently been conducted with the objectives of: (i) determining the relevant properties of geosynthetics that contribute to the enhanced performance of pavement systems, (ii) developing appropriate analytical, laboratory and field methods capable of quantifying the pavement performance, and (iii) enabling the prediction of pavement performance as a function of the properties of the various types of geosynthetics. (Author)

  18. A concurrent on-board vision system for a mobile robot

    International Nuclear Information System (INIS)

    Jones, J.P.

    1988-01-01

    Robot vision algorithms have been implemented on an 8-node NCUBE-AT hypercube system onboard a mobile robot (HERMIES) developed at Oak Ridge National Laboratory. Images are digitized using a framegrabber mounted in a VME rack. Image processing and analysis are performed on the hypercube system. The vision system is integrated with robot navigation and control software, enabling the robot to find the front of a mockup control panel, move up to the panel, and read an analog meter. Among the concurrent algorithms used for image analysis are a new component labeling algorithm and a Hough transform algorithm with load balancing
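
    The Hough transform mentioned above votes edge points into a (rho, theta) accumulator and reads lines off the peaks; on the hypercube, each node would process a share of the points or angle range. A serial sketch without the load-balancing layer, on made-up edge points:

```python
import math

def hough_lines(points, n_theta=180):
    """Vote each point into (rho, theta-index) bins and return the peak:
    (rho in pixels, theta in degrees, vote count)."""
    acc = {}
    for x, y in points:
        for i in range(n_theta):
            theta = math.pi * i / n_theta
            rho = int(round(x * math.cos(theta) + y * math.sin(theta)))
            key = (rho, i)
            acc[key] = acc.get(key, 0) + 1
    (rho, i), votes = max(acc.items(), key=lambda kv: kv[1])
    return rho, 180.0 * i / n_theta, votes

# Edge points lying on the vertical line x = 10.
pts = [(10, y) for y in range(20)]
rho, theta_deg, votes = hough_lines(pts)
print(rho, theta_deg, votes)
```

    A concurrent version distributes either the point list or the theta range across nodes and merges partial accumulators, which is where the load balancing the paper mentions comes in.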

  19. A computer vision system for the recognition of trees in aerial photographs

    Science.gov (United States)

    Pinz, Axel J.

    1991-01-01

    Increasing forest damage in Central Europe has created demand for an appropriate forest damage assessment tool. The Vision Expert System (VES) is presented, which is capable of finding trees in color infrared aerial photographs. The concept and architecture of VES are discussed briefly. The system is applied to a multisource test data set, and processing this multisource data set leads to multiple interpretation results for one scene. An integration of these results will provide a better scene description by the vision system. This is achieved by an implementation of Steven's correlation algorithm.

  20. Computer vision

    Science.gov (United States)

    Gennery, D.; Cunningham, R.; Saund, E.; High, J.; Ruoff, C.

    1981-01-01

    The field of computer vision is surveyed and assessed, key research issues are identified, and possibilities for a future vision system are discussed. The problems of descriptions of two and three dimensional worlds are discussed. The representation of such features as texture, edges, curves, and corners are detailed. Recognition methods are described in which cross correlation coefficients are maximized or numerical values for a set of features are measured. Object tracking is discussed in terms of the robust matching algorithms that must be devised. Stereo vision, camera control and calibration, and the hardware and systems architecture are discussed.

  1. Functional fusion of living systems with synthetic electrode interfaces.

    Science.gov (United States)

    Staufer, Oskar; Weber, Sebastian; Bengtson, C Peter; Bading, Hilmar; Spatz, Joachim P; Rustom, Amin

    2016-01-01

    The functional fusion of "living" biomaterial (such as cells) with synthetic systems has developed into a principal ambition for various scientific disciplines. In particular, emerging fields such as bionics and nanomedicine integrate advanced nanomaterials with biomolecules, cells and organisms in order to develop novel strategies for applications, including energy production or real-time diagnostics utilizing biomolecular machineries "perfected" during billions of years of evolution. To date, hardware-wetware interfaces that sample or modulate bioelectric potentials, such as neuroprostheses or implantable energy harvesters, are mostly based on microelectrodes brought into the closest possible contact with the targeted cells. Recently, the possibility of using electrochemical gradients of the inner ear for technical applications was demonstrated using implanted electrodes, where 1.12 nW of electrical power was harvested from the guinea pig endocochlear potential for up to 5 h (Mercier, P.; Lysaght, A.; Bandyopadhyay, S.; Chandrakasan, A.; Stankovic, K. Nat. Biotech. 2012, 30, 1240-1243). More recent approaches employ nanowires (NWs) able to penetrate the cellular membrane and to record extra- and intracellular electrical signals, in some cases with subcellular resolution (Spira, M.; Hai, A. Nat. Nano. 2013, 8, 83-94). Such techniques include nanoelectric scaffolds containing free-standing silicon NWs (Robinson, J. T.; Jorgolli, M.; Shalek, A. K.; Yoon, M. H.; Gertner, R. S.; Park, H. Nat Nanotechnol. 2012, 10, 180-184) or NW field-effect transistors (Qing, Q.; Jiang, Z.; Xu, L.; Gao, R.; Mai, L.; Lieber, C. Nat. Nano. 2013, 9, 142-147), vertically aligned gallium phosphide NWs (Hällström, W.; Mårtensson, T.; Prinz, C.; Gustavsson, P.; Montelius, L.; Samuelson, L.; Kanje, M. Nano Lett. 2007, 7, 2960-2965) or individually contacted, electrically active carbon nanofibers. The latter of these approaches is capable of recording electrical responses from oxidative events

  2. Functional fusion of living systems with synthetic electrode interfaces

    Directory of Open Access Journals (Sweden)

    Oskar Staufer

    2016-02-01

    Full Text Available The functional fusion of “living” biomaterial (such as cells) with synthetic systems has developed into a principal ambition for various scientific disciplines. In particular, emerging fields such as bionics and nanomedicine integrate advanced nanomaterials with biomolecules, cells and organisms in order to develop novel strategies for applications, including energy production or real-time diagnostics utilizing biomolecular machineries “perfected” during billions of years of evolution. To date, hardware–wetware interfaces that sample or modulate bioelectric potentials, such as neuroprostheses or implantable energy harvesters, are mostly based on microelectrodes brought into the closest possible contact with the targeted cells. Recently, the possibility of using electrochemical gradients of the inner ear for technical applications was demonstrated using implanted electrodes, where 1.12 nW of electrical power was harvested from the guinea pig endocochlear potential for up to 5 h (Mercier, P.; Lysaght, A.; Bandyopadhyay, S.; Chandrakasan, A.; Stankovic, K. Nat. Biotech. 2012, 30, 1240–1243). More recent approaches employ nanowires (NWs) able to penetrate the cellular membrane and to record extra- and intracellular electrical signals, in some cases with subcellular resolution (Spira, M.; Hai, A. Nat. Nano. 2013, 8, 83–94). Such techniques include nanoelectric scaffolds containing free-standing silicon NWs (Robinson, J. T.; Jorgolli, M.; Shalek, A. K.; Yoon, M. H.; Gertner, R. S.; Park, H. Nat Nanotechnol. 2012, 10, 180–184) or NW field-effect transistors (Qing, Q.; Jiang, Z.; Xu, L.; Gao, R.; Mai, L.; Lieber, C. Nat. Nano. 2013, 9, 142–147), vertically aligned gallium phosphide NWs (Hällström, W.; Mårtensson, T.; Prinz, C.; Gustavsson, P.; Montelius, L.; Samuelson, L.; Kanje, M. Nano Lett. 2007, 7, 2960–2965) or individually contacted, electrically active carbon nanofibers. The latter of these approaches is capable of recording

  3. Software model of a machine vision system based on the common house fly.

    Science.gov (United States)

    Madsen, Robert; Barrett, Steven; Wilcox, Michael

    2005-01-01

    The vision system of the common house fly has many properties, such as hyperacuity and parallel structure, which would be advantageous in a machine vision system. A software model has been developed which is ultimately intended to be a tool to guide the design of an analog real-time vision system. The model starts by laying out cartridges over an image. The cartridges are analogous to the ommatidia of the fly's eye and contain seven photoreceptors, each with a Gaussian profile. The spacing between photoreceptors is variable, providing for more or less detail as needed. The cartridges provide information on what type of features they see, and neighboring cartridges share information to construct a feature map.
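    The cartridge layout described above can be sketched in a few lines. The following is a minimal illustration, not the authors' code: it assumes a hexagonal arrangement of seven Gaussian-profile photoreceptors per cartridge, with hypothetical spacing and kernel-width parameters.

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """2D Gaussian weighting profile for a single photoreceptor."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def cartridge_response(image, cx, cy, spacing=4, size=7, sigma=1.5):
    """Responses of the seven photoreceptors of one cartridge.

    One receptor sits at the cartridge centre (cx, cy); the other six
    are arranged hexagonally around it, `spacing` pixels away.  Each
    receptor integrates the image under its Gaussian profile.
    """
    k = gaussian_kernel(size, sigma)
    angles = np.deg2rad(np.arange(0, 360, 60))
    centres = [(cx, cy)] + [(cx + spacing * np.cos(a),
                             cy + spacing * np.sin(a)) for a in angles]
    half = size // 2
    responses = []
    for x, y in centres:
        xi, yi = int(round(x)), int(round(y))
        patch = image[yi - half:yi + half + 1, xi - half:xi + half + 1]
        responses.append(float((patch * k).sum()))
    return responses
```

    Because the kernel is normalized, a uniform image produces identical responses at all seven receptors; feature information comes from the differences between neighbours.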

  4. Using Weightless Neural Networks for Vergence Control in an Artificial Vision System

    Directory of Open Access Journals (Sweden)

    Karin S. Komati

    2003-01-01

    Full Text Available This paper presents a methodology we have developed and used to implement an artificial binocular vision system capable of emulating the vergence of eye movements. This methodology involves using weightless neural networks (WNNs) as building blocks of artificial vision systems. Using the proposed methodology, we have designed several architectures of WNN-based artificial vision systems, in which images captured by virtual cameras are used for controlling the position of the ‘foveae’ of these cameras (the high-resolution region of the captured images). Our best architecture is able to control the foveae vergence movements with an average error of only 3.58 image pixels, which is equivalent to an angular error of approximately 0.629°.
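    A pixel-space error converts to an angular error once the camera's focal length in pixels is known, via the pinhole relation θ = atan(e/f). The sketch below is illustrative only; the focal length of ≈326 px is a hypothetical value chosen because it makes 3.58 px correspond to ≈0.629°, consistent with the figures reported above.

```python
import math

def pixel_error_to_degrees(pixel_error, focal_length_px):
    """Convert a foveal position error in pixels to an angular error
    using the pinhole-camera relation theta = atan(e / f)."""
    return math.degrees(math.atan(pixel_error / focal_length_px))
```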

  5. Health systems analysis of eye care services in Zambia: evaluating progress towards VISION 2020 goals.

    Science.gov (United States)

    Bozzani, Fiammetta Maria; Griffiths, Ulla Kou; Blanchet, Karl; Schmidt, Elena

    2014-02-28

    VISION 2020 is a global initiative launched in 1999 to eliminate avoidable blindness by 2020. The objective of this study was to undertake a situation analysis of the Zambian eye health system and assess VISION 2020 process indicators on human resources, equipment and infrastructure. All eye health care providers were surveyed to determine location, financing sources, human resources and equipment. Key informants were interviewed regarding levels of service provision, management and leadership in the sector. Policy papers were reviewed. A health system dynamics framework was used to analyse findings. During 2011, 74 facilities provided eye care in Zambia; 39% were public, 37% private for-profit and 24% owned by Non-Governmental Organizations. Private facilities were solely located in major cities. A total of 191 people worked in eye care; 18 of these were ophthalmologists and eight cataract surgeons, equivalent to 0.34 and 0.15 per 250,000 population, respectively. VISION 2020 targets for inpatient beds and surgical theatres were met in six out of nine provinces, but human resources and spectacles manufacturing workshops were below target in every province. Inequalities in service provision between urban and rural areas were substantial. Shortage and maldistribution of human resources, lack of routine monitoring and inadequate financing mechanisms are the root causes of underperformance in the Zambian eye health system, which hinder the ability to achieve the VISION 2020 goals. We recommend that all VISION 2020 process indicators are evaluated simultaneously as these are not individually useful for monitoring progress.

  6. A Novel Bioinspired Vision System: A Step toward Real-Time Human-Robot Interactions

    Directory of Open Access Journals (Sweden)

    Abdul Rahman Hafiz

    2011-01-01

    Full Text Available Building a human-like robot that could be involved in our daily lives is a dream of many scientists. Achieving a sophisticated robot vision system that can enhance the robot's real-time interaction ability with humans is one of the main keys toward realizing such an autonomous robot. In this work, we suggest a bioinspired vision system that helps to develop advanced human-robot interaction in an autonomous humanoid robot. First, we enhance the robot's vision accuracy online by applying a novel dynamic edge detection algorithm abstracted from the role that horizontal cells play in the mammalian retina. Second, in order to support the first algorithm, we improve the robot's tracking ability by designing a variable photoreceptor distribution corresponding to that of the human vision system. The experimental results verified the validity of the model. The robot could have clear vision in real time and build a mental map that assisted it in being aware of frontal users and developing positive interactions with them.

  7. The research of binocular vision ranging system based on LabVIEW

    Science.gov (United States)

    Li, Shikuan; Yang, Xu

    2017-10-01

    Based on a study of the principle of binocular parallax ranging, a binocular vision ranging system is designed and built. The stereo matching algorithm is realized in LabVIEW software, and camera calibration and distance measurement are completed. The error analysis shows that the system is fast and effective and can be used in corresponding industrial settings.
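    The binocular parallax principle the system rests on reduces to a one-line relation: depth equals focal length times baseline divided by disparity. A minimal sketch follows; the parameter values in the usage note are hypothetical, not taken from the paper.

```python
def depth_from_disparity(focal_length_px, baseline_m, disparity_px):
    """Binocular parallax ranging: Z = f * B / d, where f is the focal
    length in pixels, B the camera baseline in metres, and d the
    disparity in pixels between matched left/right image points."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px
```

    For example, with f = 700 px, a 0.12 m baseline and a 35 px disparity, the target would be 2.4 m away; halving the disparity doubles the estimated depth, which is why ranging error grows with distance.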

  8. [Development of a new position-recognition system for robotic radiosurgery systems using machine vision].

    Science.gov (United States)

    Mohri, Issai; Umezu, Yoshiyuki; Fukunaga, Junnichi; Tane, Hiroyuki; Nagata, Hironori; Hirashima, Hideaki; Nakamura, Katsumasa; Hirata, Hideki

    2014-08-01

    CyberKnife(®) provides continuous guidance through radiography, allowing instantaneous X-ray images to be obtained; it is also equipped with 6D adjustment for patient setup. Its disadvantage is that registration is carried out just before irradiation, making it impossible to perform stereo-radiography during irradiation. In addition, patient movement cannot be detected during irradiation. In this study, we describe a new registration system that we term "Machine Vision," which subjects the patient to no additional radiation exposure for registration purposes, can be set up promptly, and allows real-time registration during irradiation. Our technique offers distinct advantages over CyberKnife by enabling a safer and more precise mode of treatment. "Machine Vision," which we have designed and fabricated, is an automatic registration system that employs three charge coupled device cameras oriented in different directions that allow us to obtain a characteristic depiction of the shape of both sides of the fetal fissure and external ears in a human head phantom. We examined the degree of precision of this registration system and concluded it to be suitable as an alternative method of registration without radiation exposure when displacement is less than 1.0 mm in radiotherapy. It has potential for application to CyberKnife in clinical treatment.

  9. A real time tracking vision system and its application to robotics

    International Nuclear Information System (INIS)

    Inoue, Hirochika

    1994-01-01

    Among the various sensing channels, vision is the most important for making robots intelligent. If provided with a high-speed visual tracking capability, the robot-environment interaction becomes dynamic instead of static, and thus the potential repertoire of robot behavior becomes very rich. For this purpose we developed a real-time tracking vision system. The fundamental operation on which our system is based is the calculation of correlation between local images. The use of a special chip for correlation and a multi-processor configuration enables the robot to track hundreds of cues at full video rate. In addition to the fundamental visual performance, applications for robot behavior control are also introduced. (author)
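    The correlation-between-local-images primitive is essentially template matching. Below is a minimal, unoptimized sketch of that idea using normalized cross-correlation over a small search window around the previous position; the special correlation chip and multi-processor aspects of the actual system are of course not represented.

```python
import numpy as np

def track_by_correlation(frame, template, prev_xy, search=8):
    """Locate `template` near its previous position (x, y) by maximizing
    normalized cross-correlation over a (2*search+1)^2 candidate window.
    Returns the best top-left corner and its correlation score."""
    th, tw = template.shape
    t = template - template.mean()
    tn = np.linalg.norm(t)
    best_score, best_xy = -np.inf, prev_xy
    px, py = prev_xy
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            x, y = px + dx, py + dy
            if x < 0 or y < 0 or y + th > frame.shape[0] or x + tw > frame.shape[1]:
                continue  # candidate patch would fall outside the frame
            patch = frame[y:y + th, x:x + tw]
            p = patch - patch.mean()
            denom = np.linalg.norm(p) * tn
            if denom == 0:
                continue  # flat patch: correlation undefined
            score = float((p * t).sum() / denom)
            if score > best_score:
                best_score, best_xy = score, (x, y)
    return best_xy, best_score
```

    Exhaustive search like this is O(window × template) per cue per frame, which is exactly why dedicated correlation hardware is needed to track hundreds of cues at video rate.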

  10. Implementation of Tissue Harmonic Synthetic Aperture Imaging on a Commercial Ultrasound System

    DEFF Research Database (Denmark)

    Rasmussen, Joachim; Hemmsen, Martin Christian; Madsen, Signe Sloth

    2012-01-01

    This paper presents an imaging technique for synthetic aperture (SAI) tissue harmonic imaging (THI) on a commercial ultrasound system. Synthetic aperture sequential beamforming (SASB) is combined with a pulse inversion (PI) technique on a commercial BK 2202 UltraView system. An interleaved scan...... implementation of SASB-THI was achieved on a commercial system, which can be used for future pre-clinical trials....

  11. Prediction of pork color attributes using computer vision system.

    Science.gov (United States)

    Sun, Xin; Young, Jennifer; Liu, Jeng Hung; Bachmeier, Laura; Somers, Rose Marie; Chen, Kun Jie; Newman, David

    2016-03-01

    Color image processing and regression methods were utilized to evaluate the color score of pork center-cut loin samples. One hundred loin samples of subjective color scores 1 to 5 (NPB, 2011; n=20 for each color score) were selected to determine correlation values between Minolta colorimeter measurements and image processing features. Eighteen image color features were extracted from three different color spaces: RGB (red, green, blue), HSI (hue, saturation, intensity), and L*a*b*. When comparing Minolta colorimeter values with those obtained from image processing, correlations were significant (P<0.0001) for L* (0.91), a* (0.80), and b* (0.66). Two comparable regression models (linear and stepwise) were used to evaluate prediction results of pork color attributes. The proposed linear regression model had a coefficient of determination (R(2)) of 0.83, compared to the stepwise regression result (R(2)=0.70). These results indicate that computer vision methods have the potential to be used as a tool in predicting pork color attributes. Copyright © 2015 Elsevier Ltd. All rights reserved.
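    The linear-regression step that relates image features to colorimeter readings can be illustrated with ordinary least squares and the coefficient of determination. This is a generic single-feature sketch with made-up numbers, not the study's 18-feature model or data.

```python
import numpy as np

def fit_linear(x, y):
    """Ordinary least squares for y ≈ a*x + b; returns the slope,
    intercept, and coefficient of determination R^2."""
    A = np.column_stack([x, np.ones_like(x)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    a, b = coef
    pred = a * x + b
    ss_res = float(((y - pred) ** 2).sum())   # residual sum of squares
    ss_tot = float(((y - y.mean()) ** 2).sum())  # total sum of squares
    return float(a), float(b), 1.0 - ss_res / ss_tot
```

    Fitting image-derived L* against colorimeter L* this way, an R² of 0.83 would mean the image feature explains 83% of the variance in the reference measurement.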

  12. An assembly system based on industrial robot with binocular stereo vision

    Science.gov (United States)

    Tang, Hong; Xiao, Nanfeng

    2017-01-01

    This paper proposes an electronic parts and components assembly system based on an industrial robot with binocular stereo vision. First, binocular stereo vision with a visual attention mechanism model is used to quickly locate the image regions that contain the electronic parts and components. Second, a deep neural network is adopted to recognize the features of the electronic parts and components. Third, in order to control the end-effector of the industrial robot to grasp the electronic parts and components, a genetic algorithm (GA) is proposed to compute the transition matrix and the inverse kinematics of the industrial robot (end-effector), which plays a key role in bridging the binocular stereo vision and the industrial robot. Finally, the proposed assembly system is tested in LED component assembly experiments, and the results show that it has high efficiency and good applicability.

  13. Data-Fusion for a Vision-Aided Radiological Detection System: Sensor dependence and Source Tracking

    Science.gov (United States)

    Stadnikia, Kelsey; Martin, Allan; Henderson, Kristofer; Koppal, Sanjeev; Enqvist, Andreas

    2018-01-01

    The University of Florida is taking a multidisciplinary approach to fusing the data between 3D vision sensors and radiological sensors in hopes of creating a system capable of not only detecting the presence of a radiological threat but also tracking it. The key to developing such a vision-aided radiological detection system lies in the count rate being inversely dependent on the square of the distance. Presented in this paper are the results of the calibration algorithm used to predict the location of the radiological detectors based on the 3D distance from the source to the detector (vision data) and the detector's count rate (radiological data). Also presented are the results of two correlation methods used to explore source tracking.
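    The inverse-square relationship that makes this fusion possible can be used directly: given a count rate measured at one known distance, the distance implied by any other count rate follows immediately. A minimal sketch, deliberately ignoring background counts, detector efficiency, and solid-angle effects:

```python
import math

def distance_from_rate(rate_ref, dist_ref, rate_now):
    """Infer source-detector distance from a count rate, given one
    reference (rate, distance) calibration pair.  Since C ∝ 1/r²,
    r_now = r_ref * sqrt(C_ref / C_now)."""
    return dist_ref * math.sqrt(rate_ref / rate_now)
```

    For instance, if a detector reads 400 counts/s at 1 m, a reading of 100 counts/s places the source at roughly 2 m: quartering the rate doubles the distance.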

  14. Computer vision system in real-time for color determination on flat surface food

    Directory of Open Access Journals (Sweden)

    Erick Saldaña

    2013-03-01

    Full Text Available Artificial vision systems, also known as computer vision, are potent quality inspection tools that can be applied in pattern recognition for fruit and vegetable analysis. The aim of this research was to design, implement and calibrate a new computer vision system (CVS) in real-time for color measurement on flat-surface food. For this purpose a device capable of performing this task (software and hardware) was designed and implemented, consisting of two phases: a) image acquisition and b) image processing and analysis. Both the algorithm and the graphical user interface (GUI) were developed in Matlab. The CVS calibration was performed using a conventional colorimeter (CIE L*a*b* model), where the errors of the color parameters were estimated: eL* = 5.001%, ea* = 2.287%, and eb* = 4.314%, which ensures adequate and efficient automation for industrial processes in quality control in the food industry sector.

  15. Self-positioning of a mobile robot using a vision system and image overlay with VRML

    Science.gov (United States)

    Kwon, Bang Hyun; Son, Eun Ho; Yoo, Sung Goo; Chong, Kil To

    2005-12-01

    This research describes a method for localizing a mobile robot in its working environment using a vision system and VRML. The robot identifies landmarks in the environment and carries out self-positioning. Image-processing and neural network pattern-matching techniques were employed to recognize landmarks placed in the robot's working environment. Robot self-positioning using the vision system was based on a well-known localization algorithm. After self-positioning, the 2D scene from the vision system is overlaid with the VRML scene. The paper describes how the self-positioning is realized, shows the result of overlaying the 2D scene with the VRML scene, and describes the advantages expected from overlapping the two scenes.

  16. A bio-inspired apposition compound eye machine vision sensor system

    International Nuclear Information System (INIS)

    Davis, J D; Barrett, S F; Wright, C H G; Wilcox, M

    2009-01-01

    The Wyoming Information, Signal Processing, and Robotics Laboratory is developing a wide variety of bio-inspired vision sensors. We are interested in exploring the vision system of various insects and adapting some of their features toward the development of specialized vision sensors. We do not attempt to supplant traditional digital imaging techniques but rather develop sensor systems tailor made for the application at hand. We envision that many applications may require a hybrid approach using conventional digital imaging techniques enhanced with bio-inspired analogue sensors. In this specific project, we investigated the apposition compound eye and its characteristics commonly found in diurnal insects and certain species of arthropods. We developed and characterized an array of apposition compound eye-type sensors and tested them on an autonomous robotic vehicle. The robot exhibits the ability to follow a pre-defined target and avoid specified obstacles using a simple control algorithm.

  17. Computer vision system in real-time for color determination on flat surface food

    Directory of Open Access Journals (Sweden)

    Erick Saldaña

    2013-01-01

    Full Text Available Artificial vision systems, also known as computer vision, are potent quality inspection tools that can be applied in pattern recognition for fruit and vegetable analysis. The aim of this research was to design, implement and calibrate a new computer vision system (CVS) in real-time for color measurement on flat-surface food. For this purpose a device capable of performing this task (software and hardware) was designed and implemented, consisting of two phases: a) image acquisition and b) image processing and analysis. Both the algorithm and the graphical user interface (GUI) were developed in Matlab. The CVS calibration was performed using a conventional colorimeter (CIE L*a*b* model), where the errors of the color parameters were estimated: eL* = 5.001%, ea* = 2.287%, and eb* = 4.314%, which ensures adequate and efficient automation for industrial processes in quality control in the food industry sector.

  18. Development of a Compact Range-gated Vision System to Monitor Structures in Low-visibility Environments

    International Nuclear Information System (INIS)

    Ahn, Yong-Jin; Park, Seung-Kyu; Baik, Sung-Hoon; Kim, Dong-Lyul; Choi, Young-Soo; Jeong, Kyung-Min

    2015-01-01

    Image acquisition in disaster areas or radiation areas of the nuclear industry is an important function for safety inspection and preparing appropriate damage control plans. An automatic vision system to monitor structures and facilities in blurred, smoky environments, such as the sites of fires and detonations, is therefore essential. Vision systems cannot acquire an image when the illumination light is blocked by disturbance materials such as smoke, fog and dust. To overcome the imaging distortion caused by obstacle materials, robust vision systems should have extra functions, such as active illumination through the disturbance materials. One such active vision system is the range-gated imaging system, which can acquire image data in blurred and darkened environments. Range-gated imaging (RGI) is a direct active visualization technique using a highly sensitive image sensor and a high-intensity illuminant. Currently, the range-gated imaging technique, providing 2D and range image data, is one of the emerging active vision technologies. The range-gated imaging system obtains vision information by summing time-sliced vision images. In an RGI system, a high-intensity illuminant flashes for an ultra-short time and a highly sensitive image sensor is gated with an ultra-short exposure time to capture only the illumination light. Here, the illuminant illuminates objects by flashing strong light through disturbance materials, such as smoke particles and dust particles. In contrast to passive conventional vision systems, the RGI active vision technology enables operation even in harsh environments like low-visibility smoky environments. In this paper, a compact range-gated vision system is developed to monitor structures in low-visibility environments. The system consists of an illumination light, a range-gating camera and a control computer. Visualization experiments are carried out in a low-visibility foggy environment to verify the imaging capability.
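    The range slice imaged by a single gate follows directly from time-of-flight: the illumination pulse travels out and back, so each nanosecond of gate delay corresponds to about 15 cm of range. A small generic sketch of that bookkeeping, not tied to the particular system described above:

```python
C_LIGHT = 299_792_458.0  # speed of light in m/s

def gate_to_range(delay_s, width_s):
    """Near and far edges (in metres) of the range slice seen by one
    gate: range = c * t / 2, because the light makes a round trip."""
    near = C_LIGHT * delay_s / 2.0
    far = C_LIGHT * (delay_s + width_s) / 2.0
    return near, far
```

    For example, a 200 ns gate delay with a 20 ns gate width images roughly the 30 m to 33 m slice; summing many such slices at successive delays builds up the full scene, with back-scatter from fog outside each gate rejected.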

  19. Head movements quadruple the range of speeds encoded by the insect motion vision system in hawkmoths.

    Science.gov (United States)

    Windsor, Shane P; Taylor, Graham K

    2017-10-11

    Flying insects use compensatory head movements to stabilize gaze. Like other optokinetic responses, these movements can reduce image displacement, motion and misalignment, and simplify the optic flow field. Because gaze is imperfectly stabilized in insects, we hypothesized that compensatory head movements serve to extend the range of velocities of self-motion that the visual system encodes. We tested this by measuring head movements in hawkmoths Hyles lineata responding to full-field visual stimuli of differing oscillation amplitudes, oscillation frequencies and spatial frequencies. We used frequency-domain system identification techniques to characterize the head's roll response, and simulated how this would have affected the output of the motion vision system, modelled as a computational array of Reichardt detectors. The moths' head movements were modulated to allow encoding of both fast and slow self-motion, effectively quadrupling the working range of the visual system for flight control. By using its own output to drive compensatory head movements, the motion vision system thereby works as an adaptive sensor, which will be especially beneficial in nocturnal species with inherently slow vision. Studies of the ecology of motion vision must therefore consider the tuning of motion-sensitive interneurons in the context of the closed-loop systems in which they function. © 2017 The Author(s).
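    The Reichardt-detector array used to model the motion vision system can be sketched as a pair of mirror-symmetric correlators: each photoreceptor signal is delayed (here via a first-order low-pass filter, a common modelling choice) and multiplied with its undelayed neighbour, and the two products are subtracted to give a signed, direction-selective output. A minimal illustration under those assumptions, not the authors' computational array:

```python
import numpy as np

def reichardt_response(left, right, tau=3):
    """Mean output of one Reichardt correlator on two photoreceptor
    time series.  Positive output indicates motion from the `left`
    receptor toward the `right` receptor; negative, the reverse."""
    def lowpass(x):
        # First-order exponential filter approximating the delay line.
        y = np.zeros_like(x, dtype=float)
        a = 1.0 / tau
        for i in range(1, len(x)):
            y[i] = y[i - 1] + a * (x[i - 1] - y[i - 1])
        return y
    dl, dr = lowpass(left), lowpass(right)
    # Two mirror-symmetric half-detectors, subtracted.
    return float(np.mean(dl * right - dr * left))
```

    Swapping the two inputs exactly negates the output, which is the direction selectivity that head movements exploit by re-centring image velocities within the detector's tuned range.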

  20. Physical Characterization of Synthetic Phosphatidylinositol Dimannosides and Analogues in Binary Systems with Phosphatidylcholine

    DEFF Research Database (Denmark)

    Hubert, Madlen; Larsen, David S; Hayman, Colin M

    2014-01-01

    Native phosphatidylinositol mannosides (PIMs) from the cell wall of Mycobacterium bovis (M. bovis) and synthetic analogues have been identified to exert immunostimulatory activities. These activities have been investigated using particulate delivery systems containing native mannosylated lipids o...

  1. SMART-DS: Synthetic Models for Advanced, Realistic Testing: Distribution Systems and Scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Hodge, Bri-Mathias; Palmintier, Bryan

    2016-03-03

    This presentation provides an overview of full-scale, high-quality, synthetic distribution system data set(s) for testing distribution automation algorithms, distributed control approaches, ADMS capabilities, and other emerging distribution technologies.

  2. Development of PHilMech Computer Vision System (CVS) for Quality Analysis of Rice and Corn

    OpenAIRE

    Andres Morales Tuates jr; Aileen R. Ligisan

    2016-01-01

    Manual analysis of rice and corn is done by visually inspecting each grain and classifying it according to its respective category. This method is subjective and tedious, leading to errors in analysis. Computer vision could be used to analyze the quality of rice and corn by developing models that correlate shape and color features with the various classifications. The PhilMech low-cost computer vision system (CVS) was developed to analyze the quality of rice and corn. It is composed of an ordinary ...

  3. Miniaturisation of Pressure-Sensitive Paint Measurement Systems Using Low-Cost, Miniaturised Machine Vision Cameras.

    Science.gov (United States)

    Quinn, Mark Kenneth; Spinosa, Emanuele; Roberts, David A

    2017-07-25

    Measurements of pressure-sensitive paint (PSP) have been performed using new or non-scientific imaging technology based on machine vision tools. Machine vision camera systems are typically used for automated inspection or process monitoring. Such devices offer the benefits of lower cost and reduced size compared with typical scientific-grade cameras; however, their optical qualities and suitability have yet to be determined. This research intends to show the relevant imaging characteristics and the applicability of such imaging technology for PSP. Details of camera performance are benchmarked and compared to standard scientific imaging equipment, and subsequent PSP tests are conducted using a static calibration chamber. The findings demonstrate that machine vision technology can be used for PSP measurements, opening up the possibility of performing measurements on board small-scale models such as those used for wind tunnel testing, or measurements in confined spaces with limited optical access.

  4. Miniaturisation of Pressure-Sensitive Paint Measurement Systems Using Low-Cost, Miniaturised Machine Vision Cameras

    Directory of Open Access Journals (Sweden)

    Mark Kenneth Quinn

    2017-07-01

    Full Text Available Measurements of pressure-sensitive paint (PSP) have been performed using new or non-scientific imaging technology based on machine vision tools. Machine vision camera systems are typically used for automated inspection or process monitoring. Such devices offer the benefits of lower cost and reduced size compared with typical scientific-grade cameras; however, their optical qualities and suitability have yet to be determined. This research intends to show the relevant imaging characteristics and the applicability of such imaging technology for PSP. Details of camera performance are benchmarked and compared to standard scientific imaging equipment, and subsequent PSP tests are conducted using a static calibration chamber. The findings demonstrate that machine vision technology can be used for PSP measurements, opening up the possibility of performing measurements on board small-scale models such as those used for wind tunnel testing, or measurements in confined spaces with limited optical access.

  5. THE SYSTEM OF TECHNICAL VISION IN THE ARCHITECTURE OF THE REMOTE CONTROL SYSTEM

    Directory of Open Access Journals (Sweden)

    S. V. Shavetov

    2014-03-01

    Full Text Available The paper deals with the development of a video broadcasting system for controlling mobile robots over the Internet. A brief overview is given of the issues encountered in real-time video stream broadcasting and of their solutions. Affordable and versatile technical vision solutions are considered. An approach for frame-accurate video rebroadcasting to an unlimited number of end-users is proposed. The optimal performance parameters of network equipment for a finite number of cameras are defined. The system was tested on five IP cameras from different manufacturers. The average time delay for broadcasting in MJPEG format was 200 ms over the local network and 500 ms over the Internet.

  6. Possible Computer Vision Systems and Automated or Computer-Aided Edging and Trimming

    Science.gov (United States)

    Philip A. Araman

    1990-01-01

    This paper discusses research which is underway to help our industry reduce costs, increase product volume and value recovery, and market more accurately graded and described products. The research is part of a team effort to help the hardwood sawmill industry automate with computer vision systems, and computer-aided or computer controlled processing. This paper...

  7. Distance based control system for machine vision-based selective spraying

    NARCIS (Netherlands)

    Steward, B.L.; Tian, L.F.; Tang, L.

    2002-01-01

    For effective operation of a selective sprayer with real-time local weed sensing, herbicides must be delivered accurately to weed targets in the field. With a machine vision-based selective spraying system, acquiring sequential images and switching nozzles on and off at the correct locations are

  8. Performance of Color Camera Machine Vision in Automated Furniture Rough Mill Systems

    Science.gov (United States)

    D. Earl Kline; Agus Widoyoko; Janice K. Wiedenbeck; Philip A. Araman

    1998-01-01

    The objective of this study was to evaluate the performance of color camera machine vision for lumber processing in a furniture rough mill. The study used 134 red oak boards to compare the performance of automated gang-rip-first rough mill yield based on a prototype color camera lumber inspection system developed at Virginia Tech with both estimated optimum rough mill...

  9. Vision-based robotic system for object agnostic placing operations

    DEFF Research Database (Denmark)

    Rofalis, Nikolaos; Nalpantidis, Lazaros; Andersen, Nils Axel

    2016-01-01

    to operate within an unknown environment manipulating unknown objects. The developed system detects objects, finds matching compartments in a placing box, and ultimately grasps and places the objects there. The developed system exploits 3D sensing and visual feature extraction. No prior knowledge is provided...... to the system, neither for the objects nor for the placing box. The experimental evaluation of the developed robotic system shows that a combination of seemingly simple modules and strategies can provide an effective solution to the targeted problem....

  10. Preliminary Design of a Recognition System for Infected Fish Species Using Computer Vision

    OpenAIRE

    Hu, Jing; Li, Daoliang; Duan, Qingling; Chen, Guifen; Si, Xiuli

    2011-01-01

    Part 1: Decision Support Systems, Intelligent Systems and Artificial Intelligence Applications; International audience; For the purpose of classifying fish species, a recognition system was preliminarily designed using computer vision. First, pictures were pre-processed by purpose-built programs and divided into rectangular pieces. Second, color and texture features were extracted from the selected rectangular fish-skin images. Finally, all the images were classified by multi...

  11. A Vision-based Steering Control System for Aerial Vehicles

    OpenAIRE

    Viollet, Stephane; Kerhuel, Lubin; Franceschini, Nicolas

    2009-01-01

    Here we have described how a miniature tethered aerial platform equipped with a one-axis, ultrafast accurate gaze control system inspired by highly proficient, long existing natural biological systems was designed and implemented. The seemingly complex gaze control system (figure 6) was designed to hold the robot's gaze fixated onto a contrasting object in spite of any major disturbances undergone by the body. It was established that after being destabilized by a nasty thump applied to its bo...

  12. A vision-based system for fast and accurate laser scanning in robot-assisted phonomicrosurgery.

    Science.gov (United States)

    Dagnino, Giulio; Mattos, Leonardo S; Caldwell, Darwin G

    2015-02-01

    Surgical quality in phonomicrosurgery can be improved by combining open-loop laser control (e.g., high-speed scanning capabilities) with a robust and accurate closed-loop visual servoing system. A new vision-based system for laser scanning control during robot-assisted phonomicrosurgery was developed and tested. Laser scanning was accomplished with a dual control strategy, which adds a vision-based trajectory correction phase to a fast open-loop laser controller. The system is designed to eliminate open-loop aiming errors caused by system calibration limitations and by the unpredictable topology of real targets. Evaluation of the new system was performed using CO(2) laser cutting trials on artificial targets and ex-vivo tissue. The system produced accuracy values corresponding to pixel resolution even when smoke created by the laser-target interaction cluttered the camera view. In realistic test scenarios, trajectory-following RMS errors were reduced by almost 80% with respect to open-loop performance, reaching mean error values around 30 μm and maximum observed errors on the order of 60 μm. The new vision-based laser microsurgical control system was shown to be effective and promising, with a significant potential positive impact on the safety and quality of laser microsurgeries.
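The trajectory-following RMS error quoted above is a standard root-mean-square distance over corresponding trajectory points; a minimal sketch (the sample points and units below are illustrative, not data from the paper):

```python
def rms_error(planned, executed):
    """Root-mean-square Euclidean distance between corresponding
    points of the planned and executed cutting trajectories."""
    sq = [
        sum((p - e) ** 2 for p, e in zip(pp, pe))
        for pp, pe in zip(planned, executed)
    ]
    return (sum(sq) / len(sq)) ** 0.5

# Illustrative 2-D trajectories (e.g. in millimetres):
planned = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
executed = [(0.0, 0.03), (1.0, -0.03), (2.0, 0.03)]
```

Comparing this figure between the open-loop and vision-corrected runs gives the percentage reduction the paper reports.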

  13. Distributed Electrical Energy Systems: Needs, Concepts, Approaches and Vision

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Yingchen [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Zhang, Jun [University of Denver; Gao, Wenzhong [University of Denver; Zheng, Xinhu [University of Minnesota; Yang, Liuqing [Colorado State University; Hao, Jun [University of Denver; Dai, Xiaoxiao [University of Denver

    2017-09-01

    Intelligent distributed electrical energy systems (IDEES) feature vast numbers of system components, diversified component types, and difficulties in operation and management, with the result that the traditional centralized power-system management approach no longer fits their operation. Thus, it is believed that blockchain technology is one of the important feasible technical paths for building future large-scale distributed electrical energy systems. An IDEES inherently has both social and technical characteristics; as a result, a distributed electrical energy system needs to be divided into multiple layers, and at each layer a blockchain is utilized to model and manage its logical and physical functionalities. The blockchains at different layers coordinate with each other to achieve successful operation of the IDEES. Specifically, the multi-layer blockchains, named the 'blockchain group', consist of a distributed data access and service blockchain, an intelligent property management blockchain, a power system analysis blockchain, an intelligent contract operation blockchain, and an intelligent electricity trading blockchain. It is expected that the blockchain group can self-organize into a complex, autonomous and distributed IDEES. In this complex system, frequent and in-depth interactions and computing will give rise to intelligence, which is expected to bring stable, reliable and efficient electrical energy production, transmission and consumption.

  14. Omnidirectional vision systems calibration, feature extraction and 3D information

    CERN Document Server

    Puig, Luis

    2013-01-01

    This work focuses on central catadioptric systems, from the early step of calibration to high-level tasks such as 3D information retrieval. The book opens with a thorough introduction to the sphere camera model, along with an analysis of the relation between this model and actual central catadioptric systems. Then, a new approach to calibrate any single-viewpoint catadioptric camera is described.  This is followed by an analysis of existing methods for calibrating central omnivision systems, and a detailed examination of hybrid two-view relations that combine images acquired with uncalibrated

  15. Automatic behaviour analysis system for honeybees using computer vision

    DEFF Research Database (Denmark)

    Tu, Gang Jun; Hansen, Mikkel Kragh; Kryger, Per

    2016-01-01

    -cost embedded computer with very limited computational resources as compared to an ordinary PC. The system succeeds in counting honeybees, identifying their position and measuring their in-and-out activity. Our algorithm uses a background subtraction method to segment the images. After the segmentation stage...... demonstrate that this system can be used as a tool to detect the behaviour of honeybees and assess their state at the beehive entrance. Besides, the computation-time results show that the Raspberry Pi is a viable solution for such a real-time video processing system....
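The pipeline sketched in the abstract, background subtraction followed by segmentation and counting, can be shown in miniature; the threshold value and the 4-connectivity choice below are illustrative assumptions, not values from the paper:

```python
def subtract_background(frame, background, threshold=30):
    """Binary foreground mask: a pixel is foreground when it differs
    from the background model by more than `threshold` grey levels."""
    h, w = len(frame), len(frame[0])
    return [[1 if abs(frame[y][x] - background[y][x]) > threshold else 0
             for x in range(w)] for y in range(h)]

def count_blobs(mask):
    """Count 4-connected foreground components (one per bee) with an
    iterative flood fill."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    blobs = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                blobs += 1
                stack = [(y, x)]
                while stack:
                    cy, cx = stack.pop()
                    if 0 <= cy < h and 0 <= cx < w and mask[cy][cx] and not seen[cy][cx]:
                        seen[cy][cx] = True
                        stack += [(cy + 1, cx), (cy - 1, cx), (cy, cx + 1), (cy, cx - 1)]
    return blobs
```

On a Raspberry Pi-class device, keeping the per-frame work to simple integer comparisons like this is what makes real-time processing feasible.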

  16. Utilization of the Space Vision System as an Augmented Reality System For Mission Operations

    Science.gov (United States)

    Maida, James C.; Bowen, Charles

    2003-01-01

    Augmented reality is a technique whereby computer generated images are superimposed on live images for visual enhancement. Augmented reality can also be characterized as dynamic overlays when computer generated images are registered with moving objects in a live image. This technique has been successfully implemented, with low to medium levels of registration precision, in an NRA funded project entitled, "Improving Human Task Performance with Luminance Images and Dynamic Overlays". Future research is already being planned to also utilize a laboratory-based system where more extensive subject testing can be performed. However successful this might be, the problem will still be whether such a technology can be used with flight hardware. To answer this question, the Canadian Space Vision System (SVS) will be tested as an augmented reality system capable of improving human performance where the operation requires indirect viewing. This system has already been certified for flight and is currently flown on each shuttle mission for station assembly. Successful development and utilization of this system in a ground-based experiment will expand its utilization for on-orbit mission operations. Current research and development regarding the use of augmented reality technology is being simulated using ground-based equipment. This is an appropriate approach for development of symbology (graphics and annotation) optimal for human performance and for development of optimal image registration techniques. It is anticipated that this technology will become more pervasive as it matures. Because we know what and where almost everything is on ISS, this reduces the registration problem and improves the computer model of that reality, making augmented reality an attractive tool, provided we know how to use it. This is the basis for current research in this area. However, there is a missing element to this process. It is the link from this research to the current ISS video system and to

  17. Applications of industrial machine vision systems in the nuclear energy

    International Nuclear Information System (INIS)

    Vandergheynst, A.; Vanderborck, Y.

    1984-01-01

    In the paper, two multi-functional machine vision systems, basically developed for industrial robotics and representing the state of the art, are presented. Their potential applications in the nuclear industry (nuclear power plants and fuel cycle facilities) are reviewed.

  18. Development of a model of machine hand eye coordination and program specifications for a topological machine vision system

    Science.gov (United States)

    1972-01-01

    A unified approach to computer vision and manipulation is developed which is called choreographic vision. In the model, objects to be viewed by a projected robot in the Viking missions to Mars are seen as objects to be manipulated within choreographic contexts controlled by a multimoded remote, supervisory control system on Earth. A new theory of context relations is introduced as a basis for choreographic programming languages. A topological vision model is developed for recognizing objects by shape and contour. This model is integrated with a projected vision system consisting of a multiaperture image dissector TV camera and a ranging laser system. System program specifications integrate eye-hand coordination and topological vision functions and an aerospace multiprocessor implementation is described.

  19. Data Fusion for a Vision-Radiological System: a Statistical Calibration Algorithm

    International Nuclear Information System (INIS)

    Enqvist, Andreas; Koppal, Sanjeev; Riley, Phillip

    2015-01-01

    Presented here is a fusion system based on simple, low-cost computer vision and radiological sensors for tracking of multiple objects and identifying potential radiological materials being transported or shipped. The main focus of this work is the development of calibration algorithms for characterizing the fused sensor system as a single entity. There is an apparent need for correcting for a scene deviation from the basic inverse distance-squared law governing the detection rates even when evaluating system calibration algorithms. In particular, the computer vision system enables a map of distance-dependence of the sources being tracked, to which the time-dependent radiological data can be incorporated by means of data fusion of the two sensors' output data. (authors)
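The inverse distance-squared law mentioned above can be characterized by a one-parameter least-squares fit of source strength, after which the residuals expose the scene deviation to be corrected; a minimal sketch (the counts and distances are made up for illustration):

```python
def fit_source_strength(distances, rates):
    """Least-squares fit of rate = S / d**2 (the inverse-square law).
    Minimizing sum((r_i - S/d_i^2)^2) gives the closed form
    S = sum(r_i / d_i^2) / sum(1 / d_i^4)."""
    num = sum(r / d ** 2 for d, r in zip(distances, rates))
    den = sum(1.0 / d ** 4 for d in distances)
    return num / den

def deviation_map(distances, rates, strength):
    """Per-sample residual from the ideal law; large values flag
    scene effects (scatter, shielding) the calibration must absorb."""
    return [r - strength / d ** 2 for d, r in zip(distances, rates)]
```

Here the vision tracker supplies the distances while the radiological sensor supplies the rates, which is exactly the data fusion the record describes.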

  20. Systems-synthetic biology in understanding the complexities and simple devices in immunology.

    Science.gov (United States)

    Soni, Bhavnita; Nimsarkar, Prajakta; Mol, Milsee; Saha, Bhaskar; Singh, Shailza

    2018-03-23

    Systems and synthetic biology in the coming era will have the ability to manipulate, stimulate and engineer cells to counteract the pathogenic immune response. The inherent biological complexities associated with the creation of a device allow capitalizing on biotechnological resources, either by simply administering a recombinant cytokine or by reprogramming the immune cells. The strategy outlined, adopted and discussed may mark the beginning of promising therapeutics based on the principles of synthetic immunology. Copyright © 2018 Elsevier Ltd. All rights reserved.

  1. Virtual vision system with actual flavor by olfactory display

    Science.gov (United States)

    Sakamoto, Kunio; Kanazawa, Fumihiro

    2010-11-01

    The authors have researched multimedia and support systems for nursing studies on, and practices of, reminiscence therapy and life review therapy. The concept of the life review was presented by Butler in 1963: the process of thinking back on one's life and communicating about one's life to another person is called life review. There is a famous episode concerning memory, known as the Proustian effect. It appears in Proust's novel as an episode in which the narrator is reminded of an old memory when he dips a madeleine in tea. Many scientists have researched why smells trigger memory. The authors pay attention to the relation between smells and memory, although the reason is not yet evident. We have therefore tried to add an olfactory display to the multimedia system so that smells become a trigger for recalling buried memories. An olfactory display is a device that delivers smells to the nose. It provides special effects, for example emitting a smell as if you were there, or giving a trigger for reminding us of memories. The authors have developed a tabletop display system connected with the olfactory display. To deliver a flavor to the user's nose, the system needs to recognize and measure the positions of the user's face and nose. In this paper, the authors describe an olfactory display that detects the nose position for effective delivery.

  2. Remote sensing of physiological signs using a machine vision system.

    Science.gov (United States)

    Al-Naji, Ali; Gibson, Kim; Chahl, Javaan

    2017-07-01

    The aim of this work is to remotely measure heart rate (HR) and respiratory rate (RR) using a video camera from long range (> 50 m). The proposed system is based on imperceptible signals produced from blood circulation, including skin colour variations and head motion. As these signals are not visible to the naked eye and to preserve the signal strength in the video, we used an improved video magnification technique to enhance these invisible signals and detect the physiological activity within the subject. The software of the proposed system was built in a graphic user interface (GUI) environment to easily select a magnification system to use (colour or motion magnification) and measure the physiological signs independently. The measurements were performed on a set of 10 healthy subjects equipped with a finger pulse oximeter and respiratory belt transducer that were used as reference methods. The experimental results were statistically analysed by using the Bland-Altman method, Pearson's correlation coefficient, Spearman correlation coefficient, mean absolute error, and root mean squared error. The proposed system achieved high correlation even in the presence of movement artefacts, different skin tones, lighting conditions and distance from the camera. With acceptable performance and low computational complexity, the proposed system is a suitable candidate for homecare applications, security applications and mobile health devices.
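The Bland-Altman analysis and Pearson correlation used above are straightforward to compute from paired reference/estimate readings; a minimal plain-Python sketch (the sample heart-rate values are illustrative, not the study's data):

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def bland_altman(reference, estimate):
    """Bland-Altman agreement statistics: bias (mean difference) and
    the 95% limits of agreement, bias +/- 1.96 * SD of differences."""
    diffs = [e - r for r, e in zip(reference, estimate)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = (sum((d - bias) ** 2 for d in diffs) / (n - 1)) ** 0.5
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

Narrow limits of agreement against the pulse-oximeter reference are what justify calling the camera-based estimates clinically usable.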

  3. An Adaptive Machine Vision System for Parts Assembly Inspection

    Science.gov (United States)

    Sun, Jun; Sun, Qiao; Surgenor, Brian

    This paper presents an intelligent visual inspection methodology that addresses the need for an improved adaptability of a visual inspection system for parts verification in assembly lines. The proposed system is able to adapt to changing inspection tasks and environmental conditions through an efficient online learning process without excessive off-line retraining or retuning. The system consists of three major modules: region localization, defect detection, and online learning. An edge-based geometric pattern-matching technique is used to locate the region of verification that contains the subject of inspection within the acquired image. Principal component analysis technique is employed to implement the online learning and defect detection modules. Case studies using field data from a fasteners assembly line are conducted to validate the proposed methodology.
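The principal-component-analysis defect check described above reduces to learning the principal subspace of normal samples and flagging parts whose reconstruction error is large. A minimal 2-D sketch using power iteration (the data points are illustrative; the real system operates on image features):

```python
def principal_axis(samples):
    """Mean and first principal component of 2-D samples, found by
    power iteration on the 2x2 covariance matrix (assumes the data
    actually has variance along some direction)."""
    n = len(samples)
    mx = sum(p[0] for p in samples) / n
    my = sum(p[1] for p in samples) / n
    cxx = sum((p[0] - mx) ** 2 for p in samples) / n
    cyy = sum((p[1] - my) ** 2 for p in samples) / n
    cxy = sum((p[0] - mx) * (p[1] - my) for p in samples) / n
    v = (1.0, 0.0)
    for _ in range(50):  # repeatedly apply covariance, renormalize
        w = (cxx * v[0] + cxy * v[1], cxy * v[0] + cyy * v[1])
        norm = (w[0] ** 2 + w[1] ** 2) ** 0.5
        v = (w[0] / norm, w[1] / norm)
    return (mx, my), v

def reconstruction_error(point, mean, axis):
    """Distance from the point to the 1-D principal subspace; a large
    value marks the inspected part as a likely defect."""
    dx, dy = point[0] - mean[0], point[1] - mean[1]
    t = dx * axis[0] + dy * axis[1]          # projection onto the axis
    rx, ry = dx - t * axis[0], dy - t * axis[1]
    return (rx ** 2 + ry ** 2) ** 0.5
```

Online learning, as in the paper, then amounts to refitting the mean and axis as new defect-free samples arrive.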

  4. Enhanced 3D face processing using an active vision system

    DEFF Research Database (Denmark)

    Lidegaard, Morten; Larsen, Rasmus; Kraft, Dirk

    2014-01-01

    We present an active face processing system based on 3D shape information extracted by means of stereo information. We use two sets of stereo cameras with different fields of view (FOV): one with a wide FOV is used for face tracking, while the other with a narrow FOV is used for face identification. We argue for two advantages of such a system: first, an extended work range, and second, the possibility to place the narrow FOV camera in a way such that a much better reconstruction quality can be achieved compared to a static camera, even if the face had been fully visible in the periphery of the narrow FOV camera. We substantiate these two observations by qualitative results on face reconstruction and quantitative results on face recognition. As a consequence, such a set-up allows us to achieve a better and much more flexible system for 3D face reconstruction, e.g. for recognition or emotion...

  5. Development of machine-vision system for gap inspection of muskmelon grafted seedlings.

    Science.gov (United States)

    Liu, Siyao; Xing, Zuochang; Wang, Zifan; Tian, Subo; Jahun, Falalu Rabiu

    2017-01-01

    Grafting robots have been developed around the world, but some auxiliary work, such as gap inspection of grafted seedlings, still needs to be done by humans. A machine-vision system for gap inspection of grafted muskmelon seedlings was developed in this study. The image acquisition system consists of a CCD camera, a lens and a front white lighting source. The image of the inspected gap was processed and analyzed by the software HALCON 12.0. The recognition algorithm of the system is based on the principle of deformable template matching. A template is first created from an image of a qualified grafted seedling gap. The gap image of a grafted seedling is then compared with the created template to determine their matching degree, which ranges from 0 to 1 according to the similarity between the gap image and the template: the less similar the grafted seedling gap is to the template, the smaller the matching degree. The gap is then output as qualified or unqualified: if the matching degree is less than 0.58, or no match is found, the gap is judged unqualified; otherwise the gap is qualified. Finally, 100 muskmelon seedlings were grafted and inspected to test the gap inspection system. Results showed that the machine-vision system could recognize gap qualification correctly, with 98% agreement with human vision, and that the inspection speed of the system can reach 15 seedlings·min-1. The gap inspection process in grafting can be fully automated with this machine-vision system, which will be a key component of fully automatic grafting robots.
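The qualified/unqualified decision above (matching degree against the 0.58 cut-off) can be sketched with a plain zero-mean normalized cross-correlation standing in for HALCON's deformable template matching, which the system actually uses; the patch values below are illustrative:

```python
def matching_degree(template, image):
    """Zero-mean normalized cross-correlation between two equally
    sized grayscale patches, mapped from [-1, 1] onto [0, 1]."""
    t = [p for row in template for p in row]
    g = [p for row in image for p in row]
    mt, mg = sum(t) / len(t), sum(g) / len(g)
    num = sum((a - mt) * (b - mg) for a, b in zip(t, g))
    den = (sum((a - mt) ** 2 for a in t) * sum((b - mg) ** 2 for b in g)) ** 0.5
    if den == 0:
        return 0.0  # flat patch: treat as no match found
    return (num / den + 1.0) / 2.0

def inspect_gap(template, image, threshold=0.58):
    """Qualified iff the matching degree reaches the paper's 0.58 cut-off."""
    return "qualified" if matching_degree(template, image) >= threshold else "unqualified"
```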

  6. Development of machine-vision system for gap inspection of muskmelon grafted seedlings.

    Directory of Open Access Journals (Sweden)

    Siyao Liu

    Full Text Available Grafting robots have been developed around the world, but some auxiliary work, such as gap inspection of grafted seedlings, still needs to be done by humans. A machine-vision system for gap inspection of grafted muskmelon seedlings was developed in this study. The image acquisition system consists of a CCD camera, a lens and a front white lighting source. The image of the inspected gap was processed and analyzed by the software HALCON 12.0. The recognition algorithm of the system is based on the principle of deformable template matching. A template is first created from an image of a qualified grafted seedling gap. The gap image of a grafted seedling is then compared with the created template to determine their matching degree, which ranges from 0 to 1 according to the similarity between the gap image and the template: the less similar the grafted seedling gap is to the template, the smaller the matching degree. The gap is then output as qualified or unqualified: if the matching degree is less than 0.58, or no match is found, the gap is judged unqualified; otherwise the gap is qualified. Finally, 100 muskmelon seedlings were grafted and inspected to test the gap inspection system. Results showed that the machine-vision system could recognize gap qualification correctly, with 98% agreement with human vision, and that the inspection speed of the system can reach 15 seedlings·min-1. The gap inspection process in grafting can be fully automated with this machine-vision system, which will be a key component of fully automatic grafting robots.

  7. A Multiple Sensor Machine Vision System Technology for the Hardwood

    Science.gov (United States)

    Richard W. Conners; D.Earl Kline; Philip A. Araman

    1995-01-01

    For the last few years the authors have been extolling the virtues of a multiple sensor approach to hardwood defect detection. Since 1989 the authors have actively been trying to develop such a system. This paper details some of the successes and failures that have been experienced to date. It also discusses what remains to be done and gives time lines for the...

  8. CATEGORIZATION OF EXTRANEOUS MATTER IN COTTON USING MACHINE VISION SYSTEMS

    Science.gov (United States)

    The Cotton Trash Identification System (CTIS) was developed at the Southwestern Cotton Ginning Research Laboratory to identify and categorize extraneous matter in cotton. The CTIS bark/grass categorization was evaluated with USDA-Agricultural Marketing Service (AMS) extraneous matter calls assigned ...

  9. An Evaluation of the VISION Execution System Demonstration Prototypes

    Science.gov (United States)

    1991-01-01

    An Evaluation of the VISION Execution System Demonstration Prototypes. Patricia M. Boren, Karen E. Isaacson, Judith E. Payne, Marc L. Robbins, Robert S. Tripp. Prepared for the United States Army. RAND. Approved for public release. Jeffrey Crisci and Cecilia Butler, formerly of the Army Materiel Command (AMC) and currently with the Strategic Logistics Agency (SLA), were

  10. The optimized PWM driving for the lighting system based on physiological characteristic of human vision

    Science.gov (United States)

    Wang, Ping-Chieh; Uang, Chii-Maw; Hong, Yi-Jian; Ho, Zu-Sheng

    2011-10-01

    White-light LEDs play a main role in energy-saving solid-state lighting systems, and finding the best energy-saving driving scheme is an ongoing engineering effort. Besides DC and AC driving, operating LEDs with Pulse Width Modulation (PWM) is also a valuable research topic. The most important issue in this work is to find the driving frequency and duty cycle that achieve both energy saving and a better human visual sensation. In this paper, the psychophysics of human visual response to lighting, including persistence of vision, Bloch's law, the Broca-Sulzer law, the Ferry-Porter law, the Talbot-Plateau law, and contrast sensitivity, is discussed and analyzed. From the human vision system, we found three factors, the flash sensitivity, the illumination intensity and the background environment illumination, that are used to decide the frequency and duty cycle of the PWM driving method. A set of controllable LED lamps with adjustable frequency and duty cycle, fitted inside a non-closed box, was constructed for this experiment. When the background environment illumination intensity is high, variations in flash sensitivity and illumination intensity are not easy to observe. Increasing the PWM frequency eliminates flash sensitivity, and when the duty cycle is over 70%, the visual sensitivity is saturated. For warning purposes, the better frequency range is 7 Hz to 15 Hz and the duty cycle can be lowered to 70%. For general lighting, the better frequency range is 200 Hz to 1000 Hz and the duty cycle can likewise be lowered to 70%.
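The Ferry-Porter law invoked above states that the critical flicker fusion frequency (CFF) grows linearly with the logarithm of luminance, which suggests a simple check of whether a chosen PWM frequency is perceptually steady; the constants below are commonly cited illustrative values, not figures from the paper:

```python
import math

def critical_flicker_frequency(luminance_cd_m2, k=12.5, c=37.0):
    """Ferry-Porter law: CFF = k * log10(L) + c. The slope k and
    intercept c here are illustrative assumptions."""
    return k * math.log10(luminance_cd_m2) + c

def pwm_is_flicker_free(pwm_hz, luminance_cd_m2, margin=2.0):
    """A PWM drive looks steady when its frequency exceeds the CFF by a
    safety margin; the Talbot-Plateau law then gives the perceived
    brightness as the duty-cycle-weighted average of the waveform."""
    return pwm_hz > margin * critical_flicker_frequency(luminance_cd_m2)
```

By this rule a 7-15 Hz drive is deliberately visible (the warning regime), while the 200-1000 Hz range sits safely above fusion for general lighting.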

  11. Intelligent Machine Vision Based Modeling and Positioning System in Sand Casting Process

    Directory of Open Access Journals (Sweden)

    Shahid Ikramullah Butt

    2017-01-01

    Full Text Available Advanced vision solutions enable manufacturers in the technology sector to reconcile both competitive and regulatory concerns and address the need for immaculate fault detection and quality assurance. Modern manufacturing has completely shifted from manual inspection to machine-assisted vision inspection. Furthermore, research outcomes in industrial automation have revolutionized the whole product development strategy. The purpose of this research paper is to introduce a new scheme of automation in the sand casting process by means of machine-vision-based technology for mold positioning. Automation has been achieved by developing a novel system in which casting molds of different sizes, with different pouring cup locations and radii, position themselves in front of the induction furnace such that the center of the pouring cup comes directly beneath the pouring point of the furnace. The coordinates of the center of the pouring cup are found by using computer vision algorithms. The output is then transferred to a microcontroller, which controls the alignment mechanism on which the mold is placed, so that the mold reaches the optimum location.
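Finding the pouring-cup center and the stage correction can be sketched as a threshold-plus-centroid computation, a deliberately simplified stand-in for the paper's computer vision algorithms; the image values and threshold below are illustrative:

```python
def cup_center(image, threshold=128):
    """Centroid (x, y) of the bright pouring-cup region in a grayscale
    top-view image; plain thresholding is an illustrative stand-in for
    the paper's detection algorithm."""
    xs, ys, n = 0.0, 0.0, 0
    for y, row in enumerate(image):
        for x, pixel in enumerate(row):
            if pixel >= threshold:
                xs += x
                ys += y
                n += 1
    if n == 0:
        raise ValueError("no pouring cup found")
    return xs / n, ys / n

def alignment_offset(center, pouring_point):
    """Translation the mold stage must apply so the cup centre lands
    directly beneath the furnace pouring point."""
    return pouring_point[0] - center[0], pouring_point[1] - center[1]
```

The offset pair is what the microcontroller would turn into motion commands for the alignment mechanism.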

  12. Development of Non-contact Respiratory Monitoring System for Newborn Using a FG Vision Sensor

    Science.gov (United States)

    Kurami, Yoshiyuki; Itoh, Yushi; Natori, Michiya; Ohzeki, Kazuo; Aoki, Yoshimitsu

    In recent years, advances in neonatal care have been strongly hoped for, given the increasing birth rate of low-birth-weight babies. The respiration of a low-birth-weight baby is especially unstable because the central nervous system and respiratory function are immature; low-birth-weight babies therefore often suffer from respiratory disorders. In a NICU (Neonatal Intensive Care Unit), neonatal respiration is monitored at all times using a cardio-respiratory monitor and pulse oximeter. These contact-type sensors can measure respiratory rate and SpO2 (Saturation of Peripheral Oxygen). However, because a contact-type sensor might damage the newborn's skin, monitoring neonatal respiration this way is a real burden. Therefore, we developed a respiratory monitoring system for newborns using a FG (Fiber Grating) vision sensor. The FG vision sensor is an active stereo vision sensor that makes non-contact 3D measurement possible. A respiratory waveform is calculated by detecting the vertical motion of the thoracic and abdominal region with respiration. We conducted a clinical experiment in the NICU and confirmed that the accuracy of the obtained respiratory waveform was high. Non-contact respiratory monitoring of newborns using a FG vision sensor enables a minimally invasive procedure.
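A respiratory rate can be read off such a waveform by counting one local maximum per breath; a minimal sketch (the synthetic waveform and the above-mean peak rule are illustrative, not the authors' signal processing):

```python
def respiratory_rate(displacement, fps):
    """Breaths per minute from a chest-displacement waveform sampled at
    `fps` frames per second: count local maxima lying above the mean,
    treating each as one breath."""
    mean = sum(displacement) / len(displacement)
    peaks = sum(
        1 for i in range(1, len(displacement) - 1)
        if displacement[i] > mean
        and displacement[i] > displacement[i - 1]
        and displacement[i] >= displacement[i + 1]
    )
    duration_min = len(displacement) / fps / 60.0
    return peaks / duration_min
```

A real implementation would smooth the waveform first so that sensor noise does not create spurious peaks.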

  13. Design and Assessment of a Machine Vision System for Automatic Vehicle Wheel Alignment

    Directory of Open Access Journals (Sweden)

    Rocco Furferi

    2013-05-01

    Full Text Available Wheel alignment, which consists of checking the wheel characteristic angles against vehicle manufacturers' specifications, is a crucial task in the automotive field, since it prevents irregular tyre wear and affects vehicle handling and safety. In recent years, systems based on machine vision have been widely studied in order to automatically detect the wheels' characteristic angles. In order to overcome the limitations of existing methodologies, due to measurement equipment being mounted onto the wheels, the present work deals with the design and assessment of a 3D machine vision-based system for the contactless reconstruction of vehicle wheel geometry, with particular reference to the characteristic planes. Such planes, properly referred to a global coordinate system, are used for determining the wheel angles. The effectiveness of the proposed method was tested against a set of measurements carried out using a commercial 3D scanner; the absolute average error in measuring toe and camber angles with the machine vision system proved fully compatible with the expected accuracy of wheel alignment systems.
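Once the wheel plane is reconstructed and referred to a global coordinate system, toe and camber follow from the orientation of the plane normal; a minimal sketch under an assumed axis convention (x forward, y along the wheel axis, z up), which the abstract does not spell out:

```python
import math

def wheel_angles(normal):
    """Toe and camber (degrees) from the wheel-plane unit normal,
    assuming vehicle coordinates x = forward, y = lateral (nominal
    wheel axis), z = up. The convention is an illustrative assumption.
    Toe is the normal's rotation in the ground plane away from pure
    lateral; camber is its tilt out of the ground plane."""
    nx, ny, nz = normal
    toe = math.degrees(math.atan2(nx, ny))
    camber = math.degrees(math.atan2(nz, math.hypot(nx, ny)))
    return toe, camber
```

A perfectly aligned wheel has its normal along y, giving zero toe and zero camber.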

  14. Development of 3D online contact measurement system for intelligent manufacturing based on stereo vision

    Science.gov (United States)

    Li, Peng; Chong, Wenyan; Ma, Yongjun

    2017-10-01

    In order to avoid the shortcomings of low efficiency and restricted measuring range found in traditional 3D online contact measurement methods for workpiece size, the development of a novel 3D contact measurement system is introduced, designed for intelligent manufacturing and based on stereo vision. The developed contact measurement system is characterized by the integrated use of a handy probe, a binocular stereo vision system, and advanced measurement software. The handy probe consists of six tracking markers, a touch probe and the associated electronics. In the process of contact measurement, the handy probe is located by means of the stereo vision system and tracking markers, and the 3D coordinates of a point on the workpiece are measured by calculating the tip position of the touch probe. Thanks to the flexibility of the handy probe, the orientation, range and density of the 3D contact measurement can be adapted to different needs. Applications of the developed contact measurement system to high-precision measurement and rapid surface digitization are experimentally demonstrated.
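For an ideal rectified camera pair, locating a tracking marker with the binocular system reduces to depth-from-disparity triangulation, after which the probe tip follows from a calibrated rigid offset; a minimal sketch (the rectified-pair model and the fixed tip offset are simplifying assumptions, not the paper's exact method):

```python
def triangulate(xl, xr, y, focal_px, baseline_m):
    """3D point from a rectified stereo pair: depth Z = f*B/(xl - xr),
    then X and Y follow by similar triangles. Image coordinates are in
    pixels relative to the principal point; output is in metres."""
    disparity = xl - xr
    Z = focal_px * baseline_m / disparity
    X = xl * Z / focal_px
    Y = y * Z / focal_px
    return X, Y, Z

def probe_tip(marker_xyz, offset_xyz):
    """Tip position as the tracked marker position plus a fixed,
    pre-calibrated offset (illustrative rigid-probe model; the real
    system fuses six markers to recover the full probe pose)."""
    return tuple(m + o for m, o in zip(marker_xyz, offset_xyz))
```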

  15. Computer graphics testbed to simulate and test vision systems for space applications

    Science.gov (United States)

    Cheatham, John B.; Wu, Chris K.; Lin, Y. H.

    1991-01-01

    A system was developed for displaying computer graphics images of space objects, and its use was demonstrated as a testbed for evaluating vision systems for space applications. In order to evaluate vision systems, it is desirable to be able to control all factors involved in creating the images used for processing by the vision system. Considerable time and expense are involved in building accurate physical models of space objects, and precise location of the model relative to the viewer and accurate location of the light source require additional effort. As part of this project, graphics models of space objects such as the Solarmax satellite were created, for which the user can control the light direction and the relative position of the object and the viewer. The work is also aimed at providing control of hue, shading, noise and shadows for use in demonstrating and testing image processing techniques. The simulated camera data can provide XYZ coordinates, pitch, yaw, and roll for the models. A physical model is also being used to provide comparison of camera images with the graphics images.

  16. Cryogenics Vision Workshop for High-Temperature Superconducting Electric Power Systems Proceedings

    International Nuclear Information System (INIS)

    Energetics, Inc.

    2000-01-01

The US Department of Energy's Superconductivity Program for Electric Systems sponsored the Cryogenics Vision Workshop, which was held on July 27, 1999 in Washington, D.C., in conjunction with the Program's Annual Peer Review meeting. Of the 175 people attending the peer review meeting, 31 were selected in advance to participate in the Cryogenics Vision Workshop discussions. The participants represented cryogenic equipment manufacturers, industrial gas manufacturers and distributors, component suppliers, electric power equipment manufacturers (Superconductivity Partnership Initiative participants), electric utilities, federal agencies, national laboratories, and consulting firms. Critical factors were discussed that need to be considered in describing the successful future commercialization of cryogenic systems. Such systems will enable the widespread deployment of high-temperature superconducting (HTS) electric power equipment. Potential research, development, and demonstration (RD and D) activities and partnership opportunities for advancing suitable cryogenic systems were also discussed. The workshop agenda can be found in the following section of this report. Facilitated sessions were held to discuss the following specific focus topics: identifying critical factors that need to be included in a cryogenics vision for HTS electric power systems (from the HTS equipment end-user perspective), and identifying R and D needs and partnership roles (from the cryogenic industry perspective). The findings of the facilitated Cryogenics Vision Workshop were then presented in a plenary session of the Annual Peer Review Meeting. Approximately 120 attendees participated in the afternoon plenary session. This large group heard summary reports from the workshop session leaders and then held a wrap-up session to discuss the findings, cross-cutting themes, and next steps. These summary reports are presented in this document.
The ideas and suggestions raised during

  17. The VirtualwindoW: A Reconfigurable, Modular, Stereo Vision System

    Energy Technology Data Exchange (ETDEWEB)

    M. D. McKay; M. O. Anderson; R. A. Kinoshita; W. D. Willis

    1999-02-01

    An important need while using unmanned vehicles is the ability for the remote operator or observer to easily and accurately perceive the operating environment. A classic problem in providing a complete representation of the remote work area is sensory overload or excessive complexity in the human-machine interface. In addition, remote operations often benefit from depth perception capability while viewing or manipulating objects. Thus, there is an ongoing effort within the remote and teleoperated robotic field to develop better human-machine interfaces. The Department of Energy's Idaho National Engineering and Environmental Laboratory (INEEL) has been researching methods to simplify the human-machine interface using atypical operator techniques. Initial telepresence research conducted at the INEEL developed and implemented a concept called the VirtualwindoW. This system minimized the complexity of remote stereo viewing controls and provided the operator the ''feel'' of viewing the environment, including depth perception, in a natural setting. The VirtualwindoW has shown that the human-machine interface can be simplified while increasing operator performance. This paper deals with the continuing research and development of the VirtualwindoW to provide a reconfigurable, modular system that easily utilizes commercially available off the shelf components. This adaptability is well suited to several aspects of unmanned vehicle applications, most notably environmental perception and vehicle control.

  18. The VirtualwindoW: A Reconfigurable, Modular, Stereo Vision System

    Energy Technology Data Exchange (ETDEWEB)

    Kinoshita, Robert Arthur; Anderson, Matthew Oley; Mckay, Mark D; Willis, Walter David

    1999-04-01

An important need while using unmanned vehicles is the ability for the remote operator or observer to easily and accurately perceive the operating environment. A classic problem in providing a complete representation of the remote work area is sensory overload or excessive complexity in the human-machine interface. In addition, remote operations often benefit from depth perception capability while viewing or manipulating objects. Thus, there is an ongoing effort within the remote and teleoperated robotic field to develop better human-machine interfaces. The Department of Energy's Idaho National Engineering and Environmental Laboratory (INEEL) has been researching methods to simplify the human-machine interface using atypical operator techniques. Initial telepresence research conducted at the INEEL developed and implemented a concept called the VirtualwindoW. This system minimized the complexity of remote stereo viewing controls and provided the operator the "feel" of viewing the environment, including depth perception, in a natural setting. The VirtualwindoW has shown that the human-machine interface can be simplified while increasing operator performance. This paper deals with the continuing research and development of the VirtualwindoW to provide a reconfigurable, modular system that easily utilizes commercially available off the shelf components. This adaptability is well suited to several aspects of unmanned vehicle applications, most notably environmental perception and vehicle control.

  19. Machine vision guided sensor positioning system for leaf temperature assessment

    Science.gov (United States)

    Kim, Y.; Ling, P. P.; Janes, H. W. (Principal Investigator)

    2001-01-01

    A sensor positioning system was developed for monitoring plants' well-being using a non-contact sensor. Image processing algorithms were developed to identify a target region on a plant leaf. A novel algorithm to recover view depth was developed by using a camera equipped with a computer-controlled zoom lens. The methodology has improved depth recovery resolution over a conventional monocular imaging technique. An algorithm was also developed to find a maximum enclosed circle on a leaf surface so the conical field-of-view of an infrared temperature sensor could be filled by the target without peripheral noise. The center of the enclosed circle and the estimated depth were used to define the sensor 3-D location for accurate plant temperature measurement.
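Finding a maximum enclosed circle on a segmented leaf can be done by locating the foreground pixel farthest from any background pixel. The toy mask below is hypothetical, and the brute-force distance computation stands in for the distance transform a production system would use; this is an illustrative sketch, not the authors' algorithm.

```python
import numpy as np

def max_enclosed_circle(mask):
    """Centre (x, y) and radius, in pixels, of the largest circle that
    fits inside a binary leaf mask: the foreground pixel farthest from
    any background pixel is the circle centre."""
    ys, xs = np.nonzero(mask)
    bys, bxs = np.nonzero(~mask)
    # Brute-force squared distance of every foreground pixel to the
    # nearest background pixel (a distance transform does this faster).
    d2 = ((ys[:, None] - bys) ** 2 + (xs[:, None] - bxs) ** 2).min(axis=1)
    i = int(np.argmax(d2))
    return (int(xs[i]), int(ys[i])), float(np.sqrt(d2[i]))

# Toy "leaf": a filled 21x21 square inside a 60x60 image
mask = np.zeros((60, 60), bool)
mask[20:41, 20:41] = True
print(max_enclosed_circle(mask))   # → ((30, 30), 11.0)
```

The returned centre and radius would then position the infrared sensor so its conical field of view stays inside the leaf.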

  20. Vision systems for the inspection of resistance welding joints

    Science.gov (United States)

    Hildebrand, Lars; Fathi, Madjid

    2000-06-01

Many automated quality inspection systems make use of brightness and contrast features of the objects being inspected. This reduces the complexity of the problem-solving methods, as well as the demand for computational capacity. Nevertheless, a lot of significant information is located in the color features of the objects. This paper describes a method that allows the evaluation of color information in a very compact and efficient way. The described method uses a combination of multi-valued logic and a special color model. We use fuzzy logic as the multi-valued logic and the HSI color model, but any multi-valued logic that allows rule-based reasoning can be used. The HSI color model can likewise be exchanged for other color models if special demands require it.
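The combination of an HSI conversion with fuzzy (multi-valued) membership functions can be sketched as follows. The rule thresholds and the "overheated weld" interpretation are invented for illustration and are not taken from the paper.

```python
import math

def rgb_to_hsi(r, g, b):
    """Convert an RGB triple (floats in [0, 1]) to the HSI colour model."""
    i = (r + g + b) / 3.0
    s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    h = 0.0 if den == 0 else math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
    if b > g:                      # hue lies in the lower half-circle
        h = 360.0 - h
    return h, s, i

def trapezoid(x, a, b, c, d):
    """Trapezoidal fuzzy membership: rises over [a, b], flat on [b, c],
    falls over [c, d]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

# Hypothetical rule: bluish, saturated discolouration -> "overheated"
h, s, i = rgb_to_hsi(0.2, 0.3, 0.7)
overheated = min(trapezoid(h, 180, 220, 260, 300),
                 trapezoid(s, 0.3, 0.5, 1.0, 1.1))
print(round(h), round(s, 2), round(i, 2), round(overheated, 3))
# → 229 0.5 0.4 1.0
```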

  1. WELDSMART: A vision-based expert system for quality control

    Science.gov (United States)

    Andersen, Kristinn; Barnett, Robert Joel; Springfield, James F.; Cook, George E.

    1992-01-01

    This work was aimed at exploring means for utilizing computer technology in quality inspection and evaluation. Inspection of metallic welds was selected as the main application for this development and primary emphasis was placed on visual inspection, as opposed to other inspection methods, such as radiographic techniques. Emphasis was placed on methodologies with the potential for use in real-time quality control systems. Because quality evaluation is somewhat subjective, despite various efforts to classify discontinuities and standardize inspection methods, the task of using a computer for both inspection and evaluation was not trivial. The work started out with a review of the various inspection techniques that are used for quality control in welding. Among other observations from this review was the finding that most weld defects result in abnormalities that may be seen by visual inspection. This supports the approach of emphasizing visual inspection for this work. Quality control consists of two phases: (1) identification of weld discontinuities (some of which may be severe enough to be classified as defects), and (2) assessment or evaluation of the weld based on the observed discontinuities. Usually the latter phase results in a pass/fail judgement for the inspected piece. It is the conclusion of this work that the first of the above tasks, identification of discontinuities, is the most challenging one. It calls for sophisticated image processing and image analysis techniques, and frequently ad hoc methods have to be developed to identify specific features in the weld image. The difficulty of this task is generally not due to limited computing power. In most cases it was found that a modest personal computer or workstation could carry out most computations in a reasonably short time period. Rather, the algorithms and methods necessary for identifying weld discontinuities were in some cases limited. 
The fact that specific techniques were finally developed and

  2. GUBS, a Behavior-based Language for Open System Dedicated to Synthetic Biology

    Directory of Open Access Journals (Sweden)

    Adrien Basso-Blandin

    2012-11-01

Full Text Available In this article, we propose a domain-specific language, GUBS (Genomic Unified Behavior Specification), dedicated to the behavioral specification of synthetic biological devices, viewed as discrete open dynamical systems. GUBS is a rule-based declarative language. By contrast to a closed system, a program is always a partial description of the behavior of the system. The semantics of the language accounts for the existence of hidden, non-specified actions that may alter the behavior of the programmed device. The compilation framework follows a scheme similar to automatic theorem proving, aiming at improving the safety of synthetic biological design.

  3. Estimation of Theaflavins (TF) and Thearubigins (TR) Ratio in Black Tea Liquor Using Electronic Vision System

    Science.gov (United States)

    Akuli, Amitava; Pal, Abhra; Ghosh, Arunangshu; Bhattacharyya, Nabarun; Bandhopadhyya, Rajib; Tamuly, Pradip; Gogoi, Nagen

    2011-09-01

Quality of black tea is generally assessed through organoleptic tests by professional tea tasters. They determine the quality of black tea based on its appearance (in dry condition and during liquor formation), aroma and taste. Variation in the above parameters is actually contributed by a number of chemical compounds such as Theaflavins (TF), Thearubigins (TR), Caffeine, Linalool and Geraniol. Among these, TF and TR are the most important chemical compounds, which actually contribute to the formation of taste, colour and brightness in tea liquor. Estimation of TF and TR in black tea is generally done using a spectrophotometer. However, this analysis technique requires a rigorous and time-consuming effort for sample preparation, and the operation of a costly spectrophotometer requires expert manpower. To overcome these problems, an Electronic Vision System based on digital image processing techniques has been developed. The system is fast, low cost and repeatable, and can accurately estimate the TF and TR ratio for black tea liquor. The data analysis is done using Principal Component Analysis (PCA), Multiple Linear Regression (MLR) and Multiple Discriminant Analysis (MDA). A correlation has been established between the colour of tea liquor images and the TF and TR ratio. This paper describes the newly developed E-Vision system, the experimental methods, the data analysis algorithms and, finally, the performance of the E-Vision System as compared with the results of the traditional spectrophotometer.
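The MLR step, mapping liquor-image colour features to a TF/TR estimate, can be sketched with ordinary least squares. The colour values and TF/TR figures below are fabricated for illustration only; they are not the paper's data.

```python
import numpy as np

# Hypothetical training data: mean R, G, B of tea liquor images versus
# the spectrophotometric TF/TR ratio (all numbers fabricated)
X = np.array([[150, 90, 40], [170, 95, 42], [120, 80, 35],
              [160, 92, 41], [130, 85, 37]], float)
y = np.array([0.08, 0.10, 0.05, 0.09, 0.06])

# Multiple linear regression by ordinary least squares
A = np.c_[X, np.ones(len(X))]          # append an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_tf_tr(rgb):
    """TF/TR ratio predicted from a mean-colour feature vector."""
    return float(np.append(rgb, 1.0) @ coef)

print(round(predict_tf_tr([155, 91, 40]), 3))   # → 0.085
```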

  4. Machine Vision-Based Measurement Systems for Fruit and Vegetable Quality Control in Postharvest.

    Science.gov (United States)

    Blasco, José; Munera, Sandra; Aleixos, Nuria; Cubero, Sergio; Molto, Enrique

Individual items of any agricultural commodity are different from each other in terms of colour, shape or size. Furthermore, as they are living things, they change their quality attributes over time, thereby making the development of accurate automatic inspection machines a challenging task. Machine vision-based systems and new optical technologies make it feasible to create non-destructive control and monitoring tools for quality assessment to ensure adequate accomplishment of food standards. Such systems are much faster than any manual non-destructive examination of fruit and vegetable quality, thus allowing the whole production to be inspected with objective and repeatable criteria. Moreover, current technology makes it possible to inspect the fruit in spectral ranges beyond the sensitivity of the human eye, for instance in the ultraviolet and near-infrared regions. Machine vision-based applications require the use of multiple technologies and knowledge, ranging from those related to image acquisition (illumination, cameras, etc.) to the development of algorithms for spectral image analysis. Machine vision-based systems for inspecting fruit and vegetables are targeted towards different purposes, from in-line sorting into commercial categories to the detection of contaminants or the distribution of specific chemical compounds on the product's surface. This chapter summarises the current state of the art of these techniques, starting with systems based on colour images for the inspection of conventional colour, shape or external defects, and then goes on to consider recent developments in spectral image analysis for internal quality assessment or contaminant detection.

  5. Development and Application of the Stereo Vision Tracking System with Virtual Reality

    Directory of Open Access Journals (Sweden)

    Chia-Sui Wang

    2015-01-01

Full Text Available A virtual reality (VR) driver tracking verification system is created, and its application to stereo image tracking and positioning accuracy is researched in depth. In the research, the image-depth capability of the stereo vision system is utilized to reduce the error rate of image tracking and image measurement. In a VR scenario, the function of collecting driver behavioral data was tested. By means of VR, racing operation is simulated, and environmental variables (special weather such as rain and snow) and artificial variables (such as pedestrians suddenly crossing the road, vehicles appearing from blind spots, and roadblocks) are added as the basis for system implementation. In addition, human factors engineering is applied to sudden conditions that can easily occur while driving. Experimental results show that the stereo vision system created in the research has an image-depth recognition error rate within 0.011%, and the image tracking error rate may be smaller than 2.5%. In the research, the image recognition function of stereo vision is utilized to accomplish the data collection of driver tracking detection. In addition, the environmental conditions of different simulated real scenarios may also be created through VR.
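The image-depth capability of a rectified stereo pair follows Z = f·B/d, and a first-order error model explains why depth accuracy degrades with range. A minimal sketch, with hypothetical rig parameters (the focal length and baseline are invented):

```python
def stereo_depth(f_px, baseline_m, disparity_px):
    """Depth from a rectified stereo pair: Z = f * B / d."""
    return f_px * baseline_m / disparity_px

def depth_error(f_px, baseline_m, z_m, disp_err_px=1.0):
    """First-order depth uncertainty: dZ ~ Z^2 * dd / (f * B),
    i.e. depth error grows quadratically with range."""
    return z_m ** 2 * disp_err_px / (f_px * baseline_m)

# Hypothetical rig: 800 px focal length, 12 cm baseline
print(stereo_depth(800, 0.12, 48))                  # → 2.0  (metres)
print(round(depth_error(800, 0.12, 2.0, 0.5), 4))   # → 0.0208
```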

  6. Accuracy improvement in a calibration test bench for accelerometers by a vision system

    Energy Technology Data Exchange (ETDEWEB)

    D’Emilia, Giulio, E-mail: giulio.demilia@univaq.it; Di Gasbarro, David, E-mail: david.digasbarro@graduate.univaq.it; Gaspari, Antonella, E-mail: antonella.gaspari@graduate.univaq.it; Natale, Emanuela, E-mail: emanuela.natale@univaq.it [University of L’Aquila, Department of Industrial and Information Engineering and Economics (DIIIE), via G. Gronchi, 18, 67100 L’Aquila (Italy)

    2016-06-28

A procedure is described in this paper for improving the calibration accuracy of low-cost accelerometers in a prototype rotary test bench, driven by a brushless servo-motor and operating in a low frequency range of vibrations (0 to 5 Hz). Vibration measurements by a vision system based on a low-frequency camera have been carried out in order to reduce the uncertainty in evaluating the real acceleration at the installation point of the sensor to be calibrated. A preliminary test device has been realized and operated in order to evaluate the metrological performance of the vision system, showing satisfactory behavior when the measurement uncertainty is taken into account. A combination of suitable settings of the control parameters of the motion control system and of the information gained by the vision system allowed the reference acceleration at the installation point to be matched to the needs of the procedure for static and dynamic calibration of three-axis accelerometers.

  7. Position Control and Novel Application of SCARA Robot with Vision System

    Directory of Open Access Journals (Sweden)

    Hsiang-Chen Hsu

    2017-06-01

Full Text Available In this paper, a SCARA robot arm with a vision system has been developed to improve the accuracy of picking and placing surface mount devices (SMDs) on a PCB during the surface mount process. The position of the SCARA robot can be controlled using a coordinate auto-compensation technique. Robotic movement and position control are auto-calculated based on forward and inverse kinematics, enhanced by the intelligent image vision system. The determined x-y position and rotation angle can then be applied to the desired pick-and-place location for the SCARA robot. A series of experiments has been conducted to improve the accuracy of picking and placing SMDs on the PCB.
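The forward/inverse kinematics underlying the position control can be illustrated with the closed-form two-joint solution for a planar SCARA arm. The link lengths and target point below are hypothetical; this is a generic textbook sketch, not the paper's controller.

```python
import math

def scara_ik(x, y, l1, l2, elbow=+1):
    """Closed-form inverse kinematics for the two revolute joints of a
    SCARA arm with link lengths l1, l2 reaching the point (x, y)."""
    d2 = x * x + y * y
    c2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if abs(c2) > 1:
        raise ValueError("target outside workspace")
    t2 = elbow * math.acos(c2)
    t1 = math.atan2(y, x) - math.atan2(l2 * math.sin(t2), l1 + l2 * math.cos(t2))
    return t1, t2

# Hypothetical links of 0.3 m and 0.2 m reaching (0.2, 0.3)
t1, t2 = scara_ik(0.2, 0.3, 0.3, 0.2)
# Forward kinematics check: the joint angles reproduce the target
x = 0.3 * math.cos(t1) + 0.2 * math.cos(t1 + t2)
y = 0.3 * math.sin(t1) + 0.2 * math.sin(t1 + t2)
print(round(x, 6), round(y, 6))   # → 0.2 0.3
```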

  8. System of error detection in the manufacture of garments using artificial vision

    Science.gov (United States)

    Moreno, J. J.; Aguila, A.; Partida, E.; Martinez, C. L.; Morales, O.; Tejeida, R.

    2017-12-01

A computer vision system is implemented to detect errors in the cutting stage of the garment manufacturing process in the textile industry. It provides a solution to errors within the process that cannot be easily detected by employees, in addition to significantly increasing the speed of quality review. In the textile industry, as in many others, quality control of manufactured products is required, and over the years this has been carried out manually by means of visual inspection by employees. For this reason, the objective of this project is to design a quality control system using computer vision to identify errors in the cutting stage of the garment manufacturing process, increasing the productivity of textile processes by reducing costs.

  9. Comparison of a multispectral vision system and a colorimeter for the assessment of meat color

    DEFF Research Database (Denmark)

    Trinderup, Camilla Himmelstrup; Dahl, Anders Bjorholm; Jensen, Kirsten

    2015-01-01

The color assessment ability of a multispectral vision system is investigated by a comparison study with color measurements from a traditional colorimeter. The experiment involves fresh and processed meat samples. Meat is a complex material; heterogeneous with varying scattering and reflectance properties, so several factors can influence the instrumental assessment of meat color. In order to assess whether two methods are equivalent, the variation due to these factors must be taken into account. A statistical analysis was conducted and showed that on a calibration sheet the two instruments … accounting for other sources of variation, leading to the conclusion that color assessment using a multispectral vision system is superior to traditional colorimeter assessments. (C) 2014 Elsevier Ltd. All rights reserved.

  10. Shadow and feature recognition aids for rapid image geo-registration in UAV vision system architectures

    Science.gov (United States)

    Baer, Wolfgang; Kölsch, Mathias

    2009-05-01

The problem of real-time image geo-referencing is encountered in all vision based cognitive systems. In this paper we present a model-image feedback approach to this problem and show how it can be applied to image exploitation from Unmanned Aerial Vehicle (UAV) vision systems. By calculating reference images from a known terrain database, using a novel ray trace algorithm, we are able to eliminate foreshortening, elevation, and lighting distortions, introduce registration aids and reduce the geo-referencing problem to a linear transformation search over the two dimensional image space. A method for shadow calculation that maintains real-time performance is also presented. The paper then discusses the implementation of our model-image feedback approach in the Perspective View Nascent Technology (PVNT) software package and provides sample results from UAV mission control and target mensuration experiments conducted at China Lake and Camp Roberts, California.
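Once the rendered reference image removes foreshortening, elevation, and lighting distortions, geo-referencing reduces to a linear transformation search over the two-dimensional image space. A minimal translation-only version of such a search, on synthetic images (the PVNT implementation is not reproduced here), can be sketched as:

```python
import numpy as np

def best_shift(ref, img, max_shift=5):
    """Exhaustive search for the 2D translation (dy, dx) that best
    aligns a sensed image with the rendered reference, scored by
    plain correlation."""
    best, score = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            s = float((ref * shifted).sum())
            if s > score:
                best, score = (dy, dx), s
    return best

rng = np.random.default_rng(1)
ref = rng.random((64, 64))                          # rendered reference
img = np.roll(np.roll(ref, -3, axis=0), 2, axis=1)  # sensed, shifted view
print(best_shift(ref, img))   # → (3, -2)
```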

  11. Visions, Scenarios and Action Plans Towards Next Generation Tanzania Power System

    Directory of Open Access Journals (Sweden)

    Alex Kyaruzi

    2012-10-01

    Full Text Available This paper presents strategic visions, scenarios and action plans for enhancing Tanzania Power Systems towards next generation Smart Power Grid. It first introduces the present Tanzanian power grid and the challenges ahead in terms of generation capacity, financial aspect, technical and non-technical losses, revenue loss, high tariff, aging infrastructure, environmental impact and the interconnection with the neighboring countries. Then, the current initiatives undertaken by the Tanzania government in response to the present challenges and the expected roles of smart grid in overcoming these challenges in the future with respect to the scenarios presented are discussed. The developed scenarios along with visions and recommended action plans towards the future Tanzanian power system can be exploited at all governmental levels to achieve public policy goals and help develop business opportunities by motivating domestic and international investments in modernizing the nation’s electric power infrastructure. In return, it should help build the green energy economy.

  12. Image processing for a tactile/vision substitution system using digital CNN.

    Science.gov (United States)

    Lin, Chien-Nan; Yu, Sung-Nien; Hu, Jin-Cheng

    2006-01-01

In view of the parallel processing and easy implementation properties of cellular neural networks (CNN), we propose to use a digital CNN as the image processor of a tactile/vision substitution system (TVSS). The digital CNN processor is used to execute the wavelet down-sampling filtering and the half-toning operations, aiming to extract important features from the images. A template combination method is used to embed the two image processing functions into a single CNN processor. The digital CNN processor is implemented as an intellectual property (IP) core on a XILINX VIRTEX II 2000 FPGA board. Experiments are designed to test the capability of the CNN processor in the recognition of characters and human subjects in different environments. The experiments demonstrate impressive results, which prove the proposed digital CNN processor to be a powerful component in the design of efficient tactile/vision substitution systems for visually impaired people.
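The half-toning operation embedded in the CNN templates can be illustrated in its classic sequential form, Floyd-Steinberg error diffusion; the CNN template values themselves are not reproduced here, so this is only a functional sketch of the operation.

```python
import numpy as np

def floyd_steinberg(img):
    """Half-tone a grayscale image (floats in [0, 1]) to binary by
    error diffusion, shown here in its classic sequential form."""
    img = img.astype(float).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = new
            err = old - new            # push the error onto neighbours
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h and x > 0:
                img[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:
                img[y + 1, x] += err * 5 / 16
            if y + 1 < h and x + 1 < w:
                img[y + 1, x + 1] += err * 1 / 16
    return out

# A uniform 25% grey patch half-tones to roughly 25% white pixels
out = floyd_steinberg(np.full((32, 32), 0.25))
print(round(out.mean(), 2))   # close to 0.25
```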

  13. Morphological features of the macerated cranial bones registered by the 3D vision system for potential use in forensic anthropology.

    Science.gov (United States)

    Skrzat, Janusz; Sioma, Andrzej; Kozerska, Magdalena

    2013-01-01

In this paper we present the potential use of a 3D vision system for registering features of macerated cranial bones. The applied 3D vision system collects height profiles of the object surface and from these data builds a three-dimensional image of the surface. This method appeared to be accurate enough to capture anatomical details of the macerated bones. With the aid of the 3D vision system we generated images of the surface of a human calvaria, which was used for testing the system. The reconstruction visualized the imprints of the dural vascular system, the cranial sutures, and the three-layer structure of the cranial bones observed in cross-section. We conclude that the 3D vision system may deliver data that can enhance the estimation of sex from osteological material.

  14. Dual Clustering in Vision Systems for Robots Deployed for Agricultural Purposes

    OpenAIRE

    Tyryshkin Alexander; Belyaev Alexander

    2016-01-01

The continuously varying parameters of the environments in which robots operate complicate their use in agriculture. Accounting for disturbances by software alone leads to more complicated programs; in turn, this raises the price of the software product and reduces the robot's operational reliability. The authors suggest carrying out a preliminary adaptation of the vision system to the environment by means of hardware, which is selected automatically based on artificial intelligence.

  15. Dual Clustering in Vision Systems for Robots Deployed for Agricultural Purposes

    Directory of Open Access Journals (Sweden)

    Tyryshkin Alexander

    2016-01-01

Full Text Available The continuously varying parameters of the environments in which robots operate complicate their use in agriculture. Accounting for disturbances by software alone leads to more complicated programs; in turn, this raises the price of the software product and reduces the robot's operational reliability. The authors suggest carrying out a preliminary adaptation of the vision system to the environment by means of hardware, which is selected automatically based on artificial intelligence.

  16. Technical vision system for analysing the mechanical characteristics of bulk materials

    Science.gov (United States)

    Boikov, A. V.; Payor, V. A.; Savelev, R. V.

    2018-01-01

This article discusses topics concerning the mechanical properties of bulk materials and the use of computer vision and artificial neural networks in their study. The main principles of a system for analysing the mechanical characteristics of bulk materials are described. The outflow behaviour of bulk material with predefined parameters (particle shapes and radii, coefficients of friction, etc.) was modelled from a calibrated conical funnel. The obtained dependencies between the mechanical characteristics and the geometrical properties of the pile are represented as diagrams and graphs.
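One mechanical characteristic recoverable from the measured pile geometry is the static angle of repose, the arctangent of pile height over base radius. A minimal sketch with hypothetical measurements (the paper's actual dependencies are not reproduced here):

```python
import math

def angle_of_repose(pile_height, pile_radius):
    """Static angle of repose (degrees) from the height and base
    radius of a conical pile, as extracted from the pile image."""
    return math.degrees(math.atan2(pile_height, pile_radius))

# Hypothetical pile measured by the vision system: 6 cm high, 10 cm radius
print(round(angle_of_repose(0.06, 0.10), 1))   # → 31.0
```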

  17. Leaf LIMS: A Flexible Laboratory Information Management System with a Synthetic Biology Focus.

    Science.gov (United States)

    Craig, Thomas; Holland, Richard; D'Amore, Rosalinda; Johnson, James R; McCue, Hannah V; West, Anthony; Zulkower, Valentin; Tekotte, Hille; Cai, Yizhi; Swan, Daniel; Davey, Robert P; Hertz-Fowler, Christiane; Hall, Anthony; Caddick, Mark

    2017-12-15

This paper presents Leaf LIMS, a flexible laboratory information management system (LIMS) designed to address the complexity of synthetic biology workflows. At the project's inception there was no LIMS designed specifically for synthetic biology processes; most systems focused on either next-generation sequencing or biobanks and clinical sample handling. Leaf LIMS implements integrated project, item, and laboratory stock tracking, offering complete sample and construct genealogy, materials and lot tracking, and modular assay data capture. Hence, it enables highly configurable task-based workflows and supports data capture from project inception to completion. As such, in addition to supporting synthetic biology, it is well suited to many laboratory environments with multiple projects and users. The system is deployed as a web application through Docker and is provided under a permissive MIT license. It is freely available for download at https://leaflims.github.io .

  18. Portable electronic vision enhancement systems in comparison with optical magnifiers for near vision activities: an economic evaluation alongside a randomized crossover trial.

    Science.gov (United States)

    Bray, Nathan; Brand, Andrew; Taylor, John; Hoare, Zoe; Dickinson, Christine; Edwards, Rhiannon T

    2017-08-01

    To determine the incremental cost-effectiveness of portable electronic vision enhancement system (p-EVES) devices compared with optical low vision aids (LVAs), for improving near vision visual function, quality of life and well-being of people with a visual impairment. An AB/BA randomized crossover trial design was used. Eighty-two participants completed the study. Participants were current users of optical LVAs who had not tried a p-EVES device before and had a stable visual impairment. The trial intervention was the addition of a p-EVES device to the participant's existing optical LVA(s) for 2 months, and the control intervention was optical LVA use only, for 2 months. Cost-effectiveness and cost-utility analyses were conducted from a societal perspective. The mean cost of the p-EVES intervention was £448. Carer costs were £30 (4.46 hr) less for the p-EVES intervention compared with the LVA only control. The mean difference in total costs was £417. Bootstrapping gave an incremental cost-effectiveness ratio (ICER) of £736 (95% CI £481 to £1525) for a 7% improvement in near vision visual function. Cost per quality-adjusted life year (QALY) ranged from £56 991 (lower 95% CI = £19 801) to £66 490 (lower 95% CI = £23 055). Sensitivity analysis varying the commercial price of the p-EVES device reduced ICERs by up to 75%, with cost per QALYs falling below £30 000. Portable electronic vision enhancement system (p-EVES) devices are likely to be a cost-effective use of healthcare resources for improving near vision visual function, but this does not translate into cost-effective improvements in quality of life, capability or well-being. © 2016 The Authors. Acta Ophthalmologica published by John Wiley & Sons Ltd on behalf of Acta Ophthalmologica Scandinavica Foundation and European Association for Vision & Eye Research.
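The ICER quoted above follows the standard definition: incremental cost divided by incremental effect. The sketch below uses illustrative figures only and does not recompute the trial's bootstrapped estimates.

```python
def icer(delta_cost, delta_effect):
    """Incremental cost-effectiveness ratio: extra cost per extra
    unit of effect of the intervention over its comparator."""
    return delta_cost / delta_effect

# Illustrative only: a £417 mean extra cost divided by a hypothetical
# QALY gain of 0.007 gives the cost per QALY
print(round(icer(417.0, 0.007)))   # → 59571
```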

  19. Investigation of the synthetic experiment system of machine equipment fault diagnosis

    Science.gov (United States)

    Liu, Hongyu; Xu, Zening; Yu, Xiaoguang

    2008-12-01

The invention and manufacture of the synthetic experiment system for machine equipment fault diagnosis filled a gap in this kind of experimental equipment in China and obtained a national utility model patent. Using the system's motor speed regulation, fault imitation, measuring and monitoring, and analysis and diagnosis subsystems, students can regulate motor speed arbitrarily, imitate many kinds of machine part faults, collect signals of acceleration, speed, displacement, force and temperature, and perform many kinds of time-domain, frequency-domain and graphical analyses. Application of the synthetic experiment system in our university's teaching practice has been effective in fostering professional competence in the measurement, monitoring and fault diagnosis of machine equipment. The system has the advantages of short training time, quick results and low test cost, and is well suited to adoption in universities. If the system software is installed on a portable computer, the user can conveniently perform measuring, monitoring, signal processing and fault diagnosis on many kinds of field machine equipment. Its market prospects are very good.
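The frequency-domain analysis such a system performs can be sketched with a synthetic vibration signal: peaks in the amplitude spectrum reveal the imitated fault frequencies. The sampling rate and fault frequencies below are invented for illustration.

```python
import numpy as np

# Synthetic vibration signal: a 25 Hz shaft tone plus a weaker 120 Hz
# bearing-defect tone and measurement noise (all values illustrative)
fs = 1000                              # sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)
sig = np.sin(2 * np.pi * 25 * t) + 0.4 * np.sin(2 * np.pi * 120 * t)
sig += 0.1 * np.random.default_rng(0).standard_normal(t.size)

# Frequency-domain analysis: spectral peaks expose the fault frequencies
spec = np.abs(np.fft.rfft(sig)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
peaks = sorted(freqs[np.argsort(spec)[-2:]].tolist())
print(peaks)   # → [25.0, 120.0]
```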

  20. Synthetic Transcription Amplifier System for Orthogonal Control of Gene Expression in Saccharomyces cerevisiae.

    Directory of Open Access Journals (Sweden)

    Anssi Rantasalo

    Full Text Available This work describes the development and characterization of a modular synthetic expression system that provides a broad range of adjustable and predictable expression levels in S. cerevisiae. The system works as a fixed-gain transcription amplifier, where the input signal is transferred via a synthetic transcription factor (sTF) onto a synthetic promoter, containing a defined core promoter, generating a transcription output signal. The system activation is based on the bacterial LexA-DNA-binding domain, a set of modified, modular LexA-binding sites and a selection of transcription activation domains. We show both experimentally and computationally that the tuning of the system is achieved through the selection of three separate modules, each of which enables an adjustable output signal: (1) the transcription-activation domain of the sTF, (2) the binding-site modules in the output promoter, and (3) the core promoter modules which define the transcription initiation site in the output promoter. The system has a novel bidirectional architecture that enables generation of compact, yet versatile expression modules for multiple genes with highly diversified expression levels ranging from negligible to very strong using one synthetic transcription factor. In contrast to most existing modular gene expression regulation systems, the present system is independent from externally added compounds. Furthermore, the established system was minimally affected by the several tested growth conditions. These features suggest that it can be highly useful in large scale biotechnology applications.

  1. Compensation for positioning error of industrial robot for flexible vision measuring system

    Science.gov (United States)

    Guo, Lei; Liang, Yajun; Song, Jincheng; Sun, Zengyu; Zhu, Jigui

    2013-01-01

    Positioning error of the robot is a main factor affecting the accuracy of a flexible coordinate measuring system consisting of a universal industrial robot and a vision sensor. Existing compensation methods based on the kinematic model of the robot have a significant limitation: they are not effective over the whole measuring space. A new compensation method for robot positioning error based on vision measuring techniques is presented. One approach sets global control points in the measured field and attaches an orientation camera to the vision sensor; the global control points are measured by the orientation camera to calculate the transformation from the current position of the sensor system to the global coordinate system, and the positioning error of the robot is compensated. Another approach sets control points on the vision sensor and places two large-field cameras behind the sensor; the three-dimensional coordinates of the control points are measured, and the pose and position of the sensor are calculated in real time. Experimental results show an RMS spatial positioning error of 3.422 mm for the single-camera method and 0.031 mm for the dual-camera method. The conclusion is that the single-camera algorithm needs improvement for higher accuracy, while the accuracy of the dual-camera method is suitable for application.

  2. A Flexible Fringe Projection Vision System with Extended Mathematical Model for Accurate Three-Dimensional Measurement

    Directory of Open Access Journals (Sweden)

    Suzhi Xiao

    2016-04-01

    Full Text Available In order to acquire an accurate three-dimensional (3D) measurement, the traditional fringe projection technique applies complex and laborious procedures to compensate for the errors that exist in the vision system. However, the error sources in the vision system are very complex, such as lens distortion, lens defocus, and fringe pattern nonsinusoidality. Some errors cannot even be explained or rendered with clear expressions and are difficult to compensate directly as a result. In this paper, an approach is proposed that avoids the complex and laborious compensation procedure for error sources but still promises accurate 3D measurement. It is realized by the mathematical model extension technique. The parameters of the extended mathematical model for the 'phase to 3D coordinates transformation' are derived using the least-squares parameter estimation algorithm. In addition, a phase-coding method based on a frequency analysis is proposed for the absolute phase map retrieval to spatially isolated objects. The results demonstrate the validity and the accuracy of the proposed flexible fringe projection vision system on spatially continuous and discontinuous objects for 3D measurement.

  3. A Flexible Fringe Projection Vision System with Extended Mathematical Model for Accurate Three-Dimensional Measurement.

    Science.gov (United States)

    Xiao, Suzhi; Tao, Wei; Zhao, Hui

    2016-04-28

    In order to acquire an accurate three-dimensional (3D) measurement, the traditional fringe projection technique applies complex and laborious procedures to compensate for the errors that exist in the vision system. However, the error sources in the vision system are very complex, such as lens distortion, lens defocus, and fringe pattern nonsinusoidality. Some errors cannot even be explained or rendered with clear expressions and are difficult to compensate directly as a result. In this paper, an approach is proposed that avoids the complex and laborious compensation procedure for error sources but still promises accurate 3D measurement. It is realized by the mathematical model extension technique. The parameters of the extended mathematical model for the 'phase to 3D coordinates transformation' are derived using the least-squares parameter estimation algorithm. In addition, a phase-coding method based on a frequency analysis is proposed for the absolute phase map retrieval to spatially isolated objects. The results demonstrate the validity and the accuracy of the proposed flexible fringe projection vision system on spatially continuous and discontinuous objects for 3D measurement.
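
    The paper's extended model has many parameters; as a simplified illustration of least-squares parameter estimation, the sketch below fits a hypothetical linear phase-to-height model, an assumption made for illustration rather than the paper's actual transformation:

    ```python
    def fit_linear_least_squares(xs, ys):
        """Closed-form least-squares fit of y = a*x + b."""
        n = len(xs)
        sx, sy = sum(xs), sum(ys)
        sxx = sum(x * x for x in xs)
        sxy = sum(x * y for x, y in zip(xs, ys))
        a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
        b = (sy - a * sx) / n
        return a, b

    # Hypothetical calibration data: unwrapped phase values and the
    # corresponding measured heights of a reference target.
    phases = [0.0, 1.0, 2.0, 3.0, 4.0]
    heights = [1.0, 3.0, 5.0, 7.0, 9.0]   # lie exactly on z = 2*phi + 1
    a, b = fit_linear_least_squares(phases, heights)
    print(round(a, 6), round(b, 6))  # → 2.0 1.0
    ```

    The same least-squares principle extends to the multi-parameter phase-to-3D model in the paper, where the normal equations are solved for all model coefficients jointly.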

  4. The Light Plane Calibration Method of the Laser Welding Vision Monitoring System

    Science.gov (United States)

    Wang, B. G.; Wu, M. H.; Jia, W. P.

    2018-03-01

    In the aerospace and automobile industries, sheet steel parts are very important. In recent years, laser welding has been used to weld sheet steel parts. The seam width between two parts is usually less than 0.1 mm. Because the fixturing error cannot be eliminated, welding quality can be greatly affected. In order to improve welding quality, line structured light is employed in the vision monitoring system to plan the welding path before welding. To improve welding precision, the vision system is mounted on the Z axis of a computer numerical control (CNC) machine tool. A planar calibration pattern is placed on the X-Y plane of the CNC tool, and the structured light is projected onto the pattern. The vision system stops at three different positions along the Z axis, and the camera captures an image of the planar pattern at each position. Using the calculated sub-pixel center line of the structured light, the world coordinates of the center line can be computed; the structured-light plane is then obtained by fitting the structured-light lines. Experimental results show the effectiveness of the proposed method.
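
    The final calibration step, fitting a plane to the reconstructed center-line points, can be sketched as an ordinary least-squares fit of z = a*x + b*y + c; the sample points and coefficients below are illustrative assumptions, not the paper's data:

    ```python
    def fit_plane(points):
        """Least-squares fit of z = a*x + b*y + c via the 3x3 normal equations."""
        sxx = sum(x * x for x, y, z in points)
        sxy = sum(x * y for x, y, z in points)
        syy = sum(y * y for x, y, z in points)
        sx = sum(x for x, y, z in points)
        sy = sum(y for x, y, z in points)
        sxz = sum(x * z for x, y, z in points)
        syz = sum(y * z for x, y, z in points)
        sz = sum(z for x, y, z in points)
        n = len(points)
        m = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
        v = [sxz, syz, sz]

        def det3(a):
            return (a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
                    - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
                    + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]))

        d = det3(m)
        coeffs = []
        for i in range(3):           # Cramer's rule, column i replaced by v
            mi = [row[:] for row in m]
            for r in range(3):
                mi[r][i] = v[r]
            coeffs.append(det3(mi) / d)
        return coeffs  # a, b, c

    # Hypothetical laser-line center points measured at three CNC heights,
    # all lying on the plane z = 0.5x - 0.2y + 3.
    pts = [(0, 0, 3.0), (1, 0, 3.5), (0, 1, 2.8), (1, 1, 3.3), (2, 1, 3.8)]
    a, b, c = fit_plane(pts)
    print(round(a, 6), round(b, 6), round(c, 6))
    ```

    With noisy measurements the same normal equations give the best-fit plane in the least-squares sense rather than an exact interpolation.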

  5. Comparison of a multispectral vision system and a colorimeter for the assessment of meat color.

    Science.gov (United States)

    Trinderup, Camilla H; Dahl, Anders; Jensen, Kirsten; Carstensen, Jens Michael; Conradsen, Knut

    2015-04-01

    The color assessment ability of a multispectral vision system is investigated by a comparison study with color measurements from a traditional colorimeter. The experiment involves fresh and processed meat samples. Meat is a complex material: it is heterogeneous with varying scattering and reflectance properties, so several factors can influence the instrumental assessment of meat color. In order to assess whether the two methods are equivalent, the variation due to these factors must be taken into account. A statistical analysis showed that on a calibration sheet the two instruments are equally capable of measuring color. Moreover, the vision system provides a more color-rich assessment of fresh meat samples with a glossier surface than the colorimeter does. Careful studies of the different sources of variation enable an assessment of the order of magnitude of the variability between methods, accounting for the other sources of variation, and lead to the conclusion that color assessment using a multispectral vision system is superior to traditional colorimeter assessments. Copyright © 2014 Elsevier Ltd. All rights reserved.
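
    Instrument comparisons of this kind are commonly summarized with a color-difference metric such as CIE76 ΔE*ab, a standard colorimetric distance (not necessarily the statistic used in this particular study). A minimal sketch with hypothetical L*a*b* readings:

    ```python
    import math

    def delta_e_cie76(lab1, lab2):
        """CIE76 color difference: Euclidean distance in CIELAB space."""
        return math.sqrt(sum((p - q) ** 2 for p, q in zip(lab1, lab2)))

    # Hypothetical readings of the same fresh-meat patch from two instruments.
    colorimeter = (42.0, 18.5, 9.0)   # (L*, a*, b*)
    vision_sys  = (43.0, 20.5, 9.0)
    print(round(delta_e_cie76(colorimeter, vision_sys), 3))  # → 2.236
    ```

    A ΔE near 1 is roughly a just-noticeable difference for an average observer, which is why such distances are a convenient scale for between-instrument agreement.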

  6. Integral Images: Efficient Algorithms for Their Computation and Storage in Resource-Constrained Embedded Vision Systems.

    Science.gov (United States)

    Ehsan, Shoaib; Clark, Adrian F; Naveed ur Rehman; McDonald-Maier, Klaus D

    2015-07-10

    The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of integral image presents several design challenges due to strict timing and hardware limitations. Although calculation of the integral image only consists of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of integral image in embedded vision systems, the paper presents two algorithms which allow substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems.
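
    The recursive equation referred to above is the standard integral-image relation; a minimal software sketch of the serial computation and the constant-time rectangle sum (the row-parallel hardware decomposition of the paper is not reproduced here):

    ```python
    def integral_image(img):
        """Integral image via the recursion
        I(x, y) = i(x, y) + I(x-1, y) + I(x, y-1) - I(x-1, y-1)."""
        h, w = len(img), len(img[0])
        ii = [[0] * w for _ in range(h)]
        for y in range(h):
            for x in range(w):
                ii[y][x] = (img[y][x]
                            + (ii[y][x - 1] if x > 0 else 0)
                            + (ii[y - 1][x] if y > 0 else 0)
                            - (ii[y - 1][x - 1] if x > 0 and y > 0 else 0))
        return ii

    def rect_sum(ii, x0, y0, x1, y1):
        """Sum over the inclusive rectangle (x0, y0)-(x1, y1) in at most four lookups."""
        a = ii[y0 - 1][x0 - 1] if x0 > 0 and y0 > 0 else 0
        b = ii[y0 - 1][x1] if y0 > 0 else 0
        c = ii[y1][x0 - 1] if x0 > 0 else 0
        return ii[y1][x1] - b - c + a

    img = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
    ii = integral_image(img)
    print(rect_sum(ii, 1, 1, 2, 2))  # → 28  (5 + 6 + 8 + 9)
    ```

    The constant-cost rectangle sum is what makes SURF-style box filters independent of filter size; the paper's contribution is decomposing the serial recursion so several rows can be computed in parallel.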

  7. Integral Images: Efficient Algorithms for Their Computation and Storage in Resource-Constrained Embedded Vision Systems

    Directory of Open Access Journals (Sweden)

    Shoaib Ehsan

    2015-07-01

    Full Text Available The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of integral image presents several design challenges due to strict timing and hardware limitations. Although calculation of the integral image only consists of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of integral image in embedded vision systems, the paper presents two algorithms which allow substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems.

  8. Micro-vision servo control of a multi-axis alignment system for optical fiber assembly

    Science.gov (United States)

    Chen, Weihai; Yu, Fei; Qu, Jianliang; Chen, Wenjie; Zhang, Jianbin

    2017-04-01

    This paper describes a novel optical fiber assembly system featuring a multi-axis alignment function based on micro-vision feedback control. It consists of an active parallel alignment mechanism, a passive compensation mechanism, a micro-gripper and a micro-vision servo control system. The active parallel alignment part is a parallelogram-based design with remote-center-of-motion (RCM) function to achieve precise rotation without fatal lateral motion. The passive mechanism, with five degrees of freedom (5-DOF), is used to implement passive compensation for multi-axis errors. A specially designed 1-DOF micro-gripper mounted onto the active parallel alignment platform is adopted to grasp and rotate the optical fiber. A micro-vision system equipped with two charge-coupled device (CCD) cameras is introduced to observe the small field of view and obtain multi-axis errors for servo feedback control. The two CCD cameras are installed in an orthogonal arrangement—thus the errors can be easily measured via the captured images. Meanwhile, a series of tracking and measurement algorithms based on specific features of the target objects are developed. Details of the force and displacement sensor information acquisition in the assembly experiment are also provided. An experiment demonstrates the validity of the proposed visual algorithm by achieving the task of eliminating errors and inserting an optical fiber to the U-groove accurately.

  9. Edge detection algorithms implemented on Bi-i cellular vision system

    Science.gov (United States)

    Karabiber, Fethullah; Arik, Sabri

    2009-02-01

    The Bi-i (Bio-inspired) Cellular Vision system is built mainly on Cellular Neural/Nonlinear Network (CNN) type (ACE16k) and Digital Signal Processing (DSP) type microprocessors. CNN theory, proposed by Chua, has advanced properties for image processing applications. In this study, edge detection algorithms are implemented on the Bi-i Cellular Vision System. Extracting the edges of an image correctly and quickly is of crucial importance for image processing applications. A threshold-gradient-based edge detection algorithm is implemented using the ACE16k microprocessor. In addition, a pre-processing operation is realized using an image enhancement technique based on the Laplacian operator. Finally, morphological operations are performed as post-processing. The Sobel edge detection algorithm is performed by convolving the Sobel operators with the image on the DSP. The performances of the edge detection algorithms are compared using visual inspection and timing analysis. Experimental results show that the ACE16k has great computational power and that the Bi-i Cellular Vision System is well suited to running image processing algorithms in real time.
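
    The Sobel step described above convolves the image with two 3x3 kernels; the Bi-i system runs this on a DSP, while the sketch below is plain illustrative software with a hypothetical test image:

    ```python
    def sobel_magnitude(img):
        """Approximate gradient magnitude |Gx| + |Gy| with 3x3 Sobel kernels
        (border pixels are left at zero)."""
        gx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
        gy = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
        h, w = len(img), len(img[0])
        out = [[0] * w for _ in range(h)]
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                sx = sum(gx[j][i] * img[y + j - 1][x + i - 1]
                         for j in range(3) for i in range(3))
                sy = sum(gy[j][i] * img[y + j - 1][x + i - 1]
                         for j in range(3) for i in range(3))
                out[y][x] = abs(sx) + abs(sy)
        return out

    # A vertical step edge: the response peaks on the two columns straddling it.
    img = [[0, 0, 9, 9]] * 4
    print(sobel_magnitude(img)[1])  # → [0, 36, 36, 0]
    ```

    A subsequent thresholding of this magnitude map gives the binary edge image that the morphological post-processing then cleans up.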

  10. Micro-vision servo control of a multi-axis alignment system for optical fiber assembly

    International Nuclear Information System (INIS)

    Chen, Weihai; Yu, Fei; Qu, Jianliang; Chen, Wenjie; Zhang, Jianbin

    2017-01-01

    This paper describes a novel optical fiber assembly system featuring a multi-axis alignment function based on micro-vision feedback control. It consists of an active parallel alignment mechanism, a passive compensation mechanism, a micro-gripper and a micro-vision servo control system. The active parallel alignment part is a parallelogram-based design with remote-center-of-motion (RCM) function to achieve precise rotation without fatal lateral motion. The passive mechanism, with five degrees of freedom (5-DOF), is used to implement passive compensation for multi-axis errors. A specially designed 1-DOF micro-gripper mounted onto the active parallel alignment platform is adopted to grasp and rotate the optical fiber. A micro-vision system equipped with two charge-coupled device (CCD) cameras is introduced to observe the small field of view and obtain multi-axis errors for servo feedback control. The two CCD cameras are installed in an orthogonal arrangement—thus the errors can be easily measured via the captured images. Meanwhile, a series of tracking and measurement algorithms based on specific features of the target objects are developed. Details of the force and displacement sensor information acquisition in the assembly experiment are also provided. An experiment demonstrates the validity of the proposed visual algorithm by achieving the task of eliminating errors and inserting an optical fiber to the U-groove accurately. (paper)

  11. A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems

    Science.gov (United States)

    Osswald, Marc; Ieng, Sio-Hoi; Benosman, Ryad; Indiveri, Giacomo

    2017-01-01

    Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems.

  12. FPGA-based multimodal embedded sensor system integrating low- and mid-level vision.

    Science.gov (United States)

    Botella, Guillermo; Martín H, José Antonio; Santos, Matilde; Meyer-Baese, Uwe

    2011-01-01

    Motion estimation is a low-level vision task that is especially relevant due to its wide range of applications in the real world. Many of the best motion estimation algorithms include some of the features that are found in mammalian visual systems, which demand huge computational resources and therefore are not usually feasible in real time. In this paper we present a novel bioinspired sensor based on the synergy between optical flow and orthogonal variant moments. The bioinspired sensor has been designed for Very Large Scale Integration (VLSI) using properties of the mammalian cortical motion pathway. This sensor combines low-level primitives (optical flow and image moments) in order to produce a mid-level vision abstraction layer. The results are described through experiments showing the validity of the proposed system and an analysis of the computational resources and performance of the applied algorithms.

  13. Design and Development of a High Speed Sorting System Based on Machine Vision Guiding

    Science.gov (United States)

    Zhang, Wenchang; Mei, Jiangping; Ding, Yabin

    In this paper, a vision-based control strategy to perform high-speed pick-and-place tasks on an automated production line is proposed, and the relevant control software is developed. A Delta robot controls a suction gripper that grasps disordered objects from one moving conveyor and places them on another in order. A CCD camera captures one image every time the conveyor moves a distance of ds. Object positions and shapes are obtained after image processing. A target tracking method based on "servo motor + synchronous conveyor" is used to fulfill the high-speed transfer operation in real time. Experiments conducted on the Delta robot sorting system demonstrate the efficiency and validity of the proposed vision-control strategy.

  14. Design and Implementation of a Fully Autonomous UAV's Navigator Based on Omni-directional Vision System

    Directory of Open Access Journals (Sweden)

    Seyed Mohammadreza Kasaei

    2011-12-01

    Full Text Available Unmanned Aerial Vehicles (UAVs) are the subject of increasing interest in many applications, and have seen more widespread use in the military, scenic, and civilian sectors in recent years. Autonomy is one of the major advantages of these vehicles, so it is necessary to develop particular sensors to provide efficient navigation functions. The helicopter is stabilized with visual information fed through the control loop. Omni-directional vision can be a useful sensor for this purpose; it can be used as the only sensor or as a complementary one. In this paper, we propose a novel method for path planning on a UAV based on electric potential. We use an omni-directional vision system for navigation and path planning.
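
    Potential-field path planning of the kind mentioned treats the goal as an attractive potential and obstacles as repulsive ones, and descends the combined gradient. A minimal 2D sketch with illustrative gains and geometry (not the paper's implementation):

    ```python
    import math

    def potential_field_path(start, goal, obstacles, step=0.1, max_iter=500):
        """Gradient descent on attractive (goal) + repulsive (obstacle) potentials."""
        k_att, k_rep, influence = 1.0, 0.5, 2.0   # illustrative gains
        x, y = start
        path = [(x, y)]
        for _ in range(max_iter):
            # attractive force pulls toward the goal
            fx, fy = k_att * (goal[0] - x), k_att * (goal[1] - y)
            # repulsive force pushes away from obstacles inside the influence radius
            for ox, oy in obstacles:
                d = math.hypot(x - ox, y - oy)
                if 1e-9 < d < influence:
                    mag = k_rep * (1.0 / d - 1.0 / influence) / (d * d)
                    fx += mag * (x - ox) / d
                    fy += mag * (y - oy) / d
            norm = math.hypot(fx, fy)
            if norm < 1e-9:
                break                      # stuck in a local minimum
            x, y = x + step * fx / norm, y + step * fy / norm
            path.append((x, y))
            if math.hypot(goal[0] - x, goal[1] - y) < step:
                break                      # close enough to the goal
        return path

    path = potential_field_path((0.0, 0.0), (10.0, 10.0), obstacles=[(5.0, 4.0)])
    end = path[-1]
    print(math.hypot(10 - end[0], 10 - end[1]) < 0.2)  # → True
    ```

    The well-known weakness of this scheme is the local-minimum case where attraction and repulsion cancel, which practical planners handle with random perturbations or higher-level replanning.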

  15. Vision-Inspection System for Residue Monitoring of Ready-Mixed Concrete Trucks

    Directory of Open Access Journals (Sweden)

    Deok-Seok Seo

    2015-01-01

    Full Text Available The objective of this study is to propose a vision-inspection system that improves quality management for ready-mixed concrete (RMC). The proposed system can serve as an alternative to the current visual inspection method for detecting residues in the agitator drum of an RMC truck. Proposing the system required concept development and system-level design: the design considerations were derived from the hardware properties of the RMC truck and the conditions of the RMC factory, and six major components of the system were then selected during system-level design. A prototype of the system was applied to a real RMC plant and tested to verify its utility and efficiency. It is expected that the proposed system can be employed as a practical means of increasing the efficiency of quality management for RMC.

  16. Context-specific energy strategies: coupling energy system visions with feasible implementation scenarios.

    Science.gov (United States)

    Trutnevyte, Evelina; Stauffacher, Michael; Schlegel, Matthias; Scholz, Roland W

    2012-09-04

    Conventional energy strategy defines an energy system vision (the goal), energy scenarios with technical choices and an implementation mechanism (such as economic incentives). Due to the lead of a generic vision, when applied in a specific regional context, such a strategy can deviate from the optimal one with, for instance, the lowest environmental impacts. This paper proposes an approach for developing energy strategies by simultaneously, rather than sequentially, combining multiple energy system visions and technically feasible, cost-effective energy scenarios that meet environmental constraints at a given place. The approach is illustrated by developing a residential heat supply strategy for a Swiss region. In the analyzed case, urban municipalities should focus on reducing heat demand, and rural municipalities should focus on harvesting local energy sources, primarily wood. Solar thermal units are cost-competitive in all municipalities, and their deployment should be fostered by information campaigns. Heat pumps and building refurbishment are not competitive; thus, economic incentives are essential, especially for urban municipalities. In rural municipalities, wood is cost-competitive, and community-based initiatives are likely to be most successful. Thus, the paper shows that energy strategies should be spatially differentiated. The suggested approach can be transferred to other regions and spatial scales.

  17. SAD-Based Stereo Vision Machine on a System-on-Programmable-Chip (SoPC

    Directory of Open Access Journals (Sweden)

    Zhangwei Chen

    2013-03-01

    Full Text Available This paper proposes a novel solution for a stereo vision machine based on the System-on-Programmable-Chip (SoPC) architecture. The SoPC technology provides great convenience for accessing many hardware devices such as DDRII, SSRAM, Flash, etc., by IP reuse. The system hardware is implemented in a single FPGA chip involving a 32-bit Nios II microprocessor, which is a configurable soft IP core in charge of managing the image buffer and users’ configuration data. The Sum of Absolute Differences (SAD) algorithm is used for dense disparity map computation. The circuits of the algorithmic module are modeled by the Matlab-based DSP Builder. With a set of configuration interfaces, the machine can process many different sizes of stereo pair images. The maximum image size is up to 512 K pixels. This machine is designed to focus on real-time stereo vision applications. The stereo vision machine offers good performance and high efficiency in real time. Considering a hardware FPGA clock of 90 MHz, 23 frames of 640 × 480 disparity maps can be obtained in one second with a 5 × 5 matching window and a maximum of 64 disparity pixels.

  18. Infrared machine vision system for the automatic detection of olive fruit quality.

    Science.gov (United States)

    Guzmán, Elena; Baeten, Vincent; Pierna, Juan Antonio Fernández; García-Mesa, José A

    2013-11-15

    External quality is an important factor in the extraction of olive oil and the marketing of olive fruits. The appearance and presence of external damage are factors that influence the quality of the oil extracted and the perception of consumers, determining the level of acceptance prior to purchase in the case of table olives. The aim of this paper is to report on artificial vision techniques developed for the online estimation of olive quality and to assess the effectiveness of these techniques in evaluating quality based on detecting external defects. This method of classifying olives according to the presence of defects is based on an infrared (IR) vision system. Images of defects were acquired using a digital monochrome camera with band-pass filters in the near-infrared (NIR). The original images were processed using segmentation algorithms, edge detection and pixel value intensity to classify the whole fruit. The detection of the defect involved a pixel classification procedure based on nonparametric models of the healthy and defective areas of olives. Classification tests were performed on olives to assess the effectiveness of the proposed method. This research showed that the IR vision system is a useful technology for the automatic assessment of olives that has the potential for use in offline inspection and for online sorting for defects and the presence of surface damage, easily distinguishing those that do not meet minimum quality requirements. Crown Copyright © 2013 Published by Elsevier B.V. All rights reserved.

  19. SAD-based stereo vision machine on a System-on-Programmable-Chip (SoPC).

    Science.gov (United States)

    Zhang, Xiang; Chen, Zhangwei

    2013-03-04

    This paper proposes a novel solution for a stereo vision machine based on the System-on-Programmable-Chip (SoPC) architecture. The SoPC technology provides great convenience for accessing many hardware devices such as DDRII, SSRAM, Flash, etc., by IP reuse. The system hardware is implemented in a single FPGA chip involving a 32-bit Nios II microprocessor, which is a configurable soft IP core in charge of managing the image buffer and users' configuration data. The Sum of Absolute Differences (SAD) algorithm is used for dense disparity map computation. The circuits of the algorithmic module are modeled by the Matlab-based DSP Builder. With a set of configuration interfaces, the machine can process many different sizes of stereo pair images. The maximum image size is up to 512 K pixels. This machine is designed to focus on real-time stereo vision applications. The stereo vision machine offers good performance and high efficiency in real time. Considering a hardware FPGA clock of 90 MHz, 23 frames of 640 × 480 disparity maps can be obtained in one second with a 5 × 5 matching window and a maximum of 64 disparity pixels.
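
    Dense disparity by SAD block matching can be sketched for a single scanline as follows; the window size, disparity range and sample scanlines are illustrative assumptions (the paper's FPGA pipeline uses a 5 × 5 window over full images):

    ```python
    def sad_disparity_row(left, right, window=2, max_disp=4):
        """Per-pixel disparity along one scanline by minimizing the Sum of
        Absolute Differences over a (2*window+1)-pixel horizontal window.
        Indices outside the scanline are clamped to the border."""
        w = len(left)
        disparities = [0] * w
        for x in range(w):
            best, best_d = None, 0
            for d in range(min(max_disp + 1, x + 1)):
                sad = 0
                for k in range(-window, window + 1):
                    xl = min(max(x + k, 0), w - 1)
                    xr = min(max(x + k - d, 0), w - 1)
                    sad += abs(left[xl] - right[xr])
                if best is None or sad < best:
                    best, best_d = sad, d
            disparities[x] = best_d
        return disparities

    # Hypothetical scanline pair: the right view sees the pattern 2 px to the left,
    # so the true disparity of the pattern is 2.
    left  = [0, 0, 10, 50, 90, 90, 40, 10, 0, 0]
    right = [10, 50, 90, 90, 40, 10, 0, 0, 0, 0]
    print(sad_disparity_row(left, right)[3:7])  # → [2, 2, 2, 2]
    ```

    The FPGA implementation evaluates all candidate disparities for a pixel in parallel instead of the inner Python loop, which is how it sustains real-time frame rates.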

  20. Complete Vision-Based Traffic Sign Recognition Supported by an I2V Communication System

    Directory of Open Access Journals (Sweden)

    Miguel Gavilán

    2012-01-01

    Full Text Available This paper presents a complete traffic sign recognition system based on vision sensor onboard a moving vehicle which detects and recognizes up to one hundred of the most important road signs, including circular and triangular signs. A restricted Hough transform is used as detection method from the information extracted in contour images, while the proposed recognition system is based on Support Vector Machines (SVM). A novel solution to the problem of discarding detected signs that do not pertain to the host road is proposed. For that purpose infrastructure-to-vehicle (I2V) communication and a stereo vision sensor are used. Furthermore, the outputs provided by the vision sensor and the data supplied by the CAN Bus and a GPS sensor are combined to obtain the global position of the detected traffic signs, which is used to identify a traffic sign in the I2V communication. This paper presents plenty of tests in real driving conditions, both day and night, in which an average detection rate over 95% and an average recognition rate around 93% were obtained with an average runtime of 35 ms that allows real-time performance.

  1. Complete vision-based traffic sign recognition supported by an I2V communication system.

    Science.gov (United States)

    García-Garrido, Miguel A; Ocaña, Manuel; Llorca, David F; Arroyo, Estefanía; Pozuelo, Jorge; Gavilán, Miguel

    2012-01-01

    This paper presents a complete traffic sign recognition system based on vision sensor onboard a moving vehicle which detects and recognizes up to one hundred of the most important road signs, including circular and triangular signs. A restricted Hough transform is used as detection method from the information extracted in contour images, while the proposed recognition system is based on Support Vector Machines (SVM). A novel solution to the problem of discarding detected signs that do not pertain to the host road is proposed. For that purpose infrastructure-to-vehicle (I2V) communication and a stereo vision sensor are used. Furthermore, the outputs provided by the vision sensor and the data supplied by the CAN Bus and a GPS sensor are combined to obtain the global position of the detected traffic signs, which is used to identify a traffic sign in the I2V communication. This paper presents plenty of tests in real driving conditions, both day and night, in which an average detection rate over 95% and an average recognition rate around 93% were obtained with an average runtime of 35 ms that allows real-time performance.

  2. Vision 2040: A Roadmap for Integrated, Multiscale Modeling and Simulation of Materials and Systems

    Science.gov (United States)

    Liu, Xuan; Furrer, David; Kosters, Jared; Holmes, Jack

    2018-01-01

    Over the last few decades, advances in high-performance computing, new materials characterization methods, and, more recently, an emphasis on integrated computational materials engineering (ICME) and additive manufacturing have been a catalyst for multiscale modeling and simulation-based design of materials and structures in the aerospace industry. While these advances have driven significant progress in the development of aerospace components and systems, that progress has been limited by persistent technology and infrastructure challenges that must be overcome to realize the full potential of integrated materials and systems design and simulation modeling throughout the supply chain. As a result, NASA's Transformational Tools and Technology (TTT) Project sponsored a study (performed by a diverse team led by Pratt & Whitney) to define the potential 25-year future state required for integrated multiscale modeling of materials and systems (e.g., load-bearing structures) to accelerate the pace and reduce the expense of innovation in future aerospace and aeronautical systems. This report describes the findings of this 2040 Vision study (e.g., the 2040 vision state; the required interdependent core technical work areas, Key Elements (KEs); identified gaps and actions to close those gaps; and major recommendations), which constitutes a community consensus document, as it is the result of input from over 450 professionals obtained via: 1) four society workshops (AIAA, NAFEMS, and two TMS), 2) a community-wide survey, and 3) the establishment of 9 expert panels (one per KE), consisting on average of 10 non-team members from academia, government and industry, to review and update content and prioritize gaps and actions. The study envisions the development of a cyber-physical-social ecosystem comprised of experimentally verified and validated computational models, tools, and techniques, along with the associated digital tapestry, that impacts the entire supply chain to enable cost

  3. Computational approaches to vision

    Science.gov (United States)

    Barrow, H. G.; Tenenbaum, J. M.

    1986-01-01

    Vision is examined in terms of a computational process, and the competence, structure, and control of computer vision systems are analyzed. Theoretical and experimental data on the formation of a computer vision system are discussed. Consideration is given to early vision, the recovery of intrinsic surface characteristics, higher levels of interpretation, and system integration and control. A computational visual processing model is proposed and its architecture and operation are described. Examples of state-of-the-art vision systems, which include some of the levels of representation and processing mechanisms, are presented.

  4. Applications of membrane computing in systems and synthetic biology

    CERN Document Server

    Gheorghe, Marian; Pérez-Jiménez, Mario

    2014-01-01

    Membrane Computing was introduced as a computational paradigm in Natural Computing. The models introduced, called Membrane (or P) Systems, provide a coherent platform to describe and study living cells as computational systems. Membrane Systems have been investigated for their computational aspects and employed to model problems in other fields, such as Computer Science, Linguistics, Biology, Economy, Computer Graphics, Robotics, etc. Their inherent parallelism, heterogeneity and intrinsic versatility allow them to model a broad range of processes and phenomena, being also an efficient means to solve and analyze problems in a novel way. Membrane Computing has been used to model biological systems, becoming with time a thorough modeling paradigm comparable, in its modeling and predicting capabilities, to more established models in this area. This book is the result of the need to collect, in an organic way, different facets of this paradigm. The chapters of this book, together with the web pages accompanying th...

  5. Engineering plant metabolism into microbes: from systems biology to synthetic biology.

    Science.gov (United States)

    Xu, Peng; Bhan, Namita; Koffas, Mattheos A G

    2013-04-01

    Plant metabolism represents an enormous repository of compounds that are of pharmaceutical and biotechnological importance. Engineering plant metabolism into microbes will provide sustainable solutions to produce pharmaceutical and fuel molecules that could one day replace substantial portions of the current fossil-fuel based economy. Metabolic engineering entails targeted manipulation of biosynthetic pathways to maximize yields of desired products. Recent advances in Systems Biology and the emergence of Synthetic Biology have accelerated our ability to design, construct and optimize cell factories for metabolic engineering applications. Progress in predicting and modeling genome-scale metabolic networks, versatile gene assembly platforms and delicate synthetic pathway optimization strategies has provided us exciting opportunities to exploit the full potential of cell metabolism. In this review, we will discuss how systems and synthetic biology tools can be integrated to create tailor-made cell factories for efficient production of natural products and fuel molecules in microorganisms. Copyright © 2012 Elsevier Ltd. All rights reserved.

  6. Co-culture systems and technologies: taking synthetic biology to the next level.

    Science.gov (United States)

    Goers, Lisa; Freemont, Paul; Polizzi, Karen M

    2014-07-06

    Co-culture techniques find myriad applications in biology for studying natural or synthetic interactions between cell populations. Such techniques are of great importance in synthetic biology, as multi-species cell consortia and other natural or synthetic ecology systems are widely seen to hold enormous potential for foundational research as well as novel industrial, medical and environmental applications with many proof-of-principle studies in recent years. What is needed for co-cultures to fulfil their potential? Cell-cell interactions in co-cultures are strongly influenced by the extracellular environment, which is determined by the experimental set-up, which therefore needs to be given careful consideration. An overview of existing experimental and theoretical co-culture set-ups in synthetic biology and adjacent fields is given here, and challenges and opportunities involved in such experiments are discussed. Greater focus on foundational technology developments for co-cultures is needed for many synthetic biology systems to realize their potential in both applications and answering biological questions. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  7. Development and Implementation of New Control Law for Vision Based Target Tracking System Onboard Small Unmanned Aerial Vehicles

    National Research Council Canada - National Science Library

    Chong, Tay B

    2006-01-01

    ...) system onboard a small unmanned aerial vehicle (SUAV). The new control law allows for coordinated SUAV guidance and vision-based target tracking of stationary and moving targets in the presence of atmospheric disturbances and measurement noise...

  8. A novel synthetic test system for thyristor level in the converter valve of HVDC power transmission

    Directory of Open Access Journals (Sweden)

    Liu Longchen

    2016-01-01

    Full Text Available The converter valve is the core equipment in the HVDC power transmission system, and its performance has a direct effect on the reliability, stability and efficiency of the whole power system. As the basic unit of the HVDC converter valve, the thyristor level needs to be tested routinely in order to assess the state of the converter valve equipment. It is therefore urgent to develop a novel synthetic test system for the thyristor level with its thyristor control unit (TCU); however, there is currently no specific test scheme for the thyristor level of the HVDC converter valve. In this paper, the synthetic test principle, content and methods for the thyristor level with TCU are presented based on an analysis of the thyristor reverse recovery characteristic and the IEC technology standard. A transient high-voltage pulse is applied to the thyristor level during its reverse recovery period in order to test the characteristics of the thyristor level. The synthetic test system for the thyristor level is then applied to the converter valve test of a ±800 kV HVDC power transmission project, and the practical test results verify the reasonability and validity of the proposed synthetic test system.

  9. SARUS: A Synthetic Aperture Real-Time Ultrasound System

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt; Holten-Lund, Hans; Nilsson, Ronnie Thorup

    2013-01-01

    -resolution images/s. Both RF element data and beamformed data can be stored in the system for later storage and processing. The stored data can be transferred in parallel using the system’s sixty-four 1-Gbit Ethernet interfaces at a theoretical rate of 3.2 GB/s to a 144-core Linux cluster....

  10. EAST-AIA deployment under vacuum: Calibration of laser diagnostic system using computer vision

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Yang, E-mail: yangyang@ipp.ac.cn [Institute of Plasma Physics, Chinese Academy of Sciences, 350 Shushanhu Rd, Hefei, Anhui (China); Song, Yuntao; Cheng, Yong; Feng, Hansheng; Wu, Zhenwei; Li, Yingying; Sun, Yongjun; Zheng, Lei [Institute of Plasma Physics, Chinese Academy of Sciences, 350 Shushanhu Rd, Hefei, Anhui (China); Bruno, Vincent; Eric, Villedieu [CEA-IRFM, F-13108 Saint-Paul-Lez-Durance (France)

    2016-11-15

    Highlights: • The first deployment of the EAST articulated inspection arm robot under vacuum is presented. • A computer vision based approach to measure the laser spot displacement is proposed. • An experiment on the real EAST tokamak is performed to validate the proposed measurement approach, and the results show that the measurement accuracy satisfies the requirement. - Abstract: For the operation of the EAST tokamak, it is crucial to ensure that all the diagnostic systems are in good condition in order to reflect the plasma status properly. However, most of the diagnostic systems are mounted inside the tokamak vacuum vessel, which makes them extremely difficult to maintain under the high vacuum condition during tokamak operation. Thanks to a system called the EAST articulated inspection arm robot (EAST-AIA), the examination of these in-vessel diagnostic systems can be performed by an embedded camera carried by the robot. In this paper, a computer vision algorithm has been developed to calibrate a laser diagnostic system with the help of a monocular camera at the robot end. In order to estimate the displacement of the laser diagnostic system with respect to the vacuum vessel, several visual markers were attached to the inner wall. This experiment was conducted both on the EAST vacuum vessel mock-up and on the real EAST tokamak under vacuum conditions. As a result, the accuracy of the displacement measurement was within 3 mm under the current camera resolution, which satisfied the requirements of the laser diagnostic system calibration.
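The marker-based displacement measurement described above can be sketched as follows. This is a simplified illustration, not the paper's algorithm: it assumes the marker and laser-spot pixel coordinates have already been extracted by an image-processing step, and all coordinates, spacings, and scales below are hypothetical.

```python
import numpy as np

def mm_per_pixel(marker_px, spacing_mm):
    """Estimate the image scale from two wall markers with known physical spacing.

    marker_px: pixel coordinates of the two markers, e.g. [(x1, y1), (x2, y2)].
    spacing_mm: known physical distance between the markers, in millimetres.
    """
    a, b = (np.asarray(p, float) for p in marker_px)
    return spacing_mm / np.linalg.norm(a - b)

def spot_displacement_mm(spot_before_px, spot_after_px, scale_mm_per_px):
    """Displacement of the laser spot between two images, in millimetres."""
    delta = np.asarray(spot_after_px, float) - np.asarray(spot_before_px, float)
    return float(np.linalg.norm(delta) * scale_mm_per_px)

# Two markers 100 mm apart imaged 500 px apart give a scale of 0.2 mm/px;
# a 10 px shift of the spot then corresponds to 2.0 mm of displacement.
scale = mm_per_pixel([(100, 200), (600, 200)], 100.0)
shift = spot_displacement_mm((320, 240), (330, 240), scale)
```

In practice the scale varies across the image with perspective, which is one reason a full camera calibration (rather than a single scalar scale) is needed to reach millimetre accuracy.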

  11. EAST-AIA deployment under vacuum: Calibration of laser diagnostic system using computer vision

    International Nuclear Information System (INIS)

    Yang, Yang; Song, Yuntao; Cheng, Yong; Feng, Hansheng; Wu, Zhenwei; Li, Yingying; Sun, Yongjun; Zheng, Lei; Bruno, Vincent; Eric, Villedieu

    2016-01-01

    Highlights: • The first deployment of the EAST articulated inspection arm robot under vacuum is presented. • A computer vision based approach to measure the laser spot displacement is proposed. • An experiment on the real EAST tokamak is performed to validate the proposed measurement approach, and the results show that the measurement accuracy satisfies the requirement. - Abstract: For the operation of the EAST tokamak, it is crucial to ensure that all the diagnostic systems are in good condition in order to reflect the plasma status properly. However, most of the diagnostic systems are mounted inside the tokamak vacuum vessel, which makes them extremely difficult to maintain under the high vacuum condition during tokamak operation. Thanks to a system called the EAST articulated inspection arm robot (EAST-AIA), the examination of these in-vessel diagnostic systems can be performed by an embedded camera carried by the robot. In this paper, a computer vision algorithm has been developed to calibrate a laser diagnostic system with the help of a monocular camera at the robot end. In order to estimate the displacement of the laser diagnostic system with respect to the vacuum vessel, several visual markers were attached to the inner wall. This experiment was conducted both on the EAST vacuum vessel mock-up and on the real EAST tokamak under vacuum conditions. As a result, the accuracy of the displacement measurement was within 3 mm under the current camera resolution, which satisfied the requirements of the laser diagnostic system calibration.

  12. High-accuracy microassembly by intelligent vision systems and smart sensor integration

    Science.gov (United States)

    Schilp, Johannes; Harfensteller, Mark; Jacob, Dirk; Schilp, Michael

    2003-10-01

    Innovative production processes and strategies, from batch production to high-volume scale, play a decisive role in producing microsystems economically. Assembly processes in particular are crucial operations during the production of microsystems. With large batch sizes, many microsystems can be produced economically by conventional assembly techniques using specialized and highly automated assembly systems, while at the laboratory stage microsystems are mostly assembled by hand. Between these extremes there is a wide field of small and medium-sized batch production for which common automated solutions are rarely profitable. For assembly processes at these batch sizes, a flexible automated assembly system has been developed at the iwb. It is based on a modular design: actuators such as grippers, dispensers or other process tools can easily be attached thanks to a special tool-changing system, so new joining techniques can easily be implemented. A force sensor and a vision system are integrated into the tool head. The automated assembly processes are based on different optical sensors and smart actuators such as high-accuracy robots or linear motors. A fiber-optic sensor is integrated in the dispensing module to measure, without contact, the clearance between the dispensing needle and the substrate. Robot vision systems using optical pattern recognition are also implemented as modules. In combination with relative positioning strategies, an assembly accuracy of less than 3 μm can be realized. A laser system is used for manufacturing processes such as soldering.

  13. Vision sensor and dual MEMS gyroscope integrated system for attitude determination on moving base

    Science.gov (United States)

    Guo, Xiaoting; Sun, Changku; Wang, Peng; Huang, Lu

    2018-01-01

    To determine the relative attitude between objects on a moving base and the base reference system with a MEMS (Micro-Electro-Mechanical Systems) gyroscope, the motion of the base is superimposed on the measurement and must be removed from the gyroscope output. Our strategy is to add an auxiliary gyroscope attached to the reference system: the master gyroscope senses the total motion, while the auxiliary gyroscope senses the motion of the moving base. By a generalized difference method, the relative attitude in a non-inertial frame can be determined from the dual gyroscopes. With the vision sensor suppressing the accumulative drift of the MEMS gyroscope, a vision and dual-MEMS-gyroscope integrated system is formed. Coordinate system definitions and spatial transforms are executed in order to fuse inertial and visual data from different coordinate systems. A nonlinear filter algorithm, the cubature Kalman filter, is used to fuse slow visual data and fast inertial data. A practical experimental setup is built and used to validate the feasibility and effectiveness of the proposed attitude determination system in the non-inertial frame on a moving base.
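A minimal sketch of the generalized difference idea: subtract the base motion, sensed by the auxiliary gyroscope, from the master gyroscope's total rate. The function names, the frame-alignment rotation `R_base_to_obj`, and the small-angle integration are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def relative_rate(omega_master, omega_aux, R_base_to_obj=np.eye(3)):
    """Generalized difference: remove the base motion (auxiliary gyro, base
    frame, rotated into the object frame) from the master gyro's total
    angular rate, leaving the relative angular rate."""
    return np.asarray(omega_master, float) - R_base_to_obj @ np.asarray(omega_aux, float)

def integrate_relative_angle(rate_samples, dt):
    """Small-angle integration of relative rates into a relative attitude.
    A real system would propagate a quaternion and correct the accumulated
    MEMS drift with the vision sensor, e.g. via a cubature Kalman filter."""
    return np.sum(np.asarray(rate_samples, float), axis=0) * dt

# With aligned axes, a 0.3 rad/s total rate and a 0.1 rad/s base rate
# leave a 0.2 rad/s relative rate about the x axis.
rel = relative_rate([0.3, 0.0, 0.0], [0.1, 0.0, 0.0])
```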

  14. Data fusion for a vision-aided radiological detection system: Calibration algorithm performance

    Science.gov (United States)

    Stadnikia, Kelsey; Henderson, Kristofer; Martin, Allan; Riley, Phillip; Koppal, Sanjeev; Enqvist, Andreas

    2018-05-01

    In order to improve the ability to detect, locate, track and identify nuclear/radiological threats, the University of Florida nuclear detection community has teamed up with the 3D vision community to collaborate on a low-cost data fusion system. The key is to develop an algorithm to fuse the data from multiple radiological and 3D vision sensors as one system. The system under development at the University of Florida is being assessed with various types of radiological detectors and widely available visual sensors. A series of experiments was devised utilizing two EJ-309 liquid organic scintillation detectors (one primary and one secondary), a Microsoft Kinect for Windows v2 sensor and a Velodyne HDL-32E High Definition LiDAR sensor, a highly sensitive vision sensor primarily used to generate data for self-driving cars. Each experiment consisted of 27 static measurements of a source arranged in a cube with three different distances in each dimension; the source used was Cf-252. The calibration algorithm developed is utilized to calibrate the relative 3D location of the two different types of sensors without the need to measure it by hand, thus preventing operator manipulation and human errors. The algorithm can also account for the facility-dependent deviation from ideal data fusion correlation. Using the vision sensor to determine a detector's location would also limit the possible locations, and it does not allow for room dependence (facility-dependent deviation) when generating a detector pseudo-location for later data analysis. Using manually measured source location data, our algorithm predicted the offset detector location within an average of 20 cm calibration-difference of its actual location. Calibration-difference is the Euclidean distance from the algorithm-predicted detector location to the measured detector location. The Kinect vision sensor data produced an average calibration-difference of 35 cm and the HDL-32E produced an average
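The calibration-difference metric used to score the algorithm is straightforward to state in code. The coordinates below are illustrative values in metres, not data from the experiment.

```python
import numpy as np

def calibration_difference(predicted_xyz, measured_xyz):
    """The paper's metric: Euclidean distance from the algorithm-predicted
    detector location to the hand-measured detector location."""
    return float(np.linalg.norm(np.asarray(predicted_xyz, float) -
                                np.asarray(measured_xyz, float)))

def mean_calibration_difference(predicted, measured):
    """Average the metric over a set of measurements, e.g. the 27-point cube."""
    return float(np.mean([calibration_difference(p, m)
                          for p, m in zip(predicted, measured)]))

# A prediction 0.2 m off along one axis gives a 20 cm calibration-difference.
d = calibration_difference([1.0, 2.0, 0.5], [1.2, 2.0, 0.5])
```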

  15. Coupon Test of an Elbow Component by Using Vision-based Measurement System

    International Nuclear Information System (INIS)

    Kim, Sung Wan; Jeon, Bub Gyu; Choi, Hyoung Suk; Kim, Nam Sik

    2016-01-01

    Among the various methods to overcome this shortcoming, vision-based methods to measure the strain of a structure have been proposed, and many studies are being conducted on them. The vision-based measurement method is a noncontact method for measuring the displacement and strain of objects by comparing images before and after deformation. This method offers such advantages as no limitations on the surface condition, temperature, or shape of objects, the possibility of full-field measurement, and the possibility of mapping the distribution of stress or defects of structures based on the measured displacement and strain. In the coupon test, strains were measured with various image-based methods and the measurements were compared. In the future, the validity of the algorithm will be checked against strain gauge and clip gauge measurements, and based on the results, the physical properties of materials will be measured using a vision-based measurement system. This will contribute to the evaluation of the reliability and effectiveness required for investigating local damage.

  16. ROV-based Underwater Vision System for Intelligent Fish Ethology Research

    Directory of Open Access Journals (Sweden)

    Rui Nian

    2013-09-01

    Full Text Available Fish ethology is a promising discipline for ocean surveys. In this paper, an ROV-based system is established to perform underwater visual tasks with customized optical sensors installed. An image quality enhancement method is first presented in the context of underwater imaging models, combining homomorphic filtering and wavelet decomposition. The underwater vision system can further detect and track swimming fish in the resulting images with strategies based on curve evolution and particle filtering, in order to obtain a deeper understanding of fish behaviours. The simulation results show the excellent performance of the developed scheme with regard to both robustness and effectiveness.
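Homomorphic filtering, one half of the enhancement step above, can be sketched generically: work on the log of the image so that multiplicative, slowly varying illumination becomes additive, then attenuate low frequencies and boost high ones. The Gaussian high-frequency-emphasis transfer function and all parameter values here are common textbook choices, not the paper's model.

```python
import numpy as np

def homomorphic_filter(img, gamma_low=0.5, gamma_high=1.5, cutoff=0.1):
    """Illumination-reflectance correction sketch: take log(image), apply a
    high-frequency-emphasis filter in the Fourier domain (suppress uneven
    illumination, boost detail), then exponentiate back."""
    img = np.asarray(img, float) + 1e-6               # avoid log(0)
    F = np.fft.fftshift(np.fft.fft2(np.log(img)))
    rows, cols = img.shape
    u = (np.arange(rows) - rows / 2) / rows           # normalized frequencies
    v = (np.arange(cols) - cols / 2) / cols
    D2 = u[:, None] ** 2 + v[None, :] ** 2            # squared radial frequency
    H = (gamma_high - gamma_low) * (1 - np.exp(-D2 / (2 * cutoff ** 2))) + gamma_low
    out = np.real(np.fft.ifft2(np.fft.ifftshift(H * F)))
    return np.exp(out)
```

For a perfectly flat image the filter simply applies the DC gain `gamma_low` in the log domain (i.e. raises the image to that power); on real underwater frames the low-frequency illumination gradient is compressed while edges and texture are amplified.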

  17. An Automated Mouse Tail Vascular Access System by Vision and Pressure Feedback.

    Science.gov (United States)

    Chang, Yen-Chi; Berry-Pusey, Brittany; Yasin, Rashid; Vu, Nam; Maraglia, Brandon; Chatziioannou, Arion X; Tsao, Tsu-Chin

    2015-08-01

    This paper develops an automated vascular access system (A-VAS) with novel vision-based vein and needle detection methods and real-time pressure feedback for murine drug delivery. Mouse tail vein injection is a routine but critical step for preclinical imaging applications. Due to the small vein diameter and external disturbances such as tail hair, pigmentation, and scales, identifying vein location is difficult and manual injections usually result in poor repeatability. To improve the injection accuracy, consistency, safety, and processing time, A-VAS was developed to overcome difficulties in vein detection noise rejection, robustness in needle tracking, and visual servoing integration with the mechatronics system.

  18. An Application of Computer Vision Systems to Solve the Problem of Unmanned Aerial Vehicle Control

    Directory of Open Access Journals (Sweden)

    Aksenov Alexey Y.

    2014-09-01

    Full Text Available The paper considers an approach for applying computer vision systems to the problem of unmanned aerial vehicle control. Processing of images obtained through the onboard camera is required for absolute positioning of the aerial platform (automatic landing and take-off, hovering, etc.). The proposed method combines the advantages of existing systems and gives the ability to perform hovering over a given point as well as exact take-off and landing. The limitations of the implemented methods are determined, and an algorithm is proposed to combine them in order to improve efficiency.

  19. Generating Systems Biology Markup Language Models from the Synthetic Biology Open Language.

    Science.gov (United States)

    Roehner, Nicholas; Zhang, Zhen; Nguyen, Tramy; Myers, Chris J

    2015-08-21

    In the context of synthetic biology, model generation is the automated process of constructing biochemical models based on genetic designs. This paper discusses the use cases for model generation in genetic design automation (GDA) software tools and introduces the foundational concepts of standards and model annotation that make this process useful. Finally, this paper presents an implementation of model generation in the GDA software tool iBioSim and provides an example of generating a Systems Biology Markup Language (SBML) model from a design of a 4-input AND sensor written in the Synthetic Biology Open Language (SBOL).

  20. Synthetic-gauge-field-induced Dirac semimetal state in an acoustic resonator system

    Science.gov (United States)

    Yang, Zhaoju; Gao, Fei; Shi, Xihang; Zhang, Baile

    2016-12-01

    Recently, a proposal of a synthetic gauge field in a reduced two-dimensional (2D) system from a three-dimensional (3D) acoustic structure showed an analogue of the gapped Haldane model with fixed k_z, and achieved the gapless Weyl semimetal phase in 3D momentum space. Here, extending this approach of synthetic gauge flux, we propose a reduced square lattice of acoustic resonators which exhibits Dirac nodes with broken effective time-reversal symmetry. Protected by an additional hidden symmetry, these Dirac nodes with quantized values of topological charge are characterized by a nonzero winding number, and the finite structure exhibits flat edge modes that cannot be destroyed by perturbations.

  1. [New polymer-drug systems based on natural and synthetic polymers].

    Science.gov (United States)

    Racoviţă, Stefania; Vasiliu, Silvia; Foia, Liliana

    2010-01-01

    The great versatility of polymers makes them very useful in the biomedical and pharmaceutical fields. The combination of natural and synthetic polymers leads to new materials with tailored functional properties. The aim of this work is the preparation of new drug delivery systems based on chitosan (a natural polymer) and polybetaines (synthetic polymers) by a simple process, well known in the literature as the complex coacervation method. Adsorption and release studies of two antibiotics, as well as the preservation of their bactericidal capacities, were also performed.

  2. Energy System Analysis of Solid Oxide Electrolysis cells for Synthetic Fuel Production

    DEFF Research Database (Denmark)

    Ridjan, Iva; Mathiesen, Brian Vad; Connolly, David

    2013-01-01

    system by balancing and storing excess electricity is essential. One of the possible solutions is the use of electrolysers for the production of synthetic fuels based on carbon sources and hydrogen, providing a way to store electricity in the form of fuel that can be either used in other energy sectors...... that require high energy density fuels or reused for power generation. The purpose of this paper is to provide an overview of fuel production cost for two types of synthetic fuels – methanol and methane, and comparable costs of biodiesel, bioethanol and biogas....

  3. Acquisition And Processing Of Range Data Using A Laser Scanner-Based 3-D Vision System

    Science.gov (United States)

    Moring, I.; Ailisto, H.; Heikkinen, T.; Kilpela, A.; Myllyla, R.; Pietikainen, M.

    1988-02-01

    In our paper we describe a 3-D vision system designed and constructed at the Technical Research Centre of Finland in co-operation with the University of Oulu. The main application fields our 3-D vision system was developed for are geometric measurements of large objects and manipulator and robot control tasks. It also appears promising for automatic vehicle guidance applications. The system has now been operative for about one year and its performance has been extensively tested. Recently we have started a field test phase to evaluate its performance in real industrial tasks and environments. The system consists of three main units: the range finder, the scanner and the computer. The range finder is based on direct measurement of the time-of-flight of a laser pulse. The time interval between the transmitted and the received light pulses is converted into a continuous analog voltage, which is amplified, filtered and offset-corrected to produce the range information. The scanner consists of two mirrors driven by moving-iron galvanometers and controlled by servo amplifiers. The computer unit controls the scanner, transforms the measured coordinates into a Cartesian coordinate system, and serves as a user interface and postprocessing environment. Methods for segmenting the range image into a higher-level description have been developed; the description consists of planar and curved surfaces and their features and relations. Parametric surface representations based on the Ferguson surface patch are also studied.
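The two computations at the heart of such a system, pulsed time-of-flight ranging and the scan-angle-to-Cartesian transform performed by the computer unit, can be sketched as follows. The azimuth/elevation angle convention is an assumption for illustration; the paper does not specify the mirror geometry.

```python
import math

C = 299_792_458.0  # speed of light in m/s

def tof_range(round_trip_seconds):
    """Pulsed time-of-flight: the pulse travels out and back, so
    range = c * t / 2."""
    return C * round_trip_seconds / 2.0

def scan_to_cartesian(r, az, el):
    """Convert a range sample plus the two mirror deflection angles
    (azimuth and elevation, in radians) into Cartesian coordinates."""
    x = r * math.cos(el) * math.cos(az)
    y = r * math.cos(el) * math.sin(az)
    z = r * math.sin(el)
    return x, y, z

# A round trip of about 66.7 ns corresponds to roughly 10 m of range,
# which is why sub-centimetre ranging demands picosecond-level timing.
r = tof_range(20.0 / C)
```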

  4. Optimization of dynamic envelope measurement system for high speed train based on monocular vision

    Science.gov (United States)

    Wu, Bin; Liu, Changjie; Fu, Luhua; Wang, Zhong

    2018-01-01

    The dynamic envelope curve is the maximum limit outline swept out by the vehicle, under various adverse effects, during the running of the train, and it is an important basis for setting railway clearance boundaries. At present, the dynamic envelope curve of high-speed vehicles is mainly measured by binocular vision, and the present measuring systems suffer from poor portability, complicated processing and high cost. A new measurement system based on monocular vision measurement theory and an analysis of the test environment is designed in this paper, and the measurement system parameters, the calibration of the camera with a wide field of view, and the calibration of the laser plane are designed and optimized. The accuracy has been verified to be within 2 mm by repeated tests and experimental data analysis, validating the feasibility and adaptability of the measurement system. The system offers lower cost, a simpler measurement and data processing process, and more reliable data, and it needs no stereo matching algorithm.

  5. Frequency Stability Enhancement for Low Inertia Systems using Synthetic Inertia of Wind Power

    DEFF Research Database (Denmark)

    Nguyen, Ha Thi; Yang, Guangya; Nielsen, Arne Hejde

    2017-01-01

    -based system using a real-time digital simulator (RTDS) to propose the best one for the synthetic inertia controller. From the comparative simulation results, it can be concluded that the method using a combination of both the frequency deviation and derivative as input signals, and the under-frequency trigger

  6. A Unique Model Platform for C4 Plant Systems and Synthetic Biology

    Science.gov (United States)

    2015-12-10

    Agrobacterium-mediated transformation of Setaria viridis: Agrobacterium tumefaciens strain GV3101 was transformed by electroporation with pBI 121, leading to successful Agrobacterium-mediated transformation. SUBJECT TERMS: synthetic biology, systems biology.

  7. Performance Evaluation of a Synthetic Aperture Real-Time Ultrasound System

    DEFF Research Database (Denmark)

    Stuart, Matthias Bo; Tomov, Borislav Gueorguiev; Jensen, Jørgen Arendt

    2011-01-01

    This paper evaluates the signal-to-noise ratio, the time stability, and the phase difference of the sampling in the experimental ultrasound scanner SARUS: A synthetic aperture, real-time ultrasound system. SARUS has 1024 independent transmit and receive channels and is capable of handling 2D probes...

  8. Long-term energy output estimation for photovoltaic energy systems using synthetic solar irradiation data

    International Nuclear Information System (INIS)

    Celik, A.N.

    2003-01-01

    A general methodology is presented to estimate the monthly average daily energy output from photovoltaic energy systems. Energy output is estimated from synthetically generated solar radiation data. The synthetic solar radiation data are generated based on the cumulative frequency distribution of the daily clearness index, given as a function of the monthly clearness index. Two sets of synthetic solar irradiation data are generated: 3-day and 4-day months. In the 3-day month, each month is represented by 3 days; in the 4-day month, by 4 days. The 3- and 4-day solar irradiation data are synthetically generated for each month and the corresponding energy outputs are calculated. A total of eight years of measured hourly solar irradiation data, from five different locations in the world, is used to validate the new model. The monthly energy output values calculated from the synthetic solar irradiation data are compared to those calculated from the measured hour-by-hour data. It is shown that when measured solar radiation data do not exist for a particular location, or when a reduced data set is advantageous, the energy output from photovoltaic converters can still be correctly calculated.
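The idea of representing a month by a few days drawn from the clearness-index distribution can be sketched as below. This is a simplified stand-in, not the paper's generation scheme: here the representative days are taken as quantiles of an observed set of daily clearness indices, whereas the paper generates them from a cumulative frequency distribution parameterized by the monthly clearness index. All function names and values are illustrative.

```python
import numpy as np

def representative_days(daily_kt, n_days=3):
    """Pick n representative daily clearness indices from a month's
    cumulative frequency distribution by sampling the midpoints of
    n equal-probability bands."""
    quantiles = (np.arange(n_days) + 0.5) / n_days
    return np.quantile(np.asarray(daily_kt, float), quantiles)

def monthly_energy(kt_days, h_extraterrestrial, pv_area, pv_eff, days_in_month=30):
    """Monthly PV output: mean daily irradiation (clearness index times
    extraterrestrial irradiation, kWh/m^2/day) times collector area,
    conversion efficiency and number of days."""
    h_daily = np.mean(np.asarray(kt_days, float)) * h_extraterrestrial
    return float(h_daily * pv_area * pv_eff * days_in_month)

# Three representative days for a 3-day month, then the month's energy.
kt3 = representative_days(np.linspace(0.0, 1.0, 101), n_days=3)
e_month = monthly_energy(kt3, 10.0, 1.0, 0.1)
```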

  9. An improved fuzzy synthetic condition assessment of a wind turbine generator system

    DEFF Research Database (Denmark)

    Li, H.; Hu, Y. G.; Yang, Chao

    2013-01-01

    This paper presents an improved fuzzy synthetic model based on a real-time condition assessment method for a grid-connected wind turbine generator system (WTGS), intended to improve operational reliability and optimize the maintenance strategy. First, a condition assessment framework is proposed by analyzing the monitoring data of the WTGS. An improved fuzzy synthetic condition assessment method is then proposed that utilizes the concepts of deterioration degree, dynamic limit values and variable-weight calculation of the assessment indices. Finally, using on-line monitoring data of an actual 850 kW WTGS, real-time condition assessments are performed with the proposed fuzzy synthetic method, and the model's effectiveness is compared to a traditional fuzzy assessment method in which constant limit values and constant weights are adopted. The results show that the condition
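The building blocks named above, deterioration degree and variable weights, can be sketched generically. The paper's exact formulas are not reproduced here; the linear deterioration degree and the exponential penalty-type variable-weight form below are common simple choices, and all names and parameters are assumptions for illustration.

```python
import numpy as np

def deterioration_degree(value, normal, limit):
    """0 when the monitored index is at its normal value, 1 at its limit,
    clipped to [0, 1] outside that range."""
    return float(np.clip((value - normal) / (limit - normal), 0.0, 1.0))

def variable_weights(degrees, base_weights, beta=2.0):
    """Penalty-type variable weights (one simple form): an index's weight
    grows exponentially with its deterioration degree, then all weights
    are renormalised to sum to 1."""
    w = np.asarray(base_weights, float) * np.exp(beta * np.asarray(degrees, float))
    return w / w.sum()

def fuzzy_condition_score(degrees, base_weights):
    """Weighted aggregate deterioration; values near 0 indicate a healthy
    condition, values near 1 a severely deteriorated one."""
    w = variable_weights(degrees, base_weights)
    return float(w @ np.asarray(degrees, float))
```

The point of the variable-weight step is that a single badly deteriorated index dominates the aggregate score instead of being averaged away by many healthy indices.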

  10. Erythrocytes-based synthetic delivery systems: transition from conventional to novel engineering strategies.

    Science.gov (United States)

    Bhateria, Manisha; Rachumallu, Ramakrishna; Singh, Rajbir; Bhatta, Rabi Sankar

    2014-08-01

    Erythrocytes (red blood cells [RBCs]) and artificial or synthetic delivery systems such as liposomes and nanoparticles (NPs) are the most investigated carrier systems. Herein, the progress made from the conventional approach of using RBCs as delivery systems to the novel approach of using synthetic delivery systems based on RBC properties is reviewed. We aim to highlight both conventional and novel approaches of using RBCs as a potential carrier system. The conventional approaches include two main strategies: i) directly loading therapeutic moieties into RBCs; and ii) coupling them with RBCs. The novel approaches exploit the structural, mechanical and biological properties of RBCs to design synthetic delivery systems through various engineering strategies. Initial attempts included coupling antibodies to liposomes to specifically target RBCs. Knowledge obtained from several studies led to the development of RBC-membrane-derived liposomes (nanoerythrosomes), inspiring the future application of RBCs or their structural features in other attractive delivery systems (hydrogels, filomicelles, microcapsules, micro- and nanoparticles) for even greater potential. In conclusion, this review offers a comparative analysis of various conventional and novel engineering strategies in developing RBC-based drug delivery systems, diversifying their applications in the arena of drug delivery. Regardless of the challenges in front of us, RBC-based delivery systems offer an exciting approach to exploiting biological entities in a multitude of medical applications.

  11. Systems-Level Synthetic Biology for Advanced Biofuel Production

    Energy Technology Data Exchange (ETDEWEB)

    Ruffing, Anne [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Jensen, Travis J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Strickland, Lucas Marshall [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Meserole, Stephen [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Tallant, David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-03-01

    Cyanobacteria have been shown to be capable of producing a variety of advanced biofuels; however, product yields remain well below those necessary for large scale production. New genetic tools and high throughput metabolic engineering techniques are needed to optimize cyanobacterial metabolisms for enhanced biofuel production. Towards this goal, this project advances the development of a multiple promoter replacement technique for systems-level optimization of gene expression in a model cyanobacterial host: Synechococcus sp. PCC 7002. To realize this multiple-target approach, key capabilities were developed, including a high throughput detection method for advanced biofuels, enhanced transformation efficiency, and genetic tools for Synechococcus sp. PCC 7002. Moreover, several additional obstacles were identified for realization of this multiple promoter replacement technique. The techniques and tools developed in this project will help to enable future efforts in the advancement of cyanobacterial biofuels.

  12. Genome-scale engineering for systems and synthetic biology

    Science.gov (United States)

    Esvelt, Kevin M; Wang, Harris H

    2013-01-01

    Genome-modification technologies enable the rational engineering and perturbation of biological systems. Historically, these methods have been limited to gene insertions or mutations at random or at a few pre-defined locations across the genome. The handful of methods capable of targeted gene editing suffered from low efficiencies, significant labor costs, or both. Recent advances have dramatically expanded our ability to engineer cells in a directed and combinatorial manner. Here, we review current technologies and methodologies for genome-scale engineering, discuss the prospects for extending efficient genome modification to new hosts, and explore the implications of continued advances toward the development of flexibly programmable chasses, novel biochemistries, and safer organismal and ecological engineering. PMID:23340847

  13. Quality detection system and method of micro-accessory based on microscopic vision

    Science.gov (United States)

    Li, Dongjie; Wang, Shiwei; Fu, Yu

    2017-10-01

    Traditional manual inspection of micro-accessories suffers from heavy workload, low efficiency and large operator error, so a machine-based quality inspection system for micro-accessories has been designed. Micro-vision technology is used to inspect quality, which optimizes the structure of the detection system. A stepper motor drives a rotating micro-platform that transfers the device under inspection, and a microscopic vision system captures graphic information of the micro-accessory. The system combines image processing and pattern matching, a variable-scale Sobel differential edge detection algorithm and an improved Zernike-moments sub-pixel edge detection algorithm to achieve more detailed and accurate detection of defect edges. The proposed system accurately extracts edges even from complex gray-level signals and can then distinguish qualified from unqualified products with high recognition precision.
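
    The abstract's edge-detection pipeline builds on the classical Sobel operator. Below is a minimal sketch of a plain (fixed-scale) Sobel gradient-magnitude detector in pure Python; the variable-scale and Zernike sub-pixel refinements of the paper are not reproduced, and the test image is invented.

```python
import math

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(img):
    """Return the gradient-magnitude map of a 2-D grayscale image (list of lists)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = gy = 0.0
            for j in range(3):
                for i in range(3):
                    p = img[y + j - 1][x + i - 1]
                    gx += SOBEL_X[j][i] * p
                    gy += SOBEL_Y[j][i] * p
            out[y][x] = math.hypot(gx, gy)  # combined gradient magnitude
    return out

# A vertical step edge: the magnitude peaks along the boundary columns.
img = [[0, 0, 0, 255, 255, 255] for _ in range(6)]
mag = sobel_magnitude(img)
```

    A defect edge would then be localized where `mag` exceeds a threshold, after which a sub-pixel method refines the position.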

  14. Development of real-time radiation exposure dosimetry system using synthetic ruby for interventional radiology

    International Nuclear Information System (INIS)

    Hosokai, Yoshiyuki; Win, Thet Pe; Muroi, Kenzo; Matsumoto, Kenki; Takahashi, Kaito; Usui, Akihito; Saito, Haruo; Kozakai, Masataka

    2017-01-01

    Interventional radiology (IVR) tends to involve long procedures, consequently delivering high radiation doses to the patient. Radiation-induced injuries caused by these high doses are a considerable problem for those performing IVR. For example, skin injuries such as erythema can occur if the skin is exposed to radiation doses beyond the threshold level of 2 Gy. One reason for this type of injury is that the local skin dose cannot be monitored in real time. Although there are systems employed to measure the exposure dose, some do not work in real time (such as thermoluminescence dosimeters and fluorescent glass dosimeters), while certain real-time measurement systems that enter the field of view (such as patient skin dosimeters and dosimeters using a nontoxic phosphor) interfere with IVR. However, synthetic ruby has been shown to emit light in response to radiation, with a luminous wavelength of 693 nm, making it possible to monitor the radiation dose by detecting the emitted light. Small synthetic rubies emit a tiny amount of light that is difficult to detect using common systems such as photodiodes, whereas a synthetic ruby large enough to increase the quantity of emitted light would enter the field of view and interfere with the IVR procedure. Additionally, although a photodiode system could reduce the system size, the data are susceptible to effects from the X-rays and the outside temperature. Therefore, a sensitive photon counting system as used in nuclear medicine could potentially be beneficial in detecting the weak light signal. A real-time radiation exposure dosimetry system for use in IVR should be sufficiently sensitive, not interfere with the IVR procedure, and ideally have the possibility of development into a system that can provide simultaneous multipoint measurements. This article discusses the development of a real-time radiation exposure dosimetry system for use in IVR that employs a small
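
    The core real-time step such a photon-counting dosimeter performs is converting a count stream into a running dose and flagging the 2 Gy erythema threshold. The sketch below assumes a simple linear calibration; the calibration factor and count values are invented placeholders, not the authors' figures.

```python
# Hypothetical calibration: Gy per detected 693 nm photon (invented value).
GY_PER_COUNT = 2.5e-7
SKIN_ERYTHEMA_THRESHOLD_GY = 2.0  # threshold dose cited in the abstract

def accumulate_dose(counts_per_interval):
    """Yield the cumulative dose (Gy) after each counting interval."""
    total = 0.0
    for n in counts_per_interval:
        total += n * GY_PER_COUNT
        yield total

# Three counting intervals from the photon counter (invented data).
doses = list(accumulate_dose([5_000_000, 5_000_000, 2_000_000]))
over = [d >= SKIN_ERYTHEMA_THRESHOLD_GY for d in doses]
```

    A real system would calibrate `GY_PER_COUNT` against a reference dosimeter and correct for temperature and geometry.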

  15. Experiments with the Mesoscale Atmospheric Simulation System (MASS) using the synthetic relative humidity

    Science.gov (United States)

    Chang, Chia-Bo

    1994-01-01

    This study is intended to examine the impact of synthetic relative humidity on the model simulation of a mesoscale convective storm environment. The synthetic relative humidity is derived from National Weather Service surface observations and non-conventional sources including aircraft, radar, and satellite observations; the latter sources provide mesoscale data of very high spatial and temporal resolution. The synthetic humidity data are used to complement the National Weather Service rawinsonde observations. It is believed that a realistic representation of the initial moisture field in a mesoscale model is critical for the model simulation of thunderstorm development and the formation of non-convective clouds, as well as their effects on the surface energy budget. The impact will be investigated through a real-data case study using the Mesoscale Atmospheric Simulation System (MASS) developed by Mesoscale Environmental Simulations Operations, Inc. MASS consists of objective analysis and initialization codes and coarse-mesh and fine-mesh dynamic prediction models. Both models are three-dimensional, primitive-equation models containing the essential moist physics for simulating and forecasting mesoscale convective processes in the atmosphere. The modeling system is currently implemented at the Applied Meteorology Unit, Kennedy Space Center. Two procedures involving the synthetic relative humidity to define the model initial moisture fields are considered. It is proposed to perform several short-range (approximately 6 hours) comparative coarse-mesh simulation experiments with and without the synthetic data. They are aimed at revealing model sensitivities, which should allow us both to refine the specification of the observational requirements and to develop more accurate and efficient objective analysis schemes. The goal is to advance the MASS modeling expertise so that the model
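
    The objective-analysis step that blends sparse rawinsonde humidity with denser synthetic observations can be sketched as a single Cressman-type correction pass. The influence radius, first-guess value, and observations below are invented for illustration; MASS's actual analysis scheme is more elaborate.

```python
def cressman_correct(first_guess, obs, radius):
    """Correct a first-guess RH (%) at one grid point toward nearby observations.

    obs: list of (distance_km, rh_percent) pairs; radius: influence radius in km.
    """
    num = den = 0.0
    r2 = radius * radius
    for d, rh in obs:
        d2 = d * d
        if d2 >= r2:
            continue  # observation outside the influence radius is ignored
        w = (r2 - d2) / (r2 + d2)  # classic Cressman weight
        num += w * (rh - first_guess)
        den += w
    return first_guess if den == 0 else first_guess + num / den

# Dense "synthetic" RH observations near the point pull a 50% first guess upward;
# the distant observation (300 km) falls outside the 150 km radius.
rh = cressman_correct(50.0, obs=[(20.0, 80.0), (60.0, 70.0), (300.0, 10.0)],
                      radius=150.0)
```

    Successive passes with shrinking radii would recover progressively finer-scale moisture structure.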

  16. Visual system plasticity in mammals: the story of monocular enucleation-induced vision loss

    Science.gov (United States)

    Nys, Julie; Scheyltjens, Isabelle; Arckens, Lutgarde

    2015-01-01

    The groundbreaking work of Hubel and Wiesel in the 1960s on ocular dominance plasticity instigated many studies of the visual system of mammals, enriching our understanding of how the development of its structure and function depends on high quality visual input through both eyes. These studies have mainly employed lid suturing, dark rearing and eye patching applied to different species to reduce or impair visual input, and have created extensive knowledge on binocular vision. However, not all aspects and types of plasticity in the visual cortex have been covered in full detail. In that regard, a more drastic deprivation method like enucleation, leading to complete vision loss, appears useful as it has more widespread effects on the afferent visual pathway and even on non-visual brain regions. One-eyed vision due to monocular enucleation (ME) profoundly affects the contralateral retinorecipient subcortical and cortical structures, thereby creating a powerful means to investigate cortical plasticity phenomena in which binocular competition has no vote. In this review, we will present current knowledge about the specific application of ME as an experimental tool to study visual and cross-modal brain plasticity, comparing early postnatal stages through adulthood. The structural and physiological consequences of this type of extensive sensory loss as documented and studied in several animal species and human patients will be discussed. We will summarize how ME studies have been instrumental to our current understanding of the differentiation of sensory systems and how the structure and function of cortical circuits in mammals are shaped in response to such an extensive alteration in experience. In conclusion, we will highlight future perspectives and the clinical relevance of adding ME to the list of more longstanding deprivation models in visual system research. PMID:25972788

  17. Development of yarn breakage detection software system based on machine vision

    Science.gov (United States)

    Wang, Wenyuan; Zhou, Ping; Lin, Xiangyu

    2017-10-01

    Yarn breakage in spinning mills is often not detected in a timely manner, which raises costs for textile enterprises. This paper presents a software system based on computer vision for real-time detection of yarn breakage. The system uses a Windows 8.1 tablet PC and a cloud server to perform yarn-breakage detection and management. The software running on the tablet PC collects yarn and location information for analysis and processing; the processed information is then sent over Wi-Fi via the HTTP protocol to the cloud server and stored in a Microsoft SQL Server 2008 database for follow-up query and management of yarn-break information. Finally, results are sent to the local display in a timely manner to remind the operator to deal with the broken yarn. The experimental results show that the system's missed-detection rate is not more than 5‰, with no false detections.
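
    The tablet-to-cloud reporting path the abstract describes (Wi-Fi, HTTP, SQL storage) amounts to posting a small JSON event per detected break. A minimal sketch follows; the field names, endpoint URL, and values are invented placeholders, not the paper's actual protocol.

```python
import json
import urllib.request

def breakage_event(spindle_id, position_mm, timestamp):
    """Serialize one yarn-break detection as a JSON payload (hypothetical schema)."""
    return json.dumps({"spindle": spindle_id,
                       "position_mm": position_mm,
                       "ts": timestamp,
                       "event": "yarn_break"}).encode("utf-8")

def post_event(payload, url="http://example.invalid/api/breaks"):
    """POST the payload to the cloud server (network call; not executed here)."""
    req = urllib.request.Request(url, data=payload,
                                 headers={"Content-Type": "application/json"})
    return urllib.request.urlopen(req)

payload = breakage_event(spindle_id=17, position_mm=412.5,
                         timestamp="2017-10-01T08:30:00")
```

    On the server side, each decoded payload would map to one row in the SQL database for later query and management.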

  18. Gesture Therapy: A Vision-Based System for Arm Rehabilitation after Stroke

    Science.gov (United States)

    Sucar, L. Enrique; Azcárate, Gildardo; Leder, Ron S.; Reinkensmeyer, David; Hernández, Jorge; Sanchez, Israel; Saucedo, Pedro

    Each year millions of people in the world survive a stroke; in the U.S. alone the figure is over 600,000 people per year. Movement impairments after stroke are typically treated with intensive, hands-on physical and occupational therapy for several weeks after the initial injury. However, due to economic pressures, stroke patients are receiving less therapy and going home sooner, so the potential benefit of the therapy is not completely realized. Thus, it is important to develop rehabilitation technology that allows individuals who have suffered a stroke to practice intensive movement training without the expense of an always-present therapist. Current solutions are too expensive, as they require a robotic system for rehabilitation. We have developed a low-cost, computer vision system that allows individuals with stroke to practice arm movement exercises at home or at the clinic, with periodic interactions with a therapist. The system integrates a web-based virtual environment for facilitating repetitive movement training with state-of-the-art computer vision algorithms that track the hand of a patient and obtain its 3-D coordinates, using two inexpensive cameras and a conventional personal computer. An initial prototype of the system has been evaluated in a pilot clinical study with promising results.

  19. Diagnosis System for Diabetic Retinopathy and Glaucoma Screening to Prevent Vision Loss

    Directory of Open Access Journals (Sweden)

    Siva Sundhara Raja DHANUSHKODI

    2014-03-01

    Full Text Available Aim: Diabetic retinopathy (DR) and glaucoma are two of the most common retinal disorders and major causes of blindness in diabetic patients. DR appears in retinal images as damage to the retinal blood vessels, which leads to the formation of hemorrhages spread over the entire region of the retina. Glaucoma is caused by hypertension in diabetic patients. Both DR and glaucoma lead to vision loss in diabetic patients. Hence, a computer-aided diagnosis system for diabetic retinopathy and glaucoma screening is proposed in this paper to prevent vision loss. Method: The diagnosis system for DR consists of two stages, namely detection and segmentation of the fovea and hemorrhages. The diagnosis system for glaucoma screening consists of three stages, namely blood vessel segmentation, extraction of the optic disc (OD) and optic cup (OC) regions, and determination of the rim area between the OD and OC. Results: The specificity and accuracy of hemorrhage detection are found to be 98.47% and 98.09%, respectively. The accuracy of OD detection is found to be 99.3%. This outperforms state-of-the-art methods. Conclusion: In this paper, a diagnosis system is developed to classify DR and glaucoma screening results into mild, moderate and severe grades.

  20. Invention and Application of Synthetic Experiment System of Machine Equipment Fault Diagnosis

    Directory of Open Access Journals (Sweden)

    Hong-Yu LIU

    2014-10-01

    Full Text Available Many kinds of faults arise during the operation of machine equipment, and diagnosing them accurately is of great significance in actual production. The invention and manufacture of the synthetic experiment system for machine equipment fault diagnosis filled a gap in this kind of experimental equipment in China and obtained a national utility model patent. With the synthetic experiment system's motor speed regulation system, machine equipment fault imitation system, measuring and monitoring system, and analysis and diagnosis system, students can regulate motor speed arbitrarily, imitate many kinds of machine part faults, collect acceleration, speed, displacement, force and temperature signals, and perform many kinds of time-domain, frequency-domain and graphical analyses. The application of the synthetic experiment system in our university's teaching practice has achieved good results in fostering professional competence in the measurement, monitoring and fault diagnosis of machine equipment. If the system software is installed on a portable computer, the user can conveniently carry out measurement, monitoring, signal processing and fault diagnosis on many kinds of field machine equipment. In this paper, a three-dimensional waterfall spectrum matrix analysis was performed on two meshing gears, an energy attenuation analysis was performed on the vibration signal, and a wavelet analysis was performed on a bearing fault.

  1. ROAD INTERPRETATION FOR DRIVER ASSISTANCE BASED ON AN EARLY COGNITIVE VISION SYSTEM

    DEFF Research Database (Denmark)

    Baseski, Emre; Jensen, Lars Baunegaard With; Pugeault, Nicolas

    2009-01-01

    In this work, we address the problem of road interpretation for driver assistance based on an early cognitive vision system. The structure of a road and the relevant traffic are interpreted in terms of ego-motion estimation of the car, independently moving objects on the road, lane markers and large...... scale maps of the road. We make use of temporal and spatial disambiguation mechanisms to increase the reliability of visually extracted 2D and 3D information. This information is then used to interpret the layout of the road by using lane markers that are detected via Bayesian reasoning. We also...

  2. The use of contact lens telescopic systems in low vision rehabilitation.

    Science.gov (United States)

    Vincent, Stephen J

    2017-06-01

    Refracting telescopes are afocal compound optical systems consisting of two lenses that produce an apparent magnification of the retinal image. They are routinely used in visual rehabilitation in the form of monocular or binocular hand held low vision aids, and head or spectacle-mounted devices to improve distance visual acuity, and with slight modifications, to enhance acuity for near and intermediate tasks. Since the advent of ground glass haptic lenses in the 1930s, contact lenses have been employed as a useful refracting element of telescopic systems; primarily as a mobile ocular lens (the eyepiece), that moves with the eye. Telescopes which incorporate a contact lens eyepiece significantly improve the weight, cosmesis, and field of view compared to traditional spectacle-mounted telescopes, in addition to potential related psycho-social benefits. This review summarises the underlying optics and use of contact lenses to provide telescopic magnification from the era of Descartes, to Dallos, and the present day. The limitations and clinical challenges associated with such devices are discussed, along with the potential future use of reflecting telescopes incorporated within scleral lenses and tactile contact lens systems in low vision rehabilitation. Copyright © 2017 British Contact Lens Association. Published by Elsevier Ltd. All rights reserved.

  3. Low Vision

    Science.gov (United States)


  4. A novel vision-based mold monitoring system in an environment of intense vibration

    International Nuclear Information System (INIS)

    Hu, Fen; He, Zaixing; Zhao, Xinyue; Zhang, Shuyou

    2017-01-01

    Mold monitoring has been more and more widely used in the modern manufacturing industry, especially when based on machine vision, but these systems cannot meet the detection speed and accuracy requirements for mold monitoring because they must operate in environments that exhibit intense vibration during production. To ensure that the system runs accurately and efficiently, we propose a new descriptor that combines the geometric relationship-based global context feature and the local scale-invariant feature transform for the image registration step of the mold monitoring system. The experimental results of four types of molds showed that the detection accuracy of the mold monitoring system is improved in the environment with intense vibration. (paper)

  5. A novel vision-based mold monitoring system in an environment of intense vibration

    Science.gov (United States)

    Hu, Fen; He, Zaixing; Zhao, Xinyue; Zhang, Shuyou

    2017-10-01

    Mold monitoring has been more and more widely used in the modern manufacturing industry, especially when based on machine vision, but these systems cannot meet the detection speed and accuracy requirements for mold monitoring because they must operate in environments that exhibit intense vibration during production. To ensure that the system runs accurately and efficiently, we propose a new descriptor that combines the geometric relationship-based global context feature and the local scale-invariant feature transform for the image registration step of the mold monitoring system. The experimental results of four types of molds showed that the detection accuracy of the mold monitoring system is improved in the environment with intense vibration.

  6. A simple machine vision-driven system for measuring optokinetic reflex in small animals.

    Science.gov (United States)

    Shirai, Yoshihiro; Asano, Kenta; Takegoshi, Yoshihiro; Uchiyama, Shu; Nonobe, Yuki; Tabata, Toshihide

    2013-09-01

    The optokinetic reflex (OKR) is useful to monitor the function of the visual and motor nervous systems. However, OKR measurement is not open to all because dedicated commercial equipment or detailed instructions for building in-house equipment is rarely offered. Here we describe the design of an easy-to-install/use yet reliable OKR measuring system including a computer program to visually locate the pupil and a mathematical procedure to estimate the pupil azimuth from the location data. The pupil locating program was created on a low-cost machine vision development platform, whose graphical user interface allows one to compose and operate the program without programming expertise. Our system located mouse pupils at a high success rate (~90 %), estimated their azimuth precisely (~94 %), and detected changes in OKR gain due to the pharmacological modulation of the cerebellar flocculi. The system would promote behavioral assessment in physiology, pharmacology, and genetics.
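
    The "mathematical procedure to estimate the pupil azimuth from the location data" can be sketched geometrically: assuming the eyeball is a sphere whose rotation center sits a known effective radius behind the pupil, the azimuth follows from the horizontal image-plane offset. The radius value, pixel scale, and small-rotation model below are illustrative assumptions, not the authors' calibration.

```python
import math

def pupil_azimuth_deg(x_px, x_center_px, px_per_mm, eye_radius_mm=1.7):
    """Azimuth (deg) from horizontal pupil offset; 1.7 mm is a rough mouse eye radius."""
    dx_mm = (x_px - x_center_px) / px_per_mm  # offset in millimetres
    s = max(-1.0, min(1.0, dx_mm / eye_radius_mm))  # clamp before arcsine
    return math.degrees(math.asin(s))

# Pupil located 12 px right of its rest position at 20 px/mm image scale.
az = pupil_azimuth_deg(x_px=112.0, x_center_px=100.0, px_per_mm=20.0)
```

    OKR gain would then be computed as the ratio of eye-azimuth velocity to stimulus velocity over a stimulus cycle.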

  7. Sender-receiver systems and applying information theory for quantitative synthetic biology.

    Science.gov (United States)

    Barcena Menendez, Diego; Senthivel, Vivek Raj; Isalan, Mark

    2015-02-01

    Sender-receiver (S-R) systems abound in biology, with communication systems sending information in various forms. Information theory provides a quantitative basis for analysing these processes and is being applied to study natural genetic, enzymatic and neural networks. Recent advances in synthetic biology are providing us with a wealth of artificial S-R systems, giving us quantitative control over networks with a finite number of well-characterised components. Combining the two approaches can help to predict how to maximise signalling robustness, and will allow us to make increasingly complex biological computers. Ultimately, pushing the boundaries of synthetic biology will require moving beyond engineering the flow of information and towards building more sophisticated circuits that interpret biological meaning. Copyright © 2014. Published by Elsevier Ltd.
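
    The basic quantity information theory contributes to analysing a sender-receiver circuit is the mutual information I(S;R) between sent and received states. Below is a minimal sketch for a discrete channel; the toy two-state inducer/reporter joint distributions and their crosstalk probabilities are invented for illustration.

```python
import math

def mutual_information(joint):
    """I(S;R) in bits for a joint probability table joint[s][r]."""
    ps = [sum(row) for row in joint]            # sender marginal
    pr = [sum(col) for col in zip(*joint)]      # receiver marginal
    mi = 0.0
    for s, row in enumerate(joint):
        for r, p in enumerate(row):
            if p > 0:
                mi += p * math.log2(p / (ps[s] * pr[r]))
    return mi

noiseless = [[0.5, 0.0], [0.0, 0.5]]  # receiver mirrors sender: 1 bit
noisy = [[0.4, 0.1], [0.1, 0.4]]      # 20% crosstalk: strictly less than 1 bit
```

    Maximising this quantity over input distributions gives the channel capacity, the natural figure of merit for signalling robustness in an engineered S-R pair.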

  8. Vision Lab

    Data.gov (United States)

    Federal Laboratory Consortium — The Vision Lab personnel perform research, development, testing and evaluation of eye protection and vision performance. The lab maintains and continues to develop...

  9. Computer vision for an autonomous mobile robot

    CSIR Research Space (South Africa)

    Withey, Daniel J

    2015-10-01

    Full Text Available Computer vision systems are essential for practical, autonomous, mobile robots – machines that employ artificial intelligence and control their own motion within an environment. As with biological systems, computer vision systems include the vision...

  10. Cosmo Cassette: A Microfluidic Microgravity Microbial System For Synthetic Biology Unit Tests and Satellite Missions

    Science.gov (United States)

    Berliner, Aaron J.

    2013-01-01

    Although methods in the design-build-test life cycle of the synthetic biology field have grown rapidly, the expansion has been non-uniform. The design and build stages in development have seen innovations in the form of biological CAD and more efficient means for building DNA, RNA, and other biological constructs. The testing phase of the cycle remains in need of innovation. Presented will be both a theoretical abstraction of biological measurement and a practical demonstration of a microfluidics-based platform for characterizing synthetic biological phenomena. Such a platform demonstrates the use of additive manufacturing (3D printing) for construction of a microbial fuel cell (MFC) to be used in experiments carried out in space. First, the biocompatibility of the polypropylene chassis will be demonstrated. The novel MFCs will be cheaper and faster to make, allowing rapid iteration through designs. The novel design will contain a manifold switching/distribution system and an integrated in-chip set of reagent reservoirs fabricated via 3D printing. The automated nature of the 3D printing lends itself to higher-resolution switching valves and leads to smaller payloads, lower cost, reduced power and a standardized platform for synthetic biology unit tests on Earth and in space. It will be demonstrated that the application of unit testing in synthetic biology will lead to the automatic construction and validation of desired constructs. Unit testing methodologies offer benefits of preemptive problem identification, change of facility, simplicity of integration, ease of documentation, separation of interface from implementation, and automated design.

  11. Rapid, computer vision-enabled murine screening system identifies neuropharmacological potential of two new mechanisms

    Directory of Open Access Journals (Sweden)

    Steven L Roberds

    2011-09-01

    Full Text Available The lack of predictive in vitro models for behavioral phenotypes impedes rapid advancement in neuropharmacology and psychopharmacology. In vivo behavioral assays are more predictive of activity in human disorders, but such assays are often highly resource-intensive. Here we describe the successful application of a computer vision-enabled system to identify potential neuropharmacological activity of two new mechanisms. The analytical system was trained using multiple drugs that are used clinically to treat depression, schizophrenia, anxiety, and other psychiatric or behavioral disorders. During blinded testing the PDE10 inhibitor TP-10 produced a signature of activity suggesting potential antipsychotic activity. This finding is consistent with TP-10’s activity in multiple rodent models that is similar to that of clinically used antipsychotic drugs. The CK1ε inhibitor PF-670462 produced a signature consistent with anxiolytic activity and, at the highest dose tested, behavioral effects similar to that of opiate analgesics. Neither TP-10 nor PF-670462 was included in the training set. Thus, computer vision-based behavioral analysis can facilitate drug discovery by identifying neuropharmacological effects of compounds acting through new mechanisms.

  12. Inverse Modeling of Human Knee Joint Based on Geometry and Vision Systems for Exoskeleton Applications

    Directory of Open Access Journals (Sweden)

    Eduardo Piña-Martínez

    2015-01-01

    Full Text Available Current trends in Robotics aim to close the gap that separates technology and humans, bringing novel robotic devices in order to improve human performance. Although robotic exoskeletons represent a breakthrough in mobility enhancement, there are design challenges related to the forces exerted on the users' joints, which can result in severe injuries. This occurs because most current developments treat the joints as invariant (fixed) rotational axes, whereas real joint axes migrate during motion. This paper proposes the use of commercial vision systems in order to perform biomimetic joint design for robotic exoskeletons. This work proposes a kinematic model based on irregularly shaped cams as the joint mechanism that emulates the bone-to-bone joints in the human body. The paper follows a geometric approach for determining the location of the instantaneous center of rotation in order to design the cam contours. Furthermore, the use of a commercial vision system is proposed as the main measurement tool due to its noninvasive feature and for allowing subjects under measurement to move freely. The application of this method yielded relevant information about the displacements of the instantaneous center of rotation at the human knee joint.

  13. Cost-Effective Video Filtering Solution for Real-Time Vision Systems

    Directory of Open Access Journals (Sweden)

    Karl Martin

    2005-08-01

    Full Text Available This paper presents an efficient video filtering scheme and its implementation in a field-programmable logic device (FPLD. Since the proposed nonlinear, spatiotemporal filtering scheme is based on order statistics, its efficient implementation benefits from a bit-serial realization. The utilization of both the spatial and temporal correlation characteristics of the processed video significantly increases the computational demands on this solution, and thus, implementation becomes a significant challenge. Simulation studies reported in this paper indicate that the proposed pipelined bit-serial FPLD filtering solution can achieve speeds of up to 97.6 Mpixels/s and consumes 1700 to 2700 logic cells for the speed-optimized and area-optimized versions, respectively. Thus, the filter area represents only 6.6 to 10.5% of the Altera STRATIX EP1S25 device available on the Altera Stratix DSP evaluation board, which has been used to implement a prototype of the entire real-time vision system. As such, the proposed adaptive video filtering scheme is both practical and attractive for real-time machine vision and surveillance systems as well as conventional video and multimedia applications.
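
    The order-statistic idea behind the filter can be illustrated in software: a 3×3×3 spatiotemporal median over three consecutive frames. The FPLD design computes the same order statistic bit-serially in hardware; this sketch is only the reference (software) behaviour, with invented frame data.

```python
def st_median(frames, t, y, x):
    """Median over the 3x3x3 spatiotemporal neighbourhood of pixel (t, y, x)."""
    vals = [frames[t + dt][y + dy][x + dx]
            for dt in (-1, 0, 1) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    return sorted(vals)[len(vals) // 2]  # the 14th of 27 ordered samples

# Three flat frames with one impulse-noise pixel in the middle frame:
frames = [[[10] * 3 for _ in range(3)] for _ in range(3)]
frames[1][1][1] = 255   # shot noise
clean = st_median(frames, 1, 1, 1)
```

    Exploiting both spatial and temporal neighbours is what removes impulse noise while preserving static detail, at the cost of the higher computational load the paper addresses in hardware.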

  14. A Vision-Based System for Intelligent Monitoring: Human Behaviour Analysis and Privacy by Context

    Directory of Open Access Journals (Sweden)

    Alexandros Andre Chaaraoui

    2014-05-01

    Full Text Available Due to progress and demographic change, society is facing a crucial challenge related to increased life expectancy and a higher number of people in situations of dependency. As a consequence, there exists a significant demand for support systems for personal autonomy. This article outlines the vision@home project, whose goal is to extend independent living at home for elderly and impaired people, providing care and safety services by means of vision-based monitoring. Different kinds of ambient-assisted living services are supported, from the detection of home accidents, to telecare services. In this contribution, the specification of the system is presented, and novel contributions are made regarding human behaviour analysis and privacy protection. By means of a multi-view setup of cameras, people’s behaviour is recognised based on human action recognition. For this purpose, a weighted feature fusion scheme is proposed to learn from multiple views. In order to protect the right to privacy of the inhabitants when a remote connection occurs, a privacy-by-context method is proposed. The experimental results of the behaviour recognition method show an outstanding performance, as well as support for multi-view scenarios and real-time execution, which are required in order to provide the proposed services.

  15. A vision-based system for intelligent monitoring: human behaviour analysis and privacy by context.

    Science.gov (United States)

    Chaaraoui, Alexandros Andre; Padilla-López, José Ramón; Ferrández-Pastor, Francisco Javier; Nieto-Hidalgo, Mario; Flórez-Revuelta, Francisco

    2014-05-20

    Due to progress and demographic change, society is facing a crucial challenge related to increased life expectancy and a higher number of people in situations of dependency. As a consequence, there exists a significant demand for support systems for personal autonomy. This article outlines the vision@home project, whose goal is to extend independent living at home for elderly and impaired people, providing care and safety services by means of vision-based monitoring. Different kinds of ambient-assisted living services are supported, from the detection of home accidents, to telecare services. In this contribution, the specification of the system is presented, and novel contributions are made regarding human behaviour analysis and privacy protection. By means of a multi-view setup of cameras, people's behaviour is recognised based on human action recognition. For this purpose, a weighted feature fusion scheme is proposed to learn from multiple views. In order to protect the right to privacy of the inhabitants when a remote connection occurs, a privacy-by-context method is proposed. The experimental results of the behaviour recognition method show an outstanding performance, as well as support for multi-view scenarios and real-time execution, which are required in order to provide the proposed services.
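
    The weighted multi-view fusion step can be sketched at the decision level: per-view recognition scores are combined with per-view weights before the final label is chosen. The camera names, weights, labels, and scores below are invented, and the paper's scheme fuses feature vectors rather than posterior scores; this toy only shows the weighting idea.

```python
def fuse_scores(view_scores, view_weights):
    """view_scores: {view: {label: score}}; returns (best_label, fused_scores)."""
    fused = {}
    for view, scores in view_scores.items():
        w = view_weights[view]
        for label, s in scores.items():
            fused[label] = fused.get(label, 0.0) + w * s
    return max(fused, key=fused.get), fused

scores = {"cam_front": {"fall": 0.7, "sit": 0.3},
          "cam_side":  {"fall": 0.4, "sit": 0.6}}
weights = {"cam_front": 0.8, "cam_side": 0.2}   # front view deemed more reliable
label, fused = fuse_scores(scores, weights)
```

    Learning the weights per view (e.g. from validation accuracy) lets the system downweight occluded or poorly placed cameras.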

  16. Optimization of spatial light distribution through genetic algorithms for vision systems applied to quality control

    International Nuclear Information System (INIS)

    Castellini, P; Cecchini, S; Stroppa, L; Paone, N

    2015-01-01

    The paper presents an adaptive illumination system for image quality enhancement in vision-based quality control systems. In particular, a spatial modulation of illumination intensity is proposed in order to improve image quality, thus compensating for different target scattering properties, local reflections and fluctuations of ambient light. The desired spatial modulation of illumination is obtained by a digital light projector, used to illuminate the scene with an arbitrary spatial distribution of light intensity, designed to improve feature extraction in the region of interest. The spatial distribution of illumination is optimized by running a genetic algorithm. An image quality estimator is used to close the feedback loop and to stop iterations once the desired image quality is reached. The technique proves particularly valuable for optimizing the spatial illumination distribution in the region of interest, with the remarkable capability of the genetic algorithm to adapt the light distribution to very different target reflectivity and ambient conditions. The final objective of the proposed technique is the improvement of the matching score in the recognition of parts through matching algorithms, hence of the diagnosis of machine vision-based quality inspections. The procedure has been validated both by a numerical model and by an experimental test, referring to a significant problem of quality control for the washing machine manufacturing industry: the recognition of a metallic clamp. Its applicability to other domains is also presented, specifically for the visual inspection of shoes with retro-reflective tape and T-shirts with paillettes. (paper)
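The closed-loop idea can be sketched with a toy genetic algorithm. Here the "image quality estimator" is replaced by an assumed stand-in (uniformity of observed intensity over a few illumination zones with different reflectivity); the zone model, fitness and GA parameters are illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scene: reflectivity of 8 illumination zones. The GA searches for a
# per-zone light level that makes the observed intensity uniform - a
# stand-in for the paper's image-quality estimator in the feedback loop.
reflectivity = np.array([0.9, 0.2, 0.5, 0.8, 0.1, 0.6, 0.3, 0.7])
target = 0.5  # desired uniform observed intensity

def fitness(illum):
    observed = np.clip(illum * reflectivity, 0.0, 1.0)
    return -np.mean((observed - target) ** 2)  # higher is better

def evolve(pop_size=30, generations=60, mut=0.1):
    pop = rng.uniform(0.0, 1.0, size=(pop_size, reflectivity.size))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        order = np.argsort(scores)[::-1]
        parents = pop[order[: pop_size // 2]]           # truncation selection
        cut = reflectivity.size // 2
        children = np.concatenate(                      # one-point crossover
            [parents[:, :cut], parents[::-1, cut:]], axis=1)
        children += rng.normal(0.0, mut, children.shape)  # mutation
        pop = np.clip(np.concatenate([parents, children]), 0.0, 1.0)
    scores = np.array([fitness(ind) for ind in pop])
    return pop[np.argmax(scores)], float(scores.max())

best, best_score = evolve()
baseline = fitness(np.ones_like(reflectivity))  # uniform full illumination
```

Because the parents survive unchanged each generation, the best candidate never regresses, mirroring the paper's iterate-until-quality-reached loop.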

  17. Monocular Vision System for Fixed Altitude Flight of Unmanned Aerial Vehicles.

    Science.gov (United States)

    Huang, Kuo-Lung; Chiu, Chung-Cheng; Chiu, Sheng-Yi; Teng, Yao-Jen; Hao, Shu-Sheng

    2015-07-13

    The fastest and most economical method of acquiring terrain images is aerial photography. The use of unmanned aerial vehicles (UAVs) has been investigated for this task. However, UAVs present a range of challenges such as flight altitude maintenance. This paper reports a method that combines skyline detection with a stereo vision algorithm to enable the flight altitude of UAVs to be maintained. A monocular camera is mounted on the downside of the aircraft's nose to collect continuous ground images, and the relative altitude is obtained via a stereo vision algorithm from the velocity of the UAV. Image detection is used to obtain terrain images, and to measure the relative altitude from the ground to the UAV. The UAV flight system can be set to fly at a fixed and relatively low altitude to obtain the same resolution of ground images. A forward-looking camera is mounted on the upside of the aircraft's nose. In combination with the skyline detection algorithm, this helps the aircraft to maintain a stable flight pattern. Experimental results show that the proposed system enables UAVs to obtain terrain images at constant resolution, and to detect the relative altitude along the flight path.
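The core geometry of obtaining range from a single moving camera can be sketched as motion stereo: two frames taken a short time apart form a virtual stereo pair whose baseline is the distance flown. The function name and the sample numbers below are assumptions for illustration, not the paper's algorithm:

```python
def altitude_from_motion(focal_px, speed_mps, dt_s, disparity_px):
    """Motion-stereo range: two frames taken dt apart form a virtual
    stereo pair with baseline B = speed * dt, so Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    baseline_m = speed_mps * dt_s
    return focal_px * baseline_m / disparity_px

# UAV at 20 m/s, frames 0.1 s apart, 800 px focal length, 16 px disparity
alt = altitude_from_motion(800.0, 20.0, 0.1, 16.0)  # -> 100.0 m
```

This is why the velocity of the UAV enters the computation: it supplies the stereo baseline that a single camera otherwise lacks.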

  18. Monocular Vision System for Fixed Altitude Flight of Unmanned Aerial Vehicles

    Directory of Open Access Journals (Sweden)

    Kuo-Lung Huang

    2015-07-01

    Full Text Available The fastest and most economical method of acquiring terrain images is aerial photography. The use of unmanned aerial vehicles (UAVs) has been investigated for this task. However, UAVs present a range of challenges such as flight altitude maintenance. This paper reports a method that combines skyline detection with a stereo vision algorithm to enable the flight altitude of UAVs to be maintained. A monocular camera is mounted on the downside of the aircraft’s nose to collect continuous ground images, and the relative altitude is obtained via a stereo vision algorithm from the velocity of the UAV. Image detection is used to obtain terrain images, and to measure the relative altitude from the ground to the UAV. The UAV flight system can be set to fly at a fixed and relatively low altitude to obtain the same resolution of ground images. A forward-looking camera is mounted on the upside of the aircraft’s nose. In combination with the skyline detection algorithm, this helps the aircraft to maintain a stable flight pattern. Experimental results show that the proposed system enables UAVs to obtain terrain images at constant resolution, and to detect the relative altitude along the flight path.

  19. Data Fusion for a Vision-Radiological System for Source Tracking and Discovery

    International Nuclear Information System (INIS)

    Enqvist, Andreas; Koppal, Sanjeev

    2015-01-01

    A multidisciplinary approach to allow the tracking of the movement of radioactive sources by fusing data from multiple radiological and visual sensors is under development. The goal is to improve the ability to detect, locate, track and identify nuclear/radiological threats. The key concept is that such widely available visual and depth sensors can impact radiological detection, since the intensity fall-off in the count rate can be correlated to movement in three dimensions. To enable this, we pose an important question: what is the right combination of sensing modalities and vision algorithms that can best complement a radiological sensor for the purpose of detection and tracking of radioactive material? Similarly, what are the best radiation detection methods and unfolding algorithms suited for data fusion with tracking data? Data fusion of multi-sensor data for radiation detection has seen some interesting developments lately. Significant examples include intelligent radiation sensor systems (IRSS), which are based on larger numbers of distributed similar or identical radiation sensors, coupled with position data, in a network capable of detecting and locating radiation sources. Other developments are gamma-ray imaging systems based on Compton scatter in segmented detector arrays. Similar developments using coded apertures or scatter cameras for neutrons have recently occurred. The main limitation of such systems is not so much their capability but rather their complexity and cost, which is prohibitive for large-scale deployment. Presented here is a fusion system based on simple, low-cost computer vision and radiological sensors for tracking of multiple objects and identifying potential radiological materials being transported or shipped. The main focus of this work is the development of two separate calibration algorithms for characterizing the fused sensor system. The deviation from a simple inverse square-root fall-off of radiation intensity is explored and
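The correlation between count rate and distance that this fusion exploits can be sketched with an idealised inverse-square model: given object distances from the vision tracker and measured counts, the source strength follows from a one-parameter least-squares fit. The model and numbers are illustrative, not the paper's calibration algorithms:

```python
import numpy as np

def expected_counts(activity, distances_m, background=0.0):
    """Idealised detector response: count rate falls off as 1/r^2."""
    return activity / np.asarray(distances_m) ** 2 + background

def fit_activity(distances_m, counts, background=0.0):
    """Least-squares estimate of source strength A in counts = A/r^2 + b."""
    x = 1.0 / np.asarray(distances_m) ** 2
    y = np.asarray(counts) - background
    return float((x @ y) / (x @ x))

# Synthetic track: an object approaches from 4 m to 1 m, true strength 400.
dists = np.array([4.0, 3.0, 2.0, 1.0])
counts = expected_counts(400.0, dists)
est = fit_activity(dists, counts)   # recovers ~400
```

A tracked object whose counts do not follow this fall-off as it moves is unlikely to be carrying the source, which is the discrimination idea behind fusing the two sensor streams.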

  20. Experimental Machine Vision System for Training Students in Virtual Instrumentation Techniques

    Directory of Open Access Journals (Sweden)

    Rodica Holonec

    2011-10-01

    Full Text Available The aim of this paper is to present the main techniques in designing and building of a complex machine vision system in order to train electrical engineering students in using virtual instrumentation. The proposed test bench realizes an automatic adjustment of some electrical circuit parameters on a belt conveyer. The students can learn how to combine mechanics, electronics, electrical engineering, image acquisition and processing in order to solve the proposed application. After the system implementation the students are asked to present in which way they can modify or extend the system for industrial environment regarding the automatic adjustment of electric parameters or the calibration of different type of sensors (of distance, of proximity, etc without the intervention of the human factor in the process.

  1. Cognitive vision system for control of dexterous prosthetic hands: Experimental evaluation

    Directory of Open Access Journals (Sweden)

    Došen Strahinja

    2010-08-01

    Full Text Available Abstract Background Dexterous prosthetic hands that were developed recently, such as SmartHand and i-LIMB, are highly sophisticated; they have individually controllable fingers and a thumb that is able to abduct/adduct. This flexibility allows implementation of many different grasping strategies, but also requires new control algorithms that can exploit the many degrees of freedom available. The current study presents and tests the operation of a new control method for dexterous prosthetic hands. Methods The central component of the proposed method is an autonomous controller comprising a vision system with rule-based reasoning mounted on a dexterous hand (CyberHand). The controller, termed cognitive vision system (CVS), mimics biological control and generates commands for prehension. The CVS was integrated into a hierarchical control structure: (1) the user triggers the system and controls the orientation of the hand; (2) a high-level controller automatically selects the grasp type and size; and (3) an embedded hand controller implements the selected grasp using closed-loop position/force control. The operation of the control system was tested in 13 healthy subjects who used CyberHand, attached to the forearm, to grasp and transport 18 objects placed at two different distances. Results The system correctly estimated grasp type and size (nine commands in total) in about 84% of the trials. In an additional 6% of the trials, the grasp type and/or size were different from the optimal ones, but they were still good enough for the grasp to be successful. If the control task was simplified by decreasing the number of possible commands, the classification accuracy increased (e.g., 93% for guessing the grasp type only).
Conclusions The original outcome of this research is a novel controller empowered by vision and reasoning and capable of high-level analysis (i.e., determining object properties) and autonomous decision making (i.e., selecting the grasp type and
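The rule-based reasoning stage that maps estimated object properties to a grasp command can be sketched as a small decision table. The thresholds, property names and grasp labels below are assumptions for illustration; the CVS's actual rule set and its nine commands are not reproduced here:

```python
def select_grasp(object_width_cm, object_height_cm, is_cylindrical):
    """Toy rule base mapping vision-estimated object properties to a
    grasp command (type, aperture). Thresholds are illustrative, not
    the CVS rules from the paper."""
    if is_cylindrical and object_height_cm > object_width_cm:
        grasp_type = "palmar"      # wrap grasp for upright cylinders
    elif object_width_cm < 3.0:
        grasp_type = "pinch"       # small objects: two-finger pinch
    else:
        grasp_type = "lateral"
    size = "large" if object_width_cm > 6.0 else "small"
    return grasp_type, size

cmd = select_grasp(2.0, 2.0, False)   # -> ("pinch", "small")
```

Keeping the command set small is exactly the trade-off the paper reports: fewer possible commands makes the classification easier and raises accuracy.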

  2. Precision calibration method for binocular vision measurement systems based on arbitrary translations and 3D-connection information

    International Nuclear Information System (INIS)

    Yang, Jinghao; Jia, Zhenyuan; Liu, Wei; Fan, Chaonan; Xu, Pengtao; Wang, Fuji; Liu, Yang

    2016-01-01

    Binocular vision systems play an important role in computer vision, and high-precision system calibration is a necessary and indispensable process. In this paper, an improved calibration method for binocular stereo vision measurement systems based on arbitrary translations and 3D-connection information is proposed. First, a new method for calibrating the intrinsic parameters of binocular vision system based on two translations with an arbitrary angle difference is presented, which reduces the effect of the deviation of the motion actuator on calibration accuracy. This method is simpler and more accurate than existing active-vision calibration methods and can provide a better initial value for the determination of extrinsic parameters. Second, a 3D-connection calibration and optimization method is developed that links the information of the calibration target in different positions, further improving the accuracy of the system calibration. Calibration experiments show that the calibration error can be reduced to 0.09%, outperforming traditional methods for the experiments of this study. (paper)
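Once a binocular system is calibrated, measurement reduces to triangulating matched image points through the two projection matrices. The sketch below shows generic linear (DLT) triangulation with synthetic unit-focal cameras; it illustrates why calibration accuracy matters, and is not the calibration method of the paper:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two pixel
    observations and 3x4 projection matrices."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                 # null-space vector (homogeneous point)
    return X[:3] / X[3]

# Two unit-focal cameras 0.2 m apart along x, both looking down +z.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])
X_true = np.array([0.1, 0.05, 1.0])
x1 = X_true[:2] / X_true[2]                              # pixel in camera 1
x2 = (X_true + np.array([-0.2, 0.0, 0.0]))[:2] / X_true[2]  # pixel in camera 2
X_hat = triangulate(P1, P2, x1, x2)
```

With noiseless data the reconstruction is exact; with real data, any error in the intrinsic or extrinsic parameters biases every triangulated point, which is what the paper's 3D-connection optimization is trying to suppress.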

  3. Differential GNSS and Vision-Based Tracking to Improve Navigation Performance in Cooperative Multi-UAV Systems.

    Science.gov (United States)

    Vetrella, Amedeo Rodi; Fasano, Giancarmine; Accardo, Domenico; Moccia, Antonio

    2016-12-17

    Autonomous navigation of micro-UAVs is typically based on the integration of low cost Global Navigation Satellite System (GNSS) receivers and Micro-Electro-Mechanical Systems (MEMS)-based inertial and magnetic sensors to stabilize and control the flight. The resulting navigation performance in terms of position and attitude accuracy may not suffice for other mission needs, such as the ones relevant to fine sensor pointing. In this framework, this paper presents a cooperative UAV navigation algorithm that allows a chief vehicle, equipped with inertial and magnetic sensors, a Global Positioning System (GPS) receiver, and a vision system, to improve its navigation performance (in real time or in the post processing phase) exploiting formation flying deputy vehicles equipped with GPS receivers. The focus is set on outdoor environments and the key concept is to exploit differential GPS among vehicles and vision-based tracking (DGPS/Vision) to build a virtual additional navigation sensor whose information is then integrated in a sensor fusion algorithm based on an Extended Kalman Filter. The developed concept and processing architecture are described, with a focus on DGPS/Vision attitude determination algorithm. Performance assessment is carried out on the basis of both numerical simulations and flight tests. In the latter ones, navigation estimates derived from the DGPS/Vision approach are compared with those provided by the onboard autopilot system of a customized quadrotor. The analysis shows the potential of the developed approach, mainly deriving from the possibility to exploit magnetic- and inertial-independent accurate attitude information.
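The sensor-fusion backbone of such an architecture is the Kalman predict/update cycle. As a stand-in for the paper's full Extended Kalman Filter, here is a minimal linear filter for a 1D constant-velocity state fusing position fixes; in the paper's setting the DGPS/Vision result would enter as one more measurement of this kind. All matrices and noise levels are assumed values:

```python
import numpy as np

F = np.array([[1.0, 0.1], [0.0, 1.0]])   # dynamics [pos, vel], dt = 0.1 s
H = np.array([[1.0, 0.0]])               # we measure position only
Q = np.eye(2) * 1e-4                     # process noise
R = np.array([[0.25]])                   # measurement noise (0.5 m std)

def kf_step(x, P, z):
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update with measurement z
    y = z - H @ x                        # innovation
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

x = np.array([0.0, 0.0])
P = np.eye(2)
for z in [0.11, 0.22, 0.30, 0.41]:       # noisy positions of a ~1 m/s target
    x, P = kf_step(x, P, np.array([z]))
```

Adding a "virtual" DGPS/Vision attitude sensor means extending H, R and z; the filter machinery itself is unchanged, which is why the approach integrates cleanly with an existing EKF.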

  4. Integrative systems and synthetic biology of cell-matrix adhesion sites.

    Science.gov (United States)

    Zamir, Eli

    2016-09-02

    The complexity of cell-matrix adhesion convolves its roles in the development and functioning of multicellular organisms and their evolutionary tinkering. Cell-matrix adhesion is mediated by sites along the plasma membrane that anchor the actin cytoskeleton to the matrix via a large number of proteins, collectively called the integrin adhesome. Fundamental challenges for understanding how cell-matrix adhesion sites assemble and function arise from their multi-functionality, rapid dynamics, large number of components and molecular diversity. Systems biology faces these challenges in its effort to understand how the integrin adhesome gives rise to functional adhesion sites. Synthetic biology enables engineering intracellular modules and circuits with properties of interest. In this review I discuss some of the fundamental questions in systems biology of cell-matrix adhesion and how synthetic biology can help address them.

  5. Does Prescribed Randomness Hold the Key to Interface Synthetic and Natural Systems?

    Science.gov (United States)

    Xu, Ting

    The bottlenecks to engineering biomimetic functional materials are not only duplicating hierarchical structures, but also manipulating the system dynamics. Bio-inspired responsive materials have been investigated extensively within the past few decades with much success. Yet, the level of control of these complex systems is still rather simplistic. More importantly, we have yet to uncover the design rules to synergize natural and synthetic building blocks that would allow us to go beyond just a few specific families of natural building blocks. I am going to discuss our recent studies that demonstrated the feasibility of developing synthetic protein-like polymers that can interface with natural proteins and biomachinery. Rational design of these protein-like polymers thus opens a viable approach toward functional materials based on natural components. The work is supported by DOD-ARO W911NF-16-1-0405.

  6. Optimizing a Synthetic Signaling System, Using Mathematical Modeling to Direct Experimental Work

    Science.gov (United States)

    2014-09-05

    ... function across kingdoms. Mizuno et al. demonstrated conservation of function of HK proteins by expressing the plant hormone receptor AHK4 in E. coli ... signaling system in planta, it will advance the current state of plant synthetic biology by providing a new tool to the community: prokaryotic testing ...

  7. Specifications of Standards in Systems and Synthetic Biology: Status and Developments in 2017.

    Science.gov (United States)

    Schreiber, Falk; Bader, Gary D; Gleeson, Padraig; Golebiewski, Martin; Hucka, Michael; Keating, Sarah M; Novère, Nicolas Le; Myers, Chris; Nickerson, David; Sommer, Björn; Waltemath, Dagmar

    2018-03-29

    Standards are essential to the advancement of Systems and Synthetic Biology. COMBINE provides a formal body and a centralised platform to help develop and disseminate relevant standards and related resources. The regular special issue of the Journal of Integrative Bioinformatics aims to support the exchange, distribution and archiving of these standards by providing unified, easily citable access. This paper provides an overview of existing COMBINE standards and presents developments of the last year.

  8. A vision-based driver nighttime assistance and surveillance system based on intelligent image sensing techniques and a heterogamous dual-core embedded system architecture.

    Science.gov (United States)

    Chen, Yen-Lin; Chiang, Hsin-Han; Chiang, Chuan-Yen; Liu, Chuan-Ming; Yuan, Shyan-Ming; Wang, Jenq-Haur

    2012-01-01

    This study proposes a vision-based intelligent nighttime driver assistance and surveillance system (VIDASS system) implemented by a set of embedded software components and modules, and integrates these modules to accomplish a component-based system framework on an embedded heterogamous dual-core platform. Therefore, this study develops and implements computer vision and sensing techniques of nighttime vehicle detection, collision warning determination, and traffic event recording. The proposed system processes the road-scene frames in front of the host car captured from CCD sensors mounted on the host vehicle. These vision-based sensing and processing technologies are integrated and implemented on an ARM-DSP heterogamous dual-core embedded platform. Peripheral devices, including image grabbing devices, communication modules, and other in-vehicle control devices, are also integrated to form an in-vehicle-embedded vision-based nighttime driver assistance and surveillance system.

  9. A Vision-Based Driver Nighttime Assistance and Surveillance System Based on Intelligent Image Sensing Techniques and a Heterogamous Dual-Core Embedded System Architecture

    Directory of Open Access Journals (Sweden)

    Jenq-Haur Wang

    2012-02-01

    Full Text Available This study proposes a vision-based intelligent nighttime driver assistance and surveillance system (VIDASS system) implemented by a set of embedded software components and modules, and integrates these modules to accomplish a component-based system framework on an embedded heterogamous dual-core platform. Therefore, this study develops and implements computer vision and sensing techniques of nighttime vehicle detection, collision warning determination, and traffic event recording. The proposed system processes the road-scene frames in front of the host car captured from CCD sensors mounted on the host vehicle. These vision-based sensing and processing technologies are integrated and implemented on an ARM-DSP heterogamous dual-core embedded platform. Peripheral devices, including image grabbing devices, communication modules, and other in-vehicle control devices, are also integrated to form an in-vehicle-embedded vision-based nighttime driver assistance and surveillance system.

  10. Machine vision system: a tool for quality inspection of food and agricultural products.

    Science.gov (United States)

    Patel, Krishna Kumar; Kar, A; Jha, S N; Khan, M A

    2012-04-01

    Quality inspection of food and agricultural produce is difficult and labor intensive. Simultaneously, with increased expectations for food products of high quality and safety standards, the need for accurate, fast and objective quality determination of these characteristics in food products continues to grow. However, in India these operations are generally manual, which is costly as well as unreliable, because human judgment of quality factors such as appearance, flavor, nutrient content, texture, etc., is inconsistent, subjective and slow. Machine vision provides one alternative: an automated, non-destructive and cost-effective technique to accomplish these requirements. This inspection approach, based on image analysis and processing, has found a variety of different applications in the food industry. Considerable research has highlighted its potential for the inspection and grading of fruits and vegetables, grain quality and characteristic examination, and quality evaluation of other food products such as bakery products, pizza, cheese, and noodles. The objective of this paper is to provide an in-depth introduction to machine vision systems, their components, and recent work reported on food and agricultural produce.

  11. A Vision for Co-optimized T&D System Interaction with Renewables and Demand Response

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, C. Lindsay [Cornell Univ., Ithaca, NY (United States); Zéphyr, Luckny [Cornell Univ., Ithaca, NY (United States); Liu, Jialin [Cornell Univ., Ithaca, NY (United States); Cardell, Judith B. [Smith College, Northampton MA (United States)

    2017-01-07

    The evolution of the power system to the reliable, efficient and sustainable system of the future will involve development of both demand- and supply-side technology and operations. The use of demand response to counterbalance the intermittency of renewable generation brings the consumer into the spotlight. Though individual consumers are interconnected at the low-voltage distribution system, these resources are typically modeled as variables at the transmission network level. In this paper, a vision for co-optimized interaction of distribution systems, or microgrids, with the high-voltage transmission system is described. In this framework, microgrids encompass consumers, distributed renewables and storage. The energy management system of the microgrid can also sell (buy) excess (necessary) energy from the transmission system. Preliminary work explores price mechanisms to manage the microgrid and its interactions with the transmission system. Wholesale market operations are addressed through the development of scalable stochastic optimization methods that provide the ability to co-optimize interactions between the transmission and distribution systems. Modeling challenges of the co-optimization are addressed via solution methods for large-scale stochastic optimization, including decomposition and stochastic dual dynamic programming.

  12. A Vision for Co-optimized T&D System Interaction with Renewables and Demand Response

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, Lindsay [Cornell Univ., Ithaca, NY (United States); Zéphyr, Luckny [Cornell Univ., Ithaca, NY (United States); Cardell, Judith B. [Smith College, Northampton, MA (United States)

    2017-01-06

    The evolution of the power system to the reliable, efficient and sustainable system of the future will involve development of both demand- and supply-side technology and operations. The use of demand response to counterbalance the intermittency of renewable generation brings the consumer into the spotlight. Though individual consumers are interconnected at the low-voltage distribution system, these resources are typically modeled as variables at the transmission network level. In this paper, a vision for cooptimized interaction of distribution systems, or microgrids, with the high-voltage transmission system is described. In this framework, microgrids encompass consumers, distributed renewables and storage. The energy management system of the microgrid can also sell (buy) excess (necessary) energy from the transmission system. Preliminary work explores price mechanisms to manage the microgrid and its interactions with the transmission system. Wholesale market operations are addressed through the development of scalable stochastic optimization methods that provide the ability to co-optimize interactions between the transmission and distribution systems. Modeling challenges of the co-optimization are addressed via solution methods for large-scale stochastic optimization, including decomposition and stochastic dual dynamic programming.

  13. Simulation of Specular Surface Imaging Based on Computer Graphics: Application on a Vision Inspection System

    Directory of Open Access Journals (Sweden)

    Seulin Ralph

    2002-01-01

    Full Text Available This work aims at detecting surface defects on reflective industrial parts. A machine vision system, performing the detection of geometric-aspect surface defects, is completely described. Defects are revealed by a dedicated lighting device, which has been carefully designed to ensure the imaging of defects. The lighting system greatly simplifies the image processing for defect segmentation, so real-time inspection of reflective products is possible. To assist in the design of the imaging conditions, a complete simulation is proposed. The simulation, based on computer graphics, enables the rendering of realistic images. Simulation thus provides a very efficient way to perform tests compared with numerous manual experiments.

  14. Vision-based measuring system for rider's pose estimation during motorcycle riding

    Science.gov (United States)

    Cheli, F.; Mazzoleni, P.; Pezzola, M.; Ruspini, E.; Zappa, E.

    2013-07-01

    The inertial characteristics of the human body are comparable with those of the vehicle in motorbike riding: the study of the rider's dynamics is a crucial step in system modeling. An innovative vision-based system able to measure the six degrees of freedom of the rider with respect to the vehicle is proposed here: the core of the proposed approach is an image acquisition and processing technique capable of reconstructing the position and orientation of a target fixed on the rider's back. The technique is first validated in laboratory tests comparing measured and imposed target motion laws, and subsequently tested in a real-case scenario during track tests with amateur and professional riders. The presented results show the capability of the technique to correctly describe the rider's dynamics and his interaction with the vehicle, as well as the possibility of using the new measuring technique to compare different riding styles.
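Recovering a rigid target's position and orientation from matched points is the classic absolute-orientation problem. As a stand-in for the paper's image-based reconstruction, here is the Kabsch algorithm on 3D marker correspondences; the marker layout, lean angle and offsets are invented for the demonstration:

```python
import numpy as np

def rigid_pose(src, dst):
    """Kabsch: least-squares rotation R and translation t such that
    dst ~= R @ src + t, from matched 3D marker points (N x 3 arrays)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)        # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])           # guard against reflections
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t

# Four non-coplanar marker points on the rider's back target (metres, assumed)
src = np.array([[0, 0, 0], [0.1, 0, 0], [0, 0.1, 0], [0, 0, 0.1]], dtype=float)
theta = np.deg2rad(10)                   # rider leans 10 degrees
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
t_true = np.array([0.02, -0.01, 0.8])
dst = src @ R_true.T + t_true            # target points seen in the new pose
R_hat, t_hat = rigid_pose(src, dst)
```

The recovered R and t together are the six degrees of freedom the measuring system tracks frame by frame.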

  15. Computer vision system for egg volume prediction using backpropagation neural network

    Science.gov (United States)

    Siswantoro, J.; Hilman, M. Y.; Widiasri, M.

    2017-11-01

    Volume is one of the aspects considered in the egg sorting process. A rapid and accurate volume measurement method is needed to develop an egg sorting system. A computer vision system (CVS) provides a promising solution to the volume measurement problem. Artificial neural networks (ANNs) have been used to predict the volume of an egg in several CVSs. However, volume prediction from an ANN can be less accurate due to inappropriate input features or an inappropriate ANN structure. This paper proposes a CVS for predicting the volume of an egg using an ANN. The CVS acquires an image of the egg from the top view and then processes the image to extract its 1D and 2D size features. The features are used as input to the ANN for predicting the volume of the egg. The experimental results show that the proposed CVS can predict the volume of an egg with good accuracy and low computation time.
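The regression step can be sketched with a tiny backpropagation network in NumPy. The training data are synthetic (length/width features with volumes from the ellipsoid approximation V = (pi/6)·L·W², a common stand-in for egg volume); the network size and learning rate are assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic eggs: length L and max width W in cm; "true" volume from the
# ellipsoid approximation, playing the role of the CVS training labels.
L = rng.uniform(5.0, 6.5, 200)
W = rng.uniform(4.0, 5.0, 200)
X = np.stack([L, W], axis=1)
y = np.pi / 6 * L * W ** 2

# Standardise features and targets for stable training.
Xn = (X - X.mean(0)) / X.std(0)
yn = (y - y.mean()) / y.std()

# One hidden layer, trained by plain backpropagation (gradient of 1/2 MSE).
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

losses = []
lr = 0.05
for _ in range(500):
    h, pred = forward(Xn)
    err = pred - yn[:, None]
    losses.append(float(np.mean(err ** 2)))
    # backward pass
    gW2 = h.T @ err / len(Xn); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)     # tanh derivative
    gW1 = Xn.T @ dh / len(Xn); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
```

Predictions in cm³ are recovered by undoing the target standardisation; the same skeleton extends to more image-derived features, which is where feature choice starts to dominate accuracy.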

  16. Automated egg grading system using computer vision: Investigation on weight measure versus shape parameters

    Science.gov (United States)

    Nasir, Ahmad Fakhri Ab; Suhaila Sabarudin, Siti; Majeed, Anwar P. P. Abdul; Ghani, Ahmad Shahrizan Abdul

    2018-04-01

    Chicken eggs are a food in high demand. Human operators cannot work perfectly and continuously when conducting egg grading. Instead of an egg grading system using a weight measure, an automatic system for egg grading using computer vision (using egg shape parameters) can be used to improve the productivity of egg grading. However, an early hypothesis indicated that the egg classes change when using egg shape parameters compared with using the weight measure. This paper presents a comparison of egg classification by the two above-mentioned methods. Firstly, 120 images of chicken eggs of various grades (A–D) produced in Malaysia are captured. Then, the egg images are processed using image pre-processing techniques, such as image cropping, smoothing and segmentation. Thereafter, eight egg shape features, including area, major axis length, minor axis length, volume, diameter and perimeter, are extracted. Lastly, feature selection (information gain ratio) and feature extraction (principal component analysis) are performed using a k-nearest neighbour classifier in the classification process. Two methods, namely supervised learning (using the weight measure as graded by the egg supplier) and unsupervised learning (using egg shape parameters as graded by ourselves), are used in the experiment. Clustering results reveal many changes in egg classes after performing shape-based grading. On average, the best recognition result using the shape-based grading label is 94.16%, while using the weight-based label it is 44.17%. In conclusion, an automated egg grading system using computer vision is better implemented with shape-based features, since it works from images, whereas the weight parameter is more suitable for a weight-based grading system.
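The k-nearest-neighbour step on shape features can be sketched in a few lines. The feature table below (major/minor axis, area) and the grade labels are invented toy values, not the paper's 120-egg dataset:

```python
import numpy as np

def knn_predict(train_X, train_y, query, k=3):
    """Plain k-nearest-neighbour majority vote on shape-feature vectors."""
    d = np.linalg.norm(train_X - query, axis=1)     # Euclidean distances
    votes = train_y[np.argsort(d)[:k]]              # labels of k nearest
    labels, counts = np.unique(votes, return_counts=True)
    return labels[np.argmax(counts)]

# Toy feature table: [major axis (px), minor axis (px), area (kpx)]
# for two egg grades; values are illustrative, not the paper's data.
train_X = np.array([
    [220, 170, 29.0], [225, 172, 30.0], [218, 168, 28.5],   # grade A
    [190, 150, 22.0], [195, 152, 23.0], [188, 148, 21.5],   # grade C
], dtype=float)
train_y = np.array(["A", "A", "A", "C", "C", "C"])

grade = knn_predict(train_X, train_y, np.array([221, 169, 29.2]))  # -> "A"
```

In practice the features would first be scaled (and optionally reduced with PCA, as in the paper), since raw pixel areas and axis lengths live on very different scales and would otherwise dominate the distance.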

  17. Vision based interface system for hands free control of an intelligent wheelchair

    Directory of Open Access Journals (Sweden)

    Kim Eun

    2009-08-01

    Full Text Available Abstract Background Due to the shift of the age structure in today's populations, the necessity of developing devices or technologies to support them has been increasing. Traditionally, the wheelchair, including powered and manual ones, is the most popular and important rehabilitation/assistive device for the disabled and the elderly. However, it is still highly restricted, especially for the severely disabled. As a solution to this, Intelligent Wheelchairs (IWs) have received considerable attention as mobility aids. The purpose of this work is to develop an IW interface that provides a more convenient and efficient interface for people with disabilities in their limbs. Methods This paper proposes an intelligent wheelchair (IW) control system for people with various disabilities. To facilitate a wide variety of user abilities, the proposed system involves the use of face-inclination and mouth-shape information, where the direction of the IW is determined by the inclination of the user's face, while proceeding and stopping are determined by the shape of the user's mouth. Our system is composed of an electric powered wheelchair, a data acquisition board, ultrasonic/infra-red sensors, a PC camera, and a vision system. The vision system that analyzes the user's gestures operates in three stages: detector, recognizer, and converter. In the detector, the facial region of the intended user is first obtained using Adaboost; thereafter the mouth region is detected based on edge information. The extracted features are sent to the recognizer, which recognizes the face inclination and mouth shape using statistical analysis and K-means clustering, respectively. These recognition results are then delivered to the converter to control the wheelchair.
Result & conclusion The advantages of the proposed system include (1) accurate recognition of the user's intention with minimal user motion and (2) robustness to a cluttered background and time-varying illumination
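The converter stage described above can be sketched as a small decision rule: face inclination steers, mouth shape starts and stops. The angle threshold and the meaning attached to each mouth shape are assumptions for illustration, not the paper's trained classifier outputs:

```python
def wheelchair_command(face_roll_deg, mouth_open, mouth_wide):
    """Toy converter in the spirit of the interface: face inclination
    selects direction, mouth shape controls go/stop. Thresholds and
    shape semantics are assumed, not taken from the paper."""
    if mouth_open:
        return "stop"              # open mouth: emergency/normal stop
    if not mouth_wide:
        return "idle"              # neutral mouth: no motion command
    if face_roll_deg > 10:
        return "turn-right"
    if face_roll_deg < -10:
        return "turn-left"
    return "forward"

cmd = wheelchair_command(face_roll_deg=2.0, mouth_open=False, mouth_wide=True)  # -> "forward"
```

Hysteresis and temporal smoothing over several frames would be needed in practice so that a single misdetected frame cannot flip the wheelchair between commands.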

  18. Smart-DS: Synthetic Models for Advanced, Realistic Testing: Distribution Systems and Scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Krishnan, Venkat K [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Palmintier, Bryan S [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Hodge, Brian S [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Hale, Elaine T [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Elgindy, Tarek [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Bugbee, Bruce [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Rossol, Michael N [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Lopez, Anthony J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Krishnamurthy, Dheepak [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Vergara, Claudio [MIT; Domingo, Carlos Mateo [IIT Comillas; Postigo, Fernando [IIT Comillas; de Cuadra, Fernando [IIT Comillas; Gomez, Tomas [IIT Comillas; Duenas, Pablo [MIT; Luke, Max [MIT; Li, Vivian [MIT; Vinoth, Mohan [GE Grid Solutions; Kadankodu, Sree [GE Grid Solutions

    2017-08-09

    The National Renewable Energy Laboratory (NREL) in collaboration with Massachusetts Institute of Technology (MIT), Universidad Pontificia Comillas (Comillas-IIT, Spain) and GE Grid Solutions, is working on an ARPA-E GRID DATA project, titled Smart-DS, to create: 1) High-quality, realistic, synthetic distribution network models, and 2) Advanced tools for automated scenario generation based on high-resolution weather data and generation growth projections. Through these advancements, the Smart-DS project is envisioned to accelerate the development, testing, and adoption of advanced algorithms, approaches, and technologies for sustainable and resilient electric power systems, especially in the realm of U.S. distribution systems. This talk will present the goals and overall approach of the Smart-DS project, including the process of creating the synthetic distribution datasets using reference network model (RNM) and the comprehensive validation process to ensure network realism, feasibility, and applicability to advanced use cases. The talk will provide demonstrations of early versions of synthetic models, along with the lessons learnt from expert engagements to enhance future iterations. Finally, the scenario generation framework, its development plans, and co-ordination with GRID DATA repository teams to house these datasets for public access will also be discussed.

  19. Systems and synthetic biology approaches to alter plant cell walls and reduce biomass recalcitrance.

    Science.gov (United States)

    Kalluri, Udaya C; Yin, Hengfu; Yang, Xiaohan; Davison, Brian H

    2014-12-01

    Fine-tuning plant cell wall properties to render plant biomass more amenable to biofuel conversion is a colossal challenge. A deep knowledge of the biosynthesis and regulation of plant cell wall and a high-precision genome engineering toolset are the two essential pillars of efforts to alter plant cell walls and reduce biomass recalcitrance. The past decade has seen a meteoric rise in use of transcriptomics and high-resolution imaging methods resulting in fresh insights into composition, structure, formation and deconstruction of plant cell walls. Subsequent gene manipulation approaches, however, commonly include ubiquitous mis-expression of a single candidate gene in a host that carries an intact copy of the native gene. The challenges posed by pleiotropic and unintended changes resulting from such an approach are moving the field towards synthetic biology approaches. Synthetic biology builds on a systems biology knowledge base and leverages high-precision tools for high-throughput assembly of multigene constructs and pathways, precision genome editing and site-specific gene stacking, silencing and/or removal. Here, we summarize the recent breakthroughs in biosynthesis and remodelling of major secondary cell wall components, assess the impediments in obtaining a systems-level understanding and explore the potential opportunities in leveraging synthetic biology approaches to reduce biomass recalcitrance. Published 2014. This article is a U.S. Government work and is in the public domain in the USA. Plant Biotechnology Journal published by Society for Experimental Biology and The Association of Applied Biologists and John Wiley & Sons Ltd.

  20. Long-Term Instrumentation, Information, and Control Systems (II&C) Modernization Future Vision and Strategy

    Energy Technology Data Exchange (ETDEWEB)

    Kenneth Thomas

    2012-02-01

    Life extension beyond 60 years for the U.S. operating nuclear fleet requires that instrumentation and control (I&C) systems be upgraded to address aging and reliability concerns. It is impractical for the legacy systems, based on 1970s-vintage technology, to operate over this extended time period. Indeed, utilities have successfully engaged in such replacements when dictated by these operational concerns. However, the replacements have been approached in a like-for-like manner, meaning that they do not take advantage of the inherent capabilities of digital technology to improve business functions. As a result, the improvement in I&C system performance has not translated into bottom-line performance improvement for the fleet. Therefore, wide-scale modernization of the legacy I&C systems could prove to be cost-prohibitive unless the technology is implemented in a manner that enables significant business innovation as a means of offsetting the cost of upgrades. A Future Vision of a transformed nuclear plant operating model based on an integrated digital environment has been developed as part of the Advanced Instrumentation, Information, and Control (II&C) research pathway under the Light Water Reactor (LWR) Sustainability Program. This is a research and development program sponsored by the U.S. Department of Energy (DOE), performed in close collaboration with the nuclear utility industry, to provide the technical foundations for licensing and managing the long-term, safe, and economical operation of current nuclear power plants. DOE's program focus is on longer-term and higher-risk/reward research that contributes to the national policy objectives of energy security and environmental security. The Advanced II&C research pathway is being conducted by the Idaho National Laboratory (INL). The Future Vision is based on a digital architecture that encompasses all aspects of plant operations and support, integrating plant systems, plant work processes, and plant workers in a

  1. Long-Term Instrumentation, Information, and Control Systems (II&C) Modernization Future Vision and Strategy

    Energy Technology Data Exchange (ETDEWEB)

    Kenneth Thomas; Bruce Hallbert

    2013-02-01

    Life extension beyond 60 years for the U.S. operating nuclear fleet requires that instrumentation and control (I&C) systems be upgraded to address aging and reliability concerns. It is impractical for the legacy systems, based on 1970s-vintage technology, to operate over this extended time period. Indeed, utilities have successfully engaged in such replacements when dictated by these operational concerns. However, the replacements have been approached in a like-for-like manner, meaning that they do not take advantage of the inherent capabilities of digital technology to improve business functions. As a result, the improvement in I&C system performance has not translated into bottom-line performance improvement for the fleet. Therefore, wide-scale modernization of the legacy I&C systems could prove to be cost-prohibitive unless the technology is implemented in a manner that enables significant business innovation as a means of offsetting the cost of upgrades. A Future Vision of a transformed nuclear plant operating model based on an integrated digital environment has been developed as part of the Advanced Instrumentation, Information, and Control (II&C) research pathway under the Light Water Reactor (LWR) Sustainability Program. This is a research and development program sponsored by the U.S. Department of Energy (DOE), performed in close collaboration with the nuclear utility industry, to provide the technical foundations for licensing and managing the long-term, safe, and economical operation of current nuclear power plants. DOE's program focus is on longer-term and higher-risk/reward research that contributes to the national policy objectives of energy security and environmental security. The Advanced II&C research pathway is being conducted by the Idaho National Laboratory (INL). The Future Vision is based on a digital architecture that encompasses all aspects of plant operations and support, integrating plant systems, plant work processes, and plant workers in a

  2. Long-Term Instrumentation, Information, and Control Systems (II and C) Modernization Future Vision and Strategy

    International Nuclear Information System (INIS)

    Thomas, Kenneth

    2012-01-01

    Life extension beyond 60 years for the U.S. operating nuclear fleet requires that instrumentation and control (I and C) systems be upgraded to address aging and reliability concerns. It is impractical for the legacy systems, based on 1970s-vintage technology, to operate over this extended time period. Indeed, utilities have successfully engaged in such replacements when dictated by these operational concerns. However, the replacements have been approached in a like-for-like manner, meaning that they do not take advantage of the inherent capabilities of digital technology to improve business functions. As a result, the improvement in I and C system performance has not translated into bottom-line performance improvement for the fleet. Therefore, wide-scale modernization of the legacy I and C systems could prove to be cost-prohibitive unless the technology is implemented in a manner that enables significant business innovation as a means of offsetting the cost of upgrades. A Future Vision of a transformed nuclear plant operating model based on an integrated digital environment has been developed as part of the Advanced Instrumentation, Information, and Control (II and C) research pathway under the Light Water Reactor (LWR) Sustainability Program. This is a research and development program sponsored by the U.S. Department of Energy (DOE), performed in close collaboration with the nuclear utility industry, to provide the technical foundations for licensing and managing the long-term, safe, and economical operation of current nuclear power plants. DOE's program focus is on longer-term and higher-risk/reward research that contributes to the national policy objectives of energy security and environmental security. The Advanced II and C research pathway is being conducted by the Idaho National Laboratory (INL). The Future Vision is based on a digital architecture that encompasses all aspects of plant operations and support, integrating plant systems, plant work processes, and plant

  3. Ensemble of different local descriptors, codebook generation methods and subwindow configurations for building a reliable computer vision system

    Directory of Open Access Journals (Sweden)

    Loris Nanni

    2014-04-01

    The MATLAB code of our system will be publicly available at http://www.dei.unipd.it/wdyn/?IDsezione=3314&IDgruppo_pass=124&preview=. Our free MATLAB toolbox can be used to verify the results of our system. We also hope that our toolbox will serve as the foundation for further explorations by other researchers in the computer vision field.

  4. Blends of synthetic and natural polymers as drug delivery systems for growth hormone.

    Science.gov (United States)

    Cascone, M G; Sim, B; Downes, S

    1995-05-01

    In order to overcome the biological deficiencies of synthetic polymers and to enhance the mechanical characteristics of natural polymers, two synthetic polymers, poly(vinyl alcohol) (PVA) and poly(acrylic acid) (PAA), were blended, in different ratios, with two biological polymers, collagen (C) and hyaluronic acid (HA). These blends were used to prepare films, sponges, and hydrogels which were loaded with growth hormone (GH) to investigate their potential use as drug delivery systems. The GH release was monitored in vitro using a specific enzyme-linked immunosorbent assay. The results show that GH can be released from HA/PAA sponges and from HA/PVA and C/PVA hydrogels. The initial GH concentration used for sample loading affected the total quantity of GH released but not the pattern of release. The rate and quantity of GH released were significantly dependent on the HA or C content of the polymers.

  5. Microscopic vision modeling method by direct mapping analysis for micro-gripping system with stereo light microscope.

    Science.gov (United States)

    Wang, Yuezong; Zhao, Zhizhong; Wang, Junshuai

    2016-04-01

    We present a novel and high-precision microscopic vision modeling method, which can be used for 3D data reconstruction in a micro-gripping system with a stereo light microscope. This method consists of four parts: image distortion correction, disparity distortion correction, an initial vision model, and a residual compensation model. First, the method of image distortion correction is proposed. The image data required for image distortion correction come from stereo images of a calibration sample. The geometric features of image distortions can be predicted through the shape deformation of lines constructed by grid points in the stereo images. Linear and polynomial fitting methods are applied to correct image distortions. Second, shape deformation features of the disparity distribution are discussed, and a method of disparity distortion correction is proposed; a polynomial fitting method is applied to correct disparity distortion. Third, a microscopic vision model is derived, which consists of two parts: an initial vision model and a residual compensation model. We derive the initial vision model from the analysis of the direct mapping relationship between object and image points. The residual compensation model is derived from the residual analysis of the initial vision model. The results show that, with a maximum reconstruction distance of 4.1 mm in the X direction, 2.9 mm in the Y direction, and 2.25 mm in the Z direction, our model achieves a precision of 0.01 mm in the X and Y directions and 0.015 mm in the Z direction. Comparison of our model with the traditional pinhole camera model shows that the two models have similar reconstruction precision for X coordinates; however, the traditional pinhole camera model has lower precision for Y and Z coordinates than our model. The method proposed in this paper is very helpful for micro-gripping systems based on SLM microscopic vision. Copyright © 2016 Elsevier Ltd. All rights reserved.
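Both distortion-correction steps above rely on linear and polynomial fitting. A hedged one-dimensional illustration of the idea with numpy's `polyfit` (the cubic model and distortion coefficients are invented for illustration, not taken from the paper):

```python
import numpy as np

# True grid positions along one image axis, and a synthetic distortion
# applied to them (coefficients are illustrative only).
x_true = np.linspace(-1.0, 1.0, 41)
x_dist = x_true + 0.08 * x_true**3 - 0.02 * x_true**2

# Fit a cubic polynomial mapping distorted -> true coordinates,
# then apply it to undo the distortion.
coeffs = np.polyfit(x_dist, x_true, deg=3)
x_corr = np.polyval(coeffs, x_dist)

residual = np.max(np.abs(x_corr - x_true))
```

In the paper's setting the fit would be driven by detected grid points in calibration images rather than a known analytic distortion, but the correction step itself is the same polynomial regression.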

  6. Development and evaluation of a vision based poultry debone line monitoring system

    Science.gov (United States)

    Usher, Colin T.; Daley, W. D. R.

    2013-05-01

    Efficient deboning is key to optimizing production yield (maximizing the amount of meat removed from a chicken frame while reducing the presence of bones). Many processors evaluate the efficiency of their deboning lines through manual yield measurements, which involve using a special knife to scrape the chicken frame for any remaining meat after it has been deboned. Researchers with the Georgia Tech Research Institute (GTRI) have developed an automated vision system for estimating this yield loss by correlating image characteristics with the amount of meat left on a skeleton. The yield loss estimation is accomplished by the system's image processing algorithms, which correlate image intensity with meat thickness and calculate the total volume of meat remaining. The team has established a correlation between transmitted light intensity and meat thickness with an R2 of 0.94. Employing a special illuminated cone and targeted software algorithms, the system can make measurements in under a second and has up to a 90% correlation with yield measurements performed manually. This same system is also able to determine the probability of bone chips remaining in the output product. The system is able to determine the presence/absence of clavicle bones with an accuracy of approximately 95% and fan bones with an accuracy of approximately 80%. This paper describes in detail the approach and design of the system, results from field testing, and highlights the potential benefits that such a system can provide to the poultry processing industry.
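The intensity-to-thickness correlation above amounts to a calibration fit. A minimal numpy sketch assuming Beer-Lambert-style attenuation of transmitted light (the attenuation coefficient, noise level, and log-linear model are illustrative assumptions, not GTRI's actual calibration):

```python
import numpy as np

rng = np.random.default_rng(0)
thickness = rng.uniform(0.5, 5.0, size=100)           # meat thickness, mm (synthetic)
# Transmitted intensity decays with thickness, plus measurement noise.
intensity = np.exp(-0.4 * thickness) + rng.normal(0, 0.01, size=100)

# Linearize (log intensity vs. thickness) and fit a line.
log_i = np.log(np.clip(intensity, 1e-6, None))
slope, intercept = np.polyfit(thickness, log_i, deg=1)

# Coefficient of determination of the calibration fit.
pred = slope * thickness + intercept
ss_res = np.sum((log_i - pred) ** 2)
ss_tot = np.sum((log_i - log_i.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
```

The fitted line can then be inverted to estimate thickness, and hence remaining meat volume, from pixel intensity.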

  7. Synthetic Biology and Microbial Fuel Cells: Towards Self-Sustaining Life Support Systems

    Science.gov (United States)

    Hogan, John Andrew

    2014-01-01

    NASA ARC and the J. Craig Venter Institute (JCVI) collaborated to investigate the development of advanced microbial fuel cells (MFCs) for biological wastewater treatment and electricity production (electrogenesis). Synthetic biology techniques and integrated hardware advances were investigated to increase system efficiency and robustness, with the intent of increasing power self-sufficiency and potential product formation from carbon dioxide. MFCs possess numerous advantages for space missions, including rapid processing, reduced biomass, and effective removal of organics, nitrogen, and phosphorus. Project efforts include developing space-based MFC concepts, integration analyses, increasing energy efficiency, and investigating novel bioelectrochemical system applications.

  8. Road Interpretation for Driver Assistance Based on an Early Cognitive Vision System

    DEFF Research Database (Denmark)

    Baseski, Emre; Jensen, Lars Baunegaard With; Pugeault, Nicolas

    2009-01-01

In this work, we address the problem of road interpretation for driver assistance based on an early cognitive vision system. The structure of a road and the relevant traffic are interpreted in terms of ego-motion estimation of the car, independently moving objects on the road, lane markers, and large scale maps of the road. We make use of temporal and spatial disambiguation mechanisms to increase the reliability of visually extracted 2D and 3D information. This information is then used to interpret the layout of the road by using lane markers that are detected via Bayesian reasoning. We also estimate the ego-motion of the car, which is used to create large scale maps of the road and also to detect independently moving objects. Sample results for the presented algorithms are shown on a stereo image sequence that has been collected from a structured road.

  9. Adaptive Shape Kernel-Based Mean Shift Tracker in Robot Vision System

    Directory of Open Access Journals (Sweden)

    Chunmei Liu

    2016-01-01

    Full Text Available This paper proposes an adaptive shape kernel-based mean shift tracker using a single static camera for the robot vision system. The question that we address in this paper is how to construct such a kernel shape that is adaptive to the object shape. We perform nonlinear manifold learning technique to obtain the low-dimensional shape space which is trained by training data with the same view as the tracking video. The proposed kernel searches the shape in the low-dimensional shape space obtained by nonlinear manifold learning technique and constructs the adaptive kernel shape in the high-dimensional shape space. It can improve mean shift tracker performance to track object position and object contour and avoid the background clutter. In the experimental part, we take the walking human as example to validate that our method is accurate and robust to track human position and describe human contour.
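The core mean shift iteration behind such trackers moves a point to the kernel-weighted mean of nearby samples until it settles on a density mode. A minimal 2-D sketch with a Gaussian kernel (the paper's adaptive shape kernel and manifold-learned shape space are not reproduced; the sample data are synthetic):

```python
import numpy as np

def mean_shift(x, samples, bandwidth=0.5, iters=100, tol=1e-6):
    """Move x toward the kernel-weighted mean of samples until it converges."""
    for _ in range(iters):
        d2 = ((samples - x) ** 2).sum(axis=1)
        w = np.exp(-d2 / (2 * bandwidth**2))          # Gaussian kernel weights
        x_new = (w[:, None] * samples).sum(axis=0) / w.sum()
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x

# Synthetic "target" samples forming one dense mode, e.g. back-projected
# pixel locations of the tracked object.
rng = np.random.default_rng(2)
samples = rng.normal([2.0, -1.0], 0.3, size=(300, 2))
mode = mean_shift(np.array([0.0, 0.0]), samples)
```

In a tracker, `samples` would be weighted image locations inside the (adaptively shaped) kernel window, and the converged `mode` becomes the new object position.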

  10. A customized vision system for tracking humans wearing reflective safety clothing from industrial vehicles and machinery.

    Science.gov (United States)

    Mosberger, Rafael; Andreasson, Henrik; Lilienthal, Achim J

    2014-09-26

    This article presents a novel approach for vision-based detection and tracking of humans wearing high-visibility clothing with retro-reflective markers. Addressing industrial applications where heavy vehicles operate in the vicinity of humans, we deploy a customized stereo camera setup with active illumination that allows for efficient detection of the reflective patterns created by the worker's safety garments. After segmenting reflective objects from the image background, the interest regions are described with local image feature descriptors and classified in order to discriminate safety garments from other reflective objects in the scene. In a final step, the trajectories of the detected humans are estimated in 3D space relative to the camera. We evaluate our tracking system in two industrial real-world work environments on several challenging video sequences. The experimental results indicate accurate tracking performance and good robustness towards partial occlusions, body pose variation, and a wide range of different illumination conditions.

  11. A Customized Vision System for Tracking Humans Wearing Reflective Safety Clothing from Industrial Vehicles and Machinery

    Science.gov (United States)

    Mosberger, Rafael; Andreasson, Henrik; Lilienthal, Achim J.

    2014-01-01

    This article presents a novel approach for vision-based detection and tracking of humans wearing high-visibility clothing with retro-reflective markers. Addressing industrial applications where heavy vehicles operate in the vicinity of humans, we deploy a customized stereo camera setup with active illumination that allows for efficient detection of the reflective patterns created by the worker's safety garments. After segmenting reflective objects from the image background, the interest regions are described with local image feature descriptors and classified in order to discriminate safety garments from other reflective objects in the scene. In a final step, the trajectories of the detected humans are estimated in 3D space relative to the camera. We evaluate our tracking system in two industrial real-world work environments on several challenging video sequences. The experimental results indicate accurate tracking performance and good robustness towards partial occlusions, body pose variation, and a wide range of different illumination conditions. PMID:25264956

  12. Motorcycles that See: Multifocal Stereo Vision Sensor for Advanced Safety Systems in Tilting Vehicles

    Directory of Open Access Journals (Sweden)

    Gustavo Gil

    2018-01-01

    Full Text Available Advanced driver assistance systems (ADAS) have shown the possibility to anticipate crash accidents and effectively assist road users in critical traffic situations. This is not the case for motorcyclists; in fact, ADAS for motorcycles are still barely developed. Our aim was to study a camera-based sensor for the application of preventive safety in tilting vehicles. We identified two road conflict situations for which automotive remote sensors installed in a tilting vehicle are likely to fail in the identification of critical obstacles. Accordingly, we set up two experiments conducted in real traffic conditions to test our stereo vision sensor. Our promising results support the application of this type of sensor for advanced motorcycle safety applications.

  13. Motorcycles that See: Multifocal Stereo Vision Sensor for Advanced Safety Systems in Tilting Vehicles

    Science.gov (United States)

    2018-01-01

    Advanced driver assistance systems (ADAS) have shown the possibility to anticipate crash accidents and effectively assist road users in critical traffic situations. This is not the case for motorcyclists; in fact, ADAS for motorcycles are still barely developed. Our aim was to study a camera-based sensor for the application of preventive safety in tilting vehicles. We identified two road conflict situations for which automotive remote sensors installed in a tilting vehicle are likely to fail in the identification of critical obstacles. Accordingly, we set up two experiments conducted in real traffic conditions to test our stereo vision sensor. Our promising results support the application of this type of sensor for advanced motorcycle safety applications. PMID:29351267

  14. Motorcycles that See: Multifocal Stereo Vision Sensor for Advanced Safety Systems in Tilting Vehicles.

    Science.gov (United States)

    Gil, Gustavo; Savino, Giovanni; Piantini, Simone; Pierini, Marco

    2018-01-19

    Advanced driver assistance systems (ADAS) have shown the possibility to anticipate crash accidents and effectively assist road users in critical traffic situations. This is not the case for motorcyclists; in fact, ADAS for motorcycles are still barely developed. Our aim was to study a camera-based sensor for the application of preventive safety in tilting vehicles. We identified two road conflict situations for which automotive remote sensors installed in a tilting vehicle are likely to fail in the identification of critical obstacles. Accordingly, we set up two experiments conducted in real traffic conditions to test our stereo vision sensor. Our promising results support the application of this type of sensor for advanced motorcycle safety applications.

  15. Calibrating non-central catadioptric vision system using local target reconstruction

    International Nuclear Information System (INIS)

    Zhou, Fuqiang; Chen, Xin; Chai, Xinghua; Tan, Haishu

    2017-01-01

    In a traditional catadioptric calibration process, a calibration target placed in different views is required, which complicates the calibration process. In order to simplify the calibration of a non-central catadioptric vision system, we developed a local target reconstruction method using a circle-square-combined target. Each circle-square on the target is regarded as an independent sub-region. According to the local mapping of the sub-regions, these can be reconstructed based on the curvature of the curved mirror and the characteristics of the circle-square-combined target. The overall procedure requires only a single image, owing to the reconstructed sub-regions. To evaluate the performance of the proposed method, real experiments have been carried out, and the results show that the proposed method is reliable and efficient. (paper)

  16. Road following for blindBike: an assistive bike navigation system for low vision persons

    Science.gov (United States)

    Grewe, Lynne; Overell, William

    2017-05-01

    Road Following is a critical component of blindBike, our assistive biking application for the visually impaired. This paper describes the overall blindBike system and goals, prominently featuring Road Following, which is the task of directing the user to follow the right side of the road. This work, unlike what is commonly found for self-driving cars, does not depend on lane line markings. 2D computer vision techniques are explored to solve the problem of Road Following. Statistical techniques, including the use of Gaussian Mixture Models, are employed. blindBike is developed as an Android application running on a smartphone device. Other sensors, including the gyroscope and GPS, are utilized. Both urban and suburban scenarios are tested and results are given. The successes and challenges faced by blindBike's Road Following module are presented along with future avenues of work.
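The abstract credits Gaussian Mixture Models among its statistical techniques. A compact EM sketch for a two-component 1-D mixture, e.g. separating road from non-road pixel intensities (the intensity values and two-component setup are synthetic placeholders, not blindBike's actual features):

```python
import numpy as np

def em_gmm_1d(x, iters=100):
    """EM for a two-component 1-D Gaussian mixture."""
    mu = np.array([x.min(), x.max()], dtype=float)    # spread-out initialization
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each point.
        p = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances.
        n = r.sum(axis=0)
        pi = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n
    return pi, mu, var

rng = np.random.default_rng(3)
road = rng.normal(0.3, 0.05, 500)      # darker road-surface intensities
offroad = rng.normal(0.7, 0.05, 500)   # brighter surroundings
pi, mu, var = em_gmm_1d(np.concatenate([road, offroad]))
```

Classifying a pixel by its larger responsibility then yields a road/non-road segmentation from which the right road edge can be estimated.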

  17. Monocular Vision- and IMU-Based System for Prosthesis Pose Estimation During Total Hip Replacement Surgery.

    Science.gov (United States)

    Su, Shaojie; Zhou, Yixin; Wang, Zhihua; Chen, Hong

    2017-06-01

    The average age of population increases worldwide, so does the number of total hip replacement surgeries. Total hip replacement, however, often involves a risk of dislocation and prosthetic impingement. To minimize the risk after surgery, we propose an instrumented hip prosthesis that estimates the relative pose between prostheses intraoperatively and ensures the placement of prostheses within a safe zone. We create a model of the hip prosthesis as a ball and socket joint, which has four degrees of freedom (DOFs), including 3-DOF rotation and 1-DOF translation. We mount a camera and an inertial measurement unit (IMU) inside the hollow ball, or "femoral head prosthesis," while printing customized patterns on the internal surface of the socket, or "acetabular cup." Since the sensors were rigidly fixed to the femoral head prosthesis, measuring its motions poses a sensor ego-motion estimation problem. By matching feature points in images of the reference patterns, we propose a monocular vision based method with a relative error of less than 7% in the 3-DOF rotation and 8% in the 1-DOF translation. Further, to reduce system power consumption, we apply the IMU with its data fused by an extended Kalman filter to replace the camera in the 3-DOF rotation estimation, which yields a less than 4.8% relative error and a 21.6% decrease in power consumption. Experimental results show that the best approach to prosthesis pose estimation is a combination of monocular vision-based translation estimation and IMU-based rotation estimation, and we have verified the feasibility and validity of this system in prosthesis pose estimation.

  18. Integrating Symbolic and Statistical Methods for Testing Intelligent Systems Applications to Machine Learning and Computer Vision

    Energy Technology Data Exchange (ETDEWEB)

    Jha, Sumit Kumar [University of Central Florida, Orlando; Pullum, Laura L [ORNL; Ramanathan, Arvind [ORNL

    2016-01-01

    Embedded intelligent systems ranging from tiny implantable biomedical devices to large swarms of autonomous unmanned aerial systems are becoming pervasive in our daily lives. While we depend on the flawless functioning of such intelligent systems, and often take their behavioral correctness and safety for granted, it is notoriously difficult to generate test cases that expose subtle errors in the implementations of machine learning algorithms. Hence, the validation of intelligent systems is usually achieved by studying their behavior on representative data sets, using methods such as cross-validation and bootstrapping. In this paper, we present a new testing methodology for studying the correctness of intelligent systems. Our approach couples symbolic decision procedures with statistical hypothesis testing. We also use our algorithm to analyze the robustness of a human detection algorithm built using the OpenCV open-source computer vision library. We show that the human detection implementation can fail to detect humans in perturbed video frames even when the perturbations are so small that the corresponding frames look identical to the naked eye.

  19. X-ray-based machine vision system for distal locking of intramedullary nails.

    Science.gov (United States)

    Juneho, F; Bouazza-Marouf, K; Kerr, D; Taylor, A J; Taylor, G J S

    2007-05-01

    In surgical procedures for femoral shaft fracture treatment, current techniques for locking the distal end of intramedullary nails, using two screws, rely heavily on the use of two-dimensional X-ray images to guide three-dimensional bone drilling processes. Therefore, a large number of X-ray images are required, as the surgeon uses his/her skills and experience to locate the distal hole axes on the intramedullary nail. The long-term effects of X-ray radiation and their relation to different types of cancer still remain uncertain. Therefore, there is a need to develop a surgical technique that can limit the use of X-rays during the distal locking procedure. A robotic-assisted orthopaedic surgery system has been developed at Loughborough University to assist orthopaedic surgeons by reducing the irradiation involved in such operations. The system simplifies the current approach as it uses only two near-orthogonal X-ray images to determine the drilling trajectory of the distal locking holes, thereby considerably reducing irradiation to both the surgeon and patient. Furthermore, the system uses robust machine vision features to reduce the surgeon's interaction with the system, thus reducing the overall operating time. Laboratory test results have shown that the proposed system is very robust in the presence of variable noise and contrast in the X-ray images.

  20. Synthetic wind speed scenarios generation for probabilistic analysis of hybrid energy systems

    International Nuclear Information System (INIS)

    Chen, Jun; Rabiti, Cristian

    2017-01-01

    Hybrid energy systems consisting of multiple energy inputs and multiple energy outputs have been proposed as an effective element for enabling the ever increasing penetration of clean energy. In order to better understand the dynamic and probabilistic behavior of hybrid energy systems, this paper proposes a model combining Fourier series and autoregressive moving average (ARMA) processes to characterize historical weather measurements and to generate synthetic weather (e.g., wind speed) data. In particular, the Fourier series is used to characterize the seasonal trend in the historical data, while ARMA is applied to capture the autocorrelation in the residual time series (i.e., the measurements with seasonal trends subtracted). The generated synthetic wind speed data are then used to perform a probabilistic analysis of a particular hybrid energy system configuration, which consists of a nuclear power plant, a wind farm, battery storage, a natural gas boiler, and a chemical plant. Requirements on component ramping rate, the economic and environmental impacts of hybrid energy systems, and the effects of deploying different sizes of batteries in smoothing renewable variability are all investigated. - Highlights: • Computational model to synthesize artificial wind speed data with characteristics consistent with the database. • Fourier series to capture seasonal trends in the database. • Monte Carlo simulation and probabilistic analysis of hybrid energy systems. • Investigation of the effect of batteries in smoothing the variability of wind power generation.
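
The trend-plus-residual decomposition described above can be sketched compactly. The following illustrative snippet uses a single Fourier harmonic and an AR(1) residual in place of a fitted multi-harmonic Fourier series and full ARMA model; all parameter values and names are invented for demonstration:

```python
import numpy as np

def synthesize_wind(n_hours=8760, mean=7.0, amp=2.0, period=24.0,
                    phi=0.8, sigma=0.5, seed=0):
    """Synthetic hourly wind speed: Fourier seasonal trend + AR(1) residual."""
    rng = np.random.default_rng(seed)
    t = np.arange(n_hours)
    # Seasonal trend: one harmonic here; a real fit would use several.
    trend = mean + amp * np.sin(2.0 * np.pi * t / period)
    # Autocorrelated residual, a special case of the ARMA component.
    resid = np.zeros(n_hours)
    for i in range(1, n_hours):
        resid[i] = phi * resid[i - 1] + rng.normal(0.0, sigma)
    return np.clip(trend + resid, 0.0, None)  # wind speed cannot be negative

speeds = synthesize_wind()  # one synthetic year, ready for Monte Carlo runs
```

Repeated draws with different seeds would feed the kind of Monte Carlo probabilistic analysis the paper performs.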

  1. Application of synthetic fire-resistant oils in oil systems of turbine equipment for NPPs

    Science.gov (United States)

    Galimova, L. A.

    2017-10-01

    Results of investigations of the state of the synthetic fire-resistant turbine oil Fyrquel-L in the oil systems of turbosets operated in the equipment and oil supply facilities of nuclear power plants (NPPs) are presented. On the basis of operating experience, it is established that, for reliable and safe operation of turbine equipment whose oil systems use synthetic fire-resistant oils based on phosphoric acid esters, special attention should be paid to two main factors: maintaining the normalized oil water content during operation and storage, and the temperature regime of operation. Methods for maintaining and reducing the acid number are shown. Results of analyses and investigations of the influence of temperature and of changes in the qualitative state of the synthetic fire-resistant oil on its water content are reported. It is shown that fire-resistant turbine oils are highly hydrophilic and, unlike mineral turbine oils, can contain a significant amount of dissolved water, which cannot be removed by separation technologies. It is shown that the more degradation products the oil contains and the higher its acid number, the more dissolved water it can retain. It is demonstrated that organizing chemical control of the total water content of fire-resistant oils using the coulometric method is an important element in supporting the reliable operation of oil systems. It is recommended to use automatic water content monitors to organize daily monitoring of the oil state in the oil system. Recommendations and measures are developed for improving oil operation at NPPs, water content control, the use of oil cleaning plants, and oil transfer to storage during repair works.

  2. Human Factors Engineering as a System in the Vision for Exploration

    Science.gov (United States)

    Whitmore, Mihriban; Smith, Danielle; Holden, Kritina

    2006-01-01

    In order to accomplish NASA's Vision for Exploration while assuring crew safety and productivity, human performance issues must be well integrated into system design from mission conception. To that end, a two-year Technology Development Project (TDP) was funded by NASA Headquarters to develop a systematic method for including the human as a system in NASA's Vision for Exploration. The specific goals of this project are to review current Human Systems Integration (HSI) standards (i.e., industry, military, NASA) and tailor them to selected NASA Exploration activities. Once the methods are proven in the selected domains, a plan will be developed to expand the effort to a wider scope of Exploration activities. The methods will be documented for inclusion in NASA-specific documents (such as the Human Systems Integration Standards, NASA-STD-3000) to be used in future space systems. The current project builds on a previous TDP dealing with Human Factors Engineering processes. That project identified the key phases of the current NASA design lifecycle and outlined the recommended HFE activities that should be incorporated at each phase. The project also resulted in a prototype of a web-based HFE process tool that could be used to support an ideal HFE development process at NASA. This will help to augment the limited human factors resources available by providing a web-based tool that explains the importance of human factors, teaches a recommended process, and then provides the instructions, templates, and examples to carry out the process steps. The HFE activities identified by the previous TDP are being tested in situ for the current effort through support to a specific NASA Exploration activity. Currently, HFE personnel are working with systems engineering personnel to identify HSI impacts for lunar exploration by facilitating the generation of system-level Concepts of Operations (ConOps). For example, medical operations scenarios have been generated for lunar habitation

  3. Static and fatigue biomechanical properties of anterior thoracolumbar instrumentation systems. A synthetic testing model.

    Science.gov (United States)

    Kotani, Y; Cunningham, B W; Parker, L M; Kanayama, M; McAfee, P C

    1999-07-15

    A mechanical testing standard for anterior thoracolumbar instrumentation systems was introduced, using a synthetic model. Twelve recent instrumentation systems were tested in static and fatigue modes. The objectives were to establish a testing standard for anterior thoracolumbar instrumentation systems using a synthetic model and to evaluate the static and fatigue biomechanical properties of 12 anterior thoracolumbar instrumentation systems. Although numerous studies have evaluated the biomechanics of anterior spinal instrumentation using cadaveric or animal tissue, problems of specimen variation, lack of reproducibility, and inability to perform fatigue testing have been pointed out. No study has described a precise synthetic testing standard for anterior thoracolumbar instrumentation systems. An ultra-high-molecular-weight polyethylene cylinder was designed according to the anatomic dimensions of the vertebral body. Two cylinders spanned by spinal instrumentation simulated a total corpectomy defect, and a compressive lateral bending load was applied. The instrumentation assembly was precisely standardized. Static destructive tests and fatigue tests up to 2 million cycles at three load levels were conducted, followed by failure mode analysis. Twelve anterior instrumentation systems, consisting of five plate and seven rod systems, were compared in stiffness, bending strength, and cycles to failure. Static and fatigue test parameters both demonstrated highly significant differences between devices. The stiffness ranged from 280.5 kN/m in the Synthes plate (Synthes, Paoli, PA) to 67.9 kN/m in the Z-plate ATL (SofamorDanek, Memphis, TN). The Synthes plate and Kaneda SR titanium (AcroMed, Cleveland, OH) formed the highest subset in bending strength, at 1516.1 N and 1209.9 N, respectively, whereas the Z-plate showed the lowest value of 407.3 N. There were no substantial differences between plate and rod devices. In fatigue, only three systems: Synthes plate

  4. Computer Vision Based Smart Lane Departure Warning System for Vehicle Dynamics Control

    Directory of Open Access Journals (Sweden)

    Ambarish G. Mohapatra

    2011-09-01

    Collision avoidance systems solve many problems caused by traffic congestion worldwide through a synergy of new information technologies for simulation, real-time control, and communication networks; such a system is characterized as an intelligent vehicle system. Traffic congestion has been increasing worldwide as a result of increased motorization, urbanization, population growth, and changes in population density. Congestion reduces utilization of the transportation infrastructure and increases travel time, air pollution, fuel consumption, and, most importantly, traffic accidents. The main objective of this work is to develop a machine vision system for lane departure detection and warning that measures lane-related parameters such as heading angle, lateral deviation, yaw rate, and sideslip angle from the road-scene image using standard image processing techniques, which can be used to automate the steering of a motor vehicle. The exact position of the steering wheel can be monitored using a steering wheel sensor. The core of this work is a Hough-transform-based edge detection technique for detecting lane departure parameters. The prototype designed for this work has been tested in a running vehicle for the monitoring of real-time lane-related parameters.
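
The Hough-transform step at the core of such a lane detector can be sketched with a minimal accumulator over (θ, ρ). The code below is an illustrative NumPy implementation run on synthetic edge pixels, not the authors' system; the recovered line angle is the kind of quantity from which a heading angle would be derived:

```python
import numpy as np

def hough_peak(points, img_diag, n_theta=180, n_rho=200):
    """Minimal Hough transform: return (theta, rho) of the strongest line."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rhos = np.linspace(-img_diag, img_diag, n_rho)
    acc = np.zeros((n_theta, n_rho), dtype=int)
    for x, y in points:
        # rho = x*cos(theta) + y*sin(theta), voted for every theta at once
        r = x * np.cos(thetas) + y * np.sin(thetas)
        idx = np.clip(np.searchsorted(rhos, r), 0, n_rho - 1)
        acc[np.arange(n_theta), idx] += 1
    ti, ri = np.unravel_index(acc.argmax(), acc.shape)
    return thetas[ti], rhos[ri]

# Synthetic edge pixels of a vertical lane marking at x = 50.
pts = [(50, y) for y in range(0, 100, 2)]
theta, rho = hough_peak(pts, img_diag=150)
```

A production system would first run an edge detector on the camera frame and feed the resulting edge pixels into the accumulator.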

  5. Vision-based Nano Robotic System for High-throughput Non-embedded Cell Cutting.

    Science.gov (United States)

    Shang, Wanfeng; Lu, Haojian; Wan, Wenfeng; Fukuda, Toshio; Shen, Yajing

    2016-03-04

    Cell cutting is a significant task in biology studies, but high-throughput non-embedded cell cutting is still a big challenge for current techniques. This paper proposes a vision-based nano robotic system and realizes automatic non-embedded cell cutting with it. First, the nano robotic system is developed and integrated with a nanoknife inside an environmental scanning electron microscope (ESEM). Then, the positions of the nanoknife and the single cell are recognized, and the distance between them is calculated dynamically based on image processing. To guarantee positioning accuracy and working efficiency, we propose a distance-regulated speed adapting strategy, in which the moving speed is adjusted intelligently based on the distance between the nanoknife and the target cell. The results indicate that automatic non-embedded cutting can be achieved within 1-2 min with low invasiveness, benefiting from the high-precision nanorobot system and the sharp edge of the nanoknife. This research paves the way for high-throughput cell cutting in the cell's natural condition, which is expected to have a significant impact on biology studies, especially for in-situ analysis at the cellular and subcellular scale, such as cell interaction investigation, neural signal transduction, and low-invasive cell surgery.

  6. An inexpensive Arduino-based LED stimulator system for vision research.

    Science.gov (United States)

    Teikari, Petteri; Najjar, Raymond P; Malkki, Hemi; Knoblauch, Kenneth; Dumortier, Dominique; Gronfier, Claude; Cooper, Howard M

    2012-11-15

    Light emitting diodes (LEDs) are being used increasingly as light sources in life sciences applications such as vision research, fluorescence microscopy, and brain-computer interfacing. Here we present an inexpensive but effective visual stimulator based on LEDs and the open-source Arduino microcontroller prototyping platform. The main design goal of our system was to use off-the-shelf and open-source components as much as possible, and to reduce design complexity so that end-users without advanced electronics skills can use the system. The core of the system is a USB-connected Arduino microcontroller platform, originally designed with an emphasis on ease of use for creating interactive physical computing environments. The pulse-width modulation (PWM) output of the Arduino was used to drive the LEDs, allowing linear light intensity control. The visual stimulator was demonstrated in applications such as murine pupillometry, rodent models for cognitive research, and heterochromatic flicker photometry in human psychophysics. These examples illustrate some of the possible applications that can be easily implemented and that are advantageous for students, educational purposes, and universities with limited resources. The LED stimulator system was developed as an open-source project. The software interface was developed in Python, with simplified examples provided for Matlab and LabVIEW. Source code and hardware information are distributed under the GNU General Public License (GPL, version 3). Copyright © 2012 Elsevier B.V. All rights reserved.
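
The linear intensity control via PWM mentioned above amounts to mapping a target relative intensity onto a duty-cycle count (0-255 for Arduino's 8-bit `analogWrite()`), since LED light output is approximately proportional to duty cycle. A minimal sketch of that mapping, shown in Python for clarity (the function name is illustrative, not from the paper's code):

```python
def duty_for_intensity(intensity, resolution_bits=8):
    """Map a relative target intensity in [0, 1] to a PWM duty-cycle count.

    LED light output is roughly proportional to PWM duty cycle, so a
    linear mapping suffices (unlike current-mode dimming, which is
    nonlinear in luminous output).
    """
    if not 0.0 <= intensity <= 1.0:
        raise ValueError("intensity must be in [0, 1]")
    max_count = (1 << resolution_bits) - 1   # 255 for Arduino analogWrite()
    return round(intensity * max_count)

# Half intensity on an 8-bit PWM channel:
half = duty_for_intensity(0.5)   # → 128
```

On the microcontroller itself the returned count would simply be passed to `analogWrite(pin, count)`.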

  8. A System of Driving Fatigue Detection Based on Machine Vision and Its Application on Smart Device

    Directory of Open Access Journals (Sweden)

    Wanzeng Kong

    2015-01-01

    Driving fatigue is one of the most important factors in traffic accidents. In this paper, we propose an improved strategy and a practical system to detect driving fatigue based on machine vision and the AdaBoost algorithm. Several face and eye classifiers are trained in advance with the AdaBoost algorithm. The proposed strategy first detects the face efficiently using classifiers for frontal and deflected faces. Then, the candidate eye region is determined according to the geometric distribution of facial organs. Finally, trained classifiers for open and closed eyes are used to detect eyes in the candidate region quickly and accurately. Indexes consisting of PERCLOS and the duration of the closed state are extracted from video frames in real time. Moreover, the system has been ported to smart devices, that is, smartphones or tablets, which provide their own cameras and strong computing performance. Practical tests demonstrated that the proposed system can detect driver fatigue in real time with high accuracy. As the system has been ported to portable smart devices, it could be widely used for driving fatigue detection in daily life.
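
The two fatigue indexes named above, PERCLOS (the fraction of time the eyes are closed over a window) and the duration of the closed state, are straightforward to compute once each frame is labeled open/closed. A minimal sketch (the frame rate and window length are assumptions, not the paper's settings):

```python
def perclos(eye_states, fps=25, window_s=60):
    """PERCLOS: fraction of frames in the most recent window with eyes closed.

    eye_states: sequence of booleans, True = eyes classified closed in that frame.
    """
    window = eye_states[-fps * window_s:]   # most recent window (or all frames)
    return sum(window) / len(window)

def longest_closure(eye_states, fps=25):
    """Longest continuous eye-closure duration, in seconds."""
    best = run = 0
    for closed in eye_states:
        run = run + 1 if closed else 0
        best = max(best, run)
    return best / fps

# 100 frames at 25 fps, eyes closed for frames 40-59 (a 0.8 s blink):
states = [40 <= i < 60 for i in range(100)]
```

A deployed system would compare these values against alert thresholds; the booleans would come from the open/closed-eye classifiers described in the abstract.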

  9. Computer vision based method and system for online measurement of geometric parameters of train wheel sets.

    Science.gov (United States)

    Zhang, Zhi-Feng; Gao, Zhan; Liu, Yuan-Yuan; Jiang, Feng-Chun; Yang, Yan-Li; Ren, Yu-Fen; Yang, Hong-Jun; Yang, Kun; Zhang, Xiao-Dong

    2012-01-01

    Train wheel sets must be periodically inspected for possible or actual premature failures, and it is important to record the wear history over the full service life of wheel sets. This means that an online measuring system could be of great benefit to overall process control. An online non-contact method for measuring a wheel set's geometric parameters based on the opto-electronic measuring technique is presented in this paper. A charge coupled device (CCD) camera with a selected optical lens and a frame grabber was used to capture the image of the light profile of the wheel set illuminated by a linear laser. The analogue signals of the image were transformed into corresponding digital grey level values. The 'mapping function method' is used to transform image pixel coordinates to space coordinates. The images of wheel sets were captured as the train passed through the measuring system. The rim inside thickness and flange thickness were measured and analyzed. The spatial resolution of the whole image capturing system is about 0.33 mm. Theoretical and experimental results show that the online measurement system based on computer vision can meet wheel set measurement requirements.
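
The abstract does not detail the 'mapping function method'; one common realization of a pixel-to-space mapping is to fit per-axis polynomials to calibration point pairs by least squares. The sketch below is an assumption-laden illustration (the calibration grid, the 0.33 mm/pixel scale, and the offset are all invented):

```python
import numpy as np

def fit_mapping(pix, world, deg=2):
    """Fit a per-axis 2-D polynomial mapping pixel -> space coordinates.

    pix, world: (N, 2) arrays of corresponding calibration points.
    """
    u, v = pix[:, 0], pix[:, 1]
    # Design matrix of monomials u**i * v**j up to total degree `deg`.
    powers = [(i, j) for i in range(deg + 1) for j in range(deg + 1 - i)]
    A = np.stack([u**i * v**j for i, j in powers], axis=1)
    coef, *_ = np.linalg.lstsq(A, world, rcond=None)

    def to_world(uu, vv):
        return np.array([uu**i * vv**j for i, j in powers]) @ coef
    return to_world

# Hypothetical calibration grid: 0.33 mm per pixel plus a fixed offset.
uu, vv = np.meshgrid(np.arange(0.0, 500.0, 100.0), np.arange(0.0, 400.0, 100.0))
pix = np.stack([uu.ravel(), vv.ravel()], axis=1)
world = pix * 0.33 + np.array([12.0, -7.5])
to_world = fit_mapping(pix, world)
pt = to_world(250.0, 150.0)
```

A degree-2 fit also absorbs mild lens distortion that a purely affine calibration would miss, which is a common reason to prefer a polynomial mapping function.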

  10. A method to evaluate residual phase error for polar formatted synthetic aperture radar systems

    Science.gov (United States)

    Musgrove, Cameron; Naething, Richard

    2013-05-01

    Synthetic aperture radar systems that use the polar format algorithm are subject to a focused scene size limit inherent to the polar format algorithm. The classic focused scene size limit is determined from the dominant residual range phase error term. Given the many sources of phase error in a synthetic aperture radar, a system designer is interested in how much phase error results from the assumptions made with the polar format algorithm. Autofocus algorithms have limits to the amount and type of phase error that can be corrected. Current methods correct only one or a few terms of the residual phase error. A system designer needs to be able to evaluate the contribution of the residual or uncorrected phase error terms to determine the new focused scene size limit. This paper describes a method to estimate the complete residual phase error, not just one or a few of the dominant residual terms. This method is demonstrated with polar format image formation, but is equally applicable to other image formation algorithms. A benefit for the system designer is that additional correction terms can be added or deleted from the analysis as necessary to evaluate the resulting effect upon image quality.

  11. Two novel solvent system compositions for protected synthetic peptide purification by centrifugal partition chromatography.

    Science.gov (United States)

    Amarouche, Nassima; Giraud, Matthieu; Forni, Luciano; Butte, Alessandro; Edwards, F; Borie, Nicolas; Renault, Jean-Hugues

    2014-04-11

    Protected synthetic peptide intermediates are often hydrophobic and not soluble in most common solvents. They are thus difficult to purify by preparative reversed-phase high-performance liquid chromatography (RP-HPLC), which is usually used for industrial production, so it is worthwhile to develop alternative chromatographic purification processes. Support-free liquid-liquid chromatographic techniques, including both hydrostatic (centrifugal partition chromatography, CPC) and hydrodynamic (counter-current chromatography, CCC) devices, are mainly used in phytochemical studies but have also been applied to synthetic peptide purification. In this framework, two new biphasic solvent system compositions covering a wide range of polarity were developed to overcome the solubility problems mentioned above. The new systems, composed of heptane/tetrahydrofuran/acetonitrile/dimethylsulfoxide/water and heptane/methyl-tetrahydrofuran/N-methylpyrrolidone/water, were efficiently used for the CPC purification of a 39-mer protected exenatide (Byetta®) and an 8-mer protected peptide intermediate of bivalirudin (Angiox®) synthesis. Phase compositions of the different biphasic solvent systems were determined by ¹H nuclear magnetic resonance. Physico-chemical properties, including viscosity, density, and interfacial tension of these biphasic systems, are also described. Copyright © 2014 Elsevier B.V. All rights reserved.

  12. In vivo visualization of robotically implemented synthetic tracked aperture ultrasound (STRATUS) imaging system using curvilinear array

    Science.gov (United States)

    Zhang, Haichong K.; Aalamifar, Fereshteh; Boctor, Emad M.

    2016-04-01

    Synthetic aperture for ultrasound is a technique that uses a wide aperture in both transmit and receive to enhance ultrasound image quality. Its limitation is the maximum available aperture size, which is determined by the physical size of the ultrasound probe. We propose the Synthetic Tracked Aperture Ultrasound (STRATUS) imaging system to overcome this limitation by extending the beamforming aperture through ultrasound probe tracking. In a setup involving a robotic arm, the ultrasound probe is moved by the arm while its positions on a scanning trajectory are tracked in real time. Data from each pose are synthesized to construct a high-resolution image. In previous studies, we demonstrated feasibility through phantom experiments. However, additional factors such as real-time data collection and motion artifacts must be taken into account when moving to in vivo subjects. In this work, we build a robot-based STRATUS imaging system with continuous data collection capability, with practical implementation in mind. A curvilinear array is used instead of a linear array to benefit from its wider capture angle. We scanned human forearms under two scenarios: in one, the arm was submerged 10 cm deep in a water tank; in the other, the arm was scanned directly from the surface. The image contrast improved by 5.51 dB and 9.96 dB for the underwater scan and the direct scan, respectively. The results indicate the practical feasibility of the STRATUS imaging system, and the technique can potentially be applied to a wide range of the human body.
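
The aperture-extension idea can be illustrated with simple bookkeeping: tracked probe poses shift the element positions into a common frame, and the combined aperture width, rather than the physical probe width, then sets the diffraction-limited lateral resolution (roughly wavelength × depth / aperture width). A hedged sketch with invented numbers, not the paper's actual probe or trajectory:

```python
import numpy as np

def extended_aperture(element_xs, probe_offsets):
    """Combine per-pose element positions into one synthetic aperture.

    element_xs: lateral element positions (m) in the probe frame.
    probe_offsets: lateral probe translations (m) reported by the tracker.
    Returns all element positions in the common frame and the aperture width.
    """
    positions = np.concatenate([element_xs + p for p in probe_offsets])
    return positions, positions.max() - positions.min()

# A hypothetical 38 mm, 128-element probe tracked across three poses 30 mm apart:
elems = np.linspace(-0.019, 0.019, 128)
pos, width = extended_aperture(elems, [-0.03, 0.0, 0.03])
# Lateral resolution scales roughly as wavelength * depth / aperture width,
# so widening the aperture from 38 mm to 98 mm sharpens it by about 2.6x.
```

The real system must also apply per-pose delay corrections before coherent summation; this sketch only shows the geometric aperture growth that motivates the technique.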

  13. New Synthetic Biology Tools to Track Microbial Dynamics in the Earth System

    Science.gov (United States)

    Silberg, J. J.; Masiello, C. A.; Cheng, H. Y.

    2015-12-01

    Microbes drive processes in the Earth system far exceeding their physical scale, mediating significant fluxes in the global C and N cycles. The tools of synthetic biology have the potential to significantly improve our understanding of microbes' role in the Earth system; however, these tools have not yet seen wide laboratory use because synthetically "programmed" microbes typically report by fluorescing (expressing green fluorescent protein), making them challenging to deploy in many Earth materials, the majority of which are not transparent and are heterogeneous (soils, sediments, and biomass). We are developing a new suite of biosensors that report instead by releasing gases. We will provide an overview of the use of gas-reporting biosensors in biogeochemistry and report on the systematic development of these sensors. These sensors will make tractable the testing of gene expression hypotheses derived from metagenomics data. Examples of processes that could be tracked non-invasively with gas sensors include coordination of biofilm formation, nitrification, rhizobial infection of plant roots, and at least some forms of methanogenesis, all of which are managed by easily engineered cell-cell communication systems. Another relatively simple process to track with gas sensors is horizontal gene transfer. Successful development of gas biosensors for Earth science applications will require addressing several issues: engineering the intensity and selectivity of microbial gas production to maximize the signal-to-noise ratio using the tools of synthetic biology; normalizing the gas reporter signal to cell population size, since both the number of cells and gene expression contribute to gas production; managing the effects of gas diffusion on signal shape; and developing multiple gases that can be used to report on multiple biological processes in parallel. We will report on progress addressing each of these issues.

  14. Detection of Two Types of Weed through Machine Vision System: Improving Site-Specific Spraying

    Directory of Open Access Journals (Sweden)

    S Sabzi

    2018-03-01

    Introduction: With the increase in world population, one approach to providing food is the site-specific management system, or so-called precision farming. In this management system, crop production inputs such as fertilizers, lime, herbicides, and seed are managed according to location-specific features of the field, with the aim of reducing waste, increasing revenues, and maintaining environmental quality. Precision farming involves various aspects and is applicable to farm fields at all stages of tillage, planting, and harvesting. Today, in line with the goals of precision farming, specialists controlling weeds, pests, and diseases seek to reduce the amount of chemical substances applied to crops. Although herbicides improve the quality and quantity of agricultural production, the possibility of applying them inappropriately and unreasonably is very high. If the dose is too low, weed control is not performed correctly; if the dose is too high, herbicides can be toxic to crops, can be transferred to the soil and remain there for a long time, and can penetrate to groundwater. By applying herbicides at a variable rate, significant cost savings and reduced damage to crops and the environment become possible. It is evident that in large-scale modern agriculture, individual management of each plant is not possible without advanced technologies. Using machine vision systems is one precision-farming technique for identifying weeds. This study aimed to detect three plants, Centaurea depressa M.B., Malva neglecta, and potato, using a machine vision system. Materials and Methods: In order to train the algorithm of the designed machine vision system, a platform moving at a speed of 10.34 was used for imaging of Marfona potato fields. This platform consisted of a chassis, a camera (DFK23GM021, CMOS, 120 f/s, made in Germany), and a processor system equipped with Matlab 2015

  15. Synthetic Biology and the U.S. Biotechnology Regulatory System: Challenges and Options

    Energy Technology Data Exchange (ETDEWEB)

    Carter, Sarah R. [J. Craig Venter Inst., Rockville, MD (United States); Rodemeyer, Michael [Univ. of Virginia, Charlottesville, VA (United States); Garfinkel, Michele S. [EMBO, Heidelberg (Germany); Friedman, Robert M. [J. Craig Venter Inst., Rockville, MD (United States)

    2014-05-01

    In recent years, a range of genetic engineering techniques referred to as "synthetic biology" has significantly expanded the tool kit available to scientists and engineers, providing them with far greater capabilities to engineer organisms than previous techniques allowed. The field of synthetic biology includes the relatively new ability to synthesize long pieces of DNA from chemicals, as well as improved methods for genetic manipulation and design of genetic pathways to achieve more precise control of biological systems. These advances will help usher in a new generation of genetically engineered microbes, plants, and animals. The JCVI Policy Center team, along with researchers at the University of Virginia and EMBO, examined how well the current U.S. regulatory system for genetically engineered products will handle the near-term introduction of organisms engineered using synthetic biology. In particular, the focus was on those organisms intended to be used or grown directly in the environment, outside of a contained facility. The study concludes that the U.S. regulatory agencies have adequate legal authority to address most, but not all, potential environmental, health and safety concerns posed by these organisms. Such near-term products are likely to represent incremental changes rather than a marked departure from previous genetically engineered organisms. However, the study also identified two key challenges for the regulatory system, which are detailed in the report. First, USDA's authority over genetically engineered plants depends on the use of an older engineering technique that is no longer necessary for many applications. The shift to synthetic biology and other newer genetic

  16. Synthetic Aperture Radar Data Processing on an FPGA Multi-Core System

    DEFF Research Database (Denmark)

    Schleuniger, Pascal; Kusk, Anders; Dall, Jørgen

    2013-01-01

    Synthetic aperture radar, SAR, is a high-resolution imaging radar. The direct back-projection algorithm allows for a precise SAR output image reconstruction and can compensate for deviations in the flight track of airborne radars. Often, graphics processing units (GPUs) are used for data processing, as the back-projection algorithm is computationally expensive and highly parallel. However, GPUs may not be an appropriate solution for applications with strictly constrained space and power requirements. In this paper, we describe how we map a SAR direct back-projection application to a multi-core system ...

  17. Image formation simulation for computer-aided inspection planning of machine vision systems

    Science.gov (United States)

    Irgenfried, Stephan; Bergmann, Stephan; Mohammadikaji, Mahsa; Beyerer, Jürgen; Dachsbacher, Carsten; Wörn, Heinz

    2017-06-01

    In this work, a simulation toolset for Computer Aided Inspection Planning (CAIP) of systems for automated optical inspection (AOI) is presented, along with a versatile two-robot setup for verification of simulation and system planning results. The toolset helps to narrow down the large design space of optical inspection systems in interaction with a system expert. The image formation taking place in optical inspection systems is simulated using GPU-based real-time graphics and high-quality off-line rendering. The simulation pipeline allows a stepwise optimization of the system, from fast evaluation of surface patch visibility based on real-time graphics up to evaluation of image processing results based on off-line global illumination calculation. A focus of this work is the dependency of simulation quality on measuring, modeling, and parameterizing the optical surface properties of the object to be inspected. The applicability to real-world problems is demonstrated by the example of planning a 3D laser scanner application. Qualitative and quantitative comparisons of synthetic and real images are presented.

  18. VISION development

    International Nuclear Information System (INIS)

    Hernandez, J.E.; Sherwood, R.J.; Whitman, S.R.

    1994-01-01

    VISION is a flexible and extensible object-oriented programming environment for prototyping computer-vision and pattern-recognition algorithms. This year's effort focused on three major areas: documentation, graphics, and support for new applications.

  19. Vision-based interaction

    CERN Document Server

    Turk, Matthew

    2013-01-01

    In its early years, the field of computer vision was largely motivated by researchers seeking computational models of biological vision and solutions to practical problems in manufacturing, defense, and medicine. For the past two decades or so, there has been an increasing interest in computer vision as an input modality in the context of human-computer interaction. Such vision-based interaction can endow interactive systems with visual capabilities similar to those important to human-human interaction, in order to perceive non-verbal cues and incorporate this information in applications such

  20. A Vision-Based Automated Guided Vehicle System with Marker Recognition for Indoor Use

    Science.gov (United States)

    Lee, Jeisung; Hyun, Chang-Ho; Park, Mignon

    2013-01-01

    We propose an intelligent vision-based Automated Guided Vehicle (AGV) system using fiduciary markers. In this paper, we explore a low-cost, efficient vehicle guiding method using a consumer grade web camera and fiduciary markers. In the proposed method, the system uses fiduciary markers with a capital letter or triangle indicating direction in it. The markers are very easy to produce, manipulate, and maintain. The marker information is used to guide a vehicle. We use hue and saturation values in the image to extract marker candidates. When the known size fiduciary marker is detected by using a bird's eye view and Hough transform, the positional relation between the marker and the vehicle can be calculated. To recognize the character in the marker, a distance transform is used. The probability of feature matching was calculated by using a distance transform, and a feature having high probability is selected as a captured marker. Four directional signals and 10 alphabet features are defined and used as markers. A 98.87% recognition rate was achieved in the testing phase. The experimental results with the fiduciary marker show that the proposed method is a solution for an indoor AGV system. PMID:23966180

  1. A Real-Time Range Finding System with Binocular Stereo Vision

    Directory of Open Access Journals (Sweden)

    Xiao-Bo Lai

    2012-05-01

    Full Text Available To acquire range information for mobile robots, a TMS320DM642 DSP-based range finding system with binocular stereo vision is proposed. Firstly, paired images of the target are captured and preprocessed with a Gaussian filter as well as improved Sobel kernels. Secondly, a feature-based local stereo matching algorithm is performed so that the spatial location of the target can be determined. Finally, in order to improve the reliability and robustness of the stereo matching algorithm under complex conditions, a confidence filter and a left-right consistency filter are investigated to eliminate mismatched points. In addition, the range finding algorithm is implemented in the DSP/BIOS operating system to achieve real-time control. Experimental results show that the average accuracy of range finding is more than 99% when measuring single-point distances of 120 cm in a simple scenario, and that the algorithm takes about 39 ms per ranging operation in a complex scenario. The effectiveness, as well as the feasibility, of the proposed range finding system is verified.
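
The disparity-to-range relation and the left-right consistency filter mentioned above can be sketched as follows (a simplified pinhole-model illustration; the names and the tolerance parameter are assumptions, not the paper's implementation):

```python
def stereo_range(disparity_px, focal_px, baseline_m):
    """Pinhole binocular stereo: depth Z = f * B / d, with the focal length f
    in pixels, the baseline B in metres and the disparity d in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

def left_right_consistent(disp_left, disp_right, x, y, tol=1):
    """Left-right consistency filter: keep the left-image match at (x, y)
    only if the right image's disparity at the matched column points back
    to within `tol` pixels; occluded or mismatched points fail this test."""
    d = disp_left[y][x]
    return abs(disp_right[y][x - d] - d) <= tol
```

For example, with an (assumed) 800-pixel focal length and 15 cm baseline, a 100-pixel disparity corresponds to a range of 1.2 m.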

  2. Information theory analysis of sensor-array imaging systems for computer vision

    Science.gov (United States)

    Huck, F. O.; Fales, C. L.; Park, S. K.; Samms, R. W.; Self, M. O.

    1983-01-01

    Information theory is used to assess the performance of sensor-array imaging systems, with emphasis on the performance obtained with image-plane signal processing. By electronically controlling the spatial response of the imaging system, as suggested by the mechanism of human vision, it is possible to trade-off edge enhancement for sensitivity, increase dynamic range, and reduce data transmission. Computational results show that: signal information density varies little with large variations in the statistical properties of random radiance fields; most information (generally about 85 to 95 percent) is contained in the signal intensity transitions rather than levels; and performance is optimized when the OTF of the imaging system is nearly limited to the sampling passband to minimize aliasing at the cost of blurring, and the SNR is very high to permit the retrieval of small spatial detail from the extensively blurred signal. Shading the lens aperture transmittance to increase depth of field and using a regular hexagonal sensor-array instead of square lattice to decrease sensitivity to edge orientation also improves the signal information density up to about 30 percent at high SNRs.

  3. Japan's universal long-term care system reform of 2005: containing costs and realizing a vision.

    Science.gov (United States)

    Tsutsui, Takako; Muramatsu, Naoko

    2007-09-01

    Japan implemented a mandatory social long-term care insurance (LTCI) system in 2000, making long-term care services a universal entitlement for every senior. Although this system has grown rapidly, reflecting its popularity among seniors and their families, it faces several challenges, including skyrocketing costs. This article describes the recent reform initiated by the Japanese government to simultaneously contain costs and realize a long-term vision of creating a community-based, prevention-oriented long-term care system. The reform involves introduction of two major elements: "hotel" and meal charges for nursing home residents and new preventive benefits. They were intended to reduce economic incentives for institutionalization, dampen provider-induced demand, and prevent seniors from being dependent by intervening while their need levels are still low. The ongoing LTCI reform should be critically evaluated against the government's policy intentions as well as its effect on seniors, their families, and society. The story of this reform is instructive for other countries striving to develop coherent, politically acceptable long-term care policies.

  4. A vision-based automated guided vehicle system with marker recognition for indoor use.

    Science.gov (United States)

    Lee, Jeisung; Hyun, Chang-Ho; Park, Mignon

    2013-08-07

    We propose an intelligent vision-based Automated Guided Vehicle (AGV) system using fiduciary markers. In this paper, we explore a low-cost, efficient vehicle guiding method using a consumer grade web camera and fiduciary markers. In the proposed method, the system uses fiduciary markers with a capital letter or triangle indicating direction in it. The markers are very easy to produce, manipulate, and maintain. The marker information is used to guide a vehicle. We use hue and saturation values in the image to extract marker candidates. When the known size fiduciary marker is detected by using a bird's eye view and Hough transform, the positional relation between the marker and the vehicle can be calculated. To recognize the character in the marker, a distance transform is used. The probability of feature matching was calculated by using a distance transform, and a feature having high probability is selected as a captured marker. Four directional signals and 10 alphabet features are defined and used as markers. A 98.87% recognition rate was achieved in the testing phase. The experimental results with the fiduciary marker show that the proposed method is a solution for an indoor AGV system.
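
The distance-transform feature matching used in this marker-recognition scheme can be illustrated with a minimal chamfer-matching sketch (brute-force and illustrative only; the paper's actual probability computation may differ):

```python
import numpy as np

def distance_transform(edges):
    """Brute-force distance transform: every cell receives the Euclidean
    distance to the nearest edge pixel (O(rows*cols*edges), fine for the
    small binary marker templates considered here)."""
    fg = np.argwhere(edges)
    out = np.empty(edges.shape)
    for i in range(edges.shape[0]):
        for j in range(edges.shape[1]):
            out[i, j] = np.min(np.hypot(fg[:, 0] - i, fg[:, 1] - j))
    return out

def match_probability(template_edges, dt_capture):
    """Chamfer-style matching: average distance from the template's edge
    pixels to the nearest captured edge (read off the capture's distance
    transform), mapped to (0, 1] so the best-matching feature scores
    highest. The captured marker is the feature with the highest score."""
    pts = np.argwhere(template_edges)
    score = dt_capture[pts[:, 0], pts[:, 1]].mean()
    return 1.0 / (1.0 + score)
```
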

  5. Recent Progress on Systems and Synthetic Biology Approaches to Engineer Fungi As Microbial Cell Factories.

    Science.gov (United States)

    Amores, Gerardo Ruiz; Guazzaroni, María-Eugenia; Arruda, Letícia Magalhães; Silva-Rocha, Rafael

    2016-04-01

    Filamentous fungi are remarkable organisms naturally specialized in deconstructing plant biomass, and this feature has tremendous potential for biofuel production from renewable sources. The past decades have been marked by remarkable progress in the genetic engineering of fungi to generate industry-compatible strains needed for some biotech applications. In this sense, progress in the field has been driven by the utilization of high-throughput techniques to gain deep understanding of the molecular machinery controlling the physiology of these organisms, thus starting the Systems Biology era of fungi. Additionally, genetic engineering has been extensively applied to modify well-characterized promoters in order to construct new expression systems with enhanced performance under the conditions of interest. In this review, we discuss some aspects related to significant progress in the understanding and engineering of fungi for biotechnological applications, with special focus on the construction of synthetic promoters and circuits in organisms relevant for industry. Different engineering approaches are shown, and their potential and limitations for the construction of complex synthetic circuits in these organisms are examined. Finally, we discuss the impact of engineered promoter architecture on the single-cell behavior of the system, an often-neglected relationship with a tremendous impact on the final performance of the process of interest. We expect to provide here some new directions to drive future research directed to the construction of high-performance, engineered fungal strains working as microbial cell factories.

  6. Differential GNSS and Vision-Based Tracking to Improve Navigation Performance in Cooperative Multi-UAV Systems

    Directory of Open Access Journals (Sweden)

    Amedeo Rodi Vetrella

    2016-12-01

    Full Text Available Autonomous navigation of micro-UAVs is typically based on the integration of low cost Global Navigation Satellite System (GNSS receivers and Micro-Electro-Mechanical Systems (MEMS-based inertial and magnetic sensors to stabilize and control the flight. The resulting navigation performance in terms of position and attitude accuracy may not suffice for other mission needs, such as the ones relevant to fine sensor pointing. In this framework, this paper presents a cooperative UAV navigation algorithm that allows a chief vehicle, equipped with inertial and magnetic sensors, a Global Positioning System (GPS receiver, and a vision system, to improve its navigation performance (in real time or in the post processing phase exploiting formation flying deputy vehicles equipped with GPS receivers. The focus is set on outdoor environments and the key concept is to exploit differential GPS among vehicles and vision-based tracking (DGPS/Vision to build a virtual additional navigation sensor whose information is then integrated in a sensor fusion algorithm based on an Extended Kalman Filter. The developed concept and processing architecture are described, with a focus on DGPS/Vision attitude determination algorithm. Performance assessment is carried out on the basis of both numerical simulations and flight tests. In the latter ones, navigation estimates derived from the DGPS/Vision approach are compared with those provided by the onboard autopilot system of a customized quadrotor. The analysis shows the potential of the developed approach, mainly deriving from the possibility to exploit magnetic- and inertial-independent accurate attitude information.
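
The sensor-fusion step described above, in which the DGPS/Vision-derived quantity enters an Extended Kalman Filter as an additional measurement, has the standard EKF update form (a generic sketch, not the authors' filter design):

```python
import numpy as np

def ekf_update(x, P, z, h, H, R):
    """One EKF measurement update. In the cooperative scheme, the
    DGPS/Vision-derived attitude acts as an extra measurement z = h(x) + v
    fused with the inertial/magnetic state estimate x (covariance P)."""
    y = z - h(x)                          # innovation
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```

With a scalar state and equal prior and measurement variances, the update averages the two sources and halves the uncertainty, which is the intuition behind adding the virtual DGPS/Vision sensor.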

  7. Performance of the CellaVision ® DM96 system for detecting red blood cell morphologic abnormalities

    Directory of Open Access Journals (Sweden)

    Christopher L Horn

    2015-01-01

    Full Text Available Background: Red blood cell (RBC) analysis is a key feature in the evaluation of hematological disorders. The gold standard light microscopy technique has high sensitivity, but is a relatively time-consuming and labor-intensive procedure. This study tested the sensitivity and specificity of the gold standard light microscopy manual differential against the CellaVision® DM96 (CCS; CellaVision, Lund, Sweden) automated image analysis system, which takes digital images of samples at high magnification and classifies them with an artificial neural network based on a database of cells preclassified according to RBC morphology. Methods: In this study, 212 abnormal peripheral blood smears within the Calgary Laboratory Services network of hospital laboratories were selected and assessed for 15 different RBC morphologic abnormalities by manual microscopy. The same samples were reassessed as a manual addition from the instrument screen using the CellaVision® DM96 system with 8 microscope high-power fields (×100 objective and a 22 mm ocular). The results of the investigation were then used to calculate the sensitivity and specificity of the CellaVision® DM96 system in reference to light microscopy. Results: The sensitivity ranged from a low of 33% (RBC agglutination) to a high of 100% (sickle cells, stomatocytes). The remainder of the RBC abnormalities fell somewhere between these two extremes. The specificity ranged from 84% (schistocytes) to 99.5% (sickle cells, stomatocytes). Conclusions: Our results showed generally high specificities but variable sensitivities for RBC morphologic abnormalities.
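
The reported per-abnormality figures follow directly from the standard screening-test definitions (a trivial but clarifying sketch; the counts in the test below are illustrative, not the study's data):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Standard screening metrics with manual microscopy as the reference:
    sensitivity = TP / (TP + FN)  -- fraction of true abnormalities flagged
    specificity = TN / (TN + FP)  -- fraction of normals correctly passed"""
    return tp / (tp + fn), tn / (tn + fp)
```
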

  8. Computational vision

    CERN Document Server

    Wechsler, Harry

    1990-01-01

    The book is suitable for advanced courses in computer vision and image processing. In addition to providing an overall view of computational vision, it contains extensive material on topics that are not usually covered in computer vision texts (including parallel distributed processing and neural networks) and considers many real applications.

  9. An advanced vision-based system for real-time displacement measurement of high-rise buildings

    International Nuclear Information System (INIS)

    Lee, Jong-Han; Ho, Hoai-Nam; Lee, Jong-Jae; Shinozuka, Masanobu

    2012-01-01

    This paper introduces an advanced vision-based system for dynamic real-time displacement measurement of high-rise buildings using a partitioning approach. The partitioning method is based on the successive estimation of relative displacements and rotational angles at several floors using a multiple vision-based displacement measurement system. In this study, two significant improvements were made to realize the partitioning method: (1) time synchronization, (2) real-time dynamic measurement. Displacement data and time synchronization information are wirelessly transferred via a network using the TCP/IP protocol. The time synchronization process is periodically conducted by the master system to guarantee the system time at the master and slave systems are synchronized. The slave system is capable of dynamic real-time measurement and it is possible to economically expand measurement points at slave levels using commercial devices. To verify the accuracy and feasibility of the synchronized multi-point vision-based system and partitioning approach, many laboratory tests were carried out on a three-story steel frame model. Furthermore, several tests were conducted on a five-story steel frame tower equipped with a hybrid mass damper to experimentally confirm the effectiveness of the proposed system. (paper)
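
The partitioning approach, in which the absolute displacement is accumulated from per-segment relative displacements and rotational angles, can be sketched under a planar small-angle assumption (an illustrative simplification, not the paper's formulation):

```python
def top_displacement(segments):
    """Partitioning sketch (planar, small angles): each vision unit measures
    its segment's relative horizontal displacement d and the rotation theta
    (rad) at its top; a rotation at one level shifts everything above it by
    roughly theta * h, where h is the height remaining above that level.

    segments: list of (d, theta, h) tuples, ordered bottom to top.
    """
    return sum(d + theta * h for d, theta, h in segments)
```

This is why the time synchronization emphasized above matters: the per-segment measurements are only valid to sum if they refer to the same instant.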

  10. Machine Learning and Computer Vision System for Phenotype Data Acquisition and Analysis in Plants.

    Science.gov (United States)

    Navarro, Pedro J; Pérez, Fernando; Weiss, Julia; Egea-Cortines, Marcos

    2016-05-05

    Phenomics is a technology-driven approach with a promising future for obtaining unbiased data on biological systems. Image acquisition is relatively simple. However, data handling and analysis are not as developed as the sampling capacities. We present a system based on machine learning (ML) algorithms and computer vision intended to solve the automatic phenotype data analysis of plant material. We developed a growth chamber able to accommodate species of various sizes. Night image acquisition requires near-infrared lighting. For the ML process, we tested three different algorithms: k-nearest neighbour (kNN), Naive Bayes Classifier (NBC), and Support Vector Machine (SVM). Each ML algorithm was executed with different kernel functions, and each was trained with raw data and two types of data normalisation. Different metrics were computed to determine the optimal configuration of the machine learning algorithms. We obtained a performance of 99.31% with kNN for RGB images and 99.34% with SVM for NIR. Our results show that ML techniques can speed up phenomic data analysis. Furthermore, both RGB and NIR images can be segmented successfully, but they may require different ML algorithms for segmentation.
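
A minimal version of the kNN classifier with min-max normalisation, one of the tested configurations, might look like this (an illustrative sketch; the study's feature extraction and kernel choices are not reproduced):

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Plain k-nearest-neighbour vote on min-max normalised features."""
    lo, hi = X_train.min(axis=0), X_train.max(axis=0)
    Xn = (X_train - lo) / (hi - lo)       # normalise training features to [0, 1]
    xn = (x - lo) / (hi - lo)             # apply the same scaling to the query
    d = np.linalg.norm(Xn - xn, axis=1)   # Euclidean distance to each sample
    votes = y_train[np.argsort(d)[:k]]    # labels of the k closest samples
    vals, counts = np.unique(votes, return_counts=True)
    return vals[np.argmax(counts)]        # majority vote
```
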

  11. Machine Vision Handbook

    CERN Document Server

    2012-01-01

    The automation of visual inspection is becoming more and more important in modern industry as a consistent, reliable means of judging the quality of raw materials and manufactured goods . The Machine Vision Handbook  equips the reader with the practical details required to engineer integrated mechanical-optical-electronic-software systems. Machine vision is first set in the context of basic information on light, natural vision, colour sensing and optics. The physical apparatus required for mechanized image capture – lenses, cameras, scanners and light sources – are discussed followed by detailed treatment of various image-processing methods including an introduction to the QT image processing system. QT is unique to this book, and provides an example of a practical machine vision system along with extensive libraries of useful commands, functions and images which can be implemented by the reader. The main text of the book is completed by studies of a wide variety of applications of machine vision in insp...

  12. Virtual expansion of the technical vision system for smart vehicles based on multi-agent cooperation model

    Science.gov (United States)

    Krapukhina, Nina; Senchenko, Roman; Kamenov, Nikolay

    2017-12-01

    Road safety and driving in dense traffic flows pose challenges in receiving information about surrounding moving objects, some of which can be in the vehicle's blind spot. This work suggests an approach to virtual monitoring of the objects in the current road scene via a system of cooperating smart vehicles exchanging information. It also describes the intelligent agent model, and provides methods and algorithms for identifying and evaluating various characteristics of moving objects in the video flow. The authors also suggest ways of integrating the information from the technical vision system into the model, with further expansion of virtual monitoring for the system's objects. Implementation of this approach can help to expand the virtual field of view of a technical vision system.

  13. Stress Testing Water Resource Systems at Regional and National Scales with Synthetic Drought Event Sets

    Science.gov (United States)

    Hall, J. W.; Mortazavi-Naeini, M.; Coxon, G.; Guillod, B. P.; Allen, M. R.

    2017-12-01

    Water resources systems can fail to deliver the services required by water users (and deprive the environment of flow requirements) in many different ways. In an attempt to make systems more resilient, they have also been made more complex, for example through a growing number of large-scale transfers, optimized storages and reuse plants. These systems may be vulnerable to complex variants of hydrological variability in space and time, and behavioural adaptations by water users. In previous research we have used non-parametric stochastic streamflow generators to test the vulnerability of water resource systems. Here we use a very large ensemble of regional climate model outputs from the weather@home crowd-sourced citizen science project, which has generated more than 30,000 years of synthetic weather for present and future climates in the UK and western Europe, using the HadAM3P regional climate model. These simulations have been constructed in order to preserve prolonged drought characteristics, through treatment of long-memory processes in ocean circulations and soil moisture. The weather simulations have been propagated through the newly developed DynaTOP national hydrological model for Britain, in order to provide low flow simulations at points of water withdrawal for public water supply, energy and agricultural abstractors. We have used the WATHNET water resource simulation model, set up for the Thames Basin and for all of the large water resource zones in England, to simulate the frequency, severity and duration of water shortages in all of these synthetic weather conditions. In particular, we have sought to explore systemic vulnerabilities associated with inter-basin transfers and the trade-offs between different water users.
This analytical capability is providing the basis for (i) implementation of the Duty of Resilience, which has been placed upon the water industry in the 2014 Water Act and (ii) testing reformed abstraction arrangements which the UK government
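
The frequency, severity and duration statistics of water shortages mentioned above can be computed from a synthetic flow series with a simple threshold-deficit pass (an illustrative sketch of the event statistics, not the WATHNET simulation):

```python
def drought_events(flows, threshold):
    """One pass over a synthetic flow series: returns the number of
    below-threshold spells, plus the worst severity (cumulative deficit)
    and the longest duration among them."""
    events, deficit, length = [], 0.0, 0
    for q in flows:
        if q < threshold:
            deficit += threshold - q      # accumulate deficit volume
            length += 1
        elif length:
            events.append((length, deficit))
            deficit, length = 0.0, 0
    if length:                            # close a spell running to the end
        events.append((length, deficit))
    n = len(events)
    worst_severity = max((d for _, d in events), default=0.0)
    worst_duration = max((l for l, _ in events), default=0)
    return n, worst_severity, worst_duration
```

Applied to each of the 30,000+ synthetic years, statistics like these characterise how often and how badly a water resource zone would fail.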

  14. Fully synthetic phage-like system for screening mixtures of small molecules in live cells.

    Science.gov (United States)

    Byk, Gerardo; Partouche, Shirly; Weiss, Aryeh; Margel, Shlomo; Khandadash, Raz

    2010-05-10

    A synthetic "phage-like" system was designed for screening mixtures of small molecules in live cells. The core of the system consists of 2 μm diameter cross-linked monodispersed microspheres bearing a panel of fluorescent tags and peptides or small molecules either directly synthesized on or covalently conjugated to the microspheres. The microsphere mixtures were screened for affinity to cell line PC-3 (prostate cancer model) by incubation with live cells and, as with phage-display peptide methods, unbound microspheres were removed by repeated washings followed by total lysis of cells and analysis of the bound microspheres by flow cytometry. Similar to phage-display peptide screening, this method can be applied even in the absence of prior information about the cellular targets of the candidate ligands, which makes the system especially interesting for selection of molecules with high affinity for desired cells, tissues, or tumors. The advantage of the proposed system is the possibility of screening synthetic non-natural peptides or small molecules that cannot be expressed and screened using phage display libraries. A library composed of small molecules synthesized by the Ugi reaction was screened, and a small molecule, Rak-2, which strongly binds to PC-3 cells, was found. Rak-2 was then individually synthesized and validated in a complementary whole-cell-based binding assay, as well as by live cell microscopy. This new system demonstrates that a mixture of molecules bound to subcellular-sized microspheres can be screened on plated cells. Together with other methods using subcellular-sized particles for cellular multiplexing, this method represents an important milestone toward high-throughput screening of mixtures of small molecules in live cells and in vivo, with potential applications in the fields of drug delivery and diagnostic imaging.

  15. Flight Testing of Night Vision Systems in Rotorcraft (Test en vol de systemes de vision nocturne a bord des aeronefs a voilure tournante)

    Science.gov (United States)

    2007-07-01

    This AGARDograph is limited to the flight testing of light-amplification (image-intensifier) night vision devices; it does not address other systems such as thermal imaging. Topics covered include measuring signal-to-noise ratio, measuring the modulation transfer function, checks for imaging defects, and workload assessment with the NASA Task Load Index (TLX).

  16. A vision-based tool for the control of hydraulic structures in sewer systems

    Science.gov (United States)

    Nguyen, L.; Sage, D.; Kayal, S.; Jeanbourquin, D.; Rossi, L.

    2009-04-01

    The monitoring software has the following requirements: visual analysis of particular hydraulic behaviors, automatic vision-based flow measurements, an automatic alarm system for particular events (overflows, risk of flooding, etc.), a database for data management (images, events, measurements, etc.), and the ability to be controlled remotely. The software is implemented in a modular server/client architecture under the LabVIEW development system. We have conducted conclusive in situ tests in various sewer configurations (CSOs, storm-water sewerage, WWTP); they have shown the ability of the HydroPix to perform accurate monitoring of hydraulic structures. Visual information provided a better understanding of the flow behavior in complex and difficult environments.

  17. Color machine vision system for process control in the ceramics industry

    Science.gov (United States)

    Penaranda Marques, Jose A.; Briones, Leoncio; Florez, Julian

    1997-08-01

    This paper is focused on the design of a machine vision system to solve a problem found in the manufacturing process of high-quality polished porcelain tiles: sorting the tiles according to the criterion 'same appearance to the human eye', or in other words, by color and visual texture. This problem was tackled in 1994 and led to a prototype which became fully operational at production scale in a manufacturing plant named Porcelanatto, S.A. The system has evolved and has been adapted to meet the particular needs of this manufacturing company. Among the main issues that have been improved, it is worth pointing out: (1) better ability to discern subtle variations in color or texture, which are the main features of visual appearance; (2) inspection time reduction, as a result of algorithm optimization and increasing computing power, so that 100 percent of the production can be inspected, reaching a maximum of 120 tiles/sec.; (3) adaptation to the different types and models of tiles manufactured, which vary not only in their visible patterns but also in dimensions, formats, thickness and tolerances. In this sense, one major problem has been reaching an optimal compromise: the system must be sensitive enough to discern subtle variations in color, but at the same time insensitive to thickness variations in the tiles. The following parts have been used to build the system: an RGB color line-scan camera with 12 bits per channel, a PCI frame grabber, a PC, fiber-optic-based illumination, and the algorithm explained in section 4.

  18. A real-time vision-based hand gesture interaction system for virtual EAST

    Energy Technology Data Exchange (ETDEWEB)

    Wang, K.R., E-mail: wangkr@mail.ustc.edu.cn [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, Anhui (China); University of Science and Technology of China, Hefei, Anhui (China); Xiao, B.J.; Xia, J.Y. [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, Anhui (China); University of Science and Technology of China, Hefei, Anhui (China); Li, Dan [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, Anhui (China); Luo, W.L. [709th Research Institute, Shipbuilding Industry Corporation (China)

    2016-11-15

    Highlights: • Hand gesture interaction is introduced to EAST model interaction for the first time. • We can interact with the EAST model using a bare hand and a web camera. • We can interact with the EAST model at a distance from the screen. • Interaction is free, direct and effective. - Abstract: The virtual Experimental Advanced Superconducting Tokamak device (VEAST) is a very complicated 3D model, and traditional interaction devices are limited and inefficient for interacting with it. However, with the development of human-computer interaction (HCI), hand gesture interaction has become a popular choice in recent years. In this paper, we propose a real-time vision-based hand gesture interaction system for VEAST. Using one web camera, we can use a bare hand to interact with VEAST at a certain distance, which proves to be more efficient and direct than a mouse. The system is composed of four modules: initialization, hand gesture recognition, interaction control and system settings. The hand gesture recognition method is based on codebook (CB) background modeling and open-finger counting. Firstly, we build a background model with the CB algorithm. Then, we segment the hand region by detecting skin color regions with an "elliptical boundary model" in the CbCr plane of the YCbCr color space. Open fingers, used as a key feature of a gesture, can be tracked by an improved curvature-based method. Based on this method, we define nine gestures for interaction control of VEAST. Finally, we design a test to demonstrate the effectiveness of our system.

  19. A real-time vision-based hand gesture interaction system for virtual EAST

    International Nuclear Information System (INIS)

    Wang, K.R.; Xiao, B.J.; Xia, J.Y.; Li, Dan; Luo, W.L.

    2016-01-01

    Highlights: • Hand gesture interaction is introduced to EAST model interaction for the first time. • We can interact with the EAST model using a bare hand and a web camera. • We can interact with the EAST model at a distance from the screen. • Interaction is free, direct and effective. - Abstract: The virtual Experimental Advanced Superconducting Tokamak device (VEAST) is a very complicated 3D model, and traditional interaction devices are limited and inefficient for interacting with it. However, with the development of human-computer interaction (HCI), hand gesture interaction has become a popular choice in recent years. In this paper, we propose a real-time vision-based hand gesture interaction system for VEAST. Using one web camera, we can use a bare hand to interact with VEAST at a certain distance, which proves to be more efficient and direct than a mouse. The system is composed of four modules: initialization, hand gesture recognition, interaction control and system settings. The hand gesture recognition method is based on codebook (CB) background modeling and open-finger counting. Firstly, we build a background model with the CB algorithm. Then, we segment the hand region by detecting skin color regions with an "elliptical boundary model" in the CbCr plane of the YCbCr color space. Open fingers, used as a key feature of a gesture, can be tracked by an improved curvature-based method. Based on this method, we define nine gestures for interaction control of VEAST. Finally, we design a test to demonstrate the effectiveness of our system.
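
The skin-color segmentation step, which thresholds CbCr values against an elliptical boundary, can be sketched as follows (the RGB-to-YCbCr conversion is the standard BT.601 full-range form; the ellipse centre and semi-axes are assumed illustrative values, not the paper's fitted parameters):

```python
def is_skin(r, g, b):
    """Skin test in the CbCr plane: convert an RGB pixel to YCbCr (BT.601
    full-range chroma) and accept it if (Cb, Cr) falls inside an elliptical
    boundary. The ellipse parameters below are illustrative assumptions."""
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    cx, cy, ax, ay = 109.0, 152.0, 20.0, 15.0   # assumed centre and semi-axes
    return ((cb - cx) / ax) ** 2 + ((cr - cy) / ay) ** 2 <= 1.0
```

Working in the CbCr plane discards luma, which is what makes the test largely insensitive to illumination changes.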

  20. Tomato grading system using machine vision technology and neuro-fuzzy networks (ANFIS

    Directory of Open Access Journals (Sweden)

    H Izadi

    2016-04-01

    Full Text Available Introduction: The quality of agricultural products is associated with their color, size and health, grading of fruits is regarded as an important step in post-harvest processing. In most cases, manual sorting inspections depends on available manpower, time consuming and their accuracy could not be guaranteed. Machine Vision is known to be a useful tool for external features measurement (e.g. size, shape, color and defects and in recent century, Machine Vision technology has been used for shape sorting. The main purpose of this study was to develop new method for tomato grading and sorting using Neuro-fuzzy system (ANFIS and to compare the accuracies of the ANFIS predicted results with those suggested by a human expert. Materials and Methods: In this study, a total of 300 image of tomatoes (Rev ground was randomly harvested, classified in 3 ripeness stage, 3 sizes and 2 health. The grading and sorting mechanism consisted of a lighting chamber (cloudy sky, lighting source and a digital camera connected to a computer. The images were recorded in a special chamber with an indirect radiation (cloudy sky with four florescent lampson each sides and camera lens was entire to lighting chamber by a hole which was only entranced to outer and covered by a camera lens. Three types of features were extracted from final images; Shap, color and texture. To receive these features, we need to have images both in color and binary format in procedure shown in Figure 1. For the first group; characteristics of the images were analysis that could offer information an surface area (S.A., maximum diameter (Dmax, minimum diameter (Dmin and average diameters. Considering to the importance of the color in acceptance of food quality by consumers, the following classification was conducted to estimate the apparent color of the tomato; 1. Classified as red (red > 90% 2. Classified as red light (red or bold pink 60-90% 3. Classified as pink (red 30-60% 4. 
Classified as Turning
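The apparent-color classes listed above reduce to simple thresholding on the fraction of red pixels on the fruit surface. A minimal sketch (the function name is an illustrative assumption; the thresholds come from the abstract):

```python
def color_grade(red_fraction: float) -> str:
    """Map the fraction of red pixels (0..1) on the tomato surface to an
    apparent-color class, using the thresholds quoted in the abstract."""
    if red_fraction > 0.90:
        return "red"
    if red_fraction >= 0.60:
        return "light red"
    if red_fraction >= 0.30:
        return "pink"
    return "turning"
```

In a full pipeline this rule would be applied to the red-pixel ratio computed from the segmented (binary) fruit mask.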

  1. Mechatronic Development and Vision Feedback Control of a Nanorobotics Manipulation System inside SEM for Nanodevice Assembly

    Directory of Open Access Journals (Sweden)

    Zhan Yang

    2016-09-01

Full Text Available Carbon nanotubes (CNT) have been developed in recent decades for nanodevices such as nanoradios, nanogenerators, carbon nanotube field effect transistors (CNTFETs) and so on, indicating that the application of CNTs for nanoscale electronics may play a key role in the development of nanotechnology. Nanorobotics manipulation systems are a promising method for nanodevice construction and assembly. For the purpose of constructing three-dimensional CNTFETs, a nanorobotics manipulation system with 16 DOFs was developed for nanomanipulation of nanometer-scale objects inside the specimen chamber of a scanning electron microscope (SEM). The nanorobotics manipulators are assembled into four units with four DOFs (X-Y-Z-θ) each. The rotational axis is actuated by a picomotor; thus each manipulator has four DOFs comprising three linear motions in the X, Y and Z directions and a 360-degree rotation (an X-Y-Z-θ stage, where θ rotates about the X or Y axis). The manipulators are actuated by picomotors with better than 30 nm linear resolution and <1 micro-rad rotary resolution. Four vertically installed AFM cantilevers (the axis of each cantilever tip is perpendicular to the axis of the electron beam of the SEM) serve as the end-effectors to facilitate real-time observation of the operations. A series of kinematic derivations for these four manipulators based on the Denavit-Hartenberg (D-H) notation was established. The common working space of the end-effectors is 2.78 mm by 4.39 mm by 6 mm. The manipulation strategy and vision feedback control for multi-manipulator operation inside the SEM chamber are discussed. Finally, application of the designed nanorobotics manipulation system is described through successful pick-and-place manipulation of an individual CNT onto four probes. The experimental results show that carbon nanotubes can be successfully picked up with this nanorobotics manipulation system.
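The kinematic derivations mentioned above rest on the standard Denavit-Hartenberg convention. A minimal sketch of the D-H homogeneous transform, which could be chained once per joint of each X-Y-Z-θ unit (the parameter values used in the test are illustrative, not the paper's):

```python
import math

def dh_transform(theta, d, a, alpha):
    """Standard Denavit-Hartenberg homogeneous transform as a 4x4
    row-major nested list: rotation theta about z, translation d along z,
    translation a along x, rotation alpha about x."""
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ]

def matmul4(A, B):
    """Chain two 4x4 transforms (A then B in the kinematic chain)."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]
```

The full forward kinematics of a manipulator is the product of one such transform per joint.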

  2. Mechatronic Development and Vision Feedback Control of a Nanorobotics Manipulation System inside SEM for Nanodevice Assembly.

    Science.gov (United States)

    Yang, Zhan; Wang, Yaqiong; Yang, Bin; Li, Guanghui; Chen, Tao; Nakajima, Masahiro; Sun, Lining; Fukuda, Toshio

    2016-09-14

Carbon nanotubes (CNT) have been developed in recent decades for nanodevices such as nanoradios, nanogenerators, carbon nanotube field effect transistors (CNTFETs) and so on, indicating that the application of CNTs for nanoscale electronics may play a key role in the development of nanotechnology. Nanorobotics manipulation systems are a promising method for nanodevice construction and assembly. For the purpose of constructing three-dimensional CNTFETs, a nanorobotics manipulation system with 16 DOFs was developed for nanomanipulation of nanometer-scale objects inside the specimen chamber of a scanning electron microscope (SEM). The nanorobotics manipulators are assembled into four units with four DOFs (X-Y-Z-θ) each. The rotational axis is actuated by a picomotor; thus each manipulator has four DOFs comprising three linear motions in the X, Y and Z directions and a 360-degree rotation (an X-Y-Z-θ stage, where θ rotates about the X or Y axis). The manipulators are actuated by picomotors with better than 30 nm linear resolution and <1 micro-rad rotary resolution. Four vertically installed AFM cantilevers (the axis of each cantilever tip is perpendicular to the axis of the electron beam of the SEM) serve as the end-effectors to facilitate real-time observation of the operations. A series of kinematic derivations for these four manipulators based on the Denavit-Hartenberg (D-H) notation was established. The common working space of the end-effectors is 2.78 mm by 4.39 mm by 6 mm. The manipulation strategy and vision feedback control for multi-manipulator operation inside the SEM chamber are discussed. Finally, application of the designed nanorobotics manipulation system is described through successful pick-and-place manipulation of an individual CNT onto four probes. The experimental results show that carbon nanotubes can be successfully picked up with this nanorobotics manipulation system.

  3. Mechatronic Development and Vision Feedback Control of a Nanorobotics Manipulation System inside SEM for Nanodevice Assembly

    Science.gov (United States)

    Yang, Zhan; Wang, Yaqiong; Yang, Bin; Li, Guanghui; Chen, Tao; Nakajima, Masahiro; Sun, Lining; Fukuda, Toshio

    2016-01-01

Carbon nanotubes (CNT) have been developed in recent decades for nanodevices such as nanoradios, nanogenerators, carbon nanotube field effect transistors (CNTFETs) and so on, indicating that the application of CNTs for nanoscale electronics may play a key role in the development of nanotechnology. Nanorobotics manipulation systems are a promising method for nanodevice construction and assembly. For the purpose of constructing three-dimensional CNTFETs, a nanorobotics manipulation system with 16 DOFs was developed for nanomanipulation of nanometer-scale objects inside the specimen chamber of a scanning electron microscope (SEM). The nanorobotics manipulators are assembled into four units with four DOFs (X-Y-Z-θ) each. The rotational axis is actuated by a picomotor; thus each manipulator has four DOFs comprising three linear motions in the X, Y and Z directions and a 360-degree rotation (an X-Y-Z-θ stage, where θ rotates about the X or Y axis). The manipulators are actuated by picomotors with better than 30 nm linear resolution and <1 micro-rad rotary resolution. Four vertically installed AFM cantilevers (the axis of each cantilever tip is perpendicular to the axis of the electron beam of the SEM) serve as the end-effectors to facilitate real-time observation of the operations. A series of kinematic derivations for these four manipulators based on the Denavit-Hartenberg (D-H) notation was established. The common working space of the end-effectors is 2.78 mm by 4.39 mm by 6 mm. The manipulation strategy and vision feedback control for multi-manipulator operation inside the SEM chamber are discussed. Finally, application of the designed nanorobotics manipulation system is described through successful pick-and-place manipulation of an individual CNT onto four probes. The experimental results show that carbon nanotubes can be successfully picked up with this nanorobotics manipulation system. PMID:27649180

  4. Integration of differential global positioning system with ultrawideband synthetic aperture radar for forward imaging

    Science.gov (United States)

    Wong, David C.; Bui, Khang; Nguyen, Lam H.; Smith, Gregory; Ton, Tuan T.

    2003-09-01

The U.S. Army Research Laboratory (ARL), as part of a customer and mission-funded exploratory development program, has been evaluating low-frequency, ultra-wideband (UWB) imaging radar for forward imaging to support the Army's vision of increased mobility and survivability in unmanned ground vehicle missions. As part of the program to improve the radar system and imaging capability, ARL has incorporated a differential global positioning system (DGPS) for motion compensation into the radar system. The use of DGPS greatly increases positional accuracy, thereby improving our ability to produce well-focused images for the detection of small targets such as plastic mines and other concealed objects buried underground. The ability of UWB radar technology to detect concealed objects could provide an important obstacle-avoidance capability for robotic vehicles, which would improve the speed and maneuverability of these vehicles and consequently increase the survivability of U.S. forces. This paper details the integration of a DGPS into the radar system for forward imaging and discusses its significance. It also compares DGPS with the motion compensation data collected by the original theodolite-based system.

  5. LES SOFTWARE FOR THE DESIGN OF LOW EMISSION COMBUSTION SYSTEMS FOR VISION 21 PLANTS

    Energy Technology Data Exchange (ETDEWEB)

    Clifford E. Smith; Steven M. Cannon; Virgil Adumitroaie; David L. Black; Karl V. Meredith

    2005-01-01

In this project, an advanced computational software tool was developed for the design of the low-emission combustion systems required for Vision 21 clean energy plants. Vision 21 combustion systems, such as combustors for gas turbines, combustors for indirect-fired cycles, furnaces and sequestration-ready combustion systems, will require innovative low-emission designs and low development costs if Vision 21 goals are to be realized. The simulation tool will greatly reduce the number of experimental tests; this is especially desirable for gas turbine combustor design, since high-pressure testing is extremely costly. In addition, the software will stimulate new ideas, will provide the capability of assessing and adapting low-emission combustors to alternate fuels, and will greatly reduce the development time of combustion systems. The combustion simulation software is able to accurately simulate the highly transient nature of gaseous-fueled (e.g. natural gas, low-BTU syngas, hydrogen, biogas, etc.) turbulent combustion and to assess innovative concepts needed for Vision 21 plants. In addition, the software is capable of analyzing liquid-fueled combustion systems, since that capability was developed under a concurrent Air Force Small Business Innovative Research (SBIR) program. The complex physics of the reacting flow field are captured using 3D Large Eddy Simulation (LES) methods, in which large-scale transient motion is resolved by time-accurate numerics, while the small-scale motion is modeled using advanced subgrid turbulence and chemistry closures. In this way, LES combustion simulations can model many physical aspects that, until now, were impossible to predict with 3D steady-state Reynolds-Averaged Navier-Stokes (RANS) analysis, i.e. very low NOx emissions, combustion instability (coupling of unsteady heat release and acoustics), lean blowout, flashback, autoignition, etc. 
LES methods are becoming more and more practical by linking together tens

  6. Synthetic biology with artificially expanded genetic information systems. From personalized medicine to extraterrestrial life.

    Science.gov (United States)

    Benner, Steven A; Hutter, Daniel; Sismour, A Michael

    2003-01-01

Over 15 years ago, the Benner group noticed that the DNA alphabet need not be limited to the four standard nucleotides known in natural DNA. Rather, twelve nucleobases forming six base pairs joined by mutually exclusive hydrogen-bonding patterns are possible within the geometry of the Watson-Crick pair (Fig. 1). Synthesis and study of these compounds have brought us to the threshold of a synthetic biology: an artificial chemical system that performs the basic processes needed for life (in particular, Darwinian evolution), but with unnatural chemical structures. At the same time, the artificially expanded genetic information systems (AEGIS) that we have developed have been used in FDA-approved commercial tests for managing HIV and hepatitis C infections in individual patients, and in a tool that detects the virus responsible for severe acute respiratory syndrome (SARS). AEGIS also supports the next generation of robotic probes that will search for genetic molecules on Mars, Europa, and elsewhere NASA probes will travel.

  7. Prediction of pork loin quality using online computer vision system and artificial intelligence model.

    Science.gov (United States)

    Sun, Xin; Young, Jennifer; Liu, Jeng-Hung; Newman, David

    2018-06-01

The objective of this project was to develop a computer vision system (CVS) for objective measurement of pork loin under industry speed requirements. Color images of pork loin samples were acquired using a CVS. Subjective color and marbling scores were determined according to the National Pork Board standards by a trained evaluator. Instrument color measurement and crude fat percentage were used as control measurements. Image features (18 color features, 1 marbling feature and 88 texture features) were extracted from whole pork loin color images. An artificial intelligence prediction model (support vector machine) was established for pork color and marbling quality grades. The results showed that the CVS with support vector machine modeling reached a highest prediction accuracy of 92.5% for measured pork color score and 75.0% for measured pork marbling score. This research shows that the proposed artificial intelligence prediction model with CVS can provide an effective tool for predicting color and marbling in the pork industry at online speeds. Copyright © 2018 Elsevier Ltd. All rights reserved.
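The pipeline described above maps extracted image features to a quality grade with a support vector machine. As a hedged stand-in for the authors' model (their 107 features, kernel and training setup are not reproduced), a tiny pure-Python linear SVM trained by hinge-loss sub-gradient descent illustrates the classification step:

```python
def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Tiny linear SVM trained by hinge-loss sub-gradient descent.
    X: list of feature vectors, y: labels in {-1, +1}. Illustrative only."""
    d = len(X[0])
    w = [0.0] * d
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
            shrink = 1.0 - lr * lam          # weight decay from the L2 term
            if margin < 1.0:                 # inside the margin: push the point out
                w = [shrink * wj + lr * yi * xj for wj, xj in zip(w, xi)]
                b += lr * yi
            else:                            # outside the margin: only decay
                w = [shrink * wj for wj in w]
    return w, b

def predict(w, b, x):
    """Sign of the decision function: +1 or -1 (e.g. acceptable vs. not)."""
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0.0 else -1
```

In practice a library implementation (e.g. a kernel SVM) would be used; the point here is only the feature-vector-in, grade-out structure.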

  8. PARCEL DELIVERY IN AN URBAN ENVIRONMENT USING UNMANNED AERIAL SYSTEMS: A VISION PAPER

    Directory of Open Access Journals (Sweden)

    B. Anbaroğlu

    2017-11-01

Full Text Available This vision paper addresses the challenges, and explores the avenue of solutions, regarding the use of Unmanned Aerial Systems (UAS) for transporting parcels in urban areas. We have already witnessed companies delivering parcels using UAS in rural areas, but the challenge of utilizing them in an urban environment is imminent. Nevertheless, the increasing research on the various aspects of UAS, including battery life, resistance to harsh weather conditions and environmental sensing, foresees their common usage in the logistics industry, especially in an urban environment. In addition, the increasing trend in 3D city modelling offers new directions regarding realistic as well as lightweight 3D city models that are easy to modify and distribute. Utilizing UAS for transporting parcels in an urban environment would be a disruptive technological achievement: our roads would be less congested, leading to less air pollution and less wasted money and time. In addition, parcels could potentially be delivered much faster. This paper argues, with the support of state-of-the-art research, that UAS will be used for transporting parcels in urban environments in the coming decades.

  9. Parcel Delivery in AN Urban Environment Using Unmanned Aerial Systems: a Vision Paper

    Science.gov (United States)

    Anbaroğlu, B.

    2017-11-01

This vision paper addresses the challenges, and explores the avenue of solutions, regarding the use of Unmanned Aerial Systems (UAS) for transporting parcels in urban areas. We have already witnessed companies delivering parcels using UAS in rural areas, but the challenge of utilizing them in an urban environment is imminent. Nevertheless, the increasing research on the various aspects of UAS, including battery life, resistance to harsh weather conditions and environmental sensing, foresees their common usage in the logistics industry, especially in an urban environment. In addition, the increasing trend in 3D city modelling offers new directions regarding realistic as well as lightweight 3D city models that are easy to modify and distribute. Utilizing UAS for transporting parcels in an urban environment would be a disruptive technological achievement: our roads would be less congested, leading to less air pollution and less wasted money and time. In addition, parcels could potentially be delivered much faster. This paper argues, with the support of state-of-the-art research, that UAS will be used for transporting parcels in urban environments in the coming decades.

  10. Complex IoT Systems as Enablers for Smart Homes in a Smart City Vision.

    Science.gov (United States)

    Lynggaard, Per; Skouby, Knud Erik

    2016-11-02

The world is entering a new era, where Internet-of-Things (IoT), smart homes, and smart cities will play an important role in meeting the so-called big challenges. In the near future, it is foreseen that the majority of the world's population will live their lives in smart homes and in smart cities. To deal with these challenges, to support sustainable urban development, and to improve the quality of life for citizens, a multi-disciplinary approach is needed. It seems evident, however, that a new, advanced Information and Communications Technology (ICT) infrastructure is a key feature in realizing the "smart" vision. This paper proposes a specific solution in the form of a hierarchical, layered, ICT-based infrastructure that handles ICT issues related to the "big challenges" and seamlessly integrates IoT, smart homes, and smart city structures into one coherent unit. To exemplify the benefits of this infrastructure, a complex IoT system has been deployed, simulated and elaborated. The simulation deals with wastewater energy harvesting from smart buildings located in a smart city context. From the simulations, it has been found that the proposed infrastructure is able to harvest between 50% and 75% of the wastewater energy in a smart residential building. By letting the smart city infrastructure coordinate and control the harvest time and duration, it is possible to achieve considerable energy savings in the smart homes and to reduce the peak load for district heating plants.

  11. Comparative morphometry of facial surface models obtained from a stereo vision system in a healthy population

    Science.gov (United States)

    López, Leticia; Gastélum, Alfonso; Chan, Yuk Hin; Delmas, Patrice; Escorcia, Lilia; Márquez, Jorge

    2014-11-01

Our goal is to obtain three-dimensional measurements of craniofacial morphology in a healthy population, using standard landmarks established by a physical-anthropology specialist and picked from computer reconstructions of each subject's face. To this end, we designed a multi-stereo vision system that will be used to create a database of human face surfaces from a healthy population, for eventual applications in medicine, forensic science and anthropology. The acquisition process consists of obtaining depth-map information from three points of view, each depth map obtained from a calibrated pair of cameras. The depth maps are used to build a complete, frontal, triangular-surface representation of the subject's face. The triangular surface is used to locate the landmarks, and the measurements are analyzed with a MATLAB script. Classification of the subjects was done with the aid of a specialist anthropologist, who defined subject-specific indices according to the lengths, areas, ratios, etc., of the different structures and the relationships among facial features. We studied a healthy population, and the indices from this population will be used to obtain representative averages that later help with the study and classification of possible pathologies.

  12. Drogue pose estimation for unmanned aerial vehicle autonomous aerial refueling system based on infrared vision sensor

    Science.gov (United States)

    Chen, Shanjun; Duan, Haibin; Deng, Yimin; Li, Cong; Zhao, Guozhi; Xu, Yan

    2017-12-01

Autonomous aerial refueling is a key technology that can significantly extend the endurance of unmanned aerial vehicles. A reliable method that can accurately estimate the position and attitude of the probe relative to the drogue is central to such a capability. A drogue pose estimation method based on an infrared vision sensor is introduced, with the general goal of yielding an accurate and reliable drogue state estimate. First, by employing direct least-squares ellipse fitting and the convex hull in OpenCV, a feature-point matching and interference-point elimination method is proposed. In addition, to handle conditions where some infrared LEDs are damaged or occluded, a missing-point estimation method based on perspective and affine transformations is designed. Finally, an accurate and robust pose estimation algorithm improved by the runner-root algorithm is proposed. The feasibility of the designed visual measurement system is demonstrated by flight test, and the results indicate that the proposed method enables precise and reliable pose estimation of the probe relative to the drogue, even in poor conditions.
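The first stage above groups the detected LED markers by fitting a conic to their image positions. As a simplified, hedged stand-in for the direct least-squares ellipse fit used in the paper, a Kasa least-squares circle fit shows the idea (function and variable names are illustrative):

```python
import numpy as np

def fit_circle(points):
    """Kasa least-squares circle fit: solve x^2 + y^2 = a*x + b*y + c
    linearly, then recover centre (a/2, b/2) and radius. A simplified
    stand-in for the direct least-squares ellipse fit of the drogue markers."""
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = x ** 2 + y ** 2
    a, b, c = np.linalg.lstsq(A, rhs, rcond=None)[0]
    cx, cy = a / 2.0, b / 2.0
    r = np.sqrt(c + cx ** 2 + cy ** 2)
    return cx, cy, r
```

Markers whose positions deviate strongly from the fitted conic would then be rejected as interference points.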

  13. A New Approach to Spindle Radial Error Evaluation Using a Machine Vision System

    Directory of Open Access Journals (Sweden)

    Kavitha C.

    2017-03-01

Full Text Available Spindle rotational accuracy is one of the important issues in a machine tool, affecting the surface topography and dimensional accuracy of the workpiece. This paper presents a machine-vision-based approach to radial error measurement of a lathe spindle using a CMOS camera and a PC-based image processing system. In the present work, a precisely machined cylindrical master is mounted on the spindle as a datum surface, and variations of its position are captured using the camera for evaluating spindle runout. The Circular Hough Transform (CHT) is used to detect variations of the centre position of the master cylinder at subpixel level from a sequence of images taken during spindle rotation. Radial error values of the spindle are evaluated using Fourier series analysis of the centre positions of the master cylinder, calculated with the least-squares curve fitting technique. Experiments have been carried out on a lathe at different operating speeds, and the spindle radial error estimation results are presented. The proposed method provides a simpler approach to on-machine estimation of spindle radial error in machine tools.
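The radial-error evaluation above decomposes the sequence of detected centre positions into Fourier components (the fundamental captures eccentricity; higher harmonics relate to the error motion). A sketch of the coefficient computation for equally spaced angular samples; the paper uses least-squares fitting, so the direct summation here is an illustrative assumption:

```python
import math

def fourier_components(samples, n_harmonics=2):
    """Discrete Fourier coefficients (a_h, b_h) of equally spaced centre
    positions over one spindle revolution, plus the mean a0."""
    N = len(samples)
    a0 = sum(samples) / N
    coeffs = []
    for h in range(1, n_harmonics + 1):
        a = 2.0 / N * sum(s * math.cos(2.0 * math.pi * h * k / N)
                          for k, s in enumerate(samples))
        b = 2.0 / N * sum(s * math.sin(2.0 * math.pi * h * k / N)
                          for k, s in enumerate(samples))
        coeffs.append((a, b))
    return a0, coeffs
```

Subtracting the fundamental (eccentricity of the master) from the measured centre positions leaves the spindle's radial error motion.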

  14. Enhanced control of a flexure-jointed micromanipulation system using a vision-based servoing approach

    Science.gov (United States)

    Chuthai, T.; Cole, M. O. T.; Wongratanaphisan, T.; Puangmali, P.

    2018-01-01

This paper describes a high-precision motion control implementation for a flexure-jointed micromanipulator. A desktop experimental motion platform has been created based on a 3RUU parallel kinematic mechanism driven by rotary voice coil actuators. The three arms supporting the platform have rigid links with compact flexure joints as integrated parts and are made by single-process 3D printing. The overall size of the mechanism is approximately 250x250x100 mm. The workspace is relatively large for a flexure-jointed mechanism, approximately 20x20x6 mm. A servo-control implementation based on pseudo-rigid-body models (PRBM) of kinematic behavior combined with nonlinear PID control has been developed. This is shown to achieve fast response with good noise rejection and platform stability. However, large errors in absolute positioning occur due to deficiencies in the PRBM kinematics, which cannot accurately capture flexure compliance behavior. To overcome this problem, visual servoing is employed, in which a digital microscopy system is used to directly measure the platform position by image processing. By adopting nonlinear PID feedback of measured angles for the actuated joints as inner control loops, combined with auxiliary feedback of vision-based measurements, the absolute positioning error can be eliminated. With controller gain tuning, fast dynamic response and low residual vibration of the end platform can be achieved, with absolute positioning accuracy within ±1 micron.
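The control structure above (fast inner PID joint loops, plus a slower outer correction from vision measurements that removes the PRBM model bias) can be sketched as follows. The plant model, gains, update rates and the fixed offset standing in for the PRBM kinematic error are all illustrative assumptions, and the paper's nonlinear gain terms are omitted:

```python
class PID:
    """Basic PID controller (the nonlinear gain scheduling of the paper
    is omitted in this sketch)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, err):
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

def servo_to(target, steps=3000, dt=0.001):
    """Inner joint loop tracks a setpoint; a slow outer 'vision' measurement
    of the platform corrects the setpoint to cancel a constant model bias."""
    inner = PID(kp=50.0, ki=1.0, kd=0.0, dt=dt)
    pos = 0.0          # joint-level position (simplified integrator plant)
    offset = 0.02      # unmodeled joint-to-platform bias (PRBM deficiency)
    setpoint = target
    for k in range(steps):
        u = inner.update(setpoint - pos)
        pos += u * dt
        if k % 100 == 99:                  # slow vision update (camera frame)
            vision_meas = pos + offset     # what the microscope actually sees
            setpoint += 0.5 * (target - vision_meas)
    return pos + offset                    # platform position seen by vision
```

The inner loop alone would settle 0.02 units away from the target; the vision feedback removes that bias, mirroring how the paper eliminates absolute positioning error.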

  15. Examples of design and achievement of vision systems for mobile robotics applications

    Science.gov (United States)

    Bonnin, Patrick J.; Cabaret, Laurent; Raulet, Ludovic; Hugel, Vincent; Blazevic, Pierre; M'Sirdi, Nacer K.; Coiffet, Philippe

    2000-10-01

Our goal is to design and build a multiple-purpose vision system for various robotics applications: wheeled robots (like cars for autonomous driving), legged robots (six-legged, four-legged (SONY's AIBO) and humanoid), and flying robots (to inspect bridges, for example), in various conditions: indoor or outdoor. Considering that the constraints depend on the application, we propose an edge segmentation implemented either in software, or in hardware using CPLDs (ASICs or FPGAs could be used too). After discussing the criteria of our choice, we propose a chain of image processing operators constituting an edge segmentation. Although this chain is quite simple and very fast to execute, results appear satisfactory. We propose a software implementation of it. Its temporal optimization is based on: its implementation under the pixel data-flow programming model, the gathering of local processing where possible, the simplification of computations, and the use of fast-access data structures. We then describe a first dedicated hardware implementation of the first part, which requires 9 CPLDs in this low-cost version. It is technically possible, but more expensive, to implement these algorithms using only a single FPGA.
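The first operators of such an edge-segmentation chain (gradient estimation, magnitude, binarisation) can be sketched in software. The 3x3 Sobel kernels and the fixed threshold below are a common choice, not necessarily the authors' operators:

```python
def sobel_edges(img, thresh):
    """First stages of an edge-segmentation chain: 3x3 Sobel gradients,
    L1 gradient magnitude, then binarisation against a threshold.
    img is a 2D list of grey levels; border pixels are left at 0."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (img[y-1][x+1] + 2 * img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2 * img[y][x-1] - img[y+1][x-1])
            gy = (img[y+1][x-1] + 2 * img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2 * img[y-1][x] - img[y-1][x+1])
            out[y][x] = 1 if abs(gx) + abs(gy) >= thresh else 0
    return out
```

In a pixel data-flow implementation, each output pixel depends only on a 3x3 neighbourhood, which is what makes the chain amenable to CPLD/FPGA realisation.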

  16. Vision-based fall detection system for improving safety of elderly people

    KAUST Repository

    Harrou, Fouzi

    2017-12-06

Recognition of human movements is very useful for several applications, such as smart rooms, interactive virtual reality systems, human detection and environment modeling. This work focuses on the detection and classification of falls based on variations in human silhouette shape, a key challenge in computer vision. Falls are a major health concern, particularly for the elderly. In this study, detection is achieved with a multivariate exponentially weighted moving average (MEWMA) monitoring scheme, which is effective in detecting falls because it is sensitive to small changes. Unfortunately, the MEWMA statistic fails to differentiate real falls from some fall-like gestures. To remedy this limitation, a classification stage based on a support vector machine (SVM) is applied to detected sequences. To validate this methodology, two fall detection datasets have been tested: the University of Rzeszow fall detection dataset (URFD) and the fall detection dataset (FDD). The results of the MEWMA-based SVM are compared with three other classifiers: neural network (NN), naïve Bayes and K-nearest neighbor (KNN). These results show the capability of the developed strategy to distinguish fall events, suggesting that it can raise an early alert in fall incidents.
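The MEWMA monitoring statistic itself is standard. A sketch, assuming in-control parameters estimated from the data and the usual asymptotic EWMA covariance; the paper's silhouette features and control limit are not reproduced:

```python
import numpy as np

def mewma_statistics(X, lam=0.2):
    """Multivariate EWMA T^2 statistics for a sequence of feature vectors
    X (n x p). z_t = lam*(x_t - mu) + (1 - lam)*z_{t-1}, and
    T^2_t = z_t' Sigma_z^{-1} z_t with Sigma_z = lam/(2 - lam) * Sigma.
    A fall candidate would be flagged when T^2 exceeds a control limit."""
    X = np.asarray(X, dtype=float)
    mu = X.mean(axis=0)
    Sigma = np.cov(X, rowvar=False)
    Sz_inv = np.linalg.inv(lam / (2.0 - lam) * Sigma)
    z = np.zeros(X.shape[1])
    stats = []
    for x in X:
        z = lam * (x - mu) + (1.0 - lam) * z
        stats.append(float(z @ Sz_inv @ z))
    return stats
```

Sequences whose T² crosses the limit would then be passed to the SVM stage to separate real falls from fall-like gestures.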

  17. NETRA: A parallel architecture for integrated vision systems 2: Algorithms and performance evaluation

    Science.gov (United States)

    Choudhary, Alok N.; Patel, Janak H.; Ahuja, Narendra

    1989-01-01

In part 1, the architecture of NETRA is presented. A performance evaluation of NETRA using several common vision algorithms is also presented. The performance of algorithms mapped onto one cluster is described first. It is shown that SIMD, MIMD, and systolic algorithms can be easily mapped onto processor clusters, and almost linear speedups are possible. For some algorithms, analytical performance results are compared with implementation results, and the analysis is observed to be very accurate. Performance analysis of parallel algorithms mapped across clusters is then presented. Mappings across clusters illustrate the importance and use of shared as well as distributed memory in achieving high performance. The parameters for evaluation are derived from the characteristics of the parallel algorithms, and these parameters are used to evaluate the alternative communication strategies in NETRA. Furthermore, the effect of communication interference from other processors in the system on the execution of an algorithm is studied. Using this analysis, the performance of many algorithms with different characteristics is presented. It is observed that if communication speeds are matched with computation speeds, good speedups are possible when algorithms are mapped across clusters.

  18. Motion-Base Simulator Evaluation of an Aircraft Using an External Vision System

    Science.gov (United States)

    Kramer, Lynda J.; Williams, Steven P.; Arthur, J. J.; Rehfeld, Sherri A.; Harrison, Stephanie

    2012-01-01

Twelve air transport-rated pilots participated as subjects in a motion-base simulation experiment to evaluate the use of eXternal Vision Systems (XVS) as enabling technologies for future supersonic aircraft without forward-facing windows. Three head-up flight display concepts were evaluated: a monochromatic, collimated Head-up Display (HUD), and a color, non-collimated XVS display with a field-of-view (FOV) equal to, and one significantly larger than, that of the collimated HUD. Approach, landing, departure, and surface operations were conducted. Additionally, the apparent angle-of-attack (AOA) was varied (high/low) to investigate vertical field-of-view display requirements, and peripheral side-window visibility was experimentally varied. The data showed that lateral approach tracking performance and lateral landing position were excellent regardless of AOA, display FOV, display collimation or whether peripheral cues were present. However, glide slope approach tracking appears to be affected by display size (i.e., FOV) and collimation. The monochrome, collimated HUD and the color, uncollimated XVS Full-FOV display had statistically equivalent glide path performance improvements over the XVS with HUD FOV display. Approach path performance results indicated that collimation may not be a requirement for an XVS display if the display is large enough and employs color. Subjective assessments of mental workload and situation awareness also indicated that an uncollimated XVS display may be feasible. Motion cueing appears to have improved localizer tracking and touchdown sink rate across all displays.

  19. A General Cognitive System Architecture Based on Dynamic Vision for Motion Control

    Directory of Open Access Journals (Sweden)

    Ernst D. Dickmanns

    2003-10-01

Full Text Available Animation of spatio-temporal generic models for 3-D shape and motion of objects and subjects, based on feature sets evaluated in parallel from several image streams, is considered to be the core of dynamic vision. Subjects are a special kind of object, capable of sensing environmental parameters and of initiating their own actions in combination with stored knowledge. Object/subject recognition and scene understanding are achieved on different levels and scales. Multiple objects are tracked individually in the image streams for perceiving their actual state ('here and now'). By analyzing the motion of all relevant objects/subjects over a larger time scale on the level of state variables in the 'scene tree representation' known from computer graphics, the situation with respect to decision taking is assessed. Behavioral capabilities of subjects are represented explicitly on an abstract level to characterize their potential behaviors. These are generated by stereotypical feed-forward and feedback control applications on a separate system-dynamics level, with corresponding methods close to the actuator hardware. This dual representation, on an abstract level (for decision making) and on the implementation level, allows for flexibility and easy adaptation or extension. Results are shown for road vehicle guidance based on three cameras on a gaze control platform.

  20. Synthetic jets based on micro magneto mechanical systems for aerodynamic flow control

    International Nuclear Information System (INIS)

    Gimeno, L; Merlen, A; Talbi, A; Viard, R; Pernod, P; Preobrazhensky, V

    2010-01-01

A magneto-mechanical micro-actuator providing an axisymmetric synthetic microjet for active flow control was designed, fabricated and characterized. The micro-actuator consists of an enclosed cavity with a small orifice in one face and a highly flexible elastomeric (PDMS) membrane in the opposite one. The membrane vibration is achieved using magnetic actuation, chosen for its capacity to provide the large out-of-plane displacements and forces necessary for the performance aimed for. The paper first presents numerical simulations of the flow performed during the design process in order to identify a general jet formation criterion and optimize the device's performance. The fabrication process of this micro-magneto-mechanical system (MMMS) is then briefly described. The full size of the device, including packaging and actuation, does not exceed 1 cm³. The performance of the synthetic jet with a 600 µm orifice was evaluated. The results show that the optimum working point is in the frequency range 400–700 Hz, which is in accordance with the frequency response of the magnet-membrane mechanical resonator. In this frequency range, the microjet reaches maximum speeds ranging from 25 m s⁻¹ to 55 m s⁻¹ for an electromagnetic power consumption of 500 mW. Finally, the axial velocity transient and stream-wise behaviours in the near and far fields are reported and discussed.

  1. A computer vision system for rapid search inspired by surface-based attention mechanisms from human perception.

    Science.gov (United States)

    Mohr, Johannes; Park, Jong-Han; Obermayer, Klaus

    2014-12-01

    Humans are highly efficient at visual search tasks by focusing selective attention on a small but relevant region of a visual scene. Recent results from biological vision suggest that surfaces of distinct physical objects form the basic units of this attentional process. The aim of this paper is to demonstrate how such surface-based attention mechanisms can speed up a computer vision system for visual search. The system uses fast perceptual grouping of depth cues to represent the visual world at the level of surfaces. This representation is stored in short-term memory and updated over time. A top-down guided attention mechanism sequentially selects one of the surfaces for detailed inspection by a recognition module. We show that the proposed attention framework requires little computational overhead (about 11 ms), but enables the system to operate in real-time and leads to a substantial increase in search efficiency. Copyright © 2014 Elsevier Ltd. All rights reserved.
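The surface-based attention loop described here can be sketched as a priority queue over surfaces held in short-term memory, with the expensive recognition module applied to one surface at a time. The surface records, relevance scores and recognizer below are illustrative stand-ins, not the authors' implementation:

```python
import heapq

def attend_and_search(surfaces, recognize, target):
    """Sequentially inspect surfaces in order of top-down relevance.

    `surfaces` is a list of (relevance, surface_id) pairs held in
    short-term memory; `recognize` is the (expensive) recognition module
    that is only ever run on the currently attended surface.
    """
    queue = [(-relevance, sid) for relevance, sid in surfaces]
    heapq.heapify(queue)                 # max-relevance first via negation
    inspected = []
    while queue:
        _, sid = heapq.heappop(queue)    # most relevant surface next
        inspected.append(sid)
        if recognize(sid) == target:
            return sid, inspected
    return None, inspected

# Toy scene: the target surface has the highest top-down relevance, so a
# single recognition call suffices instead of one per surface.
scene = [(0.2, 'wall'), (0.9, 'mug'), (0.5, 'book')]
found, order = attend_and_search(scene, lambda s: s, 'mug')
print(found, order)   # mug ['mug']
```

The search-efficiency gain reported above comes from exactly this effect: recognition cost scales with the number of surfaces inspected, not with image size.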

  2. Anchoring visions in organizations

    DEFF Research Database (Denmark)

    Simonsen, Jesper

    1999-01-01

    This paper introduces the term 'anchoring' within systems development: Visions, developed through early systems design within an organization, need to be deeply rooted in the organization. A vision's rationale needs to be understood by those who decide if the vision should be implemented as well...... as by those involved in the actual implementation. A model depicting a recent trend within systems development is presented: Organizations rely on purchasing generic software products and/or software development outsourced to external contractors. A contemporary method for participatory design, where...

  3. 3-D synthetic aperture processing on high-frequency wide-beam microwave systems

    Science.gov (United States)

    Cristofani, Edison; Brook, Anna; Vandewal, Marijke

    2012-06-01

    The use of High-Frequency MicroWaves (HFMW) for high-resolution imagery has gained interest in recent years. Very promising in-depth applications can be foreseen for composite non-metal, non-polarized materials, widely used in the aeronautic and aerospace industries. Most of these materials present a high transparency in the HFMW range and, therefore, defects, delaminations or occlusions within the material can be located. This property can be exploited by applying 3-D HFMW imaging, where conventional focused imaging systems are typically used but a different approach such as Synthetic Aperture (SA) radar can be adopted. This paper presents an end-to-end 3-D imagery system for short-range, non-destructive testing based on a frequency-modulated continuous-wave HFMW sensor operating at 100 GHz, implying no health concerns for the human body as well as relatively low cost and limited power requirements. The sensor scans the material while moving sequentially in every elevation plane following a 2-D grid and uses a significantly wide-beam antenna for data acquisition, in contrast to focused systems. Collected data must be coherently combined using a SA algorithm to form focused images. Range-independent, synthetically improved cross-range resolutions are remarkable added values of SA processing. Such algorithms can be found in the literature and operate in the time or frequency domain, the former being computationally impractical and the latter the best option for in-depth 3-D imaging. A balanced trade-off between performance and image-focusing quality is investigated for several SA algorithms.
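For intuition, the time-domain reference for SA focusing (back-projection, noted above as computationally impractical for production use) can be sketched in a few lines: simulate range-compressed echoes of a point scatterer along the scan track, then, for every image pixel, sum each echo sampled at the sensor-to-pixel range. The geometry, pulse shape and grid below are invented for illustration:

```python
import numpy as np

positions = np.linspace(-0.05, 0.05, 41)   # sensor positions along the track [m]
target = np.array([0.0, 0.30])             # point scatterer at (x, depth) [m]
rng = lambda px: np.hypot(px - target[0], target[1])

# Simulated range-compressed echoes: a narrow Gaussian centered at the
# sensor-to-target range for each scan position.
r_axis = np.linspace(0.25, 0.40, 600)
echoes = np.exp(-((r_axis[None, :] - rng(positions)[:, None]) / 1e-3) ** 2)

# Back-projection: accumulate each echo over the image grid at the range
# from that sensor position to each pixel.
xs = np.linspace(-0.03, 0.03, 61)
zs = np.linspace(0.27, 0.33, 61)
image = np.zeros((len(zs), len(xs)))
for i, px in enumerate(positions):
    r_pix = np.hypot(xs[None, :] - px, zs[:, None])
    image += np.interp(r_pix, r_axis, echoes[i])

peak_z, peak_x = np.unravel_index(image.argmax(), image.shape)
print(xs[peak_x], zs[peak_z])   # peak focuses at the target, near (0.0, 0.30)
```

All 41 echoes add coherently only at the true scatterer position, which is what gives SA its range-independent cross-range resolution.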

  4. Controlled sliding of logs downhill by chute system integrated with portable winch and synthetic rope

    Directory of Open Access Journals (Sweden)

    Neşe Gülci

    2016-01-01

    Full Text Available Over 80% of wood extraction operations have been performed by conventional methods in Turkey. Conventional methods include skidding or sliding of logs mainly by man and animal power, which poses problems in terms of technical, economical, environmental, and ergonomic aspects. Skidding wood on plastic chutes has been implemented in a limited number of logging applications in recent years, and provides important advantages such as reducing environmental damage and minimizing the value and volume loss of transported wood products. In this study, a chute system integrated with a mobile winch was developed for controlled sliding of large-diameter logs downhill. In addition, synthetic ropes rather than steel cables were used to pull log products, resulting in a lower-weight and more efficient extraction system. The system was tested on a sample wood production operation in Çınarpınar Forest Enterprise Chief of Kahramanmaraş Forest Enterprise Directorate. In the study, a productivity analysis of the chute system was performed and its ecological impacts were evaluated. During controlled sliding of logs downhill, the highest productivity (10.01 m³/hour) was reached in the fourth chute system, characterized as 36 m in length and 70% ground slope. One of the main factors that affected the productivity of the chute system was the controlled sliding time of the logs. It was found that residual stand damage was very limited during controlled sliding operations.

  5. Cognitive Vision and Perceptual Grouping by Production Systems with Blackboard Control - An Example for High-Resolution SAR-Images

    Science.gov (United States)

    Michaelsen, Eckart; Middelmann, Wolfgang; Sörgel, Uwe

    The laws of gestalt perception play an important role in human vision. Psychological studies identified similarity, good continuation, proximity and symmetry as important inter-object relations that distinguish perceptive gestalts from arbitrary sets of clutter objects. In particular, symmetry and continuation possess a high potential for the detection, identification, and reconstruction of man-made objects. This contribution focuses on coding this principle in an automatic production system. Such systems capture declarative knowledge. Procedural details are defined as a control strategy for an interpreter. Often an exact solution is not feasible, while approximately correct interpretations of the data with the production system are sufficient. Given input data and a production system, the control acts accumulatively instead of reducing. The approach is assessment-driven, features any-time capability, and fits well into the recently discussed paradigms of cognitive vision. An example of the automatic extraction of groupings and symmetry in man-made structures from high-resolution SAR-image data is given. The contribution also discusses the relation of such an approach to the "mid-level" of what is today proposed as "cognitive vision".
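An accumulating production of the kind described can be illustrated with a toy "good continuation" rule that fires on every admissible pair of line segments, collecting grouping hypotheses rather than reducing the data. The segment data and thresholds below are invented:

```python
import math

# Three toy segments: two roughly collinear horizontal ones and one vertical.
segments = [((0, 0), (1, 0)), ((1.1, 0.02), (2, 0.05)), ((0, 1), (0, 2))]

def angle(seg):
    (x1, y1), (x2, y2) = seg
    return math.atan2(y2 - y1, x2 - x1)

def production_group(segs, max_gap=0.3, max_dangle=0.1):
    """Fire the grouping production on every admissible segment pair."""
    groups = []
    for i, a in enumerate(segs):
        for b in segs[i + 1:]:
            gap = math.dist(a[1], b[0])         # endpoint proximity
            dangle = abs(angle(a) - angle(b))   # similar orientation
            if gap < max_gap and dangle < max_dangle:
                groups.append((a, b))           # accumulate, don't reduce
    return groups

print(len(production_group(segments)))   # 1: only the two horizontal segments group
```

The accumulated hypotheses would then be ranked by an assessment function and extended by further productions (e.g. a symmetry rule), mirroring the blackboard control described above.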

  6. Property-driven functional verification technique for high-speed vision system-on-chip processor

    Science.gov (United States)

    Nshunguyimfura, Victor; Yang, Jie; Liu, Liyuan; Wu, Nanjian

    2017-04-01

    The implementation of functional verification in a fast, reliable, and effective manner is a challenging task in a vision chip verification process. The main reason for this challenge is the stepwise nature of existing functional verification techniques. The verification complexity is also related to the fact that in most vision chip design cycles, extensive effort is focused on optimizing chip metrics such as performance, power, and area, while functional verification is not explicitly considered at the earlier stages, where the most sound decisions are made. In this paper, we propose a semi-automatic property-driven verification technique in which the implementation of all verification components is based on design properties. We introduce a low-dimension property space between the specification space and the implementation space. The aim of this technique is to speed up the verification process for high-performance parallel-processing vision chips. Our experimental results show that the proposed technique can reduce the verification effort by up to 20% for a complex vision chip design while reducing the simulation and debugging overheads.
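The flavor of property-driven verification, as opposed to stepwise directed testing, can be sketched by checking declared properties of a design under randomized stimulus. The toy DUT (a pixel-thresholding operation) and the two properties below are hypothetical, not the chip operations verified in the paper:

```python
import random

def dut_threshold(pixels, t):
    """Toy design under test: binarize an 8-bit pixel stream at threshold t."""
    return [255 if p >= t else 0 for p in pixels]

properties = [
    # P1: the output is always binary
    lambda ins, t, outs: all(o in (0, 255) for o in outs),
    # P2: monotone in the threshold (raising t never turns a 0 into a 255)
    lambda ins, t, outs: all(
        o2 <= o1 for o1, o2 in zip(outs, dut_threshold(ins, t + 1))),
]

random.seed(0)
for trial in range(1000):                       # randomized stimulus
    pixels = [random.randrange(256) for _ in range(64)]
    t = random.randrange(256)
    outs = dut_threshold(pixels, t)
    for prop in properties:
        assert prop(pixels, t, outs), f"property failed on trial {trial}"
print("all properties hold")
```

The properties play the role of the low-dimension property space: each verification component checks a declared design property rather than a hand-written, step-by-step expected trace.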

  7. Gordon Research Conference on photosynthesis: photosynthetic plasticity from the environment to synthetic systems.

    Science.gov (United States)

    Gisriel, Christopher; Saroussi, Shai; Ramundo, Silvia; Fromme, Petra; Govindjee

    2018-01-02

    Here, we provide a summary of the 2017 Gordon Research Conference on Photosynthesis: "Photosynthetic plasticity: from the environment to synthetic systems". This conference was held at the Grand Summit Resort Hotel at Sunday River, Newry, Maine, USA, from July 16 to 21, 2017. We have also included a brief description of the Gordon Research Seminar (for students and post-docs) held during the 2 days preceding the conference. Following the conclusion of the conference's scientific program, four young scientists (Han Bao, Vivek Tiwari, Setsuko Wakao, and Usha Lingappa) were recognized for their research presentations, each of whom received a book as a gift from one of us (Govindjee). Fabrice Rappaport, who chaired the 2015 Gordon Research Conference on Photosynthesis and who lost his fight against cancer in January 2016, was remembered for his profound impact on the field of photosynthesis research.

  8. Synthetic system mimicking the energy transfer and charge separation of natural photosynthesis

    Energy Technology Data Exchange (ETDEWEB)

    Gust, D.; Moore, T.A.

    1985-05-01

    A synthetic molecular triad consisting of a porphyrin P linked to both a quinone Q and a carotenoid polyene C has been prepared as a mimic of natural photosynthesis for solar energy conversion purposes. Laser flash excitation of the porphyrin moiety yields a charge-separated state C⁺•-P-Q⁻• within 100 ps with a quantum yield of more than 0.25. This charge-separated state has a lifetime on the microsecond time scale in suitable solvents. The triad also models photosynthetic antenna function and photoprotection from singlet oxygen damage. The successful biomimicry of photosynthetic charge separation is in part the result of multistep electron transfers which rapidly separate the charges and leave the system at high potential, but with a considerable barrier to recombination.
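The role of multistep transfer in the quantum yield can be illustrated with a simple branching-ratio sketch: the overall yield is the product of the probabilities that each forward electron-transfer step outcompetes its loss channel. The rate constants below are invented placeholders, chosen only so the product lands near the >0.25 yield quoted above:

```python
def branching_yield(k_forward, k_loss):
    """Probability that the forward electron-transfer step wins."""
    return k_forward / (k_forward + k_loss)

# Step 1: P* -> C-P(+.)-Q(-.) transfer competing with excited-state decay.
# Step 2: hole shift to the carotenoid competing with direct recombination.
k1, kd = 1e10, 1e9      # s^-1, hypothetical rate constants
k2, kr = 5e9, 1e10      # s^-1, hypothetical rate constants

total_yield = branching_yield(k1, kd) * branching_yield(k2, kr)
print(f"overall quantum yield ~ {total_yield:.2f}")   # ~ 0.30
```

Each fast forward step keeps its branching ratio high, while the final state recombines only slowly, which is how a modest per-step loss still yields a long-lived, high-potential charge-separated state.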

  9. Surface modeling method for aircraft engine blades by using speckle patterns based on the virtual stereo vision system

    Science.gov (United States)

    Yu, Zhijing; Ma, Kai; Wang, Zhijun; Wu, Jun; Wang, Tao; Zhuge, Jingchang

    2018-03-01

    A blade is one of the most important components of an aircraft engine. Due to its high manufacturing costs, it is indispensable to come up with methods for repairing damaged blades. In order to obtain a surface model of the blades, this paper proposes a modeling method by using speckle patterns based on the virtual stereo vision system. Firstly, blades are sprayed evenly creating random speckle patterns and point clouds from blade surfaces can be calculated by using speckle patterns based on the virtual stereo vision system. Secondly, boundary points are obtained in the way of varied step lengths according to curvature and are fitted to get a blade surface envelope with a cubic B-spline curve. Finally, the surface model of blades is established with the envelope curves and the point clouds. Experimental results show that the surface model of aircraft engine blades is fair and accurate.

  10. The FlexControl concept - a vision, a concept and a product for the future power system

    DEFF Research Database (Denmark)

    Nørgård, Per Bromand

    2011-01-01

    FlexControl is a vision, a concept and a product – a vision for the control of future power systems based on renewable energy and distributed control, a generic concept for smart control of many power units, and 'product' implementations of the concept in different applications. FlexControl is a flexible, modular, scalable and generic control concept designed for smart control of a huge number of distributed, controllable power units (DERs) in the power system, in order to maintain the power balances and the high security of supply and power quality in all parts of the grid. Control is based on aggregated, indirect and rule-based communication and control, and open standards. The indirect control is based on responses to the frequency, the voltage and the broadcasting of global or local price signals. The paper presents an overview of the FlexControl concept, with its elements...
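The indirect, rule-based control described above can be sketched as a local droop-style rule: each unit derives its own set-point from the locally measured frequency and a broadcast price signal, with no central dispatch. The gains, limits and signal values below are invented for illustration:

```python
def unit_response(f_hz, price, p_base=1.0, k_f=0.5, k_price=0.2,
                  f_nom=50.0, p_min=0.0, p_max=2.0):
    """Power set-point [p.u.] of one unit, computed from local rules only.

    Under-frequency (f < f_nom) or a negative price signal both push the
    unit to produce more; the result is clamped to the unit's limits.
    """
    p = p_base - k_f * (f_hz - f_nom) - k_price * price
    return min(p_max, max(p_min, p))

# Under-frequency (49.8 Hz) combined with a low price: the unit raises output.
print(round(unit_response(49.8, -0.5), 3))   # 1.2
```

Because every unit applies the same rule to broadcast signals, the aggregate response scales to a huge number of DERs without per-unit dispatch messages, which is the point of the indirect-control element.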

  11. Living with vision loss

    Science.gov (United States)

    Diabetes - vision loss; Retinopathy - vision loss; Low vision; Blindness - vision loss ... Low vision is a visual disability. Wearing regular glasses or contacts does not help. People with low vision have ...

  12. Robot vision

    International Nuclear Information System (INIS)

    Hall, E.L.

    1984-01-01

    Almost all industrial robots use internal sensors such as shaft encoders, which measure rotary position, or tachometers, which measure velocity, to control their motions. Most controllers also provide interface capabilities so that signals from conveyors, machine tools, and the robot itself may be used to accomplish a task. However, advanced external sensors, such as visual sensors, can provide a much greater degree of adaptability for robot control as well as add automatic inspection capabilities to the industrial robot. Visual and other sensors are now being used in fundamental operations such as material processing with immediate inspection, material handling with adaption, arc welding, and complex assembly tasks. A new industry of robot vision has emerged. The application of these systems is an area of great potential.

  13. Light Vision Color

    Science.gov (United States)

    Valberg, Arne

    2005-04-01

    Light Vision Color takes a well-balanced, interdisciplinary approach to our most important sensory system. The book successfully combines basics in the vision sciences with recent developments from different areas such as neuroscience, biophysics, sensory psychology and philosophy. Originally published in 1998, this edition has been extensively revised and updated to include new chapters on clinical problems and eye diseases, low-vision rehabilitation and the basic molecular biology and genetics of colour vision. It takes a broad interdisciplinary approach, combining basics in the vision sciences with the most recent developments in the area; includes an extensive list of technical terms and explanations to encourage student understanding; and successfully brings together the most important areas of the subject in one volume.

  14. Problems of bentonite rebonding of synthetic system sands in turbine mixers

    Directory of Open Access Journals (Sweden)

    A. Fedoryszyn

    2008-12-01

    Full Text Available Turbine (rotor) mixers are widely used in foundries for bentonite rebonding of synthetic system sands. They form basic equipment in modern sand processing plants. Their major advantage is the short time of the rebond mixing cycle. Until now, no complete theoretical description of the process of mixing in turbine mixers has been offered. Neither does it seem reasonable to try to adapt the theoretical background of the mixing process carried out in mixers of other types, for example roller mixers [1], to the description of the operation of turbine mixers. One can risk the statement that the individual fundamental operations of mixing in roller mixers, like kneading, grinding, mixing and thinning, are also performed in turbine mixers. Yet, even if so, in turbine mixers these processes proceed at a rate and intensity different than in the roller mixers. It should also be recalled that the theoretical background usually relates to the preparation of sand mixtures from new components, and this considerably restricts the field of application of these descriptions when referred to rebond mixing of the system sand. The fundamentals of the process of synthetic sand rebonding with bentonite require the determination and description of operations like disaggregation, even distribution of binder and water within the entire volume of the rebonded sand batch, coating of sand grains, binder activation and aeration. This study presents the scope of research on the sand rebonding process carried out in turbine mixers. The aim has been to determine the range and specific values of the design and operating parameters needed to obtain optimum properties of the rebonded sand as well as the energy input in the process.

  15. A Neural Network Architecture For Rapid Model Indexing In Computer Vision Systems

    Science.gov (United States)

    Pawlicki, Ted

    1988-03-01

    Models of objects stored in memory have been shown to be useful for guiding the processing of computer vision systems. A major consideration in such systems, however, is how stored models are initially accessed and indexed by the system. As the number of stored models increases, the time required to search memory for the correct model becomes high. Parallel, distributed, connectionist neural networks have been shown to have appealing content-addressable memory properties. This paper discusses an architecture for efficient storage and reference of model memories stored as stable patterns of activity in a parallel, distributed, connectionist neural network. The emergent properties of content addressability and resistance to noise are exploited to perform indexing of the appropriate object-centered model from image-centered primitives. The system consists of three network modules, each of which represents information relative to a different frame of reference. The model memory network is a large state-space vector where fields in the vector correspond to ordered component objects and relative, object-based spatial relationships between the component objects. The component assertion network represents evidence about the existence of object primitives in the input image. It establishes local frames of reference for object primitives relative to the image-based frame of reference. The spatial relationship constraint network is an intermediate representation which enables the association between the object-based and the image-based frames of reference. This intermediate level represents information about possible object orderings and establishes relative spatial relationships from the image-based information in the component assertion network below. It is also constrained by the lawful object orderings in the model memory network above. The system design is consistent with current psychological theories of recognition by components. It also seems to support Marr's notions
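The content-addressable indexing idea can be sketched with a small Hopfield-style network: stored model vectors become attractors of the dynamics, and a noisy image-derived cue settles onto the nearest stored pattern. The pattern count and sizes below are toy choices, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
patterns = rng.choice([-1, 1], size=(3, 64))      # three stored "models"

# Hebbian outer-product weights with a zeroed diagonal.
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0.0)

def recall(cue, steps=10):
    """Relax a cue vector onto a stored pattern by synchronous updates."""
    s = cue.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1
    return s

# Corrupt 5 of 64 bits of stored model 0 and recall it from the noisy cue.
cue = patterns[0].copy()
flip = rng.choice(64, size=5, replace=False)
cue[flip] *= -1
out = recall(cue)
print(np.array_equal(out, patterns[0]))   # True: the cue indexes model 0
```

The same attractor dynamics supply both properties the paper exploits: content addressability (partial evidence retrieves a full model) and resistance to noise.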

  16. Incarnation and the discarnate states: an exposition on the function of the Principles in the system of W. B. Yeats's A Vision

    OpenAIRE

    2009-01-01

    M.A. The function of the Principles in the system rendered in W. B. Yeats’s A Vision (1937), like most aspects of the system, has received minimal critical and scholarly attention. The reason for this state of affairs is that most Yeats scholars prefer to avoid studying A Vision, for various reasons. The result of this is that little is known of the system. Certain scholars have argued that A Vision is a hoax and an incomprehensible work, which need not be elucidated. The spiritual origin ...

  17. The Impact of Vision Loss Among Survivors of Childhood Central Nervous System Astroglial Tumors

    Science.gov (United States)

    de Blank, Peter MK; Fisher, Michael J; Lu, Lu; Leisenring, Wendy M; Ness, Kirsten K; Sklar, Charles A; Stovall, Marilyn; Vukadinovich, Chris; Robison, Leslie L.; Armstrong, Gregory T.; Krull, Kevin R

    2015-01-01

    Background The impact of impaired vision on cognitive and psychosocial outcomes among long-term survivors of childhood low-grade gliomas has not been investigated previously, but could inform therapeutic decision-making. Methods Data from the Childhood Cancer Survivor Study were used to investigate psychological (measures of cognitive/emotional function) and socioeconomic (education, income, employment, marital status, independent living) outcomes among astroglial tumor survivors grouped by: (a) vision without impairment, (b) vision with impairment including unilateral blindness, visual field deficits or amblyopia, or (c) bilateral blindness. The effect of vision status on outcomes was examined using multivariable logistic regression, adjusting for age, gender, cranial radiation therapy and medical comorbidities. Results Among 1,233 survivors of childhood astroglial tumor ≥ 5 years post-diagnosis, 277 (22.5%) had visual impairment. In multivariable analysis, survivors with bilateral blindness were more likely to be unmarried (adjusted odds ratio [95% confidence interval]: 4.7 [1.5, 15.0]), live with a caregiver (3.1 [1.3, 7.5]), and be unemployed (2.2 [1.1, 4.5]) compared to those without visual impairment. Bilateral blindness had no measurable effect on cognitive or emotional outcomes, and vision with impairment was not significantly associated with any psychological or socioeconomic outcomes. Conclusions Adult survivors of childhood astroglial tumors with bilateral blindness are more likely to live unmarried and dependently and to be unemployed. Survivors with visual impairment but some remaining vision did not differ significantly with regard to psychological function and socioeconomic status from those without visual impairment. PMID:26755438

  18. Impact of vision loss among survivors of childhood central nervous system astroglial tumors.

    Science.gov (United States)

    de Blank, Peter M K; Fisher, Michael J; Lu, Lu; Leisenring, Wendy M; Ness, Kirsten K; Sklar, Charles A; Stovall, Marilyn; Vukadinovich, Chris; Robison, Leslie L; Armstrong, Gregory T; Krull, Kevin R

    2016-03-01

    The impact of impaired vision on cognitive and psychosocial outcomes among long-term survivors of childhood low-grade gliomas has not been investigated previously but could inform therapeutic decision making. Data from the Childhood Cancer Survivor Study were used to investigate psychological outcomes (measures of cognitive/emotional function) and socioeconomic outcomes (education, income, employment, marital status, and independent living) among astroglial tumor survivors grouped by 1) vision without impairment, 2) vision with impairment (including unilateral blindness, visual field deficits, and amblyopia), or 3) bilateral blindness. The effect of vision status on outcomes was examined with multivariate logistic regression with adjustments for age, sex, cranial radiation therapy, and medical comorbidities. Among 1233 survivors of childhood astroglial tumors 5 or more years after their diagnosis, 277 (22.5%) had visual impairment. In a multivariate analysis, survivors with bilateral blindness were more likely to be unmarried (adjusted odds ratio (OR), 4.7; 95% confidence interval [CI], 1.5-15.0), live with a caregiver (adjusted OR, 3.1; 95% CI, 1.3-7.5), and be unemployed (adjusted OR, 2.2; 95% CI, 1.1-4.5) in comparison with those without visual impairment. Bilateral blindness had no measurable effect on cognitive or emotional outcomes, and vision with impairment was not significantly associated with any psychological or socioeconomic outcomes. Adult survivors of childhood astroglial tumors with bilateral blindness were more likely to live unmarried and dependently and to be unemployed. Survivors with visual impairment but some remaining vision did not differ significantly with respect to psychological function and socioeconomic status from those without visual impairment. Cancer 2016;122:730-739. © 2016 American Cancer Society.

  19. Development of a Stereo Vision Measurement System for a 3D Three-Axial Pneumatic Parallel Mechanism Robot Arm

    Directory of Open Access Journals (Sweden)

    Chien-Lun Hou

    2011-02-01

    Full Text Available In this paper, a stereo vision 3D position measurement system for a three-axial pneumatic parallel mechanism robot arm is presented. The stereo vision 3D position measurement system aims to measure the 3D trajectories of the end-effector of the robot arm. To track the end-effector, the circle detection algorithm is used to detect the desired target and the SAD algorithm is used to track the moving target and to search for the corresponding target location along the conjugate epipolar line in the stereo pair. After camera calibration, both intrinsic and extrinsic parameters of the stereo rig can be obtained, so images can be rectified according to the camera parameters. Thus, through the epipolar rectification, the stereo matching process is reduced to a horizontal search along the conjugate epipolar line. Finally, 3D trajectories of the end-effector are computed by stereo triangulation. The experimental results show that the stereo vision 3D position measurement system proposed in this paper can successfully track and measure the fifth-order polynomial trajectory and sinusoidal trajectory of the end-effector of the three-axial pneumatic parallel mechanism robot arm.
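Once rectification has reduced matching to a horizontal search, triangulating a correspondence is a short computation from the disparity. A minimal sketch with invented camera parameters (not the calibrated rig from the paper):

```python
import numpy as np

# Hypothetical rectified-rig parameters for illustration.
f_px = 800.0        # focal length [pixels]
baseline = 0.12     # distance between the two camera centers [m]
cx, cy = 320.0, 240.0   # principal point [pixels]

def triangulate(u_left, u_right, v):
    """Recover (X, Y, Z) from a correspondence on the same image row.

    In a rectified pair, depth follows from the horizontal disparity
    alone: Z = f * B / d, then X and Y come from the pinhole model.
    """
    disparity = u_left - u_right          # horizontal search result
    Z = f_px * baseline / disparity
    X = (u_left - cx) * Z / f_px
    Y = (v - cy) * Z / f_px
    return np.array([X, Y, Z])

# A target matched at column 360 (left) and 340 (right), on row 240:
p = triangulate(360.0, 340.0, 240.0)
print(p)   # ≈ [0.24, 0.0, 4.8] m
```

Repeating this per frame on the tracked circle center yields the measured 3D trajectory of the end-effector.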

  20. Algorithm & SoC design for automotive vision systems for smart safe driving system

    CERN Document Server

    Shin, Hyunchul

    2014-01-01

    An emerging trend in the automobile industry is its convergence with information technology (IT). Indeed, it has been estimated that almost 90% of new automobile technologies involve IT in some form. Smart driving technologies that improve safety, as well as green fuel technologies, are quite representative of the convergence between IT and automobiles. The smart driving technologies include three key elements: sensing of driving environments, detection of objects and potential hazards, and the generation of driving control signals including warning signals. Although radar-based systems are primarily used for sensing the driving environments, the camera has gained importance in advanced driver assistance systems (ADAS). This book covers system-on-a-chip (SoC) designs, including both algorithms and hardware, related to image sensing and object detection by using the camera for smart driving systems. It introduces a variety of algorithms such as lens correction, super resolution, image enhancement, and object ...