WorldWideScience

Sample records for computer image recognition

  1. Computer image processing and recognition

    Science.gov (United States)

    Hall, E. L.

    1979-01-01

    A systematic introduction to the concepts and techniques of computer image processing and recognition is presented. Consideration is given to such topics as image formation and perception; computer representation of images; image enhancement and restoration; reconstruction from projections; digital television, encoding, and data compression; scene understanding; scene matching and recognition; and processing techniques for linear systems.

  2. Applications of evolutionary computation in image processing and pattern recognition

    CERN Document Server

    Cuevas, Erik; Perez-Cisneros, Marco

    2016-01-01

    This book presents the use of efficient Evolutionary Computation (EC) algorithms for solving diverse real-world image processing and pattern recognition problems. It provides an overview of the different aspects of evolutionary methods in order to enable the reader to reach a global understanding of the field and to conduct studies on specific evolutionary techniques related to applications in image processing and pattern recognition. It explains the basic ideas of the proposed applications in a way that can also be understood by readers outside the field. Image processing and pattern recognition practitioners who are not evolutionary computation researchers will appreciate the discussed techniques as more than simple theoretical tools, since they have been adapted to solve significant problems that commonly arise in such areas. On the other hand, members of the evolutionary computation community can learn the way in which image processing and pattern recognition problems can be translated into an...

  3. Intracerebral hemorrhage auto recognition in computed tomography images

    International Nuclear Information System (INIS)

    Choi, Seok Yoon; Kang, Se Sik; Kim, Chang Soo; Kim, Jung Hoon; Kim, Dong Hyun; Ye, Soo Young; Ko, Seong Jin

    2013-01-01

    The CT examination sometimes fails to localize the cerebral hemorrhage region, depending on its severity, and may challenge a pathologist who is not sufficiently trained for emergencies. Therefore, an assisting role is necessary for examination: automatic and quick detection of the cerebral hemorrhage region and supply of quantitative information in emergencies. A computer-based automatic detection and recognition system may be of great service for detecting the bleeding region. As a result of this research, we succeeded not only in automatic detection of the cerebral hemorrhage region, by applying threshold processing, morphological operations, and roundness calculation to the bleeding region, but also in developing a PCA-based classifier to screen out wrong choices in the group of detection candidates. We believe that if the newly developed system is applied to cerebral hemorrhage patients in critical condition, it will provide valuable data to the medical team for operation planning.
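
    The detection chain described above (window thresholding, morphological clean-up, roundness screening, followed by a PCA-based rejection of false candidates) can be sketched roughly as follows. This is a minimal illustration using OpenCV and NumPy on an 8-bit CT slice, with window limits and size/roundness thresholds chosen arbitrarily for the sketch; it is not the authors' implementation.

    import cv2
    import numpy as np

    def hemorrhage_candidates(ct_slice, lo=60, hi=90, min_area=50, min_roundness=0.6):
        """Window-threshold an 8-bit CT slice, clean the mask with morphology,
        and keep roughly round blobs as hemorrhage candidates. A PCA-based
        classifier (not shown) would then screen the candidate group."""
        mask = ((ct_slice >= lo) & (ct_slice <= hi)).astype(np.uint8) * 255
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)    # remove speckle
        mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)   # fill small holes

        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        candidates = []
        for c in contours:
            area = cv2.contourArea(c)
            perimeter = cv2.arcLength(c, True)
            if area < min_area or perimeter == 0:
                continue
            roundness = 4.0 * np.pi * area / (perimeter ** 2)    # 1.0 for a perfect circle
            if roundness >= min_roundness:
                candidates.append(c)
        return candidates

    # candidates = hemorrhage_candidates(slice_8bit)  # hypothetical 8-bit slice array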

  4. Biologically motivated computationally intensive approaches to image pattern recognition

    NARCIS (Netherlands)

    Petkov, Nikolay

    This paper presents some of the research activities of the group on vision as a grand challenge problem, whose solution is estimated to need the power of Tflop/s computers and for which computational methods have yet to be developed. The approaches concerned are biologically motivated, in

  5. Algebraic Geometry and Computational Algebraic Geometry for Image Database Indexing, Image Recognition, And Computer Vision

    National Research Council Canada - National Science Library

    Stiller, Peter

    1999-01-01

    .... The theory yields a feature dependent system of equations in variables which represent the 3D invariants of certain features on an object and the 2D invariants of those same features in an image. These equations...

  6. Workshop on Standards for Image Pattern Recognition. Computer Science & Technology Series.

    Science.gov (United States)

    Evans, John M. , Ed.; And Others

    Automatic image pattern recognition techniques have been successfully applied to improving productivity and quality in both manufacturing and service applications. Automatic image pattern recognition algorithms are often developed and tested using unique databases for each specific application. Quantitative comparison of different approaches and…

  7. Investigation into diagnostic agreement using automated computer-assisted histopathology pattern recognition image analysis

    Directory of Open Access Journals (Sweden)

    Joshua D Webster

    2012-01-01

    Full Text Available The extent to which histopathology pattern recognition image analysis (PRIA) agrees with microscopic assessment has not been established. Thus, a commercial PRIA platform was evaluated in two applications using whole-slide images. Substantial agreement, lacking significant constant or proportional errors, between PRIA and manual morphometric image segmentation was obtained for pulmonary metastatic cancer areas (Passing/Bablok regression). Bland-Altman analysis indicated heteroscedastic measurements and tendency toward increasing variance with increasing tumor burden, but no significant trend in mean bias. The average between-methods percent tumor content difference was -0.64. Analysis of between-methods measurement differences relative to the percent tumor magnitude revealed that method disagreement had an impact primarily in the smallest measurements (tumor burden … 0.988, indicating high reproducibility for both methods, yet PRIA reproducibility was superior (C.V.: PRIA = 7.4, manual = 17.1). Evaluation of PRIA on morphologically complex teratomas led to diagnostic agreement with pathologist assessments of pluripotency on subsets of teratomas. Accommodation of the diversity of teratoma histologic features frequently resulted in detrimental trade-offs, increasing PRIA error elsewhere in images. PRIA error was nonrandom and influenced by variations in histomorphology. File-size limitations encountered while training algorithms and consequences of spectral image processing dominance contributed to diagnostic inaccuracies experienced for some teratomas. PRIA appeared better suited for tissues with limited phenotypic diversity. Technical improvements may enhance diagnostic agreement, and consistent pathologist input will benefit further development and application of PRIA.

  8. Computationally efficient SVM multi-class image recognition with confidence measures

    International Nuclear Information System (INIS)

    Makili, Lazaro; Vega, Jesus; Dormido-Canto, Sebastian; Pastor, Ignacio; Murari, Andrea

    2011-01-01

    Typically, machine learning methods produce non-qualified estimates, i.e. the accuracy and reliability of the predictions are not provided. Transductive predictors are very recent classifiers able to provide, simultaneously with the prediction, a couple of values (confidence and credibility) to reflect the quality of the prediction. Usually, a drawback of the transductive techniques for huge datasets and large dimensionality is the high computational time. To overcome this issue, a more efficient classifier has been used in a multi-class image classification problem in the TJ-II stellarator database. It is based on the creation of a hash function to generate several 'one versus the rest' classifiers for every class. By using Support Vector Machines as the underlying classifier, a comparison between the pure transductive approach and the new method has been performed. In both cases, the success rates are high and the computation time with the new method is up to 0.4 times the old one.
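
    As a rough illustration of the 'one versus the rest' arrangement mentioned above, the scikit-learn sketch below trains one binary SVM per class and reports a crude per-sample confidence derived from class-probability estimates. The synthetic data, the RBF kernel and the probability-based confidence are assumptions for the sketch; they stand in for the TJ-II feature vectors and for the transductive confidence/credibility measures of the paper.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.multiclass import OneVsRestClassifier
    from sklearn.svm import SVC

    # Synthetic stand-in for the multi-class image feature vectors.
    X, y = make_classification(n_samples=600, n_features=40, n_informative=20,
                               n_classes=5, n_clusters_per_class=1, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

    # One binary SVM per class ("one versus the rest").
    clf = OneVsRestClassifier(SVC(kernel="rbf", probability=True, random_state=0))
    clf.fit(X_tr, y_tr)

    proba = clf.predict_proba(X_te)
    pred = proba.argmax(axis=1)
    confidence = proba.max(axis=1)          # crude per-sample confidence score
    print("accuracy:", (pred == y_te).mean())
    print("mean confidence:", confidence.mean())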

  9. Computationally efficient SVM multi-class image recognition with confidence measures

    Energy Technology Data Exchange (ETDEWEB)

    Makili, Lazaro [Dpto. Informatica y Automatica - UNED, Madrid (Spain); Vega, Jesus, E-mail: jesus.vega@ciemat.es [Asociacion EURATOM/CIEMAT para Fusion, Madrid (Spain); Dormido-Canto, Sebastian [Dpto. Informatica y Automatica - UNED, Madrid (Spain); Pastor, Ignacio [Asociacion EURATOM/CIEMAT para Fusion, Madrid (Spain); Murari, Andrea [Associazione EURATOM-CIEMAT per la Fusione, Consorzio RFX, Padova (Italy)

    2011-10-15

    Typically, machine learning methods produce non-qualified estimates, i.e. the accuracy and reliability of the predictions are not provided. Transductive predictors are very recent classifiers able to provide, simultaneously with the prediction, a couple of values (confidence and credibility) to reflect the quality of the prediction. Usually, a drawback of the transductive techniques for huge datasets and large dimensionality is the high computational time. To overcome this issue, a more efficient classifier has been used in a multi-class image classification problem in the TJ-II stellarator database. It is based on the creation of a hash function to generate several 'one versus the rest' classifiers for every class. By using Support Vector Machines as the underlying classifier, a comparison between the pure transductive approach and the new method has been performed. In both cases, the success rates are high and the computation time with the new method is up to 0.4 times the old one.

  10. Iris recognition via plenoptic imaging

    Energy Technology Data Exchange (ETDEWEB)

    Santos-Villalobos, Hector J.; Boehnen, Chris Bensing; Bolme, David S.

    2017-11-07

    Iris recognition can be accomplished for a wide variety of eye images by using plenoptic imaging. Using plenoptic technology, it is possible to correct focus after image acquisition. One example technology reconstructs images having different focus depths and stitches them together, resulting in a fully focused image, even in an off-angle gaze scenario. Another example technology determines three-dimensional data for an eye and incorporates it into an eye model used for iris recognition processing. Another example technology detects contact lenses. Application of the technologies can result in improved iris recognition under a wide variety of scenarios.

  11. Modelling the influence of noise of the image sensor for blood cells recognition in computer microscopy

    Science.gov (United States)

    Nikitaev, V. G.; Nagornov, O. V.; Pronichev, A. N.; Polyakov, E. V.; Dmitrieva, V. V.

    2017-12-01

    The first stage of diagnosing blood cancer is the analysis of blood smears. The application of decision-making support systems would reduce the subjectivity of the diagnostic process and help avoid errors that can result in often irreversible changes in the patient's condition. In this regard, the solution of this problem requires the use of modern technology. Texture features are one of the tools for automated classification of blood cells, and the task of finding informative features among them is promising. The paper investigates the effect of image-sensor noise on informative texture features using methods of mathematical modelling.

  12. Advances in image processing and pattern recognition

    International Nuclear Information System (INIS)

    Cappellini, V.

    1986-01-01

    The conference papers reported provide an authoritative and permanent record of the contributions. Some papers are more theoretical or of a review nature, while others contain new implementations and applications. They are conveniently grouped into the following 7 fields (after a general overview): Acquisition and Presentation of 2-D and 3-D Images; Static and Dynamic Image Processing; Determination of Object's Position and Orientation; Objects and Characters Recognition; Semantic Models and Image Understanding; Robotics and Computer Vision in Manufacturing; Specialized Processing Techniques and Structures. In particular, new digital image processing and recognition methods, implementation architectures and special advanced applications (industrial automation, robotics, remote sensing, biomedicine, etc.) are presented. (Auth.)

  13. Data structures, computer graphics, and pattern recognition

    CERN Document Server

    Klinger, A; Kunii, T L

    1977-01-01

    Data Structures, Computer Graphics, and Pattern Recognition focuses on the computer graphics and pattern recognition applications of data structures methodology. This book presents design-related principles and research aspects of computer graphics, system design, data management, and pattern recognition tasks. The topics include data structure design, concise structuring of geometric data for computer-aided design, and data structures for pattern recognition algorithms. The survey of data structures for computer graphics systems, application of relational data structures in computer graphics…

  14. A Statistical Approach to Retrieving Historical Manuscript Images without Recognition

    National Research Council Canada - National Science Library

    Rath, Toni M; Lavrenko, Victor; Manmatha, R

    2003-01-01

    ...), and word spotting -- an image matching approach (computationally expensive). In this work, the authors present a novel retrieval approach for historical document collections that does not require recognition...

  15. An automatic image recognition approach

    Directory of Open Access Journals (Sweden)

    Tudor Barbu

    2007-07-01

    Full Text Available Our paper focuses on the graphical analysis domain. We propose an automatic image recognition technique. This approach consists of two main pattern recognition steps. First, it performs an image feature extraction operation on an input image set, using statistical dispersion features. Then, an unsupervised classification process is performed on the previously obtained graphical feature vectors. An automatic region-growing based clustering procedure is proposed and utilized in the classification stage.
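
    A minimal sketch of such a two-step pipeline is given below: a handful of statistical dispersion measures per image, followed by a greedy region-growing-style clustering in feature space. The particular dispersion measures, the distance threshold and the random test data are illustrative assumptions, not the descriptors or procedure of the paper.

    import numpy as np

    def dispersion_features(img):
        """Simple statistical dispersion descriptors of a grayscale image."""
        v = img.astype(float).ravel()
        return np.array([v.std(),                                    # standard deviation
                         np.subtract(*np.percentile(v, [75, 25])),   # interquartile range
                         np.median(np.abs(v - np.median(v)))])       # median absolute deviation

    def region_growing_clustering(feats, radius):
        """Greedy 'region growing' in feature space: seed a cluster with an
        unassigned vector and absorb every vector within `radius` of the
        cluster's current centroid."""
        labels = -np.ones(len(feats), dtype=int)
        current = 0
        for i in range(len(feats)):
            if labels[i] != -1:
                continue
            labels[i] = current
            members = [i]
            changed = True
            while changed:
                changed = False
                centroid = feats[members].mean(axis=0)
                for j in range(len(feats)):
                    if labels[j] == -1 and np.linalg.norm(feats[j] - centroid) <= radius:
                        labels[j] = current
                        members.append(j)
                        changed = True
            current += 1
        return labels

    rng = np.random.default_rng(0)
    images = [rng.integers(0, 256, (64, 64)) for _ in range(20)]   # random stand-in images
    feats = np.array([dispersion_features(im) for im in images])
    print(region_growing_clustering(feats, radius=10.0))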

  16. Human ear recognition by computer

    CERN Document Server

    Bhanu, Bir; Chen, Hui

    2010-01-01

    Biometrics deals with recognition of individuals based on their physiological or behavioral characteristics. The human ear is a new feature in biometrics that has several merits over the more common face, fingerprint and iris biometrics. Unlike the fingerprint and iris, it can be easily captured from a distance without a fully cooperative subject, although it may sometimes be hidden by hair, a scarf or jewellery. Also, unlike a face, the ear is a relatively stable structure that does not change much with age and facial expressions. "Human Ear Recognition by Computer" is the first book on…

  17. Challenging ocular image recognition

    Science.gov (United States)

    Pauca, V. Paúl; Forkin, Michael; Xu, Xiao; Plemmons, Robert; Ross, Arun A.

    2011-06-01

    Ocular recognition is a new area of biometric investigation targeted at overcoming the limitations of iris recognition performance in the presence of non-ideal data. There are several advantages for increasing the area beyond the iris, yet there are also key issues that must be addressed such as size of the ocular region, factors affecting performance, and appropriate corpora to study these factors in isolation. In this paper, we explore and identify some of these issues with the goal of better defining parameters for ocular recognition. An empirical study is performed where iris recognition methods are contrasted with texture and point operators on existing iris and face datasets. The experimental results show a dramatic recognition performance gain when additional features are considered in the presence of poor quality iris data, offering strong evidence for extending interest beyond the iris. The experiments also highlight the need for the direct collection of additional ocular imagery.

  18. Invariant Face recognition Using Infrared Images

    International Nuclear Information System (INIS)

    Zahran, E.G.

    2012-01-01

    Over the past few decades, face recognition has become a rapidly growing research topic due to increasing demands in many applications of our daily life, such as airport surveillance, personal identification in law enforcement, surveillance systems, information safety, securing financial transactions, and computer security. The objective of this thesis is to develop a face recognition system capable of recognizing persons with high recognition capability and low processing time, under different illumination conditions and different facial expressions. The thesis presents a study of the performance of the face recognition system using two techniques: Principal Component Analysis (PCA) and Zernike Moments (ZM). The performance of the recognition system is evaluated according to several aspects, including the recognition rate and the processing time. Face recognition systems that use visible images are sensitive to variations in lighting conditions and facial expressions. The performance of these systems may be degraded under poor illumination conditions or for subjects of various skin colors. Several solutions have been proposed to overcome these limitations. One of them is to work in the infrared (IR) spectrum. IR images have been suggested as an alternative source of information for detection and recognition of faces when there is little or no control over lighting conditions. This arises from the fact that these images are formed by thermal emissions from the skin, an intrinsic property, since these emissions depend on the distribution of blood vessels under the skin. On the other hand, IR face recognition systems still have limitations with temperature variations and with recognition of persons wearing eyeglasses. In this thesis we fuse IR images with visible images to enhance the performance of face recognition systems. Images are fused using the wavelet transform. Simulation results show that the fusion of visible and
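
    The fusion step can be sketched with PyWavelets as below: average the approximation bands of the two registered images and keep the larger-magnitude detail coefficients. The wavelet, the decomposition level and the fusion rule are common choices assumed for the sketch; the thesis does not necessarily use these exact settings.

    import numpy as np
    import pywt

    def wavelet_fuse(visible, infrared, wavelet="db2", level=2):
        """Fuse two registered, same-size grayscale images in the wavelet domain."""
        cv = pywt.wavedec2(visible.astype(float), wavelet, level=level)
        ci = pywt.wavedec2(infrared.astype(float), wavelet, level=level)

        fused = [(cv[0] + ci[0]) / 2.0]                  # average the approximation band
        for dv, di in zip(cv[1:], ci[1:]):               # per-level (horiz, vert, diag) details
            fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                               for a, b in zip(dv, di)))
        out = pywt.waverec2(fused, wavelet)
        return np.clip(out, 0, 255).astype(np.uint8)

    # Toy usage with random arrays standing in for registered visible/IR face images.
    rng = np.random.default_rng(1)
    vis = rng.integers(0, 256, (128, 128)).astype(np.uint8)
    ir = rng.integers(0, 256, (128, 128)).astype(np.uint8)
    print(wavelet_fuse(vis, ir).shape)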

  19. Indoor navigation by image recognition

    Science.gov (United States)

    Choi, Io Teng; Leong, Chi Chong; Hong, Ka Wo; Pun, Chi-Man

    2017-07-01

    With the progress of smartphone hardware, it has become simple to use image recognition techniques, such as face detection, on a smartphone. In addition, indoor navigation systems have developed much more slowly than outdoor navigation systems. Hence, this research demonstrates the use of image recognition for navigation in indoor environments. In this paper, we introduce an indoor navigation application that uses features of the indoor environment to locate the user's position and a route-calculating algorithm to generate an appropriate path for the user. The application is implemented on an Android smartphone rather than an iPhone; nevertheless, the design can also be applied on iOS because it does not use features specific to Android. We found that a digital navigation system provides better and clearer location information than a paper map. Also, the indoor environment is well suited to image recognition processing. Hence, the results motivate us to design an indoor navigation system using image recognition.

  20. Privacy-preserving architecture for forensic image recognition

    NARCIS (Netherlands)

    Peter, Andreas; Hartman, T.; Muller, S.; Katzenbeisser, S.

    2013-01-01

    Forensic image recognition is an important tool in many areas of law enforcement where an agency wants to prosecute possessors of illegal images. The recognition of illegal images that might have undergone human imperceptible changes (e.g., a JPEG-recompression) is commonly done by computing a

  1. Recognition of Images Degraded by Gaussian Blur

    Czech Academy of Sciences Publication Activity Database

    Flusser, Jan; Farokhi, Sajad; Höschl, Cyril; Suk, Tomáš; Zitová, Barbara; Pedone, M.

    2016-01-01

    Roč. 25, č. 2 (2016), s. 790-806 ISSN 1057-7149 R&D Projects: GA ČR(CZ) GA15-16928S Institutional support: RVO:67985556 Keywords : Blurred image * object recognition * blur invariant comparison * Gaussian blur * projection operators * image moments * moment invariants Subject RIV: JD - Computer Applications, Robotics Impact factor: 4.828, year: 2016 http://library.utia.cas.cz/separaty/2016/ZOI/flusser-0454335.pdf

  2. Image pattern recognition supporting interactive analysis and graphical visualization

    Science.gov (United States)

    Coggins, James M.

    1992-01-01

    Image Pattern Recognition attempts to infer properties of the world from image data. Such capabilities are crucial for making measurements from satellite or telescope images related to Earth and space science problems. Such measurements can be the required product itself, or the measurements can be used as input to a computer graphics system for visualization purposes. At present, the field of image pattern recognition lacks a unified scientific structure for developing and evaluating image pattern recognition applications. The overall goal of this project is to begin developing such a structure. This report summarizes results of a 3-year research effort in image pattern recognition addressing the following three principal aims: (1) to create a software foundation for the research and identify image pattern recognition problems in Earth and space science; (2) to develop image measurement operations based on Artificial Visual Systems; and (3) to develop multiscale image descriptions for use in interactive image analysis.

  3. Image processing and recognition for biological images.

    Science.gov (United States)

    Uchida, Seiichi

    2013-05-01

    This paper reviews image processing and pattern recognition techniques, which will be useful to analyze bioimages. Although this paper does not provide their technical details, it will be possible to grasp their main tasks and typical tools to handle the tasks. Image processing is a large research area aimed at improving the visibility of an input image and acquiring some valuable information from it. As the main tasks of image processing, this paper introduces gray-level transformation, binarization, image filtering, image segmentation, visual object tracking, optical flow and image registration. Image pattern recognition is the technique of classifying an input image into one of the predefined classes and also has a large research area. This paper overviews its two main modules, namely the feature extraction module and the classification module. Throughout the paper, it is emphasized that the bioimage is a very difficult target even for state-of-the-art image processing and pattern recognition techniques due to noise, deformations, etc. This paper is expected to be a tutorial guide to bridge biology and image processing researchers for their further collaboration to tackle such a difficult target. © 2013 The Author Development, Growth & Differentiation © 2013 Japanese Society of Developmental Biologists.
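
    For readers new to these tasks, the OpenCV snippet below runs a few of the basic operations mentioned (a gray-level transformation, filtering and binarization, followed by a crude segmentation) on a synthetic blob image. It is a toy illustration added here for orientation, not an example from the review.

    import cv2
    import numpy as np

    # Synthetic "bioimage": two bright blobs plus noise, standing in for a micrograph.
    rng = np.random.default_rng(0)
    img = np.zeros((256, 256), np.uint8)
    cv2.circle(img, (80, 90), 30, 180, -1)
    cv2.circle(img, (170, 160), 40, 140, -1)
    noisy = cv2.add(img, rng.integers(0, 40, img.shape, dtype=np.uint8))

    equalized = cv2.equalizeHist(noisy)                   # gray-level transformation
    smoothed = cv2.GaussianBlur(equalized, (5, 5), 0)     # image filtering
    _, binary = cv2.threshold(smoothed, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)   # binarization
    n_labels, labels = cv2.connectedComponents(binary)    # simple segmentation
    print("segments found (incl. background):", n_labels)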

  4. Fingerprint recognition using image processing

    Science.gov (United States)

    Dholay, Surekha; Mishra, Akassh A.

    2011-06-01

    Fingerprint recognition is concerned with the difficult task of efficiently matching the fingerprint image of a person against the fingerprints present in a database. Fingerprint recognition is used in forensic science, where it helps in finding criminals, and also in the authentication of a particular person, since a fingerprint is unique to each person and differs from person to person. The present paper describes fingerprint recognition methods using various edge detection techniques and shows how to recognize a fingerprint correctly from camera images. The described method does not require a special device; a simple camera can be used, so the technique can also be used on a simple camera mobile phone. Factors affecting the process include poor illumination, noise disturbance, viewpoint dependence, climate factors, and imaging conditions. These factors have to be considered, so various image enhancement techniques are performed to increase image quality and remove noise. The present paper describes the technique of applying contour tracking to the fingerprint image, then applying edge detection to the contour, and finally matching the edges inside the contour.
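
    A rough OpenCV sketch of the preprocessing chain described above (enhancement, noise removal, finger contour extraction, then edge detection restricted to the contour) is shown below. The specific operators and thresholds are illustrative assumptions, not the paper's implementation.

    import cv2
    import numpy as np

    def fingerprint_edges(path):
        """Enhance a camera image of a fingertip, find the finger contour,
        and return the edge map restricted to the inside of that contour."""
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        if gray is None:
            raise FileNotFoundError(path)
        gray = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(gray)  # contrast enhancement
        gray = cv2.medianBlur(gray, 3)                                          # noise removal

        _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        finger = max(contours, key=cv2.contourArea)       # assume the largest blob is the finger

        roi = np.zeros_like(mask)
        cv2.drawContours(roi, [finger], -1, 255, thickness=-1)   # filled finger region
        edges = cv2.Canny(gray, 60, 150)                         # ridge edges
        return cv2.bitwise_and(edges, roi)                       # keep edges inside the contour

    # edges = fingerprint_edges("finger.jpg")   # hypothetical camera image file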

  5. Fine-grained recognition of plants from images

    OpenAIRE

    Šulc, Milan; Matas, Jiří

    2017-01-01

    Background Fine-grained recognition of plants from images is a challenging computer vision task, due to the diverse appearance and complex structure of plants, high intra-class variability and small inter-class differences. We review the state-of-the-art and discuss plant recognition tasks, from identification of plants from specific plant organs to general plant recognition “in the wild”. Results We propose texture analysis and deep learning methods for different plant recognition tasks. The...

  6. High-speed all-optical pattern recognition of dispersive Fourier images through a photonic reservoir computing subsystem.

    Science.gov (United States)

    Mesaritakis, Charis; Bogris, Adonis; Kapsalis, Alexandros; Syvridis, Dimitris

    2015-07-15

    In this Letter, we present and fully model a photonic scheme that allows the high-speed identification of images acquired through the dispersive Fourier technique. The proposed setup consists of a photonic reservoir-computing scheme that is based on the nonlinear response of randomly interconnected InGaAsP microring resonators. This approach achieved classification errors of 0.6% while alleviating the need for complex, high-cost optoelectronic sampling and digital processing.

  7. Optical character recognition systems for different languages with soft computing

    CERN Document Server

    Chaudhuri, Arindam; Badelia, Pratixa; K Ghosh, Soumya

    2017-01-01

    The book offers a comprehensive survey of soft-computing models for optical character recognition systems. The various techniques, including fuzzy and rough sets, artificial neural networks and genetic algorithms, are tested using real texts written in different languages, such as English, French, German, Latin, Hindi and Gujrati, extracted from publicly available datasets. The simulation studies, reported here in detail, show that soft-computing-based modeling of OCR systems performs consistently better than traditional models. Mainly intended as a state-of-the-art survey for postgraduates and researchers in pattern recognition, optical character recognition and soft computing, this book will also be useful for professionals in computer vision and image processing dealing with different issues related to optical character recognition.

  8. A method of object recognition for single pixel imaging

    Science.gov (United States)

    Li, Boxuan; Zhang, Wenwen

    2018-01-01

    Computational ghost imaging (CGI), utilizing a single-pixel detector, has been extensively used in many fields. However, in order to achieve a high-quality reconstructed image, a large number of iterations are needed, which limits the flexibility of using CGI in practical situations, especially in the field of object recognition. In this paper, we propose a method utilizing feature matching to identify number objects. In the given system, a recognition accuracy of approximately 90% can be achieved, which provides a new idea for the application of single-pixel imaging in the field of object recognition.

  9. Biometric Image Recognition Based on Optical Correlator

    Directory of Open Access Journals (Sweden)

    David Solus

    2017-01-01

    Full Text Available The aim of this paper is to design a biometric image recognition system able to recognize biometric images - eye and DNA marker. The input scenes are processed by user-friendly software created in the C# programming language and then compared with reference images stored in a database. In this system, the Cambridge optical correlator is used as an image comparator based on the similarity of images in the recognition phase.

  10. 9th International Conference on Computer Recognition Systems

    CERN Document Server

    Jackowski, Konrad; Kurzyński, Marek; Woźniak, Michał; Żołnierek, Andrzej

    2016-01-01

    Computer recognition systems are nowadays one of the most promising directions in artificial intelligence. This book is the most comprehensive study of this field. It contains a collection of 79 carefully selected articles contributed by experts in pattern recognition. It reports on current research with respect to both methodology and applications. In particular, it includes the following sections: Features, learning, and classifiers; Biometrics; Data Stream Classification and Big Data Analytics; Image processing and computer vision; Medical applications; Applications; RGB-D perception: recent developments and applications. This book is a great reference tool for scientists who deal with the problems of designing computer pattern recognition systems. Its target readers can be researchers as well as students of computer science, artificial intelligence or robotics.

  11. Fast and accurate face recognition based on image compression

    Science.gov (United States)

    Zheng, Yufeng; Blasch, Erik

    2017-05-01

    Image compression is desired for many image-related applications, especially for network-based applications with bandwidth and storage constraints. Reports in the face recognition community typically concentrate on the maximal compression rate that does not decrease recognition accuracy. In general, wavelet-based face recognition methods such as EBGM (elastic bunch graph matching) and FPB (face pattern byte) achieve high performance but run slowly due to their high computation demands. The PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis) algorithms run fast but perform poorly in face recognition. In this paper, we propose a novel face recognition method based on a standard image compression algorithm, termed compression-based (CPB) face recognition. First, all gallery images are compressed by the selected compression algorithm. Second, a mixed image is formed from the probe and gallery images and then compressed. Third, a composite compression ratio (CCR) is computed from three compression ratios calculated from the probe, gallery and mixed images. Finally, the CCR values are compared, and the largest CCR corresponds to the matched face. The time cost of each face matching is about the time of compressing the mixed face image. We tested the proposed CPB method on the "ASUMSS face database" (visible and thermal images) from 105 subjects. The face recognition accuracy with visible images is 94.76% when using JPEG compression. On the same face dataset, the accuracy of the FPB algorithm was reported as 91.43%. The JPEG-compression-based (JPEG-CPB) face recognition is standard and fast, and may be integrated into a real-time imaging device.
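
    The matching idea can be sketched as below with OpenCV's JPEG encoder. The abstract does not give the exact CCR formula, so the score used here is a normalized-compression-distance style stand-in built from the same three compressed sizes (probe, gallery, mixed); the side-by-side mixing and the JPEG quality setting are also assumptions.

    import cv2
    import numpy as np

    def jpeg_size(img, quality=75):
        """Size in bytes of the JPEG-compressed image."""
        ok, buf = cv2.imencode(".jpg", img, [int(cv2.IMWRITE_JPEG_QUALITY), quality])
        assert ok
        return len(buf)

    def compression_score(probe, gallery):
        """Lower score = the mixed image compresses well relative to its parts,
        i.e. the two faces share structure. Assumes same-size grayscale crops."""
        p, g = jpeg_size(probe), jpeg_size(gallery)
        m = jpeg_size(np.hstack([probe, gallery]))       # "mixed" image
        return (m - min(p, g)) / max(p, g)

    def match(probe, gallery_images):
        scores = [compression_score(probe, g) for g in gallery_images]
        return int(np.argmin(scores))                    # index of the best-matching face

    # best = match(probe_crop, [crop_a, crop_b, crop_c])   # hypothetical face crops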

  12. Human Expertise Helps Computer Classify Images

    Science.gov (United States)

    Rorvig, Mark E.

    1991-01-01

    Two-domain method of computational classification of images requires less computation than other methods for computational recognition, matching, or classification of images or patterns. Does not require explicit computational matching of features, and incorporates human expertise without requiring translation of mental processes of classification into language comprehensible to computer. Conceived to "train" computer to analyze photomicrographs of microscope-slide specimens of leucocytes from human peripheral blood to distinguish between specimens from healthy and specimens from traumatized patients.

  13. Textual emotion recognition for enhancing enterprise computing

    Science.gov (United States)

    Quan, Changqin; Ren, Fuji

    2016-05-01

    The growing interest in affective computing (AC) brings many valuable research topics that can meet different application demands in enterprise systems. The present study explores a subarea of AC techniques - textual emotion recognition for enhancing enterprise computing. Multi-label emotion recognition in text can provide a more comprehensive understanding of emotions than single-label emotion recognition. A representation of the 'emotion state in text' is proposed to encompass the multidimensional emotions in text. It ensures a formal description of the configurations of basic emotions as well as of the relations between them. Our method allows recognition of emotions for words that bear indirect emotions, emotion ambiguity and multiple emotions. We further investigate the effect of word order on emotional expression by comparing the performance of a bag-of-words model and a sequence model for multi-label sentence emotion recognition. The experiments show that the classification results under the sequence model are better than under the bag-of-words model, and the homogeneous Markov model showed promising results for multi-label sentence emotion recognition. This emotion recognition system provides a convenient way to acquire valuable emotion information and to improve enterprise competitiveness in many respects.

  14. Atoms of recognition in human and computer vision.

    Science.gov (United States)

    Ullman, Shimon; Assif, Liav; Fetaya, Ethan; Harari, Daniel

    2016-03-08

    Discovering the visual features and representations used by the brain to recognize objects is a central problem in the study of vision. Recently, neural network models of visual object recognition, including biological and deep network models, have shown remarkable progress and have begun to rival human performance in some challenging tasks. These models are trained on image examples and learn to extract features and representations and to use them for categorization. It remains unclear, however, whether the representations and learning processes discovered by current models are similar to those used by the human visual system. Here we show, by introducing and using minimal recognizable images, that the human visual system uses features and processes that are not used by current models and that are critical for recognition. We found by psychophysical studies that at the level of minimal recognizable images a minute change in the image can have a drastic effect on recognition, thus identifying features that are critical for the task. Simulations then showed that current models cannot explain this sensitivity to precise feature configurations and, more generally, do not learn to recognize minimal images at a human level. The role of the features shown here is revealed uniquely at the minimal level, where the contribution of each feature is essential. A full understanding of the learning and use of such features will extend our understanding of visual recognition and its cortical mechanisms and will enhance the capacity of computational models to learn from visual experience and to deal with recognition and detailed image interpretation.

  15. Object recognition of real targets using modelled SAR images

    Science.gov (United States)

    Zherdev, D. A.

    2017-12-01

    In this work the problem of recognition is studied using SAR images. The recognition algorithm is based on the computation of conjugation indices with class vectors. The support subspaces for each class are constructed by excluding the most and the least correlated vectors in a class. In the study we examine the possibility of significantly reducing the feature vector size, which leads to a decrease in recognition time. The images of targets form the feature vectors, which are transformed using a pre-trained convolutional neural network (CNN).

  16. Automatic speech recognition for report generation in computed tomography

    International Nuclear Information System (INIS)

    Teichgraeber, U.K.M.; Ehrenstein, T.; Lemke, M.; Liebig, T.; Stobbe, H.; Hosten, N.; Keske, U.; Felix, R.

    1999-01-01

    Purpose: A study was performed to compare the performance of automatic speech recognition (ASR) with conventional transcription. Materials and Methods: 100 CT reports were generated by using ASR and 100 CT reports were dictated and written by medical transcriptionists. The time for dictation and correction of errors by the radiologist was assessed and the type of mistakes was analysed. The text recognition rate was calculated in both groups and the average time between completion of the imaging study by the technologist and generation of the written report was assessed. A commercially available speech recognition technology (ASKA Software, IBM Via Voice) running on a personal computer was used. Results: The time for dictation using digital voice recognition was 9.4±2.3 min compared to 4.5±3.6 min with an ordinary Dictaphone. The text recognition rate was 97% with digital voice recognition and 99% with medical transcriptionists. The average time from imaging completion to written report finalisation was reduced from 47.3 hours with medical transcriptionists to 12.7 hours with ASR. The analysis of misspellings demonstrated (ASR vs. medical transcriptionists): 3 vs. 4 syntax errors, 0 vs. 37 orthographic mistakes, 16 vs. 22 mistakes in substance and 47 vs. erroneously applied terms. Conclusions: The use of digital voice recognition as a replacement for medical transcription is recommendable when immediate availability of written reports is necessary. (orig.) [de

  17. Investigation of Carbohydrate Recognition via Computer Simulation

    Directory of Open Access Journals (Sweden)

    Quentin R. Johnson

    2015-04-01

    Full Text Available Carbohydrate recognition by proteins, such as lectins and other (bio)molecules, can be essential for many biological functions. Recently, interest has arisen due to potential protein and drug design and future bioengineering applications. A quantitative measurement of carbohydrate-protein interaction is thus important for the full characterization of sugar recognition. In this review, we focus on utilizing computer simulations and biophysical models to evaluate the strength and specificity of carbohydrate recognition. With increasing computational resources, better algorithms and refined modeling parameters, using state-of-the-art supercomputers to calculate the strength of the interaction between molecules has become increasingly mainstream. We review the current state of this technique and its successful applications for studying protein-sugar interactions in recent years.

  18. Extending the imaging volume for biometric iris recognition.

    Science.gov (United States)

    Narayanswamy, Ramkumar; Johnson, Gregory E; Silveira, Paulo E X; Wach, Hans B

    2005-02-10

    The use of the human iris as a biometric has recently attracted significant interest in the area of security applications. The need to capture an iris without active user cooperation places demands on the optical system. Unlike a traditional optical design, in which a large imaging volume is traded off for diminished imaging resolution and capacity for collecting light, Wavefront Coded imaging is a computational imaging technology capable of expanding the imaging volume while maintaining an accurate and robust iris identification capability. We apply Wavefront Coded imaging to extend the imaging volume of the iris recognition application.

  19. Computational Intelligence in Image Processing

    CERN Document Server

    Siarry, Patrick

    2013-01-01

    Computational intelligence based techniques have firmly established themselves as viable, alternate, mathematical tools for more than a decade. They have been extensively employed in many systems and application domains, among these signal processing, automatic control, industrial and consumer electronics, robotics, finance, manufacturing systems, electric power systems, and power electronics. Image processing is also an extremely potent area which has attracted the attention of many researchers who are interested in the development of new computational intelligence-based techniques and their suitable applications, in both research problems and in real-world problems. Part I of the book discusses several image preprocessing algorithms; Part II broadly covers image compression algorithms; Part III demonstrates how computational intelligence-based techniques can be effectively utilized for image analysis purposes; and Part IV shows how pattern recognition, classification and clustering-based techniques can ...

  20. Fine-grained recognition of plants from images.

    Science.gov (United States)

    Šulc, Milan; Matas, Jiří

    2017-01-01

    Fine-grained recognition of plants from images is a challenging computer vision task, due to the diverse appearance and complex structure of plants, high intra-class variability and small inter-class differences. We review the state-of-the-art and discuss plant recognition tasks, from identification of plants from specific plant organs to general plant recognition "in the wild". We propose texture analysis and deep learning methods for different plant recognition tasks. The methods are evaluated and compared to the state-of-the-art. Texture analysis is only applied to images with unambiguous segmentation (bark and leaf recognition), whereas CNNs are only applied when sufficiently large datasets are available. The results provide an insight into the complexity of different plant recognition tasks. The proposed methods outperform the state-of-the-art in leaf and bark classification and achieve very competitive results in plant recognition "in the wild". The results suggest that recognition of segmented leaves is practically a solved problem when high volumes of training data are available. The generality and higher capacity of state-of-the-art CNNs make them suitable for plant recognition "in the wild", where the views on plant organs or plants vary significantly and the difficulty is increased by occlusions and background clutter.

  1. Image-based automatic recognition of larvae

    Science.gov (United States)

    Sang, Ru; Yu, Guiying; Fan, Weijun; Guo, Tiantai

    2010-08-01

    To date, imagoes have been the main objects of research in quarantine pest recognition. However, pests in their larval stage are latent, and larvae spread easily with the circulation of agricultural and forest products. In this paper, larvae are taken as new research objects and recognized by means of machine vision, image processing and pattern recognition. More visual information is preserved and the recognition rate is improved when color image segmentation is applied to images of larvae. Owing to its affine, perspective and brightness invariance, the scale-invariant feature transform (SIFT) is adopted for feature extraction. A neural network algorithm is utilized for pattern recognition, and automatic identification of larvae images is successfully achieved with satisfactory results.
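
    The color-segmentation plus SIFT feature-extraction step might look roughly like the sketch below (OpenCV >= 4.4, where SIFT is in the main module); the HSV segmentation bounds are arbitrary placeholders, and the resulting descriptors would then be fed to a neural-network classifier as described.

    import cv2

    def larva_sift_descriptors(path):
        """Segment the larva by color (crude HSV mask) and extract SIFT
        descriptors restricted to the segmented region."""
        bgr = cv2.imread(path)
        if bgr is None:
            raise FileNotFoundError(path)
        hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, (0, 60, 60), (180, 255, 255))   # keep saturated, bright pixels
        gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)

        sift = cv2.SIFT_create()
        keypoints, descriptors = sift.detectAndCompute(gray, mask)
        return descriptors          # input to the neural-network recognition stage

    # desc = larva_sift_descriptors("larva.jpg")   # hypothetical image file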

  2. SOFIR: Securely Outsourced Forensic Image Recognition

    NARCIS (Netherlands)

    Bösch, C.T.; Peter, Andreas; Hartel, Pieter H.; Jonker, Willem

    Forensic image recognition tools are used by law enforcement agencies all over the world to automatically detect illegal images on confiscated equipment. This detection is commonly done with the help of a strictly confidential database consisting of hash values of known illegal images. To detect and

  3. Pattern recognition software and techniques for biological image analysis.

    Directory of Open Access Journals (Sweden)

    Lior Shamir

    2010-11-01

    Full Text Available The increasing prevalence of automated image acquisition systems is enabling new types of microscopy experiments that generate large image datasets. However, there is a perceived lack of robust image analysis systems required to process these diverse datasets. Most automated image analysis systems are tailored for specific types of microscopy, contrast methods, probes, and even cell types. This imposes significant constraints on experimental design, limiting their application to the narrow set of imaging methods for which they were designed. One of the approaches to address these limitations is pattern recognition, which was originally developed for remote sensing, and is increasingly being applied to the biology domain. This approach relies on training a computer to recognize patterns in images rather than developing algorithms or tuning parameters for specific image processing tasks. The generality of this approach promises to enable data mining in extensive image repositories, and provide objective and quantitative imaging assays for routine use. Here, we provide a brief overview of the technologies behind pattern recognition and its use in computer vision for biological and biomedical imaging. We list available software tools that can be used by biologists and suggest practical experimental considerations to make the best use of pattern recognition techniques for imaging assays.

  4. Image Recognition Using Modified Zernike Moments

    Directory of Open Access Journals (Sweden)

    Min HUANG

    2014-03-01

    Full Text Available Zernike moments are complex moments with the orthogonal Zernike polynomials as kernel functions. Compared with other moments, Zernike moments have greater advantages in image rotation and low noise sensitivity. Because Zernike moments are rotation invariant and can be constructed to arbitrarily high order, they can be used for target recognition. In this paper, the Zernike moment algorithm is improved so that it also has scale invariance in the processing of digital images. Finally, an application of the improved Zernike moments to image recognition is given.
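
    A small sketch of a Zernike-moment descriptor with a simple scale normalization (crop the object's bounding box and resize it to a fixed radius) is given below, using the mahotas implementation of Zernike moments. This particular normalization is one common way to obtain approximate scale invariance and is not necessarily the modification proposed in the paper.

    import cv2
    import mahotas
    import numpy as np

    def zernike_descriptor(binary_obj, radius=32, degree=8):
        """Rotation-invariant Zernike moments of a binary object mask, made
        approximately scale invariant by resizing the object's bounding box
        to a fixed (2*radius)-pixel square before computing the moments."""
        ys, xs = np.nonzero(binary_obj)
        crop = binary_obj[ys.min():ys.max() + 1, xs.min():xs.max() + 1].astype(np.uint8)
        side = 2 * radius
        crop = cv2.resize(crop, (side, side), interpolation=cv2.INTER_NEAREST)
        return mahotas.features.zernike_moments(crop, radius, degree=degree)

    # Toy check: a filled disk at two different scales gives nearly the same descriptor.
    canvas = np.zeros((200, 200), np.uint8)
    small = cv2.circle(canvas.copy(), (100, 100), 20, 1, -1)
    large = cv2.circle(canvas.copy(), (100, 100), 60, 1, -1)
    print(np.abs(zernike_descriptor(small) - zernike_descriptor(large)).max())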

  5. Learned image representations for visual recognition

    DEFF Research Database (Denmark)

    Larsen, Anders Boesen Lindbo

    This thesis addresses the problem of extracting image structures for representing images effectively in order to solve visual recognition tasks. Problems from diverse research areas (medical imaging, material science and food processing) have motivated large parts of the methodological development… the ability to learn high-level concepts in images of faces. The thesis argues in favor of learning features and presents new methods for domains with limited amounts of labeled data, allowing feature learning to be applied more broadly.

  6. Iris Recognition: The Consequences of Image Compression

    Directory of Open Access Journals (Sweden)

    Daniel A. Bishop

    2010-01-01

    Full Text Available Iris recognition for human identification is one of the most accurate biometrics, and its employment is expanding globally. The use of portable iris systems, particularly in law enforcement applications, is growing. In many of these applications, the portable device may be required to transmit an iris image or template over a narrow-bandwidth communication channel. Typically, a full resolution image (e.g., VGA) is desired to ensure sufficient pixels across the iris to be confident of accurate recognition results. To minimize the time to transmit a large amount of data over a narrow-bandwidth communication channel, image compression can be used to reduce the file size of the iris image. In other applications, such as the Registered Traveler program, an entire iris image is stored on a smart card, but only 4 kB is allowed for the iris image. For this type of application, image compression is also the solution. This paper investigates the effects of image compression on recognition system performance using a commercial version of the Daugman iris2pi algorithm along with JPEG-2000 compression, and links these to image quality. Using the ICE 2005 iris database, we find that even in the face of significant compression, recognition performance is minimally affected.

  7. Material Recognition for Content Based Image Retrieval

    NARCIS (Netherlands)

    Geusebroek, J.M.

    2002-01-01

    One of the open problems in content-based Image Retrieval is the recognition of material present in an image. Knowledge about the set of materials present gives important semantic information about the scene under consideration. For example, detecting sand, sky, and water certainly classifies the

  8. Learning Hierarchical Feature Extractors for Image Recognition

    Science.gov (United States)

    2012-09-01

    recognition, but the analysis applies to all tasks which incorporate some form of pooling (e.g., text processing from which the bag-of-features method ... performance rely on solving an ℓ1-regularized optimization. Several efficient algorithms have been devised for this problem. Homotopy methods such as the ... recent advances in image recognition. First, we recast many methods into a common unsupervised feature extraction framework based on an alternation of

  9. Pattern recognition with "materials that compute".

    Science.gov (United States)

    Fang, Yan; Yashin, Victor V; Levitan, Steven P; Balazs, Anna C

    2016-09-01

    Driven by advances in materials and computer science, researchers are attempting to design systems where the computer and material are one and the same entity. Using theoretical and computational modeling, we design a hybrid material system that can autonomously transduce chemical, mechanical, and electrical energy to perform a computational task in a self-organized manner, without the need for external electrical power sources. Each unit in this system integrates a self-oscillating gel, which undergoes the Belousov-Zhabotinsky (BZ) reaction, with an overlaying piezoelectric (PZ) cantilever. The chemomechanical oscillations of the BZ gels deflect the PZ layer, which consequently generates a voltage across the material. When these BZ-PZ units are connected in series by electrical wires, the oscillations of these units become synchronized across the network, where the mode of synchronization depends on the polarity of the PZ. We show that the network of coupled, synchronizing BZ-PZ oscillators can perform pattern recognition. The "stored" patterns are set of polarities of the individual BZ-PZ units, and the "input" patterns are coded through the initial phase of the oscillations imposed on these units. The results of the modeling show that the input pattern closest to the stored pattern exhibits the fastest convergence time to stable synchronization behavior. In this way, networks of coupled BZ-PZ oscillators achieve pattern recognition. Further, we show that the convergence time to stable synchronization provides a robust measure of the degree of match between the input and stored patterns. Through these studies, we establish experimentally realizable design rules for creating "materials that compute."
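
    The recognition scheme can be illustrated with a toy phase-oscillator analogue, shown below. This is a Kuramoto-style sketch of the idea only (polarity-signed coupling, input encoded in the initial phases, match quality read out as the convergence time to synchronization); it does not model the chemomechanical BZ-PZ dynamics of the paper, and all parameter values are arbitrary.

    import numpy as np

    def convergence_time(stored, theta0, coupling=0.5, dt=0.01, tol=0.98, t_max=200.0):
        """Time for the polarity-weighted order parameter to exceed `tol` when
        oscillators are coupled with signs s_i*s_j given by the stored pattern."""
        s = np.asarray(stored, dtype=float)              # stored polarities, +1 / -1
        theta = np.asarray(theta0, dtype=float).copy()   # input pattern as initial phases
        n = len(theta)
        for step in range(int(t_max / dt)):
            order = np.abs(np.mean(s * np.exp(1j * theta)))
            if order >= tol:
                return step * dt
            pull = (s[:, None] * s[None, :] * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
            theta = theta + dt * (1.0 + (coupling / n) * pull)
        return float("inf")

    rng = np.random.default_rng(0)
    stored = np.array([1, 1, -1, 1, -1, -1, 1, -1])
    close_input = np.where(stored > 0, 0.0, np.pi) + rng.normal(0.0, 0.5, stored.size)
    random_input = rng.uniform(0.0, 2.0 * np.pi, stored.size)
    print("close pattern converges in ", convergence_time(stored, close_input))
    print("random pattern converges in", convergence_time(stored, random_input))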

  10. Investigation of efficient features for image recognition by neural networks.

    Science.gov (United States)

    Goltsev, Alexander; Gritsenko, Vladimir

    2012-04-01

    In the paper, effective and simple features for image recognition (named LiRA-features) are investigated in the task of handwritten digit recognition. Two neural network classifiers are considered - a modified 3-layer perceptron LiRA and a modular assembly neural network. A method of feature selection is proposed that analyses the connection weights formed in the preliminary learning process of a neural network classifier. In experiments using the MNIST database of handwritten digits, the feature selection procedure allows reduction of the feature number (from 60 000 to 7000) while preserving comparable recognition capability and accelerating computations. An experimental comparison between the LiRA perceptron and the modular assembly neural network is carried out, which shows that the recognition capability of the modular assembly neural network is somewhat better. Copyright © 2011 Elsevier Ltd. All rights reserved.
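
    The weight-based selection idea can be illustrated in a few lines with scikit-learn, as below: train a linear classifier, rank features by the total magnitude of their connection weights across classes, keep the top half, and retrain. The small sklearn digits set and the plain perceptron are stand-ins for the 60 000 binary LiRA features and the classifiers of the paper.

    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.linear_model import Perceptron
    from sklearn.model_selection import train_test_split

    X, y = load_digits(return_X_y=True)                   # 64 pixel features per digit image
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    clf = Perceptron(max_iter=1000, random_state=0).fit(X_tr, y_tr)
    print("all features:", clf.score(X_te, y_te))

    # Rank features by their accumulated connection-weight magnitude and retrain
    # on the top half -- the selection principle described in the abstract.
    importance = np.abs(clf.coef_).sum(axis=0)
    keep = importance.argsort()[::-1][: X.shape[1] // 2]
    clf2 = Perceptron(max_iter=1000, random_state=0).fit(X_tr[:, keep], y_tr)
    print("half the features:", clf2.score(X_te[:, keep], y_te))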

  11. Local Pyramidal Descriptors for Image Recognition.

    Science.gov (United States)

    Seidenari, Lorenzo; Serra, Giuseppe; Bagdanov, Andrew D; Del Bimbo, Alberto

    2014-05-01

    In this paper, we present a novel method to improve the flexibility of descriptor matching for image recognition by using local multiresolution pyramids in feature space. We propose that image patches be represented at multiple levels of descriptor detail and that these levels be defined in terms of local spatial pooling resolution. Preserving multiple levels of detail in local descriptors is a way of hedging one's bets on which levels will be most relevant for matching during learning and recognition. We introduce the Pyramid SIFT (P-SIFT) descriptor and show that its use in four state-of-the-art image recognition pipelines improves accuracy and yields state-of-the-art results. Our technique is applicable independently of spatial pyramid matching and we show that spatial pyramids can be combined with local pyramids to obtain further improvement. We achieve state-of-the-art results on Caltech-101 (80.1%) and Caltech-256 (52.6%) when compared to other approaches based on SIFT features over intensity images. Our technique is efficient and is extremely easy to integrate into image recognition pipelines.

  12. Handwritten Digits Recognition Using Neural Computing

    Directory of Open Access Journals (Sweden)

    Călin Enăchescu

    2009-12-01

    Full Text Available In this paper we present a method for the recognition of handwritten digits and a practical implementation of this method for real-time recognition. A theoretical framework for the neural networks used to classify the handwritten digits is also presented. The classification task is performed using a Convolutional Neural Network (CNN). A CNN is a special type of multi-layer neural network, trained with an optimized version of the back-propagation learning algorithm. CNNs are designed to recognize visual patterns directly from pixel images with minimal preprocessing and are capable of recognizing patterns with extreme variability (such as handwritten characters), with robustness to distortions and simple geometric transformations. The main contributions of this paper are the original methods for increasing the efficiency of the learning algorithm by preprocessing the images before the learning process, and a method for increasing precision and performance in real-time applications by removing non-useful information from the background. By combining these strategies we have obtained an accuracy of 96.76%, using the NIST (National Institute of Standards and Technology) database as the training set.
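
    A minimal PyTorch sketch of a small convolutional network for 28x28 digit images, together with one training step on random tensors, is given below. The architecture, optimizer and data are illustrative stand-ins; they are not the network or preprocessing used in the paper.

    import torch
    import torch.nn as nn

    class SmallCNN(nn.Module):
        """LeNet-style CNN: two conv/pool stages followed by a linear classifier."""
        def __init__(self, n_classes=10):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 8, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(8, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(16 * 7 * 7, n_classes)

        def forward(self, x):
            return self.classifier(self.features(x).flatten(start_dim=1))

    model = SmallCNN()
    loss_fn = nn.CrossEntropyLoss()
    opt = torch.optim.SGD(model.parameters(), lr=0.1)

    # One back-propagation step on a random batch standing in for preprocessed digits.
    images = torch.randn(32, 1, 28, 28)
    labels = torch.randint(0, 10, (32,))
    loss = loss_fn(model(images), labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
    print("loss after one step:", loss.item())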

  13. Model attraction in medical image object recognition

    Science.gov (United States)

    Tascini, Guido; Zingaretti, Primo

    1995-04-01

    This paper presents a new approach to image recognition based on a general attraction principle. Cognitive recognition is governed by a 'focus on attention' process that concentrates only on the task-relevant subset of the visual data. Our model-based approach combines it with another process, focus on attraction, which concentrates on the transformations of visual data having relevance for the matching. The recognition process is characterized by an intentional evolution of the visual data. This chain of image transformations is viewed as driven by an attraction field that attempts to reduce the distance between the image-point and the model-point in the feature space. The field sources are determined during a learning phase by supplying the system with a training set. The paper describes a medical interpretation case in the feature space, concerning human skin lesions. The samples of the training set, supplied by the dermatologists, allow the system to learn models of lesions in terms of features such as a hue factor, an asymmetry factor, and an asperity factor. The comparison of the visual data with the model derives the trend of image transformations, allowing a better definition of the given image and its classification. The algorithms are implemented in C language on a PC equipped with Matrox Image Series IM-1280 acquisition and processing boards. The work is now in progress.

  14. Developments in medical image processing and computational vision

    CERN Document Server

    Jorge, Renato

    2015-01-01

    This book presents novel and advanced topics in Medical Image Processing and Computational Vision in order to solidify knowledge in the related fields and define their key stakeholders. It contains extended versions of selected papers presented in VipIMAGE 2013 – IV International ECCOMAS Thematic Conference on Computational Vision and Medical Image, which took place in Funchal, Madeira, Portugal, 14-16 October 2013.  The twenty-two chapters were written by invited experts of international recognition and address important issues in medical image processing and computational vision, including: 3D vision, 3D visualization, colour quantisation, continuum mechanics, data fusion, data mining, face recognition, GPU parallelisation, image acquisition and reconstruction, image and video analysis, image clustering, image registration, image restoring, image segmentation, machine learning, modelling and simulation, object detection, object recognition, object tracking, optical flow, pattern recognition, pose estimat...

  15. A novel polar-based human face recognition computational model

    Directory of Open Access Journals (Sweden)

    Y. Zana

    2009-07-01

    Full Text Available Motivated by a recently proposed biologically inspired face recognition approach, we investigated the relation between human behavior and a computational model based on Fourier-Bessel (FB) spatial patterns. We measured human recognition performance on FB-filtered face images using an 8-alternative forced-choice method. Test stimuli were generated by converting the images from the spatial to the FB domain, filtering the resulting coefficients with a band-pass filter, and finally taking the inverse FB transformation of the filtered coefficients. The performance of the computational models was tested using a simulation of the psychophysical experiment. In the FB model, face images were first filtered by simulated V1-type neurons and later analyzed globally for their content of FB components. In general, there was a higher human contrast sensitivity to radially than to angularly filtered images, but both functions peaked at the 11.3-16 frequency interval. The FB-based model presented similar behavior with regard to peak position and relative sensitivity, but had a wider frequency bandwidth and a narrower response range. The response pattern of two alternative models, based on local FB analysis and on raw luminance, strongly diverged from the human behavior patterns. These results suggest that human performance can be constrained by the type of information conveyed by polar patterns, and consequently that humans might use FB-like spatial patterns in face processing.

  16. Introduction to computer image processing

    Science.gov (United States)

    Moik, J. G.

    1973-01-01

    Theoretical backgrounds and digital techniques for a class of image processing problems are presented. Image formation in the context of linear system theory, image evaluation, noise characteristics, mathematical operations on images, and their implementation are discussed. Various techniques for image restoration and image enhancement are presented. Methods for object extraction and the problem of pictorial pattern recognition and classification are discussed.

  17. Iris image enhancement for feature recognition and extraction

    CSIR Research Space (South Africa)

    Mabuza, GP

    2012-10-01

    Full Text Available Gonzalez, R.C. and Woods, R.E. 2002. Digital Image Processing, 2nd Edition, Instructor's manual. Englewood Cliffs, Prentice Hall, pp 17-36. Proença, H. and Alexandre, L.A. 2007. Toward Noncooperative Iris Recognition: A classification approach using... multiple signatures. IEEE Transactions on Pattern Analysis and Machine Intelligence. IEEE Computer Society, 29 (4): 607-611. Sazonova, N. and Schuckers, S. 2011. Fast and efficient iris image enhancement using logarithmic image processing. Biometric...

  18. Average Gait Differential Image Based Human Recognition

    Directory of Open Access Journals (Sweden)

    Jinyan Chen

    2014-01-01

    Full Text Available The difference between adjacent frames of human walking contains useful information for human gait identification. Based on this idea, a silhouette-difference-based human gait recognition method named average gait differential image (AGDI) is proposed in this paper. The AGDI is generated by accumulating the silhouette differences between adjacent frames. The advantage of this method is that, as a feature image, it preserves both the kinetic and static information of walking. Compared to the gait energy image (GEI), the AGDI is better suited to representing the variation of silhouettes during walking. Two-dimensional principal component analysis (2DPCA) is used to extract features from the AGDI. Experiments on the CASIA dataset show that AGDI has better identification and verification performance than GEI. Compared to PCA, 2DPCA is a more efficient feature extraction method with lower memory consumption in gait-based recognition.
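    The following is a minimal NumPy sketch of the two steps named in the abstract: accumulating silhouette differences into an AGDI-like feature image, and projecting it with 2DPCA. The exact differencing and normalisation choices are assumptions, not the authors' code.

```python
# Minimal, assumption-based sketch of AGDI + 2DPCA with NumPy.
import numpy as np

def agdi(silhouettes: np.ndarray) -> np.ndarray:
    """Average gait differential image: mean absolute difference of
    adjacent binary silhouette frames, shape (T, H, W) -> (H, W)."""
    diffs = np.abs(np.diff(silhouettes.astype(float), axis=0))
    return diffs.mean(axis=0)

def two_d_pca(images: np.ndarray, n_components: int = 8) -> np.ndarray:
    """2DPCA: project each HxW image onto the top eigenvectors of the
    image (column) covariance matrix; returns (N, H, n_components)."""
    mean_img = images.mean(axis=0)
    centered = images - mean_img
    # Image covariance matrix G = (1/N) sum_i (A_i - mean)^T (A_i - mean)
    cov = sum(a.T @ a for a in centered) / len(images)
    eigvals, eigvecs = np.linalg.eigh(cov)
    proj = eigvecs[:, np.argsort(eigvals)[::-1][:n_components]]
    return images @ proj

if __name__ == "__main__":
    seqs = [(np.random.rand(30, 64, 44) > 0.5) for _ in range(5)]  # fake silhouettes
    gallery = np.stack([agdi(s) for s in seqs])                     # one AGDI per sequence
    features = two_d_pca(gallery, n_components=4)
    print(features.shape)                                           # (5, 64, 4)
```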

  19. Neural networks for data compression and invariant image recognition

    Science.gov (United States)

    Gardner, Sheldon

    1989-01-01

    An approach to invariant image recognition (I2R), based upon a model of biological vision in the mammalian visual system (MVS), is described. The complete I2R model incorporates several biologically inspired features: exponential mapping of retinal images, Gabor spatial filtering, and a neural network associative memory. In the I2R model, exponentially mapped retinal images are filtered by a hierarchical set of Gabor spatial filters (GSF) which provide compression of the information contained within a pixel-based image. A neural network associative memory (AM) is used to process the GSF coded images. We describe a 1-D shape function method for coding of scale and rotationally invariant shape information. This method reduces image shape information to a periodic waveform suitable for coding as an input vector to a neural network AM. The shape function method is suitable for near term applications on conventional computing architectures equipped with VLSI FFT chips to provide a rapid image search capability.

  20. Deep learning for polyp recognition in wireless capsule endoscopy images.

    Science.gov (United States)

    Yuan, Yixuan; Meng, Max Q-H

    2017-04-01

    Wireless capsule endoscopy (WCE) enables physicians to examine the digestive tract without any surgical operations, at the cost of a large volume of images to be analyzed. In the computer-aided diagnosis of WCE images, the main challenge arises from the difficulty of robust characterization of images. This study aims to provide a discriminative description of WCE images and assist physicians in recognizing polyp images automatically. We propose a novel deep feature learning method, named stacked sparse autoencoder with image manifold constraint (SSAEIM), to recognize polyps in WCE images. Our SSAEIM differs from the traditional sparse autoencoder (SAE) by introducing an image manifold constraint, which is constructed by a nearest neighbor graph and represents intrinsic structures of images. The image manifold constraint enforces that images within the same category share similar learned features and images in different categories are kept far apart. Thus, the learned features preserve large inter-category variances and small intra-category variances among images. The average overall recognition accuracy (ORA) of our method for WCE images is 98.00%. The accuracies for polyps, bubbles, turbid images, and clear images are 98.00%, 99.50%, 99.00%, and 95.50%, respectively. Moreover, the comparison results show that our SSAEIM outperforms existing polyp recognition methods with a relatively higher ORA. The comprehensive results demonstrate that the proposed SSAEIM can provide descriptive characterization of WCE images and recognize polyps in a WCE video accurately. This method could be further utilized in clinical trials to relieve physicians of tedious image reading work. © 2017 American Association of Physicists in Medicine.

  1. Computing Intrinsic Images.

    Science.gov (United States)

    1986-08-01

    illumination from the imaged scene. In other words the starting point is cine or video imagery. The first step in the computation, according to all the existing...this optical flow field, leading researchers in the field are starting to realize that computation of optical flow is a utopia [Horn, 1986]. So, in

  2. Computational cameras for moving iris recognition

    Science.gov (United States)

    McCloskey, Scott; Venkatesha, Sharath

    2015-05-01

    Iris-based biometric identification is increasingly used for facility access and other security applications. Like all methods that exploit visual information, however, iris systems are limited by the quality of captured images. Optical defocus due to a small depth of field (DOF) is one such challenge, as is the acquisition of sharply-focused iris images from subjects in motion. This manuscript describes the application of computational motion-deblurring cameras to the problem of moving iris capture, from the underlying theory to system considerations and performance data.

  3. Searching for pulsars using image pattern recognition

    International Nuclear Information System (INIS)

    Zhu, W. W.; Berndsen, A.; Madsen, E. C.; Tan, M.; Stairs, I. H.; Brazier, A.; Lazarus, P.; Lynch, R.; Scholz, P.; Stovall, K.; Cohen, S.; Dartez, L. P.; Lunsford, G.; Martinez, J. G.; Mata, A.; Ransom, S. M.; Banaszak, S.; Biwer, C. M.; Flanigan, J.; Rohr, M.

    2014-01-01

    In the modern era of big data, many fields of astronomy are generating huge volumes of data, the analysis of which can sometimes be the limiting factor in research. Fortunately, computer scientists have developed powerful data-mining techniques that can be applied to various fields. In this paper, we present a novel artificial intelligence (AI) program that identifies pulsars from recent surveys by using image pattern recognition with deep neural nets—the PICS (Pulsar Image-based Classification System) AI. The AI mimics human experts and distinguishes pulsars from noise and interference by looking for patterns from candidate plots. Different from other pulsar selection programs that search for expected patterns, the PICS AI is taught the salient features of different pulsars from a set of human-labeled candidates through machine learning. The training candidates are collected from the Pulsar Arecibo L-band Feed Array (PALFA) survey. The information from each pulsar candidate is synthesized in four diagnostic plots, which consist of image data with up to thousands of pixels. The AI takes these data from each candidate as its input and uses thousands of such candidates to train its ∼9000 neurons. The deep neural networks in this AI system grant it superior ability to recognize various types of pulsars as well as their harmonic signals. The trained AI's performance has been validated with a large set of candidates from a different pulsar survey, the Green Bank North Celestial Cap survey. In this completely independent test, the PICS ranked 264 out of 277 pulsar-related candidates, including all 56 previously known pulsars and 208 of their harmonics, in the top 961 (1%) of 90,008 test candidates, missing only 13 harmonics. The first non-pulsar candidate appears at rank 187, following 45 pulsars and 141 harmonics. In other words, 100% of the pulsars were ranked in the top 1% of all candidates, while 80% were ranked higher than any noise or interference. The

  4. Searching for pulsars using image pattern recognition

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, W. W.; Berndsen, A.; Madsen, E. C.; Tan, M.; Stairs, I. H. [Department of Physics and Astronomy, 6224 Agricultural Road, University of British Columbia, Vancouver, BC, V6T 1Z1 (Canada); Brazier, A. [Astronomy Department, Cornell University, Ithaca, NY 14853 (United States); Lazarus, P. [Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, D-53121 Bonn (Germany); Lynch, R.; Scholz, P. [Department of Physics, McGill University, Montreal, QC H3A 2T8 (Canada); Stovall, K.; Cohen, S.; Dartez, L. P.; Lunsford, G.; Martinez, J. G.; Mata, A. [Center for Advanced Radio Astronomy, University of Texas at Brownsville, Brownsville, TX 78520 (United States); Ransom, S. M. [NRAO, Charlottesville, VA 22903 (United States); Banaszak, S.; Biwer, C. M.; Flanigan, J.; Rohr, M., E-mail: zhuww@phas.ubc.ca, E-mail: berndsen@phas.ubc.ca [Center for Gravitation, Cosmology and Astrophysics. University of Wisconsin Milwaukee, Milwaukee, WI 53211 (United States); and others

    2014-02-01


  5. Image simulation for automatic license plate recognition

    Science.gov (United States)

    Bala, Raja; Zhao, Yonghui; Burry, Aaron; Kozitsky, Vladimir; Fillion, Claude; Saunders, Craig; Rodríguez-Serrano, José

    2012-01-01

    Automatic license plate recognition (ALPR) is an important capability for traffic surveillance applications, including toll monitoring and detection of different types of traffic violations. ALPR is a multi-stage process comprising plate localization, character segmentation, optical character recognition (OCR), and identification of originating jurisdiction (i.e. state or province). Training of an ALPR system for a new jurisdiction typically involves gathering vast amounts of license plate images and associated ground truth data, followed by iterative tuning and optimization of the ALPR algorithms. The substantial time and effort required to train and optimize the ALPR system can result in excessive operational cost and overhead. In this paper we propose a framework to create an artificial set of license plate images for accelerated training and optimization of ALPR algorithms. The framework comprises two steps: the synthesis of license plate images according to the design and layout for a jurisdiction of interest; and the modeling of imaging transformations and distortions typically encountered in the image capture process. Distortion parameters are estimated by measurements of real plate images. The simulation methodology is successfully demonstrated for training of OCR.
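    As a rough illustration of the second step of the framework (modeling capture-time distortions), the sketch below applies a perspective warp, optical blur, and sensor noise to a synthetic plate with OpenCV. The distortion parameters are hypothetical; the paper estimates them from measurements of real plate images.

```python
# Minimal sketch (hypothetical parameters) of distorting a synthetic plate image.
import cv2
import numpy as np

def distort_plate(plate: np.ndarray) -> np.ndarray:
    h, w = plate.shape[:2]
    # 1. Mild perspective warp, as if the plate is seen off-axis.
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    dst = np.float32([[8, 4], [w - 4, 0], [w, h - 6], [4, h]])
    warped = cv2.warpPerspective(plate, cv2.getPerspectiveTransform(src, dst), (w, h))
    # 2. Optical blur (Gaussian point-spread function).
    blurred = cv2.GaussianBlur(warped, (5, 5), sigmaX=1.5)
    # 3. Additive sensor noise.
    noise = np.random.normal(0, 8, blurred.shape)
    return np.clip(blurred.astype(float) + noise, 0, 255).astype(np.uint8)

if __name__ == "__main__":
    synthetic = np.full((60, 180), 255, np.uint8)                  # blank "plate"
    cv2.putText(synthetic, "ABC 123", (10, 42),
                cv2.FONT_HERSHEY_SIMPLEX, 1.2, 0, 3)               # plate text
    cv2.imwrite("distorted_plate.png", distort_plate(synthetic))
```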

  6. From Digital Imaging to Computer Image Analysis of Fine Art

    Science.gov (United States)

    Stork, David G.

    An expanding range of techniques from computer vision, pattern recognition, image analysis, and computer graphics are being applied to problems in the history of art. The success of these efforts is enabled by the growing corpus of high-resolution multi-spectral digital images of art (primarily paintings and drawings), sophisticated computer vision methods, and most importantly the engagement of some art scholars who bring questions that may be addressed through computer methods. This paper outlines some general problem areas and opportunities in this new inter-disciplinary research program.

  7. Object detection and recognition in digital images theory and practice

    CERN Document Server

    Cyganek, Boguslaw

    2013-01-01

    Object detection, tracking and recognition in images are key problems in computer vision. This book provides the reader with a balanced treatment between the theory and practice of selected methods in these areas to make the book accessible to a range of researchers, engineers, developers and postgraduate students working in computer vision and related fields. Key features: Explains the main theoretical ideas behind each method (augmented with a rigorous mathematical derivation of the formulas), their implementation (in C++), and demonstrates them working in real applications.

  8. Blurred image recognition by legendre moment invariants

    Science.gov (United States)

    Zhang, Hui; Shu, Huazhong; Han, Guo-Niu; Coatrieux, Gouenou; Luo, Limin; Coatrieux, Jean-Louis

    2010-01-01

    Processing blurred images is a key problem in many image applications. Existing methods to obtain blur invariants which are invariant with respect to centrally symmetric blur are based on geometric moments or complex moments. In this paper, we propose a new method to construct a set of blur invariants using the orthogonal Legendre moments. Some important properties of Legendre moments for the blurred image are presented and proved. The performance of the proposed descriptors is evaluated with various point-spread functions and different image noises. The comparison of the present approach with previous methods in terms of pattern recognition accuracy is also provided. The experimental results show that the proposed descriptors are more robust to noise and have better discriminative power than the methods based on geometric or complex moments. PMID:19933003
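    For orientation, the following is a minimal sketch of computing standard discrete Legendre moments of a grayscale image with NumPy and SciPy; the blur-invariant combinations derived in the paper are not reproduced here, and the axis convention is an assumption.

```python
# Minimal sketch of discrete Legendre moments lambda_pq of an image on [-1,1]^2.
import numpy as np
from scipy.special import eval_legendre

def legendre_moment(img: np.ndarray, p: int, q: int) -> float:
    """Approximate lambda_pq = (2p+1)(2q+1)/4 * integral P_p(x) P_q(y) f(x,y) dx dy."""
    h, w = img.shape
    y = np.linspace(-1, 1, h)
    x = np.linspace(-1, 1, w)
    Pq_y = eval_legendre(q, y)[:, None]      # P_q over rows
    Pp_x = eval_legendre(p, x)[None, :]      # P_p over columns
    norm = (2 * p + 1) * (2 * q + 1) / 4.0
    dx = 2.0 / (w - 1)
    dy = 2.0 / (h - 1)
    return float(norm * np.sum(img * Pq_y * Pp_x) * dx * dy)

if __name__ == "__main__":
    image = np.random.rand(64, 64)
    moments = {(p, q): legendre_moment(image, p, q)
               for p in range(3) for q in range(3)}
    print(moments[(0, 0)], moments[(2, 1)])
```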

  9. Incremental support vector machines for fast reliable image recognition

    Energy Technology Data Exchange (ETDEWEB)

    Makili, L., E-mail: makili_le@yahoo.com [Instituto Superior Politécnico da Universidade Katyavala Bwila, Benguela (Angola); Vega, J. [Asociación EURATOM/CIEMAT para Fusión, Madrid (Spain); Dormido-Canto, S. [Dpto. Informática y Automática – UNED, Madrid (Spain)

    2013-10-15

    Highlights: ► A conformal predictor using SVM as the underlying algorithm was implemented. ► It was applied to image recognition in the TJ–II's Thomson Scattering Diagnostic. ► To improve time efficiency an approach to incremental SVM training has been used. ► Accuracy is similar to the one reached when standard SVM is used. ► Computational time saving is significant for large training sets. -- Abstract: This paper addresses the reliable classification of images in a 5-class problem. To this end, an automatic recognition system based on conformal predictors and using Support Vector Machines (SVM) as the underlying algorithm has been developed and applied to the recognition of images in the Thomson Scattering Diagnostic of the TJ–II fusion device. Using such a conformal-predictor-based classifier is a computationally intensive task, since it implies training several SVM models to classify a single example, and performing this training from scratch takes a significant amount of time. In order to improve the classification time efficiency, an approach to the incremental training of SVM has been used as the underlying algorithm. Experimental results show that the overall performance of the new classifier is high, comparable to the one corresponding to the use of standard SVM as the underlying algorithm, and there is a significant improvement in time efficiency.

  10. Topics in medical image processing and computational vision

    CERN Document Server

    Jorge, Renato

    2013-01-01

      The sixteen chapters included in this book were written by invited experts of international recognition and address important issues in Medical Image Processing and Computational Vision, including: Object Recognition, Object Detection, Object Tracking, Pose Estimation, Facial Expression Recognition, Image Retrieval, Data Mining, Automatic Video Understanding and Management, Edges Detection, Image Segmentation, Modelling and Simulation, Medical thermography, Database Systems, Synthetic Aperture Radar and Satellite Imagery.   Different applications are addressed and described throughout the book, comprising: Object Recognition and Tracking, Facial Expression Recognition, Image Database, Plant Disease Classification, Video Understanding and Management, Image Processing, Image Segmentation, Bio-structure Modelling and Simulation, Medical Imaging, Image Classification, Medical Diagnosis, Urban Areas Classification, Land Map Generation.   The book brings together the current state-of-the-art in the various mul...

  11. Interpretation techniques. [image enhancement and pattern recognition

    Science.gov (United States)

    Dragg, J. L.

    1974-01-01

    The image enhancement and geometric correction and registration techniques developed and/or demonstrated on ERTS data are relatively mature and greatly enhance the utility of the data for a large variety of users. Pattern recognition was improved by the use of signature extension, feature extension, and other classification techniques. Many of these techniques need to be developed and generalized to become operationally useful. Advancements in the mass precision processing of ERTS were demonstrated, providing the hope for future earth resources data to be provided in a more readily usable state. Also in evidence is an increasing and healthy interaction between the techniques developers and the user/applications investigators.

  12. Enhancing global positioning by image recognition

    OpenAIRE

    Marimon Sanjuan, David; Adamek, Tomasz; Bonnin, Arturo; Trzcinski, Tomasz

    2011-01-01

    Current commercial outdoor Mobile AR applications rely mostly on GPS antennas, digital compasses and accelerometers. Due to imprecise readings, the 2D placement of points of interest (POI) on the display can be uncorrelated with reality. We present a novel method to geo-locate a mobile device by recognizing what is captured by its camera. A visual recognition algorithm in the cloud is used to identify geo-located reference images that match the camera's view. Upon correct identification, ...

  13. Convolutional Neural Network for Image Recognition

    CERN Document Server

    Seifnashri, Sahand

    2015-01-01

    The aim of this project is to use machine learning techniques, especially Convolutional Neural Networks, for image processing. These techniques can be used for quark-gluon discrimination using calorimeter data, but unfortunately I didn't manage to get the calorimeter data and I just used the jet data from miniaodsim (ak4 chs). The jet data was not good enough for a Convolutional Neural Network, which is designed for 'image' recognition. This report is made of two main parts: part one is mainly about implementing a Convolutional Neural Network on non-physics data such as MNIST digits and the CIFAR-10 dataset, and part two is about the jet data.

  14. Gait Recognition Using Image Self-Similarity

    Directory of Open Access Journals (Sweden)

    Chiraz BenAbdelkader

    2004-04-01

    Full Text Available Gait is one of the few biometrics that can be measured at a distance, and is hence useful for passive surveillance as well as biometric applications. Gait recognition research is still in its infancy, however, and we have yet to solve the fundamental issue of finding gait features which at once have sufficient discrimination power and can be extracted robustly and accurately from low-resolution video. This paper describes a novel gait recognition technique based on the image self-similarity of a walking person. We contend that the similarity plot encodes a projection of gait dynamics. It is also correspondence-free, robust to segmentation noise, and works well with low-resolution video. The method is tested on multiple data sets of varying sizes and degrees of difficulty. Performance is best for fronto-parallel viewpoints, whereby a recognition rate of 98% is achieved for a data set of 6 people, and 70% for a data set of 54 people.
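    A minimal NumPy sketch of building a self-similarity plot from a sequence of cropped frames is given below; the sum-of-absolute-differences measure and the normalisation are assumptions about the exact similarity used in the paper.

```python
# Minimal sketch of an image self-similarity plot for a walking sequence.
import numpy as np

def self_similarity_plot(frames: np.ndarray) -> np.ndarray:
    """frames: (T, H, W) cropped/scaled person images.
    Returns a (T, T) matrix of pairwise frame dissimilarities."""
    T = frames.shape[0]
    flat = frames.reshape(T, -1).astype(float)
    # Sum of absolute differences between every pair of frames.
    plot = np.abs(flat[:, None, :] - flat[None, :, :]).sum(axis=2)
    return plot / plot.max()      # normalise; periodic structure encodes gait

if __name__ == "__main__":
    seq = np.random.rand(40, 48, 32)          # stand-in for a walking sequence
    ssp = self_similarity_plot(seq)
    print(ssp.shape, ssp.min(), ssp.max())    # (40, 40) 0.0 1.0
```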

  15. The Application of Fractal Theory in Image Recognition

    Directory of Open Access Journals (Sweden)

    Qiu Li

    2014-04-01

    Full Text Available At present, technicians are constantly exploring how to effectively manage and conveniently, efficiently query the large number of images held in databases. This article puts forward a new idea: performing image retrieval with the similarity characteristics of fractal theory. It verifies image similarity with OpenCV image histograms and illustrates the application of fractal theory in image pattern recognition. Fractal theory provides a new method for image pattern recognition, for recognition research on related images, and for the classification of huge image databases.
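    The histogram comparison mentioned in the abstract can be sketched with OpenCV as follows; the HSV colour space, bin counts, and correlation metric are illustrative choices rather than the article's settings.

```python
# Minimal sketch of histogram-based image similarity with OpenCV.
import cv2
import numpy as np

def histogram_similarity(img_a: np.ndarray, img_b: np.ndarray) -> float:
    """Correlation between normalised HSV hue/saturation histograms."""
    hists = []
    for img in (img_a, img_b):
        hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
        h = cv2.calcHist([hsv], [0, 1], None, [32, 32], [0, 180, 0, 256])
        cv2.normalize(h, h)
        hists.append(h)
    return float(cv2.compareHist(hists[0], hists[1], cv2.HISTCMP_CORREL))

if __name__ == "__main__":
    a = np.random.randint(0, 256, (120, 120, 3), np.uint8)
    b = np.random.randint(0, 256, (120, 120, 3), np.uint8)
    print(histogram_similarity(a, a))   # 1.0 for identical images
    print(histogram_similarity(a, b))   # lower for unrelated images
```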

  16. Image analysis in automatic system of pollen recognition

    Directory of Open Access Journals (Sweden)

    Piotr Rapiejko

    2012-12-01

    Full Text Available In allergology practice and research, it would be convenient to receive pollen identification and monitoring results in a much shorter time than human identification provides. Image-based analysis is one of the approaches to an automated identification scheme for pollen grains, and pattern recognition on such images is widely used as a powerful tool. The goal of such an attempt is to provide accurate, fast recognition, classification, and counting of pollen grains by a computer system for monitoring. The isolated pollen grains are objects extracted from microscopic images by a CCD camera and PC under proper conditions for further analysis. The algorithms are based on knowledge from feature vector analysis of estimated parameters calculated from grain characteristics, including morphological features, surface features, and other applicable estimated characteristics. Segmentation algorithms specially tailored to pollen object characteristics provide exact descriptions of pollen characteristics (border and internal features) already used by human experts. The specific characteristics and their measures are statistically estimated for each object. Low-level statistics for estimated local and global measures of the features establish the feature space. Special care should be paid to choosing these features and to constructing the feature space in order to optimize the number of subspaces for higher recognition rates in low-level classification for type differentiation of pollen grains. The results of estimated feature vector parameters in a low-dimensional space for some typical pollen types are presented, as well as effective and fast recognition results of experiments performed for different pollens. The findings show evidence that properly chosen estimators of central and invariant moments (M21, NM2, NM3, NM8, NM9) of tailored characteristics yield good classification measures (efficiency > 95%), even for low-dimensional classifiers

  17. Autonomous facial recognition system inspired by human visual system based logarithmical image visualization technique

    Science.gov (United States)

    Wan, Qianwen; Panetta, Karen; Agaian, Sos

    2017-05-01

    Autonomous facial recognition systems are widely used in real-life applications, such as homeland border security, law enforcement identification and authentication, and video-based surveillance analysis. Issues like low image quality, non-uniform illumination, and variations in pose and facial expression can impair the performance of recognition systems. To address the non-uniform illumination challenge, we present a novel robust autonomous facial recognition system inspired by the so-called human-visual-system-based logarithmical image visualization technique. In this paper, the proposed method, for the first time, utilizes the logarithmical image visualization technique coupled with the local binary pattern to perform discriminative feature extraction for a facial recognition system. The Yale database, the Yale-B database, and the ATT database are used for computer simulation accuracy and efficiency testing. The extensive computer simulation demonstrates the method's efficiency, accuracy, and robustness to illumination variation for facial recognition.
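    A minimal sketch of the general idea (a logarithmic intensity transform followed by local binary pattern histograms) is shown below using scikit-image; it is not the authors' exact pipeline, and the LBP parameters are assumptions.

```python
# Minimal sketch of a log-transform + LBP descriptor for illumination robustness.
import numpy as np
from skimage.feature import local_binary_pattern

def log_lbp_descriptor(face: np.ndarray, P: int = 8, R: int = 1) -> np.ndarray:
    """face: 2-D grayscale array with values in [0, 255]."""
    # Logarithmic visualization: compress the dynamic range so that
    # non-uniform illumination has less influence on local patterns.
    log_face = np.log1p(face.astype(float))
    log_face = (255.0 * log_face / log_face.max()).astype(np.uint8)
    # Uniform LBP codes followed by a normalised histogram.
    codes = local_binary_pattern(log_face, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

if __name__ == "__main__":
    face_img = np.random.randint(0, 256, (64, 64)).astype(np.uint8)
    print(log_lbp_descriptor(face_img))   # 10-bin descriptor for matching
```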

  18. Pattern recognition and modelling of earthquake registrations with interactive computer support

    International Nuclear Information System (INIS)

    Manova, Katarina S.

    2004-01-01

    The object of the thesis is pattern recognition. Pattern recognition, i.e. classification, is applied in many fields: speech recognition, hand-printed character recognition, medical analysis, satellite and aerial-photo interpretation, biology, computer vision, information retrieval, and so on. This thesis studies its applicability in seismology. Signal classification is an area of great importance in a wide variety of applications. This thesis deals with the problem of (automatic) classification of earthquake signals, which are non-stationary signals. Non-stationary signal classification is an area of active research in the signal and image processing community. The goal of the thesis is the recognition of earthquake signals according to their epicentral zone. Source classification, i.e. recognition, is based on the transformation of seismograms (earthquake registrations) into images via time-frequency transformations, and on applying image processing and pattern recognition techniques for feature extraction, classification, and recognition. The tested data include local earthquakes from seismic regions in Macedonia. Using actual seismic data, it is shown that the proposed methods provide satisfactory results for classification and recognition.(Author)

  19. Optical time-domain analog pattern correlator for high-speed real-time image recognition.

    Science.gov (United States)

    Kim, Sang Hyup; Goda, Keisuke; Fard, Ali; Jalali, Bahram

    2011-01-15

    The speed of image processing is limited by image acquisition circuitry. While optical pattern recognition techniques can reduce the computational burden on digital image processing, their image correlation rates are typically low due to the use of spatial optical elements. Here we report a method that overcomes this limitation and enables fast real-time analog image recognition at a record correlation rate of 36.7 MHz, 1000 times higher than conventional methods. This technique seamlessly performs image acquisition, correlation, and signal integration all optically in the time domain before analog-to-digital conversion by virtue of optical space-to-time mapping.

  20. Target recognition of log-polar ladar range images using moment invariants

    Science.gov (United States)

    Xia, Wenze; Han, Shaokun; Cao, Jie; Yu, Haoyong

    2017-01-01

    The ladar range image has received considerable attention in the automatic target recognition field. However, previous research does not cover target recognition using log-polar ladar range images. Therefore, we construct a target recognition system based on log-polar ladar range images in this paper. In this system, combined moment invariants and a backpropagation neural network are selected as the shape descriptor and shape classifier, respectively. In order to fully analyze the effect of the log-polar sampling pattern on recognition results, several comparative experiments based on simulated and real range images are carried out. Several important conclusions are drawn: (i) if combined moments are computed directly from log-polar range images, the translation, rotation, and scaling invariance of the combined moments is lost; (ii) when the object is located in the center of the field of view, the recognition rate of log-polar range images is less sensitive to changes in the field of view; (iii) as the object position moves from the center to the edge of the field of view, the recognition performance of log-polar range images declines dramatically; (iv) log-polar range images have better noise robustness than Cartesian range images. Finally, we suggest that in real applications it is better to divide the field of view into a recognition area and a searching area.
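    As a rough sketch of the log-polar preprocessing step, the code below resamples a range image onto a log-polar grid with OpenCV (warpPolar, available in OpenCV 3.4 and later) and computes Hu moment invariants of the result; the paper's combined moment invariants and neural-network classifier are not reproduced, and the sampling size is an assumption.

```python
# Minimal sketch: log-polar resampling followed by moment features.
import cv2
import numpy as np

def log_polar_moments(range_img: np.ndarray) -> np.ndarray:
    """range_img: 2-D array of ladar range values, object near the centre."""
    h, w = range_img.shape
    center = (w / 2.0, h / 2.0)
    # Resample onto a log-polar grid (requires OpenCV >= 3.4 for warpPolar).
    lp = cv2.warpPolar(range_img.astype(np.float32), (128, 128), center,
                       maxRadius=min(h, w) / 2.0,
                       flags=cv2.WARP_POLAR_LOG + cv2.INTER_LINEAR)
    # Moment invariants of the resampled image as a simple shape descriptor.
    return cv2.HuMoments(cv2.moments(lp)).ravel()

if __name__ == "__main__":
    img = np.zeros((200, 200), np.float32)
    cv2.circle(img, (100, 100), 40, 1.0, -1)       # toy "target" in the range image
    print(log_polar_moments(img))                   # 7-element feature vector
```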

  1. Towards The Deep Model : Understanding Visual Recognition Through Computational Models

    OpenAIRE

    Wang, Panqu

    2017-01-01

    Understanding how visual recognition is achieved in the human brain is one of the most fundamental questions in vision research. In this thesis I seek to tackle this problem from a neurocomputational modeling perspective. More specifically, I build machine learning-based models to simulate and explain cognitive phenomena related to human visual recognition, and I improve computational models using brain-inspired principles to excel at computer vision tasks. I first describe how a neurocomputat...

  2. Image based book cover recognition and retrieval

    Science.gov (United States)

    Sukhadan, Kalyani; Vijayarajan, V.; Krishnamoorthi, A.; Bessie Amali, D. Geraldine

    2017-11-01

    In this work we develop a graphical user interface in MATLAB for users to check book-related information in real time. Photos of the book cover are taken through the GUI; then the MSER algorithm automatically detects features in the input image, after which non-text regions are filtered out based on morphological differences between text and non-text regions. We implemented a text character alignment algorithm that improves the accuracy of the original text detection. We also compare the built-in MATLAB OCR algorithm with a commonly used open-source OCR to obtain better detection results; a post-detection algorithm and natural language processing are implemented to perform word correction and false-detection inhibition. Finally, the detection result is linked to the internet to perform online matching. More than 86% accuracy can be obtained by this algorithm.

  3. A connectionist computational method for face recognition

    Directory of Open Access Journals (Sweden)

    Pujol Francisco A.

    2016-06-01

    Full Text Available In this work, a modified version of the elastic bunch graph matching (EBGM) algorithm for face recognition is introduced. First, faces are detected by using a fuzzy skin detector based on the RGB color space. Then, the fiducial points for the facial graph are extracted automatically by adjusting a grid of points to the result of an edge detector. After that, the position of the nodes, their relation with their neighbors, and their Gabor jets are calculated in order to obtain the feature vector defining each face. A self-organizing map (SOM) framework is presented afterwards. Thus, the calculation of the winning neuron and the recognition process are performed by using a similarity function that takes into account both the geometric and texture information of the facial graph. The set of experiments carried out for our SOM-EBGM method shows the accuracy of our proposal when compared with other state-of-the-art methods.

  4. Three-dimensional object recognition via integral imaging and scale invariant feature transform

    Science.gov (United States)

    Yi, Faliu; Moon, Inkyu

    2014-06-01

    We propose a three-dimensional (3D) object recognition approach via computational integral imaging and the scale invariant feature transform (SIFT) that is invariant to changes in the object's illumination, scale, rotation, and affine transformations. Usually, for 3D object recognition, features extracted from the reference object must be matched against those in the computationally reconstructed image. However, this process requires first reconstructing all of the depth images, which reduces recognition efficiency. Considering that integral imaging provides a set of elemental images with different viewpoints, we first recognize the object in 2D using five elemental images and then choose the elemental image with the most matching points among the five. This selected image contains the most information related to the reference object. Finally, we use this selected elemental image and its neighboring elemental images, which should also contain much reference-object information, to calculate the disparity with the SIFT algorithm. Consequently, the depth of the 3D object can be obtained with stereo camera theory and the recognized 3D object can be reconstructed by computational integral imaging. This method fully utilizes the different information provided by the elemental images and the robust SIFT feature extraction algorithm to recognize 3D objects.
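    The SIFT matching and disparity step can be sketched as follows with OpenCV (cv2.SIFT_create requires OpenCV 4.4+ or the contrib package); the synthetic "elemental images" and the ratio-test threshold are assumptions for illustration only, not the authors' setup.

```python
# Minimal sketch of SIFT matching between two elemental images with a ratio test.
import cv2
import numpy as np

def sift_matches(img_a: np.ndarray, img_b: np.ndarray, ratio: float = 0.75):
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = []
    for m, n in matcher.knnMatch(des_a, des_b, k=2):
        if m.distance < ratio * n.distance:        # Lowe's ratio test
            good.append((kp_a[m.queryIdx].pt, kp_b[m.trainIdx].pt))
    return good    # matched point pairs; their x-offsets give the disparity

if __name__ == "__main__":
    # Two synthetic "elemental images": the right view is a shifted crop,
    # which mimics the parallax between neighbouring lenslet views.
    base = cv2.GaussianBlur(np.random.rand(200, 260).astype(np.float32), (0, 0), 3)
    base = cv2.normalize(base, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    left = np.ascontiguousarray(base[:, :200])
    right = np.ascontiguousarray(base[:, 10:210])   # 10-pixel horizontal shift
    pairs = sift_matches(left, right)
    disparities = [abs(a[0] - b[0]) for a, b in pairs]
    print(len(pairs), float(np.median(disparities)) if pairs else None)
```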

  5. Thyroid nodule recognition in computed tomography using first order statistics.

    Science.gov (United States)

    Peng, Wenxian; Liu, Chenbin; Xia, Shunren; Shao, Dangdang; Chen, Yihong; Liu, Rui; Zhang, Zhiping

    2017-06-02

    Computed tomography (CT) is one of the popular tools for early detection of thyroid nodules. The pixel intensity of the thyroid in a CT image is very important information for distinguishing a nodule from normal thyroid tissue. The pixel intensity in normal thyroid tissue is homogeneous and smooth, whereas in benign or malignant nodules it is heterogeneous. Several studies have shown that first-order features in ultrasound images can be used as imaging biomarkers in nodule recognition. In this paper, we investigate the feasibility of utilizing first-order texture features to identify nodules from normal thyroid tissue in CT images. A total of 284 thyroid CT images from 113 patients were collected in this study. We used 150 healthy-control thyroid CT images from 55 patients and 134 nodule images (50 malignant and 84 benign nodules) from 58 patients who had undergone thyroid surgery. The final diagnosis was confirmed by histopathological examination. In the presented method, first, regions of interest (ROIs) were delineated manually by a radiologist on axial non-enhanced CT images. Second, average, median, and Wiener filters were applied to reduce photon noise before feature extraction. The first-order texture features, including entropy, uniformity, average intensity, standard deviation, kurtosis, and skewness, were calculated from each ROI. Third, support vector machine analysis was applied for classification. Several statistical values were calculated to evaluate the performance of the presented method, including accuracy, sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and the area under the receiver operating characteristic curve (AUC). The entropy, uniformity, mean intensity, standard deviation, and skewness (P < 0.05), but not kurtosis (P = 0.104), of thyroid tissue with nodules differ significantly from those of normal thyroid tissue. The optimal classification was obtained from the presented
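    The first-order statistics listed in the abstract can be computed from an ROI as in the sketch below (NumPy/SciPy); the histogram bin count is an assumption and the noise-filtering step is omitted.

```python
# Minimal sketch of first-order texture features from a CT region of interest.
import numpy as np
from scipy import stats

def first_order_features(roi: np.ndarray, bins: int = 64) -> dict:
    """roi: 1-D or 2-D array of CT intensities inside the delineated region."""
    values = roi.ravel().astype(float)
    counts, _ = np.histogram(values, bins=bins)
    p = counts / counts.sum()                        # intensity probabilities
    nz = p[p > 0]
    return {
        "entropy": float(-(nz * np.log2(nz)).sum()),
        "uniformity": float((p ** 2).sum()),          # a.k.a. energy
        "mean": float(values.mean()),
        "std": float(values.std()),
        "skewness": float(stats.skew(values)),
        "kurtosis": float(stats.kurtosis(values)),
    }

if __name__ == "__main__":
    roi = np.random.normal(60, 15, size=(40, 40))     # stand-in for a thyroid ROI (HU)
    for name, value in first_order_features(roi).items():
        print(f"{name}: {value:.3f}")
```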

  6. Image Recognition Techniques for Earthquake Early Warning

    Science.gov (United States)

    Boese, M.; Heaton, T. H.; Hauksson, E.

    2011-12-01

    When monitoring on his/her PC a map of seismic stations whose colors scale with the real-time transmitted ground motion amplitudes observed in a dense seismic network, an experienced person will fairly easily recognize when and where an earthquake occurs. Using the maximum amplitudes at stations at close epicentral distances, he/she might even be able to roughly estimate the size of the event. From the number and distribution of stations turning 'red', the person might also be able to recognize the rupturing fault in a large earthquake (M>>7.0), and to estimate the rupture dimensions while the rupture is still developing. Following this concept, we are adopting techniques for automatic image recognition to provide earthquake early warning. We rapidly correlate a set of templates with real-time ground motion observations in a seismic network. If a 'suspicious' pattern of ground motion amplitudes is detected, the algorithm starts estimating the location of the earthquake and its magnitude. For large earthquakes the algorithm estimates finite source dimensions and the direction of rupture propagation. These predictions are continuously updated using the current 'image' of ground motion observations. A priori information, such as the orientation of major faults, helps enhance estimates in less dense networks. The approach will be demonstrated for multiple simulated and real events in California.

  7. Face Spoof Attack Recognition Using Discriminative Image Patches

    Directory of Open Access Journals (Sweden)

    Zahid Akhtar

    2016-01-01

    Full Text Available Face recognition systems are now being used in many applications such as border crossings, banks, and mobile payments. The wide-scale deployment of facial recognition systems has attracted intensive attention to the reliability of face biometrics against spoof attacks, where a photo, a video, or a 3D mask of a genuine user's face can be used to gain illegitimate access to facilities or services. Though several face antispoofing or liveness detection methods (which determine at the time of capture whether a face is live or spoof) have been proposed, the issue is still unsolved due to the difficulty of finding discriminative and computationally inexpensive features and methods for spoof attacks. In addition, existing techniques use the whole face image or complete video for liveness detection. However, often certain face regions (video frames) are redundant or correspond to the clutter in the image (video), thus generally leading to low performance. Therefore, we propose seven novel methods to find discriminative image patches, which we define as regions that are salient, instrumental, and class-specific. Four well-known classifiers, namely, support vector machine (SVM), Naive-Bayes, Quadratic Discriminant Analysis (QDA), and Ensemble, are then used to distinguish between genuine and spoof faces using a voting-based scheme. Experimental analysis on two publicly available databases (Idiap REPLAY-ATTACK and CASIA-FASD) shows promising results compared to existing works.

  8. Registration and recognition in images and videos

    CERN Document Server

    Battiato, Sebastiano; Farinella, Giovanni

    2014-01-01

    Computer vision is the science and technology of making machines that see. It is concerned with the theory, design and implementation of algorithms that can automatically process visual data to recognize objects, track and recover their shape and spatial layout. The International Computer Vision Summer School - ICVSS was established in 2007 to provide both an objective and clear overview and an in-depth analysis of the state-of-the-art research in Computer Vision. The courses are delivered by world renowned experts in the field, from both academia and industry, and cover both theoretical and practical aspects of real Computer Vision problems. The school is organized every year by the University of Cambridge (Computer Vision and Robotics Group) and the University of Catania (Image Processing Lab). Different topics are covered each year. This edited volume contains a selection of articles covering some of the talks and tutorials held during the last editions of the school. The chapters provide an in-depth overview o...

  9. Chinese Herbal Medicine Image Recognition and Retrieval by Convolutional Neural Network.

    Science.gov (United States)

    Sun, Xin; Qian, Huinan

    2016-01-01

    Chinese herbal medicine image recognition and retrieval have great potential for practical applications. Several previous studies have focused on recognition with hand-crafted image features, but they have two limitations. Firstly, most of these hand-crafted features are low-level image representations, which are easily affected by noise and background. Secondly, the medicine images in those studies are very clean, without any backgrounds, which makes the methods difficult to use in practical applications. Therefore, designing high-level image representations for recognition and retrieval in real-world medicine images is a great challenge. Inspired by the recent progress of deep learning in computer vision, we realize that deep learning methods may provide robust medicine image representations. In this paper, we propose to use the Convolutional Neural Network (CNN) for Chinese herbal medicine image recognition and retrieval. For the recognition problem, we use the softmax loss to optimize the recognition network; then for the retrieval problem, we fine-tune the recognition network by adding a triplet loss to search for the most similar medicine images. To evaluate our method, we construct a public database of herbal medicine images with cluttered backgrounds, which has in total 5523 images covering 95 popular Chinese medicine categories. Experimental results show that our method can achieve an average recognition precision of 71% and an average retrieval precision of 53% over all 95 medicine categories, which is quite promising given the fact that the real-world images contain multiple occluded herb pieces and cluttered backgrounds. Besides, our proposed method achieves state-of-the-art performance, improving on previous studies by a large margin.
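    A minimal PyTorch sketch of the retrieval fine-tuning idea (adding a triplet loss on top of an embedding network) is shown below; the backbone, embedding size, margin, and optimizer are assumptions, not the paper's settings.

```python
# Minimal sketch of triplet-loss fine-tuning for image retrieval.
import torch
import torch.nn as nn
import torchvision.models as models

# torchvision >= 0.13; on older versions use resnet18(pretrained=False).
embedder = models.resnet18(weights=None)
embedder.fc = nn.Linear(embedder.fc.in_features, 128)   # 128-D embedding head

triplet_loss = nn.TripletMarginLoss(margin=0.2)
optimizer = torch.optim.SGD(embedder.parameters(), lr=1e-3, momentum=0.9)

# One illustrative step: anchor and positive share a medicine category,
# the negative comes from a different category.
anchor = torch.randn(4, 3, 64, 64)
positive = torch.randn(4, 3, 64, 64)
negative = torch.randn(4, 3, 64, 64)

loss = triplet_loss(embedder(anchor), embedder(positive), embedder(negative))
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(float(loss))
```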

  10. Recognition of landforms from digital elevation models and satellite imagery with expert systems, pattern recognition and image processing techniques

    OpenAIRE

    Miliaresis, George

    2014-01-01

    Recognition of landforms from digital elevation models and satellite imagery with expert systems, pattern recognition and image processing techniques. PhD Thesis (Remote Sensing & Terrain Pattern Recognition), National Technical University of Athens, Dept. of Topography (2000).

  11. Recognition of Deictic Gestures for Wearable Computing

    DEFF Research Database (Denmark)

    Moeslund, Thomas B.; Nørgaard, Lau

    2006-01-01

    In modern society there is an increasing demand to access, record and manipulate large amounts of information. This has inspired a new approach to thinking about and designing personal computers, where the ultimate goal is to produce a truly wearable computer. In this work we present a non...

  12. Parallel computing-based sclera recognition for human identification

    Science.gov (United States)

    Lin, Yong; Du, Eliza Y.; Zhou, Zhi

    2012-06-01

    Compared to iris recognition, sclera recognition using a line descriptor can achieve comparable recognition accuracy in visible wavelengths. However, this method is too time-consuming to be implemented in a real-time system. In this paper, we propose a GPU-based parallel computing approach to reduce the sclera recognition time. We define a new descriptor to which the information of the KD-tree structure and the sclera edge is added. The registration and matching task is divided into subtasks of various sizes according to their computational complexity. Affine transform parameters are generated by searching the KD-tree. Texture memory, constant memory, and shared memory are used to store templates and transform matrices. The experimental results show that the proposed method executed on a GPU can improve the sclera matching speed by hundreds of times without decreasing accuracy.

  13. A Computer-Based Gaming System for Assessing Recognition Performance (RECOG).

    Science.gov (United States)

    Little, Glenn A.; And Others

    This report documents a computer-based gaming system for assessing recognition performance (RECOG). The game management system is programmed in a modular manner to: instruct the student on how to play the game, retrieve and display individual images, keep track of how well individuals play and provide them feedback, and link these components by…

  14. Multiscale vector fields for image pattern recognition

    Science.gov (United States)

    Low, Kah-Chan; Coggins, James M.

    1990-01-01

    A uniform processing framework for low-level vision computing in which a bank of spatial filters maps the image intensity structure at each pixel into an abstract feature space is proposed. Some properties of the filters and the feature space are described. Local orientation is measured by a vector sum in the feature space as follows: each filter's preferred orientation along with the strength of the filter's output determine the orientation and the length of a vector in the feature space; the vectors for all filters are summed to yield a resultant vector for a particular pixel and scale. The orientation of the resultant vector indicates the local orientation, and the magnitude of the vector indicates the strength of the local orientation preference. Limitations of the vector sum method are discussed. Investigations show that the processing framework provides a useful, redundant representation of image structure across orientation and scale.
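    A direct, minimal reading of the vector-sum rule described above can be sketched in NumPy as follows; the filter bank itself is a stand-in, and only the vector summation is illustrated.

```python
# Minimal sketch of vector-sum orientation estimation from filter responses.
import numpy as np

def vector_sum_orientation(responses: np.ndarray, orientations: np.ndarray):
    """responses: filter output strengths at one pixel/scale, shape (K,).
    orientations: each filter's preferred orientation in radians, shape (K,).
    Returns (orientation, strength) of the resultant vector."""
    vx = np.sum(responses * np.cos(orientations))
    vy = np.sum(responses * np.sin(orientations))
    return np.arctan2(vy, vx), np.hypot(vx, vy)

if __name__ == "__main__":
    thetas = np.linspace(0, np.pi, 8, endpoint=False)     # 8 oriented filters
    outputs = np.exp(-((thetas - np.pi / 3) ** 2) / 0.1)  # responses peak near 60 degrees
    angle, strength = vector_sum_orientation(outputs, thetas)
    print(np.degrees(angle), strength)
```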

  15. Advanced techniques in digital mammographic images recognition

    International Nuclear Information System (INIS)

    Aliu, R. Azir

    2011-01-01

    Computer-aided detection and diagnosis is used in digital radiography as a second opinion in the process of determining a diagnosis, which reduces the percentage of wrong diagnoses compared with the established interpretation of mammographic images. The issues discussed in the dissertation are the analysis and improvement of advanced technologies in the field of artificial intelligence, more specifically in the field of machine learning, for solving diagnostic problems and automatically detecting spiculated lesions in digital mammograms. The developed SVM-based ICAD system with a cascade architecture for the analysis and comparison of mammographic images in both projections (CC and MLO) gives excellent results for the detection of masses and microcalcifications. In order to develop a system with optimal sensitivity, specificity, and time complexity, a set of relevant characteristics needs to be created which captures all the pathological regions that might be present in the mammographic image. The structure of the mammographic image, its size, and the large number of pathological structures in this area are the reasons why the creation of such a feature set is necessary for obtaining good indicators. These pathological structures are a real challenge today, and the scientific community is working in that direction. The doctoral dissertation showed that the system delivers optimal results, confirmed by experts and by institutions dealing with the same issues. The thesis also presents a new approach for automatic identification of regions of interest in the mammographic image, where regions of interest are automatically selected for further processing in cases when the number of examined patients is high. Out of 480 mammographic images downloaded from the MIAS database and tested with the ICAD system, the author shows that, after separation and selection of relevant features in the ICAD system, the accuracy is 89.7% (96.4% for microcalcifications

  16. Image dependency in the recognition of newly learnt faces.

    Science.gov (United States)

    Longmore, Christopher A; Santos, Isabel M; Silva, Carlos F; Hall, Abi; Faloyin, Dipo; Little, Emily

    2017-05-01

    Research investigating the effect of lighting and viewpoint changes on unfamiliar and newly learnt faces has revealed that such recognition is highly image dependent and that changes in either of these leads to poor recognition accuracy. Three experiments are reported to extend these findings by examining the effect of apparent age on the recognition of newly learnt faces. Experiment 1 investigated the ability to generalize to novel ages of a face after learning a single image. It was found that recognition was best for the learnt image with performance falling the greater the dissimilarity between the study and test images. Experiments 2 and 3 examined whether learning two images aids subsequent recognition of a novel image. The results indicated that interpolation between two studied images (Experiment 2) provided some additional benefit over learning a single view, but that this did not extend to extrapolation (Experiment 3). The results from all studies suggest that recognition was driven primarily by pictorial codes and that the recognition of faces learnt from a limited number of sources operates on stored images of faces as opposed to more abstract, structural, representations.

  17. Soft Computing Applications in Optimization, Control, and Recognition

    CERN Document Server

    Castillo, Oscar

    2013-01-01

    Soft computing includes several intelligent computing paradigms, like fuzzy logic, neural networks, and bio-inspired optimization algorithms. This book describes the application of soft computing techniques to intelligent control, pattern recognition, and optimization problems. The book is organized in four main parts. The first part deals with nature-inspired optimization methods and their applications. Papers included in this part propose new models for achieving intelligent optimization in different application areas. The second part discusses hybrid intelligent systems for achieving control. Papers included in this part make use of nature-inspired techniques, like evolutionary algorithms, fuzzy logic and neural networks, for the optimal design of intelligent controllers for different kind of applications. Papers in the third part focus on intelligent techniques for pattern recognition and propose new methods to solve complex pattern recognition problems. The fourth part discusses new theoretical concepts ...

  18. Superpixel-Based Feature for Aerial Image Scene Recognition

    Directory of Open Access Journals (Sweden)

    Hongguang Li

    2018-01-01

    Full Text Available Image scene recognition is a core technology for many aerial remote sensing applications. Different landforms appear as different scenes in aerial imaging, and all landform information is regarded as valuable for aerial image scene recognition. However, the conventional features of the Bag-of-Words model are designed using local points or other related information and thus are unable to fully describe landform areas. This limitation cannot be ignored when the aim is to ensure accurate aerial scene recognition. A novel superpixel-based feature is proposed in this study to characterize aerial image scenes. Then, based on the proposed feature, a scene recognition method using the Bag-of-Words model for aerial imaging is designed. The proposed superpixel-based feature utilizes landform information and spans from top-level superpixel extraction of landforms to bottom-level expression of feature vectors. This characterization technique comprises the following steps: simple linear iterative clustering (SLIC) based superpixel segmentation, adaptive filter bank construction, Lie group-based feature quantification, and visual saliency model-based feature weighting. Experiments on image scene recognition are carried out using real image data captured by an unmanned aerial vehicle (UAV). The recognition accuracy of the proposed superpixel-based feature is 95.1%, which is higher than those of scene recognition algorithms based on other local features.
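    The superpixel segmentation step that the proposed feature starts from can be sketched with scikit-image's SLIC implementation as below; the segment count and compactness are illustrative, and the later filter-bank, Lie-group, and saliency-weighting stages are not shown.

```python
# Minimal sketch of SLIC superpixel segmentation as the first step of the pipeline.
import numpy as np
from skimage.segmentation import slic
from skimage.data import astronaut

image = astronaut()                              # stand-in for an aerial frame
segments = slic(image, n_segments=200, compactness=10, start_label=1)

# Per-superpixel statistics could then feed later feature-extraction stages.
for label in np.unique(segments)[:3]:
    mask = segments == label
    mean_color = image[mask].mean(axis=0)
    print(label, mask.sum(), mean_color.round(1))
```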

  19. Optical character recognition of camera-captured images based on phase features

    Science.gov (United States)

    Diaz-Escobar, Julia; Kober, Vitaly

    2015-09-01

    Nowadays most digital information is obtained using mobile devices, especially smartphones. In particular, this creates the opportunity for optical character recognition in camera-captured images. For this reason many recognition applications have been recently developed, such as recognition of license plates, business cards, receipts, and street signs; document classification; augmented reality; language translation; and so on. Camera-captured images are usually affected by geometric distortions, nonuniform illumination, shadows, and noise, which make the recognition task difficult with existing systems. It is well known that the Fourier phase contains a lot of important information regardless of the Fourier magnitude. So, in this work we propose a phase-based recognition system exploiting phase-congruency features for illumination/scale invariance. The performance of the proposed system is tested in terms of misclassifications and false alarms with the help of computer simulation.
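    The claim that the Fourier phase carries much of an image's structure can be illustrated with the small NumPy sketch below, which reconstructs an image from its phase alone with a flat magnitude spectrum; this is a demonstration of the underlying idea, not the proposed recognition system.

```python
# Minimal sketch: phase-only reconstruction of an image.
import numpy as np
from skimage import data, img_as_float

image = img_as_float(data.camera())          # stand-in for a captured document image
spectrum = np.fft.fft2(image)
phase_only = np.real(np.fft.ifft2(np.exp(1j * np.angle(spectrum))))

# Edges and text-like structure survive in the phase-only reconstruction,
# which is why phase-based features are attractive for OCR under
# nonuniform illumination.
print(image.shape, phase_only.min(), phase_only.max())
```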

  20. Computational intelligence in biomedical imaging

    CERN Document Server

    2014-01-01

    This book provides a comprehensive overview of the state-of-the-art computational intelligence research and technologies in biomedical images with emphasis on biomedical decision making. Biomedical imaging offers useful information on patients’ medical conditions and clues to causes of their symptoms and diseases. Biomedical images, however, provide a large number of images which physicians must interpret. Therefore, computer aids are demanded and become indispensable in physicians’ decision making. This book discusses major technical advancements and research findings in the field of computational intelligence in biomedical imaging, for example, computational intelligence in computer-aided diagnosis for breast cancer, prostate cancer, and brain disease, in lung function analysis, and in radiation therapy. The book examines technologies and studies that have reached the practical level, and those technologies that are becoming available in clinical practices in hospitals rapidly such as computational inte...

  1. Preliminary Design of a Recognition System for Infected Fish Species Using Computer Vision

    OpenAIRE

    Hu, Jing; Li, Daoliang; Duan, Qingling; Chen, Guifen; Si, Xiuli

    2011-01-01

    Part 1: Decision Support Systems, Intelligent Systems and Artificial Intelligence Applications; International audience; For the purpose of classifying fish species, a recognition system was preliminarily designed using computer vision. In the first place, pictures were pre-processed by purpose-built programs that divided them into rectangular pieces. Secondly, color and texture features were extracted from the selected rectangular fish-skin images. Finally, all the images were classified by multi...

  2. Rough-fuzzy pattern recognition applications in bioinformatics and medical imaging

    CERN Document Server

    Maji, Pradipta

    2012-01-01

    Learn how to apply rough-fuzzy computing techniques to solve problems in bioinformatics and medical image processing Emphasizing applications in bioinformatics and medical image processing, this text offers a clear framework that enables readers to take advantage of the latest rough-fuzzy computing techniques to build working pattern recognition models. The authors explain step by step how to integrate rough sets with fuzzy sets in order to best manage the uncertainties in mining large data sets. Chapters are logically organized according to the major phases of pattern recognition systems dev

  3. Computational methods for molecular imaging

    CERN Document Server

    Shi, Kuangyu; Li, Shuo

    2015-01-01

    This volume contains original submissions on the development and application of molecular imaging computing. The editors invited authors to submit high-quality contributions on a wide range of topics including, but not limited to: • Image Synthesis & Reconstruction of Emission Tomography (PET, SPECT) and other Molecular Imaging Modalities • Molecular Imaging Enhancement • Data Analysis of Clinical & Pre-clinical Molecular Imaging • Multi-Modal Image Processing (PET/CT, PET/MR, SPECT/CT, etc.) • Machine Learning and Data Mining in Molecular Imaging. Molecular imaging is an evolving clinical and research discipline enabling the visualization, characterization and quantification of biological processes taking place at the cellular and subcellular levels within intact living subjects. Computational methods play an important role in the development of molecular imaging, from image synthesis to data analysis and from clinical diagnosis to therapy individualization. This work will bring readers fro...

  4. Effects of pose and image resolution on automatic face recognition

    NARCIS (Netherlands)

    Mahmood, Zahid; Ali, Tauseef; Khan, Samee U.

    The popularity of face recognition systems has increased due to their use in widespread applications. Driven by the enormous number of potential application domains, several algorithms have been proposed for face recognition. Face pose and image resolution are two important factors that

  5. Features Speech Signature Image Recognition on Mobile Devices

    Directory of Open Access Journals (Sweden)

    Alexander Mikhailovich Alyushin

    2015-12-01

    Full Text Available Algorithms for the recognition and processing of dynamic spectrogram images and sound speech signatures (SS) were developed. Software for mobile phones that can recognize speech signatures was prepared. An investigation of SS recognition speed for different boundary types was conducted. Recommendations are given on the choice of boundary type for an optimal trade-off between recognition speed and required space.

  6. IOBSERVER: species recognition via computer vision

    OpenAIRE

    Martín Rodríguez, Fernando; Barral Martínez, Mónica; Besteiro Fernández, Ángel; Vilán Vilán, José Antonio

    2016-01-01

    This paper is about the design of an automated computer vision system that is able to recognize the species of individual fish as they are classified on board a fishing vessel and to produce a report file with that information. This system is called iObserver and it is part of project Life-iSEAS (Life program). A very first version of the system has been tested on the oceanographic vessel “Miguel Oliver”. At the time of writing, a more advanced prototype is being tested onboard other o...

  7. Human face recognition using eigenface in cloud computing environment

    Science.gov (United States)

    Siregar, S. T. M.; Syahputra, M. F.; Rahmat, R. F.

    2018-02-01

    Recognizing a single face does not take long to process, but implementing an attendance or security system for a company with many faces to be recognized takes considerable time. Cloud computing is a computing service performed not on a local device but on internet-connected data center infrastructure. Cloud computing also provides a scalability solution, since it can increase the resources needed for larger data processing. This research applies the eigenface method, while training data are collected through a REST concept that provides the resources the server uses to process the data through the existing stages. After research and development of this application, it can be concluded that face recognition can be achieved by implementing eigenfaces and applying the REST concept as the endpoint for exchanging the information used to build the model for face recognition.
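
    A minimal eigenface sketch using scikit-learn, assuming face images are already cropped, grayscale, and flattened into rows of X with labels y; the component count, neighbor count, and synthetic data are illustrative, and the cloud/REST layer described above is out of scope here.

```python
# Eigenface sketch: PCA projection followed by nearest-neighbor matching.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

def train_eigenface(X_train, y_train, n_components=50):
    pca = PCA(n_components=n_components, whiten=True).fit(X_train)
    clf = KNeighborsClassifier(n_neighbors=1).fit(pca.transform(X_train), y_train)
    return pca, clf

def recognize(pca, clf, face_vector):
    return clf.predict(pca.transform(face_vector.reshape(1, -1)))[0]

# Example with synthetic data standing in for real face images:
rng = np.random.default_rng(1)
X = rng.random((20, 64 * 64))          # 20 "faces" of size 64x64, flattened
y = np.repeat(np.arange(5), 4)         # 5 subjects, 4 images each
pca, clf = train_eigenface(X, y, n_components=10)
print("predicted subject:", recognize(pca, clf, X[0]))
```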

  8. Multiple-output multivariate optical computing for spectrum recognition.

    Science.gov (United States)

    Vornehm, Joseph E; Dong, Ava Jingwen; Boyd, Robert W; Shi, Zhimin

    2014-10-20

    We describe a multivariate optical computer that can implement multiple spectral filters simultaneously. By parallel detection of multiple outputs, our proposed approach is capable of identifying more than two spectra simultaneously, and therefore could significantly speed up spectrum recognition based on optical computing. We demonstrate our approach by recognizing two rare-earth-doped glass samples and a third white light sample spectrum with a fidelity of at least 0.83.

  9. Computer-vision-based car logotype detection and recognition

    OpenAIRE

    Tomažič, Gašper

    2015-01-01

    This thesis addresses the problem of image-based logotype detection and recognition. A new algorithm for logotype detection in images of cars is proposed. In the first stage, the algorithm localizes all maximally-stable extremal regions as candidates of logotype parts. In the next stage, the regions are combined to create logotype candidates, which are encoded by histograms of gradients. A random forest classifier is then used to verify the candidate regions as being logotypes or not and simu...

  10. An Interactive Image Segmentation Method in Hand Gesture Recognition.

    Science.gov (United States)

    Chen, Disi; Li, Gongfa; Sun, Ying; Kong, Jianyi; Jiang, Guozhang; Tang, Heng; Ju, Zhaojie; Yu, Hui; Liu, Honghai

    2017-01-27

    In order to improve the recognition rate of hand gestures, a new interactive image segmentation method for hand gesture recognition is presented, and popular methods, e.g., Graph cut, Random walker, and interactive image segmentation using geodesic star convexity, are studied in this article. The Gaussian Mixture Model is employed for image modelling, and iterations of the Expectation Maximization algorithm learn its parameters. We apply a Gibbs random field to the image segmentation and minimize the Gibbs energy using the min-cut theorem to find the optimal segmentation. The segmentation result of our method is tested on an image dataset and compared with other methods in terms of region accuracy and boundary accuracy. Finally, five kinds of hand gestures in different backgrounds are tested on our experimental platform with a sparse representation algorithm, showing that the segmentation of hand gesture images helps to improve the recognition accuracy.
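
    The sketch below uses OpenCV's grabCut, which also combines a Gaussian Mixture Model with min-cut energy minimization, as a stand-in that illustrates the GMM-plus-graph-cut idea; it is not the authors' interactive method, and the rectangle and iteration count are assumed example values.

```python
# GMM + graph-cut segmentation sketch via OpenCV grabCut.
import cv2
import numpy as np

def segment_hand(image_bgr, rect):
    """rect = (x, y, w, h) roughly enclosing the hand (the user interaction)."""
    mask = np.zeros(image_bgr.shape[:2], np.uint8)
    bgd_model = np.zeros((1, 65), np.float64)   # internal GMM parameter buffers
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(image_bgr, mask, rect, bgd_model, fgd_model,
                5, cv2.GC_INIT_WITH_RECT)
    # Pixels marked as definite or probable foreground form the hand region.
    fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0)
    return fg.astype(np.uint8)

# Hypothetical usage:
# segmented = segment_hand(cv2.imread("gesture.jpg"), rect=(50, 50, 200, 200))
```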

  11. Case-Based Plan Recognition in Computer Games

    OpenAIRE

    Fagan, Michael; Cunningham, Padraig

    2003-01-01

    In this paper we explore the use of case-based plan recognition to predict a player's actions in a computer game. The game we work with is the classic Space Invaders game and we show that case-based plan recognition can produce good prediction accuracy in real-time, working with a fairly simple game representation. Our evaluation suggests that a personalized plan library will produce better prediction accuracy but, for Space Invaders, good accuracy can be produced using a pl...

  12. Ferrography Wear Particles Image Recognition Based on Extreme Learning Machine

    Directory of Open Access Journals (Sweden)

    Qiong Li

    2017-01-01

    Full Text Available The morphology of wear particles reflects the complex properties of the wear processes involved in particle formation. Typically, the morphology of wear particles is evaluated qualitatively based on microscopy observations. This procedure relies upon the experts’ knowledge and, thus, is neither always objective nor cheap. With the rapid development of computer image processing technology, neural networks based on traditional gradient training algorithms can be used to recognize them. However, feedforward neural networks trained with traditional gradient algorithms suffer from issues such as needing many iterations to converge and easily falling into local minima, which heavily restrict their use. Recently, the extreme learning machine (ELM) for single-hidden-layer feedforward neural networks (SLFN) has been attracting attention for its faster learning speed and better generalization performance than those of traditional gradient-based learning algorithms. In this paper, we propose to employ ELM for ferrography wear particle image recognition. We extract the shape, color, and texture features of five typical kinds of wear particles as the input of the ELM classifier and set the five types of wear particles as its output. A novel ferrography wear particle classifier is thereby built on ELM.
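
    A minimal extreme learning machine sketch in plain NumPy: random hidden-layer weights plus a closed-form least-squares output layer, which is the core idea the record above relies on. The hidden-layer size, activation, and synthetic feature vectors are illustrative; the shape/color/texture feature extraction is assumed to have been done already.

```python
# Minimal ELM: random hidden weights, pseudoinverse output weights.
import numpy as np

class SimpleELM:
    def __init__(self, n_hidden=100, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y_onehot):
        n_features = X.shape[1]
        self.W = self.rng.normal(size=(n_features, self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)            # random hidden layer
        self.beta = np.linalg.pinv(H) @ y_onehot    # closed-form output weights
        return self

    def predict(self, X):
        H = np.tanh(X @ self.W + self.b)
        return np.argmax(H @ self.beta, axis=1)

# Example with synthetic 10-dimensional features and 5 particle classes:
rng = np.random.default_rng(1)
X = rng.random((100, 10))
y = rng.integers(0, 5, size=100)
Y = np.eye(5)[y]                                    # one-hot targets
elm = SimpleELM(n_hidden=50).fit(X, Y)
print("training accuracy:", (elm.predict(X) == y).mean())
```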

  13. Structure recognition from high resolution images of ceramic composites

    Energy Technology Data Exchange (ETDEWEB)

    Ushizima, Daniela; Perciano, Talita; Krishnan, Harinarayan; Loring, Burlen; Bale, Hrishikesh; Parkinson, Dilworth; Sethian, James

    2015-01-05

    Fibers provide exceptional strength-to-weight ratio capabilities when woven into ceramic composites, transforming them into materials with exceptional resistance to high temperature and high strength combined with improved fracture toughness. Microcracks are inevitable when the material is under strain, which can be imaged using synchrotron X-ray computed micro-tomography (mu-CT) for assessment of material mechanical toughness variation. An important part of this analysis is to recognize fibrillar features. This paper presents algorithms for detecting and quantifying composite cracks and fiber breaks from high-resolution image stacks. First, we propose recognition algorithms to identify the different structures of the composite, including matrix cracks and fiber breaks. Second, we introduce our package F3D for fast filtering of large 3D imagery, implemented in OpenCL to take advantage of graphics cards. Results show that our algorithms automatically identify micro-damage and that the GPU-based implementation introduced here takes minutes, being 17x faster than similar tools on a typical image file.

  14. Algorithms for pattern recognition in images of cell cultures

    Science.gov (United States)

    Mendes, Joyce M.; Peixoto, Nathalia L.; Ramirez-Fernandez, Francisco J.

    2001-06-01

    Several applications of silicon microstructures in areas such as neurobiology and electrophysiology have been stimulating the development of microsystems that provide mechanical support for monitoring and controlling several parameters in cell cultures. In this work a multi-microelectrode array was fabricated on a glass plate to support the growth of neuronal cells and monitor their behavior during development. To identify the neuron core and axon, an approach for implementing edge detection algorithms on the images is described. The need for efficient and reliable algorithms for image processing and interpretation is justified by their wide range of applications in areas such as medicine, robotics, cellular biology, computational vision and pattern recognition. In this work, we investigate the adequacy of several edge detection algorithms, such as Canny and Marr-Hildreth. Some alterations to those methods are proposed to improve the identification of the cell core and the measurement of axonal growth. We compare the edge detector proposed by Canny, the Marr-Hildreth operator, and an application of the Hough Transform. To evaluate the algorithm adaptations, we developed a method for automatic cell segmentation and measurement. Our goal is to find a set of parameters defining the location of the objects in order to compare the original and processed images.
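
    A sketch of the two classical edge detectors compared above: Canny via OpenCV and a Marr-Hildreth-style Laplacian of Gaussian with zero-crossing detection via SciPy. Thresholds, sigma, and the file name are illustrative and would need tuning for real cell-culture images; this is not the authors' adapted version.

```python
# Canny and Marr-Hildreth (LoG zero-crossing) edge detection sketches.
import cv2
import numpy as np
from scipy import ndimage

def canny_edges(gray, low=50, high=150):
    return cv2.Canny(gray, low, high)

def marr_hildreth_edges(gray, sigma=2.0):
    log = ndimage.gaussian_laplace(gray.astype(float), sigma=sigma)
    # Zero crossings of the LoG response mark candidate edge pixels.
    sign = np.sign(log)
    zero_cross = np.zeros_like(log, dtype=np.uint8)
    zero_cross[:-1, :-1] = ((sign[:-1, :-1] * sign[1:, :-1] < 0) |
                            (sign[:-1, :-1] * sign[:-1, 1:] < 0)) * 255
    return zero_cross

# Hypothetical usage:
# gray = cv2.imread("culture.png", cv2.IMREAD_GRAYSCALE)
# edges = canny_edges(gray); zc = marr_hildreth_edges(gray)
```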

  15. A framework for forensic face recognition based on recognition performance calibrated for the quality of image pairs

    NARCIS (Netherlands)

    Dutta, A.; Veldhuis, Raymond N.J.; Spreeuwers, Lieuwe Jan; Meuwly, Didier; Meuwly, Didier

    Recently, it has been shown that performance of a face recognition system depends on the quality of both face images participating in the recognition process: the reference and the test image. In the context of forensic face recognition, this observation has two implications: a) the quality of the

  16. Body-Based Gender Recognition Using Images from Visible and Thermal Cameras

    Science.gov (United States)

    Nguyen, Dat Tien; Park, Kang Ryoung

    2016-01-01

    Gender information has many useful applications in computer vision systems, such as surveillance systems, counting the number of males and females in a shopping mall, access control systems in restricted areas, or any human-computer interaction system. In most previous studies, researchers attempted to recognize gender by using visible light images of the human face or body. However, shadow, illumination, and time of day greatly affect the performance of these methods. To overcome this problem, we propose a new gender recognition method based on the combination of visible light and thermal camera images of the human body. Experimental results, through various kinds of feature extraction and fusion methods, show that our approach is efficient for gender recognition through a comparison of recognition rates with conventional systems. PMID:26828487

  17. Robust Tomato Recognition for Robotic Harvesting Using Feature Images Fusion

    Science.gov (United States)

    Zhao, Yuanshen; Gong, Liang; Huang, Yixiang; Liu, Chengliang

    2016-01-01

    Automatic recognition of mature fruits in a complex agricultural environment is still a challenge for an autonomous harvesting robot due to various disturbances existing in the background of the image. The bottleneck to robust fruit recognition is reducing the influence of two main disturbances: illumination and overlapping. In order to recognize the tomato in the tree canopy using a low-cost camera, a robust tomato recognition algorithm based on multiple feature images and image fusion was studied in this paper. Firstly, two novel feature images, the a*-component image and the I-component image, were extracted from the L*a*b* color space and the luminance, in-phase, quadrature-phase (YIQ) color space, respectively. Secondly, wavelet transformation was adopted to fuse the two feature images at the pixel level, which combined the feature information of the two source images. Thirdly, in order to segment the target tomato from the background, an adaptive threshold algorithm was used to get the optimal threshold. The final segmentation result was processed by a morphology operation to reduce a small amount of noise. In the detection tests, 93% of the target tomatoes were recognized out of 200 samples overall. This indicates that the proposed tomato recognition method is viable for low-cost robotic tomato harvesting in an uncontrolled environment. PMID:26840313
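
    A sketch of two of the steps described above: extracting the a*-component from the L*a*b* color space and applying an adaptive (Otsu) threshold followed by a morphological clean-up. The wavelet fusion with the I-component is omitted, and the kernel size and file name are assumed example values.

```python
# a*-channel extraction, Otsu thresholding, and morphological opening.
import cv2

def segment_tomato(image_bgr):
    lab = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2LAB)
    a_channel = lab[:, :, 1]                       # red-green axis highlights ripe fruit
    _, mask = cv2.threshold(a_channel, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # remove small noise blobs
    return mask

# Hypothetical usage:
# mask = segment_tomato(cv2.imread("canopy.jpg"))
```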

  18. Research on Three-dimensional Motion History Image Model and Extreme Learning Machine for Human Body Movement Trajectory Recognition

    Directory of Open Access Journals (Sweden)

    Zheng Chang

    2015-01-01

    Full Text Available Starting from traditional machine vision recognition technology and traditional artificial neural networks for body movement trajectories, this paper identifies the shortcomings of the traditional recognition technology. By combining the invariant moments of the three-dimensional motion history image (computed as the eigenvector of body movements) with the extreme learning machine (constructed as the classification artificial neural network of body movements), the paper applies the method to machine vision of the body movement trajectory. In detail, the paper gives a detailed introduction to the algorithm and realization scheme of body movement trajectory recognition based on the three-dimensional motion history image and the extreme learning machine. Finally, by comparing recognition experiment results, it verifies that the method of body movement trajectory recognition based on the three-dimensional motion history image and the extreme learning machine has a more accurate recognition rate and better robustness.

  19. Accurate three-dimensional pose recognition from monocular images using template matched filtering

    Science.gov (United States)

    Picos, Kenia; Diaz-Ramirez, Victor H.; Kober, Vitaly; Montemayor, Antonio S.; Pantrigo, Juan J.

    2016-06-01

    An accurate algorithm for three-dimensional (3-D) pose recognition of a rigid object is presented. The algorithm is based on adaptive template matched filtering and local search optimization. When a scene image is captured, a bank of correlation filters is constructed to find the best correspondence between the current view of the target in the scene and a target image synthesized by means of computer graphics. The synthetic image is created using a known 3-D model of the target and an iterative procedure based on local search. Computer simulation results obtained with the proposed algorithm in synthetic and real-life scenes are presented and discussed in terms of accuracy of pose recognition in the presence of noise, cluttered background, and occlusion. Experimental results show that our proposal presents high accuracy for 3-D pose estimation using monocular images.
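
    The sketch below shows only the template-matching core of such an approach: normalized cross-correlation between a rendered view and the captured scene using OpenCV. The bank of filters, the iterative local-search pose optimization, and the file names are not shown and the specific match metric is an assumption, not necessarily the authors' filter design.

```python
# Normalized cross-correlation template matching sketch.
import cv2

def locate_template(scene_gray, template_gray):
    """Return the best match location and its correlation score."""
    response = cv2.matchTemplate(scene_gray, template_gray, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(response)
    return max_loc, max_val

# Hypothetical usage:
# scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
# template = cv2.imread("rendered_view.png", cv2.IMREAD_GRAYSCALE)
# loc, score = locate_template(scene, template)
```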

  20. Computational studies of protein-ligand molecular recognition

    OpenAIRE

    Gillies, M.B.

    2001-01-01

    Structure-based drug design is made possible by our understanding of molecular recognition. The utility of this approach was apparent in the development of the clinically effective HIV-1 PR inhibitors, where crystal structures of complexes of HIV-1 protease and inhibitors gave pivotal information. Computational methods drawing upon structural data are of increasing relevance to the drug design process. Nonetheless, these methods are quite rudimentary and significant improvements are needed. Th...

  1. Pose-Invariant Face Recognition via RGB-D Images.

    Science.gov (United States)

    Sang, Gaoli; Li, Jing; Zhao, Qijun

    2016-01-01

    Three-dimensional (3D) face models can intrinsically handle large pose face recognition problem. In this paper, we propose a novel pose-invariant face recognition method via RGB-D images. By employing depth, our method is able to handle self-occlusion and deformation, both of which are challenging problems in two-dimensional (2D) face recognition. Texture images in the gallery can be rendered to the same view as the probe via depth. Meanwhile, depth is also used for similarity measure via frontalization and symmetric filling. Finally, both texture and depth contribute to the final identity estimation. Experiments on Bosphorus, CurtinFaces, Eurecom, and Kiwi databases demonstrate that the additional depth information has improved the performance of face recognition with large pose variations and under even more challenging conditions.

  2. Machine Learning: developing an image recognition program : with Python, Scikit Learn and OpenCV

    OpenAIRE

    Nguyen, Minh

    2016-01-01

    Machine Learning is one of the most debated topics in the computer world these days, especially after the first computer Go program beat the human Go world champion. Among the endless applications of Machine Learning is image recognition, a problem that involves processing an enormous amount of data from dynamic input. This thesis will present the basic concepts of Machine Learning, Machine Learning algorithms, the Python programming language and Scikit Learn – a simple and efficient tool for data analysis in P...

  3. Action Recognition in Semi-synthetic Images using Motion Primitives

    DEFF Research Database (Denmark)

    Fihl, Preben; Holte, Michael Boelstoft; Moeslund, Thomas B.

    This technical report describes an action recognition approach based on motion primitives. A few characteristic time instances are found in a sequence containing an action and the action is classified from these instances. The characteristic instances are defined solely on the human motion, hence...... motion primitives. The motion primitives are extracted by double difference images and represented by four features. In each frame the primitive, if any, that best explains the observed data is identified. This leads to a discrete recognition problem since a video sequence will be converted into a string...... achieving a recognition rate of 96.5%....

  4. Syntactic reasoning and pattern recognition for analysis of coronary artery images.

    Science.gov (United States)

    Ogiela, Marek R; Tadeusiewicz, Ryszard

    2002-01-01

    This paper presents a new approach to the application of structural pattern recognition methods for image understanding, based on content analysis and knowledge discovery performed on medical images. In particular, it presents computer analysis and recognition of local stenoses of the coronary artery lumen. These stenoses result from the appearance of arteriosclerotic plaques, which in consequence lead to different forms of ischemic cardiovascular disease. Such diseases may appear in the form of stable or unstable disturbances of heart rhythm or infarctions. Analysis of the correct morphology of the artery lumen is possible with the application of syntactic analysis and pattern recognition methods, in particular with an attributed grammar of LALR type. In the paper, we describe all stages of analysis and understanding of the images in the context of the obtained features, and we also present the corresponding algorithm of syntactic reasoning based on the acquired knowledge.

  5. 3D FACE RECOGNITION FROM RANGE IMAGES BASED ON CURVATURE ANALYSIS

    Directory of Open Access Journals (Sweden)

    Suranjan Ganguly

    2014-02-01

    Full Text Available In this paper, we present a novel approach for three-dimensional face recognition by extracting curvature maps from range images. There are four types of curvature maps: Gaussian, mean, maximum and minimum curvature maps. These curvature maps are used as features for 3D face recognition. The dimension of these feature vectors is reduced using the Singular Value Decomposition (SVD) technique. From the three computed SVD components, the non-negative values of the ‘S’ part are ranked and used as the feature vector. In the proposed method, two pair-wise curvature computations are performed: one with the mean and maximum curvature pair and another with the Gaussian and mean curvature pair. These are used to compare the results for a better recognition rate. The automated 3D face recognition system is evaluated in different settings, such as frontal pose with expression and illumination variation, frontal faces together with registered faces, only registered faces, and registered faces from different pose orientations across the X, Y and Z axes. The 3D face images used in this research are taken from the FRAV3D database. Pose-varying 3D facial images are registered to the frontal pose by applying a one-to-all registration technique, and curvature mapping is then applied to the registered face images along with the remaining frontal face images. For classification and recognition, a five-layer feed-forward back-propagation neural network classifier is used, and the corresponding results are discussed in section 4.
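
    A sketch of two of the steps mentioned above: computing Gaussian and mean curvature maps from a range image z = f(x, y) via numerical derivatives, and taking the leading singular values of a curvature map as a compact feature vector. The registration step and the neural-network classifier are omitted, and the number of retained singular values is an illustrative assumption.

```python
# Curvature maps from a depth/range image, plus an SVD-based feature vector.
import numpy as np

def curvature_maps(depth):
    fy, fx = np.gradient(depth.astype(float))       # first derivatives (rows, cols)
    fxy, fxx = np.gradient(fx)
    fyy, _ = np.gradient(fy)
    denom = 1.0 + fx**2 + fy**2
    K = (fxx * fyy - fxy**2) / denom**2                               # Gaussian curvature
    H = ((1 + fx**2) * fyy - 2 * fx * fy * fxy
         + (1 + fy**2) * fxx) / (2 * denom**1.5)                      # mean curvature
    return K, H

def svd_feature(curvature_map, k=20):
    s = np.linalg.svd(curvature_map, compute_uv=False)                # non-negative values
    return s[:k]

# Hypothetical usage with a stored range image:
# K, H = curvature_maps(np.load("range_image.npy"))
# feature = np.concatenate([svd_feature(K), svd_feature(H)])
```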

  6. Feature coding for image representation and recognition

    CERN Document Server

    Huang, Yongzhen

    2015-01-01

    This brief presents a comprehensive introduction to feature coding, which serves as a key module in the typical object recognition pipeline. The text offers a rich blend of theory and practice while reflecting recent developments in feature coding, covering the following five aspects: (1) Review the state-of-the-art, analyzing the motivations and mathematical representations of various feature coding methods; (2) Explore how various feature coding algorithms have evolved over the years; (3) Summarize the main characteristics of typical feature coding algorithms and categorize them accordingly; (4) D

  7. TU-FG-209-12: Treatment Site and View Recognition in X-Ray Images with Hierarchical Multiclass Recognition Models

    Energy Technology Data Exchange (ETDEWEB)

    Chang, X; Mazur, T; Yang, D [Washington University in St Louis, St Louis, MO (United States)

    2016-06-15

    Purpose: To investigate an approach of automatically recognizing anatomical sites and imaging views (the orientation of the image acquisition) in 2D X-ray images. Methods: A hierarchical (binary tree) multiclass recognition model was developed to recognize the treatment sites and views in x-ray images. From top to bottom of the tree, the treatment sites are grouped hierarchically from more general to more specific. Each node in the hierarchical model was designed to assign images to one of two categories of anatomical sites. The binary image classification function of each node in the hierarchical model is implemented by using a PCA transformation and a support vector machine (SVM) model. The optimal PCA transformation matrices and SVM models are obtained by learning from a set of sample images. Alternatives of the hierarchical model were developed to support three scenarios of site recognition that may happen in radiotherapy clinics, including two or one X-ray images with or without view information. The performance of the approach was tested with images of 120 patients from six treatment sites – brain, head-neck, breast, lung, abdomen and pelvis – with 20 patients per site and two views (AP and RT) per patient. Results: Given two images in known orthogonal views (AP and RT), the hierarchical model achieved a 99% average F1 score to recognize the six sites. Site specific view recognition models have 100 percent accuracy. The computation time to process a new patient case (preprocessing, site and view recognition) is 0.02 seconds. Conclusion: The proposed hierarchical model of site and view recognition is effective and computationally efficient. It could be useful to automatically and independently confirm the treatment sites and views in daily setup x-ray 2D images. It could also be applied to guide subsequent image processing tasks, e.g. site and view dependent contrast enhancement and image registration. The senior author received research grants from View
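
    A minimal sketch of a single node of the hierarchical model described above: a PCA transformation followed by a binary SVM that routes an image to one of two site groups, built with scikit-learn. The component count, kernel choice, and synthetic flattened "x-ray images" are assumptions for illustration, not the authors' trained configuration.

```python
# One binary PCA + SVM node of a hierarchical site-recognition tree.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def train_node(X, y, n_components=20):
    """X: (n_images, n_pixels) flattened images; y: 0/1 group labels for this node."""
    node = make_pipeline(PCA(n_components=n_components), SVC(kernel="rbf"))
    return node.fit(X, y)

# Synthetic example: 40 flattened images split into two site groups.
rng = np.random.default_rng(0)
X = rng.random((40, 32 * 32))
y = np.repeat([0, 1], 20)
node = train_node(X, y, n_components=10)
print("node accuracy on training data:", node.score(X, y))
```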

  8. Gesture recognition based on computer vision and glove sensor for remote working environments

    Energy Technology Data Exchange (ETDEWEB)

    Chien, Sung Il; Kim, In Chul; Baek, Yung Mok; Kim, Dong Su; Jeong, Jee Won; Shin, Kug [Kyungpook National University, Taegu (Korea)

    1998-04-01

    In this research, we defined a gesture set needed for remote monitoring and control of an unmanned system in atomic power station environments. Here, we define a command as the loci of a gesture. We aim at the development of an algorithm using a vision sensor and glove sensors in order to implement the gesture recognition system. The gesture recognition system based on computer vision tracks a hand by using cross correlation of the PDOE image. To recognize the gesture word, an 8-direction code is employed as the input symbol for a discrete HMM. Another gesture recognition approach, based on sensors, uses a Pinch glove and a Polhemus sensor as input devices. The features extracted through preprocessing act as the input signal of the recognizer. For recognition of the 3D loci from the Polhemus sensor, a discrete HMM is also adopted. An alternative to the two foregoing recognition systems uses the vision and glove sensors together. The extracted mesh feature and the 8-direction code from the locus tracking are introduced to further enhance recognition performance. An MLP trained by backpropagation is also introduced, and its performance is compared to that of the discrete HMM. (author). 32 refs., 44 figs., 21 tabs.
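
    A small sketch of the 8-direction coding step mentioned above: a tracked hand trajectory is quantized into direction symbols that could feed a discrete HMM. The HMM training itself is not shown, and the toy trajectory is invented for the example.

```python
# Quantize a 2-D trajectory into 8 direction symbols (chain code).
import numpy as np

def direction_codes(trajectory):
    """trajectory: (N, 2) array of hand positions; returns symbols in 0..7."""
    deltas = np.diff(np.asarray(trajectory, dtype=float), axis=0)
    angles = np.arctan2(deltas[:, 1], deltas[:, 0])          # -pi..pi
    codes = np.round(angles / (np.pi / 4)).astype(int) % 8   # 8 sectors of 45 degrees
    return codes

# Example: a roughly diagonal-then-horizontal gesture.
traj = [(0, 0), (1, 1), (2, 2), (3, 2), (4, 2)]
print(direction_codes(traj))   # -> [1 1 0 0]
```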

  9. A computer aided treatment event recognition system in radiation therapy

    International Nuclear Information System (INIS)

    Xia, Junyi; Mart, Christopher; Bayouth, John

    2014-01-01

    Purpose: To develop an automated system to safeguard radiation therapy treatments by analyzing electronic treatment records and reporting treatment events. Methods: CATERS (Computer Aided Treatment Event Recognition System) was developed to detect treatment events by retrieving and analyzing electronic treatment records. CATERS is designed to make the treatment monitoring process more efficient by automating the search of the electronic record for possible deviations from physician's intention, such as logical inconsistencies as well as aberrant treatment parameters (e.g., beam energy, dose, table position, prescription change, treatment overrides, etc). Over a 5 month period (July 2012–November 2012), physicists were assisted by the CATERS software in conducting normal weekly chart checks with the aims of (a) determining the relative frequency of particular events in the authors’ clinic and (b) incorporating these checks into the CATERS. During this study period, 491 patients were treated at the University of Iowa Hospitals and Clinics for a total of 7692 fractions. Results: All treatment records from the 5 month analysis period were evaluated using all the checks incorporated into CATERS after the training period. About 553 events were detected as being exceptions, although none of them had significant dosimetric impact on patient treatments. These events included every known event type that was discovered during the trial period. A frequency analysis of the events showed that the top three types of detected events were couch position override (3.2%), extra cone beam imaging (1.85%), and significant couch position deviation (1.31%). The significant couch deviation is defined as the number of treatments where couch vertical exceeded two times standard deviation of all couch verticals, or couch lateral/longitudinal exceeded three times standard deviation of all couch laterals and longitudinals. On average, the application takes about 1 s per patient when

  10. Identification of Alfalfa Leaf Diseases Using Image Recognition Technology.

    Directory of Open Access Journals (Sweden)

    Feng Qin

    Full Text Available Common leaf spot (caused by Pseudopeziza medicaginis), rust (caused by Uromyces striatus), Leptosphaerulina leaf spot (caused by Leptosphaerulina briosiana) and Cercospora leaf spot (caused by Cercospora medicaginis) are the four common types of alfalfa leaf diseases. Timely and accurate diagnoses of these diseases are critical for disease management, alfalfa quality control and the healthy development of the alfalfa industry. In this study, the identification and diagnosis of the four types of alfalfa leaf diseases were investigated using pattern recognition algorithms based on image-processing technology. A sub-image with one or multiple typical lesions was obtained by artificial cutting from each acquired digital disease image. Then the sub-images were segmented using twelve lesion segmentation methods integrated with clustering algorithms (including K_means clustering, fuzzy C-means clustering and K_median clustering) and supervised classification algorithms (including logistic regression analysis, Naive Bayes algorithm, classification and regression tree, and linear discriminant analysis). After a comprehensive comparison, the segmentation method integrating the K_median clustering algorithm and linear discriminant analysis was chosen to obtain lesion images. After the lesion segmentation using this method, a total of 129 texture, color and shape features were extracted from the lesion images. Based on the features selected using three methods (ReliefF, 1R and correlation-based feature selection), disease recognition models were built using three supervised learning methods, including the random forest, support vector machine (SVM) and K-nearest neighbor methods. A comparison of the recognition results of the models was conducted. The results showed that when the ReliefF method was used for feature selection, the SVM model built with the most important 45 features (selected from a total of 129 features) was the optimal model. For this SVM model, the

  11. Identification of Alfalfa Leaf Diseases Using Image Recognition Technology

    Science.gov (United States)

    Qin, Feng; Liu, Dongxia; Sun, Bingda; Ruan, Liu; Ma, Zhanhong; Wang, Haiguang

    2016-01-01

    Common leaf spot (caused by Pseudopeziza medicaginis), rust (caused by Uromyces striatus), Leptosphaerulina leaf spot (caused by Leptosphaerulina briosiana) and Cercospora leaf spot (caused by Cercospora medicaginis) are the four common types of alfalfa leaf diseases. Timely and accurate diagnoses of these diseases are critical for disease management, alfalfa quality control and the healthy development of the alfalfa industry. In this study, the identification and diagnosis of the four types of alfalfa leaf diseases were investigated using pattern recognition algorithms based on image-processing technology. A sub-image with one or multiple typical lesions was obtained by artificial cutting from each acquired digital disease image. Then the sub-images were segmented using twelve lesion segmentation methods integrated with clustering algorithms (including K_means clustering, fuzzy C-means clustering and K_median clustering) and supervised classification algorithms (including logistic regression analysis, Naive Bayes algorithm, classification and regression tree, and linear discriminant analysis). After a comprehensive comparison, the segmentation method integrating the K_median clustering algorithm and linear discriminant analysis was chosen to obtain lesion images. After the lesion segmentation using this method, a total of 129 texture, color and shape features were extracted from the lesion images. Based on the features selected using three methods (ReliefF, 1R and correlation-based feature selection), disease recognition models were built using three supervised learning methods, including the random forest, support vector machine (SVM) and K-nearest neighbor methods. A comparison of the recognition results of the models was conducted. The results showed that when the ReliefF method was used for feature selection, the SVM model built with the most important 45 features (selected from a total of 129 features) was the optimal model. For this SVM model, the

  12. Dynamic Image Networks for Action Recognition

    NARCIS (Netherlands)

    Bilen, H.; Fernando, B.; Gavves, E.; Vedaldi, A.; Gould, S.

    2016-01-01

    We introduce the concept of dynamic image, a novel compact representation of videos useful for video analysis especially when convolutional neural networks (CNNs) are used. The dynamic image is based on the rank pooling concept and is obtained through the parameters of a ranking machine that encodes

  13. Correlation-based nonlinear composite filters applied to image recognition

    Science.gov (United States)

    Martínez-Díaz, Saúl

    2010-08-01

    Correlation-based pattern recognition has been an area of extensive research in the past few decades. Recently, composite nonlinear correlation filters invariant to translation, rotation, and scale were proposed. The design of the filters is based on logical operations and nonlinear correlation. In this work nonlinear filters are designed and applied to non-homogeneously illuminated images acquired with an optical microscope. The images are embedded in cluttered backgrounds, non-homogeneously illuminated and corrupted by random noise, which makes the recognition task difficult. The performance of the nonlinear composite filters is compared with that of other composite correlation filters in terms of discrimination capability.

  14. Comparison of eye imaging pattern recognition using neural network

    Science.gov (United States)

    Bukhari, W. M.; Syed A., M.; Nasir, M. N. M.; Sulaima, M. F.; Yahaya, M. S.

    2015-05-01

    The beauty of an eye recognition system is that it can automatically identify and verify a human, whether from digital images or a video source. The eye has various characteristics, such as the color of the iris, the size of the pupil and the shape of the eye. This study presents the analysis, design and implementation of a system for eye image recognition. All the eye images captured from the webcam in RGB format must pass through several processing techniques before they can be input to the pattern recognition process. The results show that the final values of the weights and biases, after completely training 6 eye images for one subject, are memorized by the neural network system and serve as the reference weights and biases for the testing part. The target is classified into 5 different types for 5 subjects. The system can recognize the subject from an eye image based on the target set earlier during the training process. When the values of a new eye image and an eye image in the database are almost equal, the eye image is considered matched.

  15. Adaptive Deep Supervised Autoencoder Based Image Reconstruction for Face Recognition

    Directory of Open Access Journals (Sweden)

    Rongbing Huang

    2016-01-01

    Full Text Available Based on a special type of denoising autoencoder (DAE) and image reconstruction, we present a novel supervised deep learning framework for face recognition (FR). Unlike existing deep autoencoders, which are unsupervised face recognition methods, the proposed method takes the class label information of training samples into account in the deep learning procedure and can automatically discover the underlying nonlinear manifold structures. Specifically, we define an Adaptive Deep Supervised Network Template (ADSNT) with a supervised autoencoder that is trained to extract characteristic features from corrupted/clean facial images and reconstruct the corresponding similar facial images. The reconstruction is realized by a so-called “bottleneck” neural network that learns to map face images into a low-dimensional vector and to reconstruct the respective corresponding face images from the mapped vectors. Having trained the ADSNT, a new face image can then be recognized by comparing its reconstruction image with individual gallery images. Extensive experiments on three databases, including AR, PubFig, and Extended Yale B, demonstrate that the proposed method can significantly improve the accuracy of face recognition under severe illumination variation, pose change, and partial occlusion.
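
    A minimal PyTorch sketch of the underlying denoising "bottleneck" idea: a small autoencoder learns to reconstruct clean face vectors from corrupted inputs, so that a probe could later be compared with gallery images via its reconstruction. The layer sizes, noise level, optimizer, and random data are illustrative assumptions; the full ADSNT training scheme with its class-label term is simplified away.

```python
# Tiny denoising autoencoder: corrupted input -> clean reconstruction target.
import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    def __init__(self, dim=1024, bottleneck=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, 256), nn.ReLU(),
                                     nn.Linear(256, bottleneck))
        self.decoder = nn.Sequential(nn.Linear(bottleneck, 256), nn.ReLU(),
                                     nn.Linear(256, dim))

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinyAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

clean = torch.rand(32, 1024)                 # stand-in for flattened face images
noisy = clean + 0.1 * torch.randn_like(clean)
for _ in range(5):                           # a few illustrative training steps
    optimizer.zero_grad()
    loss = loss_fn(model(noisy), clean)      # reconstruct clean from corrupted
    loss.backward()
    optimizer.step()
print("reconstruction loss:", loss.item())
```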

  16. Inversion improves the recognition of facial expression in thatcherized images.

    Science.gov (United States)

    Psalta, Lilia; Andrews, Timothy J

    2014-01-01

    The Thatcher illusion provides a compelling example of the face inversion effect. However, the marked effect of inversion in the Thatcher illusion contrasts to other studies that report only a small effect of inversion on the recognition of facial expressions. To address this discrepancy, we compared the effects of inversion and thatcherization on the recognition of facial expressions. We found that inversion of normal faces caused only a small reduction in the recognition of facial expressions. In contrast, local inversion of facial features in upright thatcherized faces resulted in a much larger reduction in the recognition of facial expressions. Paradoxically, inversion of thatcherized faces caused a relative increase in the recognition of facial expressions. Together, these results suggest that different processes explain the effects of inversion on the recognition of facial expressions and on the perception of the Thatcher illusion. The grotesque perception of thatcherized images is based on a more orientation-sensitive representation of the face. In contrast, the recognition of facial expression is dependent on a more orientation-insensitive representation. A similar pattern of results was evident when only the mouth or eye region was visible. These findings demonstrate that a key component of the Thatcher illusion is to be found in orientation-specific encoding of the features of the face.

  17. Multispectral image fusion for illumination-invariant palmprint recognition

    Science.gov (United States)

    Zhang, Xinman; Xu, Xuebin; Shang, Dongpeng

    2017-01-01

    Multispectral palmprint recognition has shown broad prospects for personal identification due to its high accuracy and great stability. In this paper, we develop a novel illumination-invariant multispectral palmprint recognition method. To combine the information from multiple spectral bands, an image-level fusion framework is built based on a fast and adaptive bidimensional empirical mode decomposition (FABEMD) and a weighted Fisher criterion. The FABEMD technique decomposes the multispectral images into their bidimensional intrinsic mode functions (BIMFs), on which an illumination compensation operation is performed. The weighted Fisher criterion is used to construct the fusion coefficients at the decomposition level so that the images are separated correctly in the fusion space. The image fusion framework has shown strong robustness against illumination variation. In addition, a tensor-based extreme learning machine (TELM) mechanism is presented for feature extraction and classification of two-dimensional (2D) images. In general, this method has fast learning speed and satisfactory recognition accuracy. Comprehensive experiments conducted on the PolyU multispectral palmprint database illustrate that the proposed method can achieve favorable results. For testing under ideal illumination, the recognition accuracy is as high as 99.93%, and the result is 99.50% when the lighting conditions are unsatisfactory. PMID:28558064

  18. A Novel Approach of Low-Light Image Denoising for Face Recognition

    Directory of Open Access Journals (Sweden)

    Yimei Kang

    2014-04-01

    Full Text Available Illumination variation makes automatic face recognition a challenging task, especially in low-light environments. A very simple and efficient novel low-light image denoising method for low frequency noise (DeLFN) is proposed. The noise frequency distribution of low-light images is presented based on extensive experimental results; low and very low frequency noise is dominant in low-light conditions. DeLFN is a three-level image denoising method. The first level denoises mixed noise by histogram equalization (HE) to improve overall contrast. The second level denoises low frequency noise by logarithmic transformation (LOG) to enhance image detail. The third level denoises residual very low frequency noise by high-pass filtering to recover more features of the true images. The PCA (Principal Component Analysis) recognition method is applied to test the recognition rate of face images preprocessed with DeLFN. DeLFN is compared with several representative illumination preprocessing methods on the Yale Face Database B, the Extended Yale Face Database B, and the CMU PIE face database, respectively. DeLFN not only outperformed the other algorithms in improving visual quality and face recognition rate, but is also simpler and computationally efficient enough for real-time applications.
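
    A sketch of the three-level idea described above: histogram equalization, a logarithmic transform, and a high-pass emphasis, the last implemented here as an unsharp-mask-style subtraction of a Gaussian blur. The sigma, weights, and file name are illustrative assumptions, not the exact DeLFN parameters.

```python
# Three-stage low-light preprocessing sketch: HE -> log transform -> high-pass.
import cv2
import numpy as np

def delfn_like_preprocess(gray):
    step1 = cv2.equalizeHist(gray)                              # improve overall contrast
    step2 = np.log1p(step1.astype(np.float32))                  # compress low-frequency noise
    step2 = cv2.normalize(step2, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    blur = cv2.GaussianBlur(step2, (0, 0), 15)
    step3 = cv2.addWeighted(step2, 1.5, blur, -0.5, 0)          # high-pass emphasis
    return step3

# Hypothetical usage:
# enhanced = delfn_like_preprocess(cv2.imread("lowlight_face.png", cv2.IMREAD_GRAYSCALE))
```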

  19. Foundations of computer vision computational geometry, visual image structures and object shape detection

    CERN Document Server

    Peters, James F

    2017-01-01

    This book introduces the fundamentals of computer vision (CV), with a focus on extracting useful information from digital images and videos. Including a wealth of methods used in detecting and classifying image objects and their shapes, it is the first book to apply a trio of tools (computational geometry, topology and algorithms) in solving CV problems, shape tracking in image object recognition and detecting the repetition of shapes in single images and video frames. Computational geometry provides a visualization of topological structures such as neighborhoods of points embedded in images, while image topology supplies us with structures useful in the analysis and classification of image regions. Algorithms provide a practical, step-by-step means of viewing image structures. The implementations of CV methods in Matlab and Mathematica, classification of chapter problems with the symbols (easily solved) and (challenging) and its extensive glossary of key words, examples and connections with the fabric of C...

  20. Automatic anatomy recognition in whole-body PET/CT images

    International Nuclear Information System (INIS)

    Wang, Huiqian; Udupa, Jayaram K.; Odhner, Dewey; Tong, Yubing; Torigian, Drew A.; Zhao, Liming

    2016-01-01

    Purpose: Whole-body positron emission tomography/computed tomography (PET/CT) has become a standard method of imaging patients with various disease conditions, especially cancer. Body-wide accurate quantification of disease burden in PET/CT images is important for characterizing lesions, staging disease, prognosticating patient outcome, planning treatment, and evaluating disease response to therapeutic interventions. However, body-wide anatomy recognition in PET/CT is a critical first step for accurately and automatically quantifying disease body-wide, body-region-wise, and organwise. This latter process, however, has remained a challenge due to the lower quality of the anatomic information portrayed in the CT component of this imaging modality and the paucity of anatomic details in the PET component. In this paper, the authors demonstrate the adaptation of a recently developed automatic anatomy recognition (AAR) methodology [Udupa et al., “Body-wide hierarchical fuzzy modeling, recognition, and delineation of anatomy in medical images,” Med. Image Anal. 18, 752–771 (2014)] to PET/CT images. Their goal was to test what level of object localization accuracy can be achieved on PET/CT compared to that achieved on diagnostic CT images. Methods: The authors advance the AAR approach in this work in three fronts: (i) from body-region-wise treatment in the work of Udupa et al. to whole body; (ii) from the use of image intensity in optimal object recognition in the work of Udupa et al. to intensity plus object-specific texture properties, and (iii) from the intramodality model-building-recognition strategy to the intermodality approach. The whole-body approach allows consideration of relationships among objects in different body regions, which was previously not possible. Consideration of object texture allows generalizing the previous optimal threshold-based fuzzy model recognition method from intensity images to any derived fuzzy membership image, and in the process

  1. Automatic anatomy recognition in whole-body PET/CT images

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Huiqian [College of Optoelectronic Engineering, Chongqing University, Chongqing 400044, China and Medical Image Processing Group Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania 19104 (United States); Udupa, Jayaram K., E-mail: jay@mail.med.upenn.edu; Odhner, Dewey; Tong, Yubing; Torigian, Drew A. [Medical Image Processing Group Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania 19104 (United States); Zhao, Liming [Medical Image Processing Group Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania 19104 and Research Center of Intelligent System and Robotics, Chongqing University of Posts and Telecommunications, Chongqing 400065 (China)

    2016-01-15

    Purpose: Whole-body positron emission tomography/computed tomography (PET/CT) has become a standard method of imaging patients with various disease conditions, especially cancer. Body-wide accurate quantification of disease burden in PET/CT images is important for characterizing lesions, staging disease, prognosticating patient outcome, planning treatment, and evaluating disease response to therapeutic interventions. However, body-wide anatomy recognition in PET/CT is a critical first step for accurately and automatically quantifying disease body-wide, body-region-wise, and organwise. This latter process, however, has remained a challenge due to the lower quality of the anatomic information portrayed in the CT component of this imaging modality and the paucity of anatomic details in the PET component. In this paper, the authors demonstrate the adaptation of a recently developed automatic anatomy recognition (AAR) methodology [Udupa et al., “Body-wide hierarchical fuzzy modeling, recognition, and delineation of anatomy in medical images,” Med. Image Anal. 18, 752–771 (2014)] to PET/CT images. Their goal was to test what level of object localization accuracy can be achieved on PET/CT compared to that achieved on diagnostic CT images. Methods: The authors advance the AAR approach in this work in three fronts: (i) from body-region-wise treatment in the work of Udupa et al. to whole body; (ii) from the use of image intensity in optimal object recognition in the work of Udupa et al. to intensity plus object-specific texture properties, and (iii) from the intramodality model-building-recognition strategy to the intermodality approach. The whole-body approach allows consideration of relationships among objects in different body regions, which was previously not possible. Consideration of object texture allows generalizing the previous optimal threshold-based fuzzy model recognition method from intensity images to any derived fuzzy membership image, and in the process

  2. Searching for Pulsars Using Image Pattern Recognition

    NARCIS (Netherlands)

    Zhu, W.W.; Berndsen, A.; Madsen, E.C.; Tan, M.; Stairs, I.H.; Brazier, A.; Lazarus, P.; Lynch, R.; Scholz, P.; Stovall, K.; Ransom, S.M.; Banaszak, S.; Biwer, C.M.; Cohen, S.; Dartez, L.P.; Flanigan, J.; Lunsford, G.; Martinez, J.G.; Mata, A.; Rohr, M.; Walker, A.; Allen, B.; Bhat, N.D.R.; Bogdanov, S.; Camilo, F.; Chatterjee, S.; Cordes, J.M.; Crawford, F.; Deneva, J.S.; Desvignes, G.; Ferdman, R.D.; Freire, P.C.C.; Hessels, J.W.T.; Jenet, F.A.; Kaplan, D.L.; Kaspi, V.M.; Knispel, B.; Lee, K.J.; van Leeuwen, J.; Lyne, A.G.; McLaughlin, M.A.; Siemens, X.; Spitler, L.G.; Venkataraman, A.

    2014-01-01

    In the modern era of big data, many fields of astronomy are generating huge volumes of data, the analysis of which can sometimes be the limiting factor in research. Fortunately, computer scientists have developed powerful data-mining techniques that can be applied to various fields. In this paper,

  3. Computer controlled evaluation of binary images

    NARCIS (Netherlands)

    Schouten, Th.E.; van den Broek, Egon

    2010-01-01

    The present invention relates to computer controlled image processing and, in particular, to computer controlled evaluation of two dimensional, 2D, and three dimensional, 3D, binary images including sequences of images using a distance map.

  4. Automated Recognition of 3D Features in GPIR Images

    Science.gov (United States)

    Park, Han; Stough, Timothy; Fijany, Amir

    2007-01-01

    A method of automated recognition of three-dimensional (3D) features in images generated by ground-penetrating imaging radar (GPIR) is undergoing development. GPIR 3D images can be analyzed to detect and identify such subsurface features as pipes and other utility conduits. Until now, much of the analysis of GPIR images has been performed manually by expert operators who must visually identify and track each feature. The present method is intended to satisfy a need for more efficient and accurate analysis by means of algorithms that can automatically identify and track subsurface features, with minimal supervision by human operators. In this method, data from multiple sources (for example, data on different features extracted by different algorithms) are fused together for identifying subsurface objects. The algorithms of this method can be classified in several different ways. In one classification, the algorithms fall into three classes: (1) image-processing algorithms, (2) feature- extraction algorithms, and (3) a multiaxis data-fusion/pattern-recognition algorithm that includes a combination of machine-learning, pattern-recognition, and object-linking algorithms. The image-processing class includes preprocessing algorithms for reducing noise and enhancing target features for pattern recognition. The feature-extraction algorithms operate on preprocessed data to extract such specific features in images as two-dimensional (2D) slices of a pipe. Then the multiaxis data-fusion/ pattern-recognition algorithm identifies, classifies, and reconstructs 3D objects from the extracted features. In this process, multiple 2D features extracted by use of different algorithms and representing views along different directions are used to identify and reconstruct 3D objects. In object linking, which is an essential part of this process, features identified in successive 2D slices and located within a threshold radius of identical features in adjacent slices are linked in a

  5. Extraction of prostatic lumina and automated recognition for prostatic calculus image using PCA-SVM.

    Science.gov (United States)

    Wang, Zhuocai; Xu, Xiangmin; Ding, Xiaojun; Xiao, Hui; Huang, Yusheng; Liu, Jian; Xing, Xiaofen; Wang, Hua; Liao, D Joshua

    2011-01-01

    Identification of prostatic calculi is an important basis for determining the tissue origin. Computation-assistant diagnosis of prostatic calculi may have promising potential but is currently still less studied. We studied the extraction of prostatic lumina and automated recognition for calculus images. Extraction of lumina from prostate histology images was based on local entropy and Otsu threshold recognition using PCA-SVM and based on the texture features of prostatic calculus. The SVM classifier showed an average time 0.1432 second, an average training accuracy of 100%, an average test accuracy of 93.12%, a sensitivity of 87.74%, and a specificity of 94.82%. We concluded that the algorithm, based on texture features and PCA-SVM, can recognize the concentric structure and visualized features easily. Therefore, this method is effective for the automated recognition of prostatic calculi.

  6. Extraction of Prostatic Lumina and Automated Recognition for Prostatic Calculus Image Using PCA-SVM

    Science.gov (United States)

    Wang, Zhuocai; Xu, Xiangmin; Ding, Xiaojun; Xiao, Hui; Huang, Yusheng; Liu, Jian; Xing, Xiaofen; Wang, Hua; Liao, D. Joshua

    2011-01-01

    Identification of prostatic calculi is an important basis for determining the tissue origin. Computation-assistant diagnosis of prostatic calculi may have promising potential but is currently still less studied. We studied the extraction of prostatic lumina and automated recognition for calculus images. Extraction of lumina from prostate histology images was based on local entropy and Otsu threshold recognition using PCA-SVM and based on the texture features of prostatic calculus. The SVM classifier showed an average time 0.1432 second, an average training accuracy of 100%, an average test accuracy of 93.12%, a sensitivity of 87.74%, and a specificity of 94.82%. We concluded that the algorithm, based on texture features and PCA-SVM, can recognize the concentric structure and visualized features easily. Therefore, this method is effective for the automated recognition of prostatic calculi. PMID:21461364

  7. Two dimensional convolute integers for machine vision and image recognition

    Science.gov (United States)

    Edwards, Thomas R.

    1988-01-01

    Machine vision and image recognition require sophisticated image processing prior to the application of Artificial Intelligence. Two Dimensional Convolute Integer Technology is an innovative mathematical approach for addressing machine vision and image recognition. This new technology generates a family of digital operators for addressing optical images and related two-dimensional data sets. The operators are regression-generated, integer-valued, zero-phase-shifting, convoluting, frequency-sensitive, two-dimensional low-pass, high-pass and band-pass filters that are mathematically equivalent to surface-fitted partial derivatives. These operators are applied non-recursively either as classical convolutions (replacement point values), as interstitial point generators (bandwidth broadening or resolution enhancement), or as missing-value calculators (compensation for dead array element values). These operators exhibit frequency-sensitive, scale-invariant feature-selection properties. Tasks such as boundary/edge enhancement and removal of noise or small pixel disturbances can readily be accomplished. For feature selection, tight band-pass operators are essential. Results from test cases are given.
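
    For illustration only, the sketch below applies a small integer-valued convolution kernel non-recursively for boundary/edge enhancement. The kernel shown is a generic 3x3 Laplacian-style high-pass operator, not one of the regression-generated operators described in this record.

        import numpy as np
        from scipy.signal import convolve2d

        # Integer-valued, zero-sum high-pass kernel (illustrative stand-in only).
        high_pass = np.array([[-1, -1, -1],
                              [-1,  8, -1],
                              [-1, -1, -1]])

        def enhance_edges(image):
            """image: 2-D numpy array; returns the non-recursive high-pass response."""
            return convolve2d(image, high_pass, mode="same", boundary="symm")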

  8. Scene recognition and colorization for vehicle infrared images

    Science.gov (United States)

    Hou, Junjie; Sun, Shaoyuan; Shen, Zhenyi; Huang, Zhen; Zhao, Haitao

    2016-10-01

    In order to make better use of infrared technology for driving assistance systems, a scene recognition and colorization method is proposed in this paper. Various objects in a queried infrared image are detected and labelled with proper categories by a combination of SIFT-Flow and an MRF model. The queried image is then colorized by assigning corresponding colors according to the categories of the objects that appear. The results show that the strategy emphasizes information in the IR images that is important for human vision and could be used to broaden the application of IR images to vehicle driving.

  9. EEG source imaging assists decoding in a face recognition task

    DEFF Research Database (Denmark)

    Andersen, Rasmus S.; Eliasen, Anders U.; Pedersen, Nicolai

    2017-01-01

    of face recognition. This task concerns the differentiation of brain responses to images of faces and scrambled faces and poses a rather difficult decoding problem at the single trial level. We implement the pipeline using spatially focused features and show that this approach is challenged and source...

  10. Magnetic resonance imaging pattern recognition in hypomyelinating disorders

    NARCIS (Netherlands)

    Steenweg, M.E.; Vanderver, A.; Blaser, S.; Blizzi, A.; de Koning, T.J.; Mancini, G.M.S.; van Wieringen, W.N.; Barkhof, F.; Wolf, N.I.; van der Knaap, M.S.

    2010-01-01

    Hypomyelination is observed in the context of a growing number of genetic disorders that share clinical characteristics. The aim of this study was to determine the possible role of magnetic resonance imaging pattern recognition in distinguishing different hypomyelinating disorders, which would

  11. International symposium on pattern recognition and acoustical imaging

    International Nuclear Information System (INIS)

    Ferrari, L.A.

    1987-01-01

    This book contains over 50 selections. Some of the titles are: Inverse scattering theory foundations of tomography with diffracting wavefields; Statistical physics of medical ultrasonic images; Pattern recognition in geophysical exploration; and Applications of cluster analysis and unsupervised learning to multivariate tissue characterization

  12. Computational intelligence in multi-feature visual pattern recognition hand posture and face recognition using biologically inspired approaches

    CERN Document Server

    Pisharady, Pramod Kumar; Poh, Loh Ai

    2014-01-01

    This book presents a collection of computational intelligence algorithms that addresses issues in visual pattern recognition such as high computational complexity, abundance of pattern features, sensitivity to size and shape variations and poor performance against complex backgrounds. The book has 3 parts. Part 1 describes various research issues in the field with a survey of the related literature. Part 2 presents computational intelligence based algorithms for feature selection and classification. The algorithms are discriminative and fast. The main application area considered is hand posture recognition. The book also discusses utility of these algorithms in other visual as well as non-visual pattern recognition tasks including face recognition, general object recognition and cancer / tumor classification. Part 3 presents biologically inspired algorithms for feature extraction. The visual cortex model based features discussed have invariance with respect to appearance and size of the hand, and provide good...

  13. Imageability and age of acquisition effects in disyllabic word recognition.

    Science.gov (United States)

    Cortese, Michael J; Schock, Jocelyn

    2013-01-01

    Imageability and age of acquisition (AoA) effects, as well as key interactions between these variables and frequency and consistency, were examined via multiple regression analyses for 1,936 disyllabic words, using reaction time and accuracy measures from the English Lexicon Project. Both imageability and AoA accounted for unique variance in lexical decision and naming reaction time performance. In addition, across both tasks, AoA and imageability effects were larger for low-frequency words than high-frequency words, and imageability effects were larger for later acquired than earlier acquired words. In reading aloud, consistency effects in reaction time were larger for later acquired words than earlier acquired words, but consistency did not interact with imageability in the reaction time analysis. These results provide further evidence that multisyllabic word recognition is similar to monosyllabic word recognition and indicate that AoA and imageability are valid predictors of word recognition performance. In addition, the results indicate that meaning exerts a larger influence in the reading aloud of multisyllabic words than monosyllabic words. Finally, parallel-distributed-processing approaches provide a useful theoretical framework to explain the main effects and interactions.

  14. Terahertz Imaging for Biomedical Applications Pattern Recognition and Tomographic Reconstruction

    CERN Document Server

    Yin, Xiaoxia; Abbott, Derek

    2012-01-01

    Terahertz Imaging for Biomedical Applications: Pattern Recognition and Tomographic Reconstruction presents the necessary algorithms needed to assist screening, diagnosis, and treatment, and these algorithms will play a critical role in the accurate detection of abnormalities present in biomedical imaging. Terahertz biomedical imaging has become an area of interest due to its ability to simultaneously acquire both image and spectral information. Terahertz imaging systems are being commercialized with an increasing number of trials performed in a biomedical setting. Terahertz tomographic imaging and detection technology contributes to the ability to identify opaque objects with clear boundaries, and would be useful to both in vivo and ex vivo environments. This book also: introduces terahertz radiation techniques and provides a number of topical examples of signal and image processing, as well as machine learning; presents the most recent developments in an emerging field, terahertz radiation; and utilizes new methods...

  15. Optical Processing of Speckle Images with Bacteriorhodopsin for Pattern Recognition

    Science.gov (United States)

    Downie, John D.; Tucker, Deanne (Technical Monitor)

    1994-01-01

    Logarithmic processing of images with multiplicative noise characteristics can be utilized to transform the image into one with an additive noise distribution. This simplifies subsequent image processing steps for applications such as image restoration or correlation for pattern recognition. One particularly common form of multiplicative noise is speckle, for which the logarithmic operation not only produces additive noise, but also makes it of constant variance (signal-independent). We examine the optical transmission properties of some bacteriorhodopsin films here and find them well suited to implement such a pointwise logarithmic transformation optically in a parallel fashion. We present experimental results of the optical conversion of speckle images into transformed images with additive, signal-independent noise statistics using the real-time photochromic properties of bacteriorhodopsin. We provide an example of improved correlation performance in terms of correlation peak signal-to-noise for such a transformed speckle image.

  16. Retrieval Architecture with Classified Query for Content Based Image Recognition

    Directory of Open Access Journals (Sweden)

    Rik Das

    2016-01-01

    Full Text Available Consumer behavior has been observed to be largely influenced by image data with the increasing familiarity of smart phones and the World Wide Web. The traditional technique of browsing through product varieties on the Internet with text keywords has been gradually replaced by easily accessible image data. The importance of image data has shown steady growth in business applications with the advent of different image-capturing devices and social media. The paper describes a methodology of feature extraction by an image binarization technique for enhancing identification and retrieval of information using content-based image recognition. The proposed algorithm was tested on two public datasets, namely the Wang dataset and the Oliva and Torralba (OT-Scene) dataset, with 3688 images in total. It outclassed state-of-the-art techniques in performance measures and showed statistical significance.

  17. A content-based image retrieval method for optical colonoscopy images based on image recognition techniques

    Science.gov (United States)

    Nosato, Hirokazu; Sakanashi, Hidenori; Takahashi, Eiichi; Murakawa, Masahiro

    2015-03-01

    This paper proposes a content-based image retrieval method for optical colonoscopy images that can find images similar to ones being diagnosed. Optical colonoscopy is a method of direct observation for colons and rectums to diagnose bowel diseases. It is the most common procedure for screening, surveillance and treatment. However, diagnostic accuracy for intractable inflammatory bowel diseases, such as ulcerative colitis (UC), is highly dependent on the experience and knowledge of the medical doctor, because there is considerable variety in the appearances of colonic mucosa within inflammations with UC. In order to solve this issue, this paper proposes a content-based image retrieval method based on image recognition techniques. The proposed retrieval method can find similar images from a database of images diagnosed as UC, and can potentially furnish the medical records associated with the retrieved images to assist the UC diagnosis. Within the proposed method, color histogram features and higher order local auto-correlation (HLAC) features are adopted to represent the color information and geometrical information of optical colonoscopy images, respectively. Moreover, considering various characteristics of UC colonoscopy images, such as vascular patterns and the roughness of the colonic mucosa, we also propose an image enhancement method to highlight the appearances of colonic mucosa in UC. In an experiment using 161 UC images from 32 patients, we demonstrate that our method improves the accuracy of retrieving similar UC images.

  18. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction

    Directory of Open Access Journals (Sweden)

    Dat Tien Nguyen

    2017-03-01

    Full Text Available Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT, speed-up robust feature (SURF, local binary patterns (LBP, histogram of oriented gradients (HOG, and weighted HOG. Recently, the convolutional neural network (CNN method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images.

  19. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction

    Science.gov (United States)

    Nguyen, Dat Tien; Kim, Ki Wan; Hong, Hyung Gil; Koo, Ja Hyung; Kim, Min Cheol; Park, Kang Ryoung

    2017-01-01

    Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images. PMID:28335510

  20. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction.

    Science.gov (United States)

    Nguyen, Dat Tien; Kim, Ki Wan; Hong, Hyung Gil; Koo, Ja Hyung; Kim, Min Cheol; Park, Kang Ryoung

    2017-03-20

    Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images.

  1. Compact hybrid optoelectrical unit for image processing and recognition

    Science.gov (United States)

    Cheng, Gang; Jin, Guofan; Wu, Minxian; Liu, Haisong; He, Qingsheng; Yuan, ShiFu

    1998-07-01

    In this paper a compact hybrid opto-electrical unit (CHOEU) for digital image processing and recognition is proposed. The central part of CHOEU is an incoherent optical correlator, realized with a SHARP QA-1200 8.4-inch active-matrix TFT liquid crystal display panel that serves as two real-time spatial light modulators for both the input image and the reference template. CHOEU performs two main processing tasks: one is digital filtering; the other is object matching. Using CHOEU, an edge-detection operator is realized to extract the edges from the input images. The preprocessed images are then sent to the object recognition unit for identifying the important targets. A novel template-matching method is proposed for gray-tone image recognition. A positive and negative cycle-encoding method is introduced to realize absolute-difference pixel matching simply on a correlator structure. The system has good fault tolerance against rotation distortion, Gaussian noise disturbance, and information loss. Experiments are given at the end of this paper.

  2. Research on Forest Flame Recognition Algorithm Based on Image Feature

    Science.gov (United States)

    Wang, Z.; Liu, P.; Cui, T.

    2017-09-01

    In recent years, fire recognition based on image features has become a hotspot in fire monitoring. However, due to the complexity of the forest environment, the accuracy of forest fire recognition based on image features is low. On this basis, this paper proposes a feature extraction algorithm based on the YCrCb color space and K-means clustering. First, the paper prepares and analyzes the color characteristics of a large number of forest fire image samples. Using the K-means clustering algorithm, the forest flame model is obtained by comparing the two commonly used color spaces, and the suspected flame area is discriminated and extracted. The experimental results show that the extraction accuracy of the flame area based on the YCrCb color model is higher than that of the HSI color model, that the method can be applied to forest fire identification in different scenes, and that it is feasible in practice.

  3. RESEARCH ON FOREST FLAME RECOGNITION ALGORITHM BASED ON IMAGE FEATURE

    Directory of Open Access Journals (Sweden)

    Z. Wang

    2017-09-01

    Full Text Available In recent years, fire recognition based on image features has become a hotspot in fire monitoring. However, due to the complexity of the forest environment, the accuracy of forest fire recognition based on image features is low. On this basis, this paper proposes a feature extraction algorithm based on the YCrCb color space and K-means clustering. First, the paper prepares and analyzes the color characteristics of a large number of forest fire image samples. Using the K-means clustering algorithm, the forest flame model is obtained by comparing the two commonly used color spaces, and the suspected flame area is discriminated and extracted. The experimental results show that the extraction accuracy of the flame area based on the YCrCb color model is higher than that of the HSI color model, that the method can be applied to forest fire identification in different scenes, and that it is feasible in practice.
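
    As a rough sketch of the color-space step described in this record, the code below converts an image to YCrCb and clusters the pixels with K-means. The rule used to pick the suspected flame cluster (the cluster with the highest mean Cr) is an assumption for illustration, not the paper's calibrated discrimination criterion.

        import numpy as np
        import cv2
        from sklearn.cluster import KMeans

        def suspected_flame_mask(bgr_image, n_clusters=3):
            # Convert to YCrCb; OpenCV channel order is (Y, Cr, Cb).
            ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
            pixels = ycrcb.reshape(-1, 3).astype(np.float32)
            labels = KMeans(n_clusters=n_clusters, n_init=10,
                            random_state=0).fit_predict(pixels)
            # Assume flame pixels are red-shifted, i.e. have the highest mean Cr.
            mean_cr = [pixels[labels == k, 1].mean() for k in range(n_clusters)]
            flame_cluster = int(np.argmax(mean_cr))
            return (labels == flame_cluster).reshape(ycrcb.shape[:2]).astype(np.uint8) * 255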

  4. Artificial intelligence for networks recognition in remote sensing images

    Science.gov (United States)

    Gilliot, Jean-Marc; Amat, Jean-Louis

    1993-12-01

    We describe here a knowledge-based system, NEXSYS (Network EXtraction SYStem), which was designed for the recognition of communication networks in SPOT satellite images. NEXSYS is a frame-based system and uses a co-operative and distributed structure based on a blackboard architecture. Communication networks in SPOT images are composed of thin linear segments. Segments are extracted using mathematical morphology and a Hough transform. An intermediate image representation composed of geometric primitives is obtained. An expert module is then able to process the segments at the symbolic level, trying to recognize networks.

  5. Multi-Scale Pattern Recognition for Image Classification and Segmentation

    NARCIS (Netherlands)

    Li, Y.

    2013-01-01

    Scale is an important parameter of images. Different objects or image structures (e.g. edges and corners) can appear at different scales and each is meaningful only over a limited range of scales. Multi-scale analysis has been widely used in image processing and computer vision, serving as the basis

  6. Computational multispectral video imaging [Invited

    Science.gov (United States)

    Wang, Peng; Menon, Rajesh

    2018-01-01

    Multi-spectral imagers reveal information unperceivable to humans and conventional cameras. Here, we demonstrate a compact single-shot multi-spectral video-imaging camera by placing a micro-structured diffractive filter in close proximity to the image sensor. The diffractive filter converts spectral information to a spatial code on the sensor pixels. Following a calibration step, this code can be inverted via regularization-based linear algebra to compute the multi-spectral image. We experimentally demonstrated a spectral resolution of 9.6 nm within the visible band (430 nm to 718 nm). We further show that the spatial resolution is enhanced by over 30% compared to the case without the diffractive filter. We also demonstrate Vis-IR imaging with the same sensor. Furthermore, our camera is able to computationally trade off spectral resolution against the field of view in software without any change in hardware, as long as sufficient sensor pixels are utilized for information encoding. Since no absorptive color filters are utilized, sensitivity is preserved as well. Finally, the diffractive filters can be easily manufactured using optical lithography and replication techniques.
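
    The regularization-based inversion mentioned in this record can be sketched as a Tikhonov-regularized least-squares solve: given a calibrated sensing matrix A mapping the multi-spectral scene x to the coded sensor measurement b, recover x in closed form. The matrix dimensions and the regularization weight below are placeholders, not the actual camera calibration.

        import numpy as np

        def reconstruct_spectrum(A, b, lam=1e-3):
            """Solve min_x ||A x - b||^2 + lam * ||x||^2 in closed form."""
            n = A.shape[1]
            return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

        # Hypothetical usage: 256 coded sensor samples, 30 spectral bands.
        rng = np.random.default_rng(0)
        A = rng.normal(size=(256, 30))           # calibration (sensor code) matrix
        x_true = rng.random(30)                  # unknown spectrum
        b = A @ x_true + 0.01 * rng.normal(size=256)
        x_est = reconstruct_spectrum(A, b)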

  7. Improving early recognition of malignant melanomas by digital image analysis in dermatoscopy.

    Science.gov (United States)

    Horsch, A; Stolz, W; Neiss, A; Abmayr, W; Pompl, R; Bernklau, A; Bunk, W; Dersch, D R; Glässl, A; Schiffner, R; Morfill, G

    1997-01-01

    The malignant melanoma (MM) is the most dangerous human skin disease. Its incidence has increased dramatically in recent years. The only chance for the patient is early recognition and excision of the MM. The best diagnostic method for this is skin surface microscopy, or dermatoscopy. Its use, however, requires much expertise. In order to support learning and using the method, a computer-based dermatoscopy workstation is being developed. Among other techniques, new complexity measures are used for the image analysis.

  8. Computer Vision and Image Processing: A Paper Review

    Directory of Open Access Journals (Sweden)

    victor - wiley

    2018-02-01

    Full Text Available Computer vision has been studied from many perspectives. It expands from raw data recording into techniques and ideas combining digital image processing, pattern recognition, machine learning and computer graphics. Its wide usage has attracted many scholars from many disciplines and fields. This paper provides a survey of the recent technologies and theoretical concepts explaining the development of computer vision, especially related to image processing, across different areas of application. Computer vision helps scholars to analyze images and video to obtain necessary information, understand information on events or descriptions, and recognize scenic patterns. It uses methods spanning multiple application domains with massive data analysis. This paper summarizes recent developments and reviews related to computer vision, image processing, and their related studies. We categorize the computer vision mainstream into groups such as image processing, object recognition, and machine learning, and we also provide a brief explanation of up-to-date information about the techniques and their performance.

  9. Automatic Blastomere Recognition from a Single Embryo Image

    Directory of Open Access Journals (Sweden)

    Yun Tian

    2014-01-01

    Full Text Available The number of blastomeres of human day-3 embryos is one of the most important criteria for evaluating embryo viability. However, due to the transparency and overlap of blastomeres, it is a challenge to recognize blastomeres automatically using a single embryo image. This study proposes an approach based on least-square curve fitting (LSCF) for automatic blastomere recognition from a single image. First, combining edge detection, deletion of multiple connected points, and dilation and erosion, an effective preprocessing method was designed to obtain the portions of blastomere edges that were singly connected. Next, an automatic recognition method for blastomeres was proposed using least-square circle fitting. This algorithm was tested on 381 embryo microscopic images obtained from the eight-cell period, and the results were compared with those provided by experts. Embryos were recognized with zero errors in 21.59% of cases, and the proportion of embryos in which the number of false recognitions was less than or equal to 2 was 83.16%. This experiment demonstrated that our method can efficiently and rapidly recognize the number of blastomeres from a single embryo image without the need to first reconstruct a three-dimensional model of the blastomeres; the method is simple and efficient.
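
    The core operation in this record, least-square circle fitting, can be sketched with the algebraic (Kasa-style) fit below. Edge-point extraction and the embryo-specific preprocessing are not shown, and the sample points in the usage example are synthetic.

        import numpy as np

        def fit_circle(points):
            """points: (n, 2) array of edge coordinates; returns (cx, cy, radius)."""
            x, y = points[:, 0], points[:, 1]
            # Solve x^2 + y^2 = 2*a*x + 2*b*y + c in the least-squares sense.
            A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
            rhs = x ** 2 + y ** 2
            (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
            return a, b, np.sqrt(c + a ** 2 + b ** 2)

        # Hypothetical usage: noisy points sampled from a circle of radius 5 at (10, 20).
        t = np.linspace(0, 2 * np.pi, 50)
        pts = np.column_stack([10 + 5 * np.cos(t), 20 + 5 * np.sin(t)])
        noise = np.random.default_rng(0).normal(scale=0.05, size=pts.shape)
        cx, cy, r = fit_circle(pts + noise)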

  10. Discriminative Block-Diagonal Representation Learning for Image Recognition.

    Science.gov (United States)

    Zhang, Zheng; Xu, Yong; Shao, Ling; Yang, Jian

    2017-07-04

    Existing block-diagonal representation studies mainly focus on casting block-diagonal regularization on training data, while only little attention is dedicated to concurrently learning block-diagonal representations of both training and test data. In this paper, we propose a discriminative block-diagonal low-rank representation (BDLRR) method for recognition. In particular, the elaborate BDLRR is formulated as a joint optimization problem of shrinking the unfavorable representation from off-block-diagonal elements and strengthening the compact block-diagonal representation under the semisupervised framework of LRR. To this end, we first impose penalty constraints on the negative representation to eliminate the correlation between different classes such that the incoherence criterion of the extra-class representation is boosted. Moreover, a constructed subspace model is developed to enhance the self-expressive power of training samples and further build the representation bridge between the training and test samples, such that the coherence of the learned intraclass representation is consistently heightened. Finally, the resulting optimization problem is solved elegantly by employing an alternative optimization strategy, and a simple recognition algorithm on the learned representation is utilized for final prediction. Extensive experimental results demonstrate that the proposed method achieves superb recognition results on four face image data sets, three character data sets, and the 15-scene multicategory data set. It not only shows superior potential on image recognition but also outperforms the state-of-the-art methods.

  11. Enhanced iris recognition method based on multi-unit iris images

    Science.gov (United States)

    Shin, Kwang Yong; Kim, Yeong Gon; Park, Kang Ryoung

    2013-04-01

    For the purpose of biometric person identification, iris recognition uses the unique characteristics of the patterns of the iris; that is, the eye region between the pupil and the sclera. When obtaining an iris image, the iris's image is frequently rotated because of the user's head roll toward the left or right shoulder. As the rotation of the iris image leads to circular shifting of the iris features, the accuracy of iris recognition is degraded. To solve this problem, conventional iris recognition methods use shifting of the iris feature codes to perform the matching. However, this increases the computational complexity and level of false acceptance error. To solve these problems, we propose a novel iris recognition method based on multi-unit iris images. Our method is novel in the following five ways compared with previous methods. First, to detect both eyes, we use Adaboost and a rapid eye detector (RED) based on the iris shape feature and integral imaging. Both eyes are detected using RED in the approximate candidate region that consists of the binocular region, which is determined by the Adaboost detector. Second, we classify the detected eyes into the left and right eyes, because the iris patterns in the left and right eyes in the same person are different, and they are therefore considered as different classes. We can improve the accuracy of iris recognition using this pre-classification of the left and right eyes. Third, by measuring the angle of head roll using the two center positions of the left and right pupils, detected by two circular edge detectors, we obtain the information of the iris rotation angle. Fourth, in order to reduce the error and processing time of iris recognition, adaptive bit-shifting based on the measured iris rotation angle is used in feature matching. Fifth, the recognition accuracy is enhanced by the score fusion of the left and right irises. Experimental results on the iris open database of low-resolution images showed that the

  12. Proceedings of the Second Annual Symposium on Mathematical Pattern Recognition and Image Analysis Program

    Science.gov (United States)

    Guseman, L. F., Jr. (Principal Investigator)

    1984-01-01

    Several papers addressing image analysis and pattern recognition techniques for satellite imagery are presented. Texture classification, image rectification and registration, spatial parameter estimation, and surface fitting are discussed.

  13. Feature Recognition of Froth Images Based on Energy Distribution Characteristics

    Directory of Open Access Journals (Sweden)

    WU Yanpeng

    2014-09-01

    Full Text Available This paper proposes an algorithm for determining froth image features based on amplitude spectrum energy statistics, applying the Fast Fourier Transform to analyze the energy distribution of various-sized froth. The proposed algorithm has been used to analyze froth features in images from an alumina flotation processing site, and the results show that the consistency rate reaches 98.1% and the usability rate 94.2%; with its good robustness and high efficiency, the algorithm is quite suitable for flotation processing state recognition.
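
    A hedged sketch of the idea in this record: take the 2-D FFT amplitude spectrum of a froth image and summarize the energy in a few radial frequency bands. The band edges and any decision rule applied to the resulting features are assumptions here, not the paper's calibrated statistics.

        import numpy as np

        def radial_energy_features(image, n_bands=4):
            """Return total amplitude-spectrum energy in n_bands concentric frequency bands."""
            spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image)))
            h, w = spectrum.shape
            yy, xx = np.mgrid[:h, :w]
            radius = np.hypot(yy - h / 2, xx - w / 2)
            edges = np.linspace(0, radius.max() + 1e-9, n_bands + 1)
            return np.array([spectrum[(radius >= lo) & (radius < hi)].sum()
                             for lo, hi in zip(edges[:-1], edges[1:])])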

  14. A Development of Hybrid Drug Information System Using Image Recognition

    Directory of Open Access Journals (Sweden)

    HwaMin Lee

    2015-04-01

    Full Text Available In order to prevent drug abuse or misuse and avoid over-prescription, it is necessary for the medicine taker to be provided with detailed information about the medicine. In this paper, we propose a drug information system and develop an application to provide information through drug image recognition using a smartphone. We designed a content-based drug image search algorithm using the color, shape and imprint of the drug. Our convenient application can provide users with detailed information about drugs and prevent drug misuse.

  15. Multiresolution stroke sketch adaptive representation and neural network processing system for gray-level image recognition

    Science.gov (United States)

    Meystel, Alexander M.; Rybak, Ilya A.; Bhasin, Sanjay

    1992-11-01

    This paper describes a method for multiresolutional representation of gray-level images as hierarchical sets of strokes characterizing forms of objects with different degrees of generalization depending on the context of the image. This method transforms the original image into a hierarchical graph which allows for efficient coding in order to store, retrieve, and recognize the image. The method described is based upon finding the resolution levels for each image that minimize the computations required. This becomes possible through the use of a special image representation technique called Multiresolutional Attentional Representation for Recognition (MARR), based upon a feature which the authors call a stroke. This feature turns out to be efficient in the process of finding the appropriate system of resolutions and construction of the relational graph. MARR is formed by a multi-layer neural network with recurrent inhibitory connections between neurons, the receptive fields of which are selectively tuned to detect the orientation of local contrasts in parts of the image with an appropriate degree of generalization. This method simulates the 'coarse-to-fine' procedure which an artist usually uses when making an attentional sketch of real images. The method, algorithms, and neural network architecture in this system can be used in many machine-vision systems with AI properties, in particular robotic vision. We expect that systems with MARR can become a component of intelligent control systems for autonomous robots. Their architectures are mostly multiresolutional and match well with the multiple resolutions of the MARR structure.

  16. A history of computers and computerized imaging.

    Science.gov (United States)

    Mixdorf, M A; Goldsworthy, R E

    1996-01-01

    The computer has revolutionized diagnostic imaging, making possible techniques such as computed tomography, magnetic resonance imaging, sonography and computed radiography. This article traces the historical development of computers and demonstrates how their brief association with the radiologic sciences has transformed diagnostic medicine.

  17. THE COMPARISON OF ALGORITHMS OF RECOGNITION OF IMAGES BY HOPFIELD'S NEURAL NETWORKS

    Directory of Open Access Journals (Sweden)

    Anna Illarionovna Pavlova

    2016-05-01

    Full Text Available The main advantage of artificial neural networks (ANN) in the recognition of images of cottages is that they function like a human brain. The paper deals with image recognition by Hopfield neural networks and gives a comparative analysis of image recognition by the projection method and by the Hebb rule. For these purposes, a program was developed in C# in Microsoft Visual Studio 2012. Images with different levels of distortion were used for recognition. The analysis of the image recognition results has shown that the projection method allows strongly distorted images (distortion levels up to 25–30 percent) to be restored.
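
    A compact sketch of the two Hopfield training rules compared in this record, for bipolar (+1/-1) image patterns: the Hebb rule uses summed outer products, while the projection rule is assumed here to be the standard pseudo-inverse variant (the record does not spell out the exact formulation). Pattern size and iteration count are arbitrary.

        import numpy as np

        def train_hebb(patterns):
            """patterns: (n_patterns, n_units) array with values +/-1."""
            W = patterns.T @ patterns / patterns.shape[1]
            np.fill_diagonal(W, 0)
            return W

        def train_projection(patterns):
            X = patterns.T                       # (n_units, n_patterns)
            W = X @ np.linalg.pinv(X)            # projection onto the pattern subspace
            np.fill_diagonal(W, 0)
            return W

        def recall(W, probe, n_iter=20):
            """Iterate a distorted probe toward a stored pattern (synchronous update)."""
            s = probe.copy()
            for _ in range(n_iter):
                s = np.where(W @ s >= 0, 1, -1)
            return s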

  18. Mathematics and computer science in medical imaging

    International Nuclear Information System (INIS)

    Viergever, M.A.; Todd-Pokroper, A.E.

    1987-01-01

    The book is divided into two parts. Part 1 gives an introduction to and an overview of the field in ten tutorial chapters. Part 2 contains a selection of invited and proffered papers reporting on current research. Subjects covered in depth are: analytical image reconstruction, regularization, iterative methods, image structure, 3-D display, compression, architectures for image processing, statistical pattern recognition, and expert systems in medical imaging

  19. Basic research planning in mathematical pattern recognition and image analysis

    Science.gov (United States)

    Bryant, J.; Guseman, L. F., Jr.

    1981-01-01

    Fundamental problems encountered while attempting to develop automated techniques for applications of remote sensing are discussed under the following categories: (1) geometric and radiometric preprocessing; (2) spatial, spectral, temporal, syntactic, and ancillary digital image representation; (3) image partitioning, proportion estimation, and error models in object scene inference; (4) parallel processing and image data structures; and (5) continuing studies in polarization; computer architectures and parallel processing; and the applicability of "expert systems" to interactive analysis.

  20. Image processing and pattern recognition with CVIPtools MATLAB toolbox: automatic creation of masks for veterinary thermographic images

    Science.gov (United States)

    Mishra, Deependra K.; Umbaugh, Scott E.; Lama, Norsang; Dahal, Rohini; Marino, Dominic J.; Sackman, Joseph

    2016-09-01

    CVIPtools is a software package for the exploration of computer vision and image processing developed in the Computer Vision and Image Processing Laboratory at Southern Illinois University Edwardsville. CVIPtools is available in three variants - a) the CVIPtools Graphical User Interface, b) the CVIPtools C library and c) the CVIPtools MATLAB toolbox - which makes it accessible to a variety of different users. It offers students, faculty, researchers and any user a free and easy way to explore computer vision and image processing techniques. Many functions have been implemented and are updated on a regular basis, and the library has reached a level of sophistication that makes it suitable for both educational and research purposes. In this paper, a detailed list of the functions available in the CVIPtools MATLAB toolbox is presented, along with how these functions can be used in image analysis and computer vision applications. The CVIPtools MATLAB toolbox allows the user to gain practical experience to better understand underlying theoretical problems in image processing and pattern recognition. As an example application, the algorithm for the automatic creation of masks for veterinary thermographic images is presented.

  1. Artificial neural network for bubbles pattern recognition on the images

    Science.gov (United States)

    Poletaev, I. E.; Pervunin, K. S.; Tokarev, M. P.

    2016-10-01

    Two-phase bubble flows are used in many technological and energy processes, such as oil processing and chemical and nuclear reactors. This explains the large interest in experimental and numerical studies of such flows over the last several decades. Exploiting optical diagnostics for the analysis of bubble flows allows researchers to obtain instantaneous velocity fields and gaseous phase distributions with high spatial resolution, non-intrusively. Light rays behave in an intricate manner when they cross the interphase boundaries of gaseous bubbles, hence the identification of bubble images is a complicated problem. This work presents a method of bubble image identification based on a modern deep learning technology called convolutional neural networks (CNN). Neural networks are able to handle overlapping, blurred, and non-spherical bubble images. They can increase the accuracy of bubble image recognition, reduce the number of outliers, lower data processing time, and significantly decrease the number of settings for the identification in comparison with standard recognition methods developed before. In addition, usage of GPUs speeds up the learning process of the CNN owing to modern adaptive subgradient optimization techniques.

  2. Impact of multi-focused images on recognition of soft biometric traits

    Science.gov (United States)

    Chiesa, V.; Dugelay, J. L.

    2016-09-01

    In video surveillance, the estimation of semantic traits such as gender and age has always been a debated topic because of the uncontrolled environment: while light or pose variations have been largely studied, defocused images are still rarely investigated. Recently the emergence of new technologies, such as plenoptic cameras, makes it possible to deal with these problems by analyzing multi-focus images. Thanks to a microlens array arranged between the sensor and the main lens, light field cameras are able to record not only the RGB values but also the information related to the direction of light rays: the additional data make it possible to render the image with different focal planes after the acquisition. For our experiments, we use the GUC Light Field Face Database, which includes pictures from the First Generation Lytro camera. Taking advantage of light field images, we explore the influence of defocusing on gender recognition and age estimation problems. Evaluations are computed on up-to-date and competitive technologies based on deep learning algorithms. After studying the relationship between focus and gender recognition and between focus and age estimation, we compare the results obtained on images defocused by the Lytro software with images blurred by more standard filters, in order to explore the difference between defocusing and blurring effects. In addition, we investigate the impact of deblurring on defocused images with the goal of better understanding the different impacts of defocusing and standard blurring on gender and age estimation.

  3. Iris recognition using image moments and k-means algorithm.

    Science.gov (United States)

    Khan, Yaser Daanial; Khan, Sher Afzal; Ahmad, Farooq; Islam, Saeed

    2014-01-01

    This paper presents a biometric technique for identification of a person using the iris image. The iris is first segmented from the acquired image of an eye using an edge detection algorithm. The disk shaped area of the iris is transformed into a rectangular form. Described moments are extracted from the grayscale image which yields a feature vector containing scale, rotation, and translation invariant moments. Images are clustered using the k-means algorithm and centroids for each cluster are computed. An arbitrary image is assumed to belong to the cluster whose centroid is the nearest to the feature vector in terms of Euclidean distance computed. The described model exhibits an accuracy of 98.5%.

  4. Iris Recognition Using Image Moments and k-Means Algorithm

    Directory of Open Access Journals (Sweden)

    Yaser Daanial Khan

    2014-01-01

    Full Text Available This paper presents a biometric technique for identification of a person using the iris image. The iris is first segmented from the acquired image of an eye using an edge detection algorithm. The disk shaped area of the iris is transformed into a rectangular form. Described moments are extracted from the grayscale image which yields a feature vector containing scale, rotation, and translation invariant moments. Images are clustered using the k-means algorithm and centroids for each cluster are computed. An arbitrary image is assumed to belong to the cluster whose centroid is the nearest to the feature vector in terms of Euclidean distance computed. The described model exhibits an accuracy of 98.5%.
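
    The clustering and matching steps described in this record can be sketched as follows: cluster the invariant-moment feature vectors with k-means and assign a new vector to the nearest centroid by Euclidean distance. The moment extraction and iris segmentation are not reproduced, and the function names are illustrative only.

        import numpy as np
        from sklearn.cluster import KMeans

        def build_clusters(feature_vectors, n_persons):
            """feature_vectors: (n_images, n_moments) array of invariant moments."""
            km = KMeans(n_clusters=n_persons, n_init=10, random_state=0)
            km.fit(feature_vectors)
            return km.cluster_centers_

        def identify(feature_vector, centroids):
            """Return the index of the cluster whose centroid is nearest (Euclidean)."""
            distances = np.linalg.norm(centroids - feature_vector, axis=1)
            return int(np.argmin(distances))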

  5. Image processing with personal computer

    International Nuclear Information System (INIS)

    Hara, Hiroshi; Handa, Madoka; Watanabe, Yoshihiko

    1990-01-01

    A method of automating the judgement work that uses photographs in radiation nondestructive inspection, with a simple commercial image processor, was examined. Software for defect extraction and binarization and software for automatic judgement were made on a trial basis, and their accuracy and problematic points were tested using various photographs on which the judgement had already been made. Depending on the state of the objects photographed and the conditions of inspection, judgement accuracies from 100% to 45% were obtained. The criteria for judgement were in conformity with the collection of reference photographs made by the Japan Cast Steel Association. In non-destructive inspection by radiography, the number and size of the defect images in photographs are visually judged, the results are collated with the standard, and the quality is decided. Recently, the technology of image processing with personal computers has advanced; therefore, by utilizing this technology, the automation of the judgement of photographs was attempted in order to improve accuracy, increase inspection efficiency and realize labor saving. (K.I.)

  6. Pollen Image Recognition Based on DGDB-LBP Descriptor

    Science.gov (United States)

    Han, L. P.; Xie, Y. H.

    2018-01-01

    In this paper, we propose DGDB-LBP, a local binary pattern descriptor based on pixel blocks in the dominant gradient direction. Differing from traditional LBP and its variants, DGDB-LBP encodes by comparing the main gradient magnitude of each block rather than a single pixel value or the average of pixel blocks; in doing so, it reduces the influence of noise on pollen images and eliminates redundant and non-informative features. In order to fully describe the texture features of pollen images and analyze them under multiple scales, we propose a new sampling strategy, which uses three types of operators to extract the radial, angular and multiple texture features under different scales. Considering that pollen images have some degree of rotation under the microscope, we propose an adaptive encoding direction, which is determined by the texture distribution of the local region. Experimental results on the Pollenmonitor dataset show that the average correct recognition rate of our method is superior to other pollen recognition methods of recent years.
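
    For contrast with the block-based DGDB-LBP described above, the sketch below computes the classical 8-neighbour LBP histogram on single pixels. The block-wise dominant-gradient comparison and adaptive encoding direction of the paper are not reproduced here.

        import numpy as np

        def lbp_histogram(image):
            """image: 2-D grayscale array; returns a normalized 256-bin LBP histogram."""
            img = image.astype(np.int32)
            center = img[1:-1, 1:-1]
            offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                       (1, 1), (1, 0), (1, -1), (0, -1)]
            codes = np.zeros_like(center)
            for bit, (dy, dx) in enumerate(offsets):
                neighbour = img[1 + dy:img.shape[0] - 1 + dy,
                                1 + dx:img.shape[1] - 1 + dx]
                codes += (neighbour >= center).astype(np.int32) << bit
            hist, _ = np.histogram(codes, bins=256, range=(0, 256))
            return hist / hist.sum()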

  7. Computational methods in molecular imaging technologies

    CERN Document Server

    Gunjan, Vinit Kumar; Venkatesh, C; Amarnath, M

    2017-01-01

    This book highlights the experimental investigations that have been carried out on magnetic resonance imaging and computed tomography (MRI & CT) images using state-of-the-art Computational Image processing techniques, and tabulates the statistical values wherever necessary. In a very simple and straightforward way, it explains how image processing methods are used to improve the quality of medical images and facilitate analysis. It offers a valuable resource for researchers, engineers, medical doctors and bioinformatics experts alike.

  8. Filter and Filter Bank Design for Image Texture Recognition

    Energy Technology Data Exchange (ETDEWEB)

    Randen, Trygve

    1997-12-31

    The relevance of this thesis to energy and environment lies in its application to remote sensing such as for instance sea floor mapping and seismic pattern recognition. The focus is on the design of two-dimensional filters for feature extraction, segmentation, and classification of digital images with textural content. The features are extracted by filtering with a linear filter and estimating the local energy in the filter response. The thesis gives a review covering broadly most previous approaches to texture feature extraction and continues with proposals of some new techniques. 143 refs., 59 figs., 7 tabs.
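
    The feature-extraction scheme summarized in this record (linear filtering followed by local energy estimation) can be sketched generically as below. The Gabor-like kernel and its parameters are illustrative stand-ins, not the optimized filters designed in the thesis.

        import numpy as np
        from scipy.ndimage import convolve, uniform_filter

        def gabor_kernel(size=15, wavelength=6.0, theta=0.0, sigma=3.0):
            """Build a simple cosine-modulated Gaussian kernel (Gabor-like, real part)."""
            half = size // 2
            y, x = np.mgrid[-half:half + 1, -half:half + 1]
            xr = x * np.cos(theta) + y * np.sin(theta)
            envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
            return envelope * np.cos(2 * np.pi * xr / wavelength)

        def local_energy(image, kernel, window=17):
            """Filter the image, then smooth the squared response as a texture feature."""
            response = convolve(image.astype(float), kernel, mode="reflect")
            return uniform_filter(response ** 2, size=window)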

  9. Jet-images: computer vision inspired techniques for jet tagging

    International Nuclear Information System (INIS)

    Cogan, Josh; Kagan, Michael; Strauss, Emanuel; Schwarztman, Ariel

    2015-01-01

    We introduce a novel approach to jet tagging and classification through the use of techniques inspired by computer vision. Drawing parallels to the problem of facial recognition in images, we define a jet-image using calorimeter towers as the elements of the image and establish jet-image preprocessing methods. For the jet-image processing step, we develop a discriminant for classifying the jet-images derived using Fisher discriminant analysis. The effectiveness of the technique is shown within the context of identifying boosted hadronic W boson decays with respect to a background of quark- and gluon-initiated jets. Using Monte Carlo simulation, we demonstrate that the performance of this technique introduces additional discriminating power over other substructure approaches, and gives significant insight into the internal structure of jets.

  10. Jet-images: computer vision inspired techniques for jet tagging

    Energy Technology Data Exchange (ETDEWEB)

    Cogan, Josh; Kagan, Michael; Strauss, Emanuel; Schwarztman, Ariel [SLAC National Accelerator Laboratory,Menlo Park, CA 94028 (United States)

    2015-02-18

    We introduce a novel approach to jet tagging and classification through the use of techniques inspired by computer vision. Drawing parallels to the problem of facial recognition in images, we define a jet-image using calorimeter towers as the elements of the image and establish jet-image preprocessing methods. For the jet-image processing step, we develop a discriminant for classifying the jet-images derived using Fisher discriminant analysis. The effectiveness of the technique is shown within the context of identifying boosted hadronic W boson decays with respect to a background of quark- and gluon-initiated jets. Using Monte Carlo simulation, we demonstrate that the performance of this technique introduces additional discriminating power over other substructure approaches, and gives significant insight into the internal structure of jets.
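
    A bare-bones sketch of the Fisher discriminant step described in this record, applied to flattened, preprocessed jet-images; the jet-image construction and preprocessing are assumed done, and the small regularization term is an assumption added so that the scatter matrix stays invertible for limited samples.

        import numpy as np

        def fisher_discriminant(signal_images, background_images):
            """Each input: (n_jets, n_pixels) array of flattened jet-images."""
            mu_s = signal_images.mean(axis=0)
            mu_b = background_images.mean(axis=0)
            within = (np.cov(signal_images, rowvar=False)
                      + np.cov(background_images, rowvar=False))
            w = np.linalg.solve(within + 1e-6 * np.eye(within.shape[0]), mu_s - mu_b)
            return w

        def discriminant_score(jet_image, w):
            """Project one flattened jet-image onto the Fisher direction."""
            return float(jet_image.ravel() @ w)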

  11. MR imaging and computer vision

    International Nuclear Information System (INIS)

    Gerig, G.; Kikinis, R.; Kuoni, W.

    1989-01-01

    To parallel the rapid progress in MR data acquisition, the authors have developed advanced computer vision methods specifically adapted to the multidimensional and multispectral (T1- and T2-weighted) nature of MR data to extract, analyze, and visualize the morphologic properties of biologic tissues. A multistage image processing scheme is proposed, which performs the three-dimensional (3D) segmentation of the brain (gray and white matter) and ventricular system from two-echo MR volume data with only minimal user interaction. The quality of the segmentation demonstrates the high potential of MR acquisition along with 3D segmentation and 3D visualization for diagnosis, preoperative planning, and research. With segmentation, a fully quantitative 3D exploration is accessible

  12. Edge Detection and Shape Recognition in Neutron Transmission Images

    International Nuclear Information System (INIS)

    Sword, Eric D.; McConchie, Seth M.

    2012-01-01

    Neutron transmission measurements are a valuable tool for nondestructively imaging special nuclear materials. Analysis of these images, however, tends to require significant user interaction to determine the sizes, shapes, and likely compositions of measured objects. Computer vision (CV) techniques can be a useful approach to automatically extracting important information from either neutron transmission images or fission-site-mapping images. An automatable approach has been developed that processes an input image and, through recursive application of CV techniques, produces a set of basic shapes that define surfaces observed in the image. These shapes can then be compared to a library of known shape configurations to determine if the measured object matches its expected configuration, as could be done behind an information barrier for arms control treaty verification inspections.

  13. Weighted Local Active Pixel Pattern (WLAPP) for Face Recognition in Parallel Computation Environment

    Directory of Open Access Journals (Sweden)

    Gundavarapu Mallikarjuna Rao

    2013-10-01

    Full Text Available Abstract - The availability of multi-core technology has resulted in a totally new computational era. Researchers are keen to explore the available potential in state-of-the-art machines for breaking the barrier imposed by serial computation. Face recognition is one of the most challenging applications in any computational environment. The main difficulty of traditional face recognition algorithms is their lack of scalability. In this paper, Weighted Local Active Pixel Pattern (WLAPP), a new scalable face recognition algorithm suitable for parallel environments, is proposed. Local Active Pixel Pattern (LAPP) is found to be simple and computationally inexpensive compared to Local Binary Patterns (LBP). WLAPP is developed based on the concept of LAPP. The experimentation is performed on the FG-Net Aging Database with deliberately introduced 20% distortion, and the results are encouraging. Keywords — Active pixels, Face Recognition, Local Binary Pattern (LBP), Local Active Pixel Pattern (LAPP), Pattern computing, parallel workers, template, weight computation.

  14. Invariant recognition of polychromatic images of Vibrio cholerae 01

    Science.gov (United States)

    Alvarez-Borrego, Josue; Mourino-Perez, Rosa R.; Cristobal, Gabriel; Pech-Pacheco, Jose L.

    2002-04-01

    Cholera is an acute intestinal infectious disease. It has claimed many lives throughout history, and it continues to be a global health threat. Cholera is considered one of the most important emerging diseases due to its relation to global climate change. Automated methods such as optical systems represent a new trend toward more accurate measurements of the presence and quantity of this microorganism in its natural environment. Automatic systems eliminate observer bias and reduce the analysis time. We evaluate the utility of coherent optical systems with invariant correlation for the recognition of Vibrio cholerae O1. Images of scenes are recorded with a CCD camera and decomposed into three RGB channels. A numeric simulation is developed to identify the bacteria in the different samples through an invariant correlation technique. There is no variation when we repeat the correlation, and the variation between image correlations is minimal. Position-, scale-, and rotation-invariant recognition is achieved with a scale transform through the Mellin transform. The algorithm recognizes Vibrio cholerae O1 by the presence of correlation peaks in the green channel output and their absence in the red and blue channels. The discrimination criterion is based on the presence of correlation peaks in the red, green, and blue channels.

  15. Fuzzy synthesis evaluation for image target recognition performance

    Science.gov (United States)

    Zhang, Yong; Wu, TaiBin

    2009-11-01

    With the rapid development of optoelectronic tracking and measurement technology, testing and evaluation of the tracking system and its internal algorithms are urgently demanded. Automatic target recognition (ATR) technology for images is a key part of image-based tracking systems and is developing rapidly, which makes performance evaluation difficult and complex. There is no reliable and effective evaluation method adaptable to the developing technology. Therefore, a fuzzy synthesis evaluation method for an ATR system or its group of detection and recognition algorithms is proposed. The evaluation indexes were selected and designed, and their weights were calculated by the direct method, the W-road method and the changing-weight method. The simulation testing conditions, the size and the hypothesis test methods of the statistical sample are discussed. The mean, the covariance dependency and the distribution indexes of the probability of detection (Pd) were effectively tested. The statistical ranges corresponding to the evaluation ranks of these indexes were established. Finally, the simple model and the division model of the fuzzy synthesis evaluation algorithm are discussed. Tests show that this method is valuable for obtaining the occurrence probabilities of the different performance ranks of the system or algorithm group corresponding to varying environment levels.

  16. Hand gesture recognition system based in computer vision and machine learning

    OpenAIRE

    Trigueiros, Paulo; Ribeiro, António Fernando; Reis, L. P.

    2015-01-01

    "Lecture notes in computational vision and biomechanics series, ISSN 2212-9391, vol. 19" Hand gesture recognition is a natural way of human computer interaction and an area of very active research in computer vision and machine learning. This is an area with many different possible applications, giving users a simpler and more natural way to communicate with robots/systems interfaces, without the need for extra devices. So, the primary goal of gesture recognition research applied to Hum...

  17. Person recognition using fingerprints and top-view finger images

    Directory of Open Access Journals (Sweden)

    Panyayot Chaikan

    2010-03-01

    Full Text Available Our multimodal biometric system combines fingerprinting with a top-view finger image captured by a CCD camera without user intervention. The greyscale image is preprocessed to enhance its edges, skin furrows, and the nail shape before being manipulated by a bank of oriented filters. A square tessellation is applied to the filtered image to create a feature map, called a NailCode, which is employed in Euclidean distance computations. The NailCode reduces system errors by 17.68% in the verification mode, and by 6.82% in the identification mode.

  18. Digital image processing mathematical and computational methods

    CERN Document Server

    Blackledge, J M

    2005-01-01

    This authoritative text (the second part of a complete MSc course) provides mathematical methods required to describe images, image formation and different imaging systems, coupled with the principle techniques used for processing digital images. It is based on a course for postgraduates reading physics, electronic engineering, telecommunications engineering, information technology and computer science. This book relates the methods of processing and interpreting digital images to the 'physics' of imaging systems. Case studies reinforce the methods discussed, with examples of current research

  19. Adaptive Computed Tomography Imaging Spectrometer Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The present proposal describes the development of an adaptive Computed Tomography Imaging Spectrometer (CTIS), or "Snapshot" spectrometer which can "instantaneously"...

  20. Biological object recognition in μ-radiography images

    Science.gov (United States)

    Prochazka, A.; Dammer, J.; Weyda, F.; Sopko, V.; Benes, J.; Zeman, J.; Jandejsek, I.

    2015-03-01

    This study presents the applicability of real-time microradiography to biological objects, namely the horse chestnut leafminer, Cameraria ohridella (Insecta: Lepidoptera, Gracillariidae), and the subsequent image processing, focusing on image segmentation and object recognition. Microradiography of insects such as the horse chestnut leafminer provides non-invasive imaging that leaves the organisms alive. The imaging requires a radiographic system with high spatial resolution (micrometer scale). Our radiographic system consists of a micro-focus X-ray tube and two types of detectors. The first is a charge-integrating detector (Hamamatsu flat panel); the second is a pixel semiconductor detector (Medipix2), which allows detection of single quanta of ionizing radiation. We obtained numerous horse chestnut leafminer pupae in several microradiography images that are easily recognizable in automatic mode using the image processing methods. We implemented an algorithm that is able to count the number of dead and alive pupae in images. The algorithm is based on two methods: 1) noise reduction using mathematical morphology filters, and 2) Canny edge detection. The accuracy of the algorithm is higher for the Medipix2 (average recall for detection of alive pupae = 0.99, average recall for detection of dead pupae = 0.83) than for the flat panel (average recall for detection of alive pupae = 0.99, average recall for detection of dead pupae = 0.77). We therefore conclude that Medipix2 has lower noise and renders the contours (edges) of biological objects better. Our method allows automatic selection and counting of dead and alive chestnut leafminer pupae, leading to faster monitoring of the population of one of the world's important insect pests.
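
    A minimal sketch of the two processing steps named above (morphological noise reduction and Canny edge detection), followed by connected-component counting, using scikit-image. The file name and the pupa size limits are illustrative assumptions, not values from the paper.

        # Minimal sketch of the described pipeline: morphological noise reduction,
        # Canny edge detection, and counting of pupa-sized regions (scikit-image).
        # The file name and the size limits are illustrative assumptions only.
        import numpy as np
        from scipy.ndimage import binary_fill_holes
        from skimage import io, img_as_float
        from skimage.feature import canny
        from skimage.measure import label, regionprops
        from skimage.morphology import closing, disk, opening

        image = img_as_float(io.imread("radiograph.png", as_gray=True))

        # 1) Noise reduction with mathematical morphology (grey-level opening/closing).
        denoised = closing(opening(image, disk(2)), disk(2))

        # 2) Canny edge detection, then fill closed contours to obtain candidate blobs.
        edges = canny(denoised, sigma=2.0)
        blobs = binary_fill_holes(edges)

        # Count connected regions whose area falls in an assumed pupa size range.
        regions = [r for r in regionprops(label(blobs)) if 200 < r.area < 5000]
        print(f"candidate pupae found: {len(regions)}")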

  1. Biological object recognition in μ-radiography images

    International Nuclear Information System (INIS)

    Prochazka, A.; Dammer, J.; Benes, J.; Zeman, J.; Weyda, F.; Sopko, V.; Jandejsek, I.

    2015-01-01

    This study presents the applicability of real-time microradiography to biological objects, namely the horse chestnut leafminer, Cameraria ohridella (Insecta: Lepidoptera, Gracillariidae), and the subsequent image processing, focusing on image segmentation and object recognition. Microradiography of insects such as the horse chestnut leafminer provides non-invasive imaging that leaves the organisms alive. The imaging requires a radiographic system with high spatial resolution (micrometer scale). Our radiographic system consists of a micro-focus X-ray tube and two types of detectors. The first is a charge-integrating detector (Hamamatsu flat panel); the second is a pixel semiconductor detector (Medipix2), which allows detection of single quanta of ionizing radiation. We obtained numerous horse chestnut leafminer pupae in several microradiography images that are easily recognizable in automatic mode using the image processing methods. We implemented an algorithm that is able to count the number of dead and alive pupae in images. The algorithm is based on two methods: 1) noise reduction using mathematical morphology filters, and 2) Canny edge detection. The accuracy of the algorithm is higher for the Medipix2 (average recall for detection of alive pupae = 0.99, average recall for detection of dead pupae = 0.83) than for the flat panel (average recall for detection of alive pupae = 0.99, average recall for detection of dead pupae = 0.77). We therefore conclude that Medipix2 has lower noise and renders the contours (edges) of biological objects better. Our method allows automatic selection and counting of dead and alive chestnut leafminer pupae, leading to faster monitoring of the population of one of the world's important insect pests.

  2. Target Matching Recognition for Satellite Images Based on the Improved FREAK Algorithm

    Directory of Open Access Journals (Sweden)

    Yantong Chen

    2016-01-01

    Full Text Available Satellite remote sensing image target matching recognition exhibits poor robustness and accuracy because of unsuitable feature extractors and the large data volume. To address this problem, we propose a new feature extraction algorithm for fast target matching recognition that comprises an improved features-from-accelerated-segment-test (FAST) feature detector and a binary fast retina keypoint (FREAK) feature descriptor. To improve robustness, we extend the FAST feature detector by applying scale-space theory and then transform the feature vector acquired by the FREAK descriptor from decimal into binary representation. Using this binary space reduces the quantity of data held in the computer and improves matching accuracy. Simulation results show that our algorithm outperforms other relevant methods in terms of robustness and accuracy.
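
    A hedged sketch of FAST keypoint detection combined with binary FREAK description and Hamming matching, using OpenCV's stock implementations. It does not reproduce the paper's scale-space extension of FAST; FREAK lives in the opencv-contrib package, and the image file names are placeholders.

        # Sketch of FAST keypoint detection + binary FREAK description + Hamming
        # matching with OpenCV. The FREAK implementation lives in opencv-contrib
        # (cv2.xfeatures2d); the paper's scale-space extension of FAST is not
        # reproduced here, and the image names are placeholders.
        import cv2

        img1 = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)
        img2 = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)

        fast = cv2.FastFeatureDetector_create(threshold=25)
        freak = cv2.xfeatures2d.FREAK_create()

        kp1 = fast.detect(img1, None)
        kp2 = fast.detect(img2, None)
        kp1, des1 = freak.compute(img1, kp1)
        kp2, des2 = freak.compute(img2, kp2)

        # Binary descriptors are compared with the Hamming distance.
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
        print(f"{len(matches)} matches, best distance {matches[0].distance if matches else None}")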

  3. A Feature-Based Structural Measure: An Image Similarity Measure for Face Recognition

    Directory of Open Access Journals (Sweden)

    Noor Abdalrazak Shnain

    2017-08-01

    Full Text Available Facial recognition is one of the most challenging and interesting problems within the field of computer vision and pattern recognition. During the last few years, it has gained special attention due to its importance in relation to current issues such as security, surveillance systems and forensics analysis. Despite this high level of attention to facial recognition, the success is still limited by certain conditions; there is no method which gives reliable results in all situations. In this paper, we propose an efficient similarity index that resolves the shortcomings of the existing measures of feature and structural similarity. This measure, called the Feature-Based Structural Measure (FSM), combines the best features of the well-known SSIM (structural similarity index measure) and FSIM (feature similarity index measure) approaches, striking a balance between performance for similar and dissimilar images of human faces. In addition to the statistical structural properties provided by SSIM, edge detection is incorporated in FSM as a distinctive structural feature. Its performance is tested for a wide range of PSNR (peak signal-to-noise ratio), using ORL (Olivetti Research Laboratory, now AT&T Laboratory Cambridge) and FEI (Faculty of Industrial Engineering, São Bernardo do Campo, São Paulo, Brazil) databases. The proposed measure is tested under conditions of Gaussian noise; simulation results show that the proposed FSM outperforms the well-known SSIM and FSIM approaches in its efficiency of similarity detection and recognition of human faces.

  4. The Chinese Facial Emotion Recognition Database (CFERD): a computer-generated 3-D paradigm to measure the recognition of facial emotional expressions at different intensities.

    Science.gov (United States)

    Huang, Charles Lung-Cheng; Hsiao, Sigmund; Hwu, Hai-Gwo; Howng, Shen-Long

    2012-12-30

    The Chinese Facial Emotion Recognition Database (CFERD), a computer-generated three-dimensional (3D) paradigm, was developed to measure the recognition of facial emotional expressions at different intensities. The stimuli consisted of 3D colour photographic images of six basic facial emotional expressions (happiness, sadness, disgust, fear, anger and surprise) and neutral Chinese faces. The purpose of the present study is to describe the development and validation of the CFERD with nonclinical healthy participants (N=100; 50 men; age ranging between 18 and 50 years), and to generate a normative data set. The results showed that the sensitivity index d' [d'=Z(hit rate)-Z(false alarm rate), where function Z(p), p∈[0,1
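
    The abstract is truncated by the indexing service mid-definition. For reference only, the standard signal-detection-theory form of the sensitivity index that the bracket appears to introduce is reproduced below; this completion is an assumption from classical signal detection theory, not text from the source.

        % Standard signal-detection definition of the sensitivity index d'
        % (an assumption based on classical signal detection theory; the
        % source abstract is truncated mid-definition).
        \[
          d' = Z(\text{hit rate}) - Z(\text{false alarm rate}),
          \qquad Z(p) = \Phi^{-1}(p), \; p \in [0,1],
        \]
        % where \Phi^{-1} denotes the inverse of the standard normal
        % cumulative distribution function.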

  5. Design of correlation filters for pattern recognition with disjoint reference image

    Science.gov (United States)

    Aguilar-González, Pablo Mario; Kober, Vitaly

    2011-11-01

    Correlation filters for pattern recognition are commonly designed under the assumption that the shape and appearance of an object of interest are explicitly known. In this paper, we consider a signal model in which an object of interest is given at unknown coordinates in a cluttered reference image and corrupted by additive noise. The reference image is used to design filters for detecting a target in scenes with a nonoverlapping background and additive noise. An optimum correlation filter with respect to peak-to-output energy for object detection is derived. The shape and appearance of the target are estimated from the reference image. Two methods to estimate the frequency response of the derived filter are used. Computer simulation results obtained with the proposed filters are presented and discussed. The performance of the filters is evaluated in terms of discrimination capability and location accuracy for different statistics of the backgrounds and noise processes present in the signal model.
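
    A generic frequency-domain correlation sketch follows. It is a plain matched filter, not the peak-to-output-energy optimal filter derived in the paper, and the scene and template are synthetic; it only illustrates how a correlation peak locates a target in a cluttered scene.

        # Generic frequency-domain correlation sketch (a plain matched filter),
        # not the peak-to-output-energy optimal filter derived in the paper.
        # It illustrates how a correlation peak locates a target in a scene.
        import numpy as np

        def correlate(scene: np.ndarray, template: np.ndarray) -> np.ndarray:
            """Circular cross-correlation of the scene with a zero-padded template."""
            padded = np.zeros_like(scene, dtype=float)
            padded[: template.shape[0], : template.shape[1]] = template - template.mean()
            S = np.fft.fft2(scene - scene.mean())
            H = np.conj(np.fft.fft2(padded))       # matched-filter frequency response
            return np.real(np.fft.ifft2(S * H))

        rng = np.random.default_rng(0)
        scene = rng.normal(size=(128, 128))
        target = rng.normal(size=(16, 16))
        scene[40:56, 70:86] += target               # embed the target at (40, 70)

        plane = correlate(scene, target)
        peak = np.unravel_index(np.argmax(plane), plane.shape)
        print("estimated target location:", peak)    # expected near (40, 70)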

  6. Interpretation of computed tomographic images

    International Nuclear Information System (INIS)

    Stickle, R.L.; Hathcock, J.T.

    1993-01-01

    This article discusses the production of optimal CT images in small animal patients as well as principles of radiographic interpretation. Technical factors affecting image quality and aiding image interpretation are included. Specific considerations for scanning various anatomic areas are given, including indications and potential pitfalls. Principles of radiographic interpretation are discussed. Selected patient images are illustrated

  7. Melanoma recognition framework based on expert definition of ABCD for dermoscopic images.

    Science.gov (United States)

    Abbas, Qaisar; Emre Celebi, M; Garcia, Irene Fondón; Ahmad, Waqar

    2013-02-01

    Melanoma recognition based on the clinical ABCD rule is widely used for the diagnosis of pigmented skin lesions in dermoscopy images. However, current computer-aided diagnostic (CAD) systems for classifying malignant versus nevus lesions with the ABCD criteria are imperfect because they rely on ineffective computerized techniques. In this study, a novel melanoma recognition system (MRS) is presented that focuses on extracting lesion features according to the ABCD criteria. The complete MRS consists of six major steps: transformation to the CIEL*a*b* color space, preprocessing to enhance the tumor region, removal of black-frame and hair artifacts, tumor-area segmentation, quantification and normalization of features using the ABCD criteria, and finally feature selection and classification. The MRS for melanoma-nevus lesions is tested on a total of 120 dermoscopic images. To test the performance of the MRS diagnostic classifier, the area under the receiver operating characteristic curve (AUC) is used. The proposed classifier achieved a sensitivity of 88.2%, a specificity of 91.3%, and an AUC of 0.880. The experimental results show that the proposed MRS can accurately distinguish between malignant and benign lesions. The MRS technique is fully automatic and can easily be integrated into an existing CAD system. To increase the classification accuracy of the MRS, the CASH pattern recognition technique, visual inspection by dermatologists, contextual information from patients, and histopathological tests could be included to investigate their impact on the system. © 2012 John Wiley & Sons A/S.
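
    A hedged sketch of the first stages of such a pipeline (conversion to CIEL*a*b* and a simple lesion segmentation), using scikit-image. Otsu thresholding on the lightness channel stands in for the paper's segmentation; hair removal, ABCD feature quantification and classification are omitted, and the file name is a placeholder.

        # Sketch of the first stages of such a pipeline: conversion to CIEL*a*b*
        # and a simple lesion segmentation by Otsu thresholding on the L* channel.
        # This is an illustration, not the full MRS system (no hair removal,
        # ABCD feature quantification, or classification).
        import numpy as np
        from skimage import io
        from skimage.color import rgb2lab
        from skimage.filters import threshold_otsu
        from skimage.morphology import remove_small_holes, remove_small_objects

        rgb = io.imread("dermoscopy.jpg")            # placeholder file name
        lab = rgb2lab(rgb)
        lightness = lab[..., 0]                      # L* channel in [0, 100]

        # Lesions are typically darker than surrounding skin, hence "< threshold".
        mask = lightness < threshold_otsu(lightness)
        mask = remove_small_holes(remove_small_objects(mask, 500), 500)

        print("lesion area fraction:", mask.mean())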

  8. Pattern recognition for cache management in distributed medical imaging environments.

    Science.gov (United States)

    Viana-Ferreira, Carlos; Ribeiro, Luís; Matos, Sérgio; Costa, Carlos

    2016-02-01

    Traditionally, medical imaging repositories have been supported by indoor infrastructures with huge operational costs. This paradigm is changing thanks to cloud outsourcing, which not only brings technological advantages but also facilitates inter-institutional workflows. However, communication latency is one of the main problems in this kind of approach, since we are dealing with tremendous volumes of data. To minimize the impact of this issue, caching and prefetching are commonly used. The effectiveness of these mechanisms is highly dependent on their capability of accurately selecting the objects that will be needed soon. This paper describes a pattern recognition system based on artificial neural networks with incremental learning to evaluate, from a set of usage patterns, which one fits the user behavior at a given time. The accuracy of the pattern recognition model in distinct training conditions was also evaluated. The solution was tested with a real-world dataset and a synthesized dataset, showing that incremental learning is advantageous. Even with very immature initial models, trained with just 1 week of data samples, the overall accuracy was very similar to the value obtained when using 75% of the long-term data for training the models. Preliminary results demonstrate an effective reduction in communication latency when using the proposed solution to feed a prefetching mechanism. The proposed approach is very interesting for cache replacement and prefetching policies due to the good results obtained from the first deployment moments.

  9. Conclusiveness of natural languages and recognition of images

    Energy Technology Data Exchange (ETDEWEB)

    Wojcik, Z.M.

    1983-01-01

    Conclusiveness is investigated using recognition processes and a one-to-one correspondence between expressions of a natural language and graphs representing events. The graphs, as conceived in psycholinguistics, are obtained as a result of perception processes. It is possible to generate and process the graphs automatically using computers and then to convert the resulting graphs into expressions of a natural language. Correctness and conclusiveness of the graphs and sentences are investigated using the fundamental condition for event-representation processes. Some consequences of conclusiveness are discussed, e.g. the undecidability of arithmetic, human brain asymmetry, and the correctness of statistical calculations and operations research. It is suggested that group theory should be imposed on mathematical models of any real system. A proof of the fundamental condition is also presented. 14 references.

  10. Automated Recognition of Geologically Significant Shapes in MER PANCAM and MI Images

    Science.gov (United States)

    Morris, Robert; Shipman, Mark; Roush, Ted L.

    2004-01-01

    Autonomous recognition of scientifically important information provides the capability of: 1) Prioritizing data return; 2) Intelligent data compression; 3) Reactive behavior onboard robotic vehicles. Such capabilities are desirable as mission scenarios include longer durations with decreasing interaction from mission control. To address such issues, we have implemented several computer algorithms, intended to autonomously recognize morphological shapes of scientific interest within a software architecture envisioned for future rover missions. Mars Exploration Rovers (MER) instrument payloads include a Panoramic Camera (PANCAM) and Microscopic Imager (MI). These provide a unique opportunity to evaluate our algorithms when applied to data obtained from the surface of Mars. Early in the mission we applied our algorithms to images available at the mission web site (http://marsrovers.jpl.nasa.gov/gallery/images.html), even though these are not at full resolution. Some algorithms would normally use ancillary information, e.g. camera pointing and position of the sun, but these data were not readily available. The initial results of applying our algorithms to the PANCAM and MI images are encouraging. The horizon is recognized in all images containing it; such information could be used to eliminate unwanted areas from the image prior to data transmission to Earth. Additionally, several rocks were identified that represent targets for the mini-thermal emission spectrometer. Our algorithms also recognize the layers, identified by mission scientists. Such information could be used to prioritize data return or in a decision-making process regarding future rover activities. The spherules seen in MI images were also autonomously recognized. Our results indicate that reliable recognition of scientifically relevant morphologies in images is feasible.

  11. Traffic Sign Recognition System based on Cambridge Correlator Image Comparator

    Directory of Open Access Journals (Sweden)

    J. Turan

    2012-06-01

    Full Text Available This paper presents basic information about the application of an Optical Correlator (OC), specifically the Cambridge Correlator, in a system for traffic sign recognition. The Traffic Sign Recognition System consists of three main blocks: Preprocessing, Optical Correlator and Traffic Sign Identification. The Region of Interest (ROI) is defined and selected in the preprocessing block and then passed to the Optical Correlator, where it is compared with a database of traffic signs. The output of the optical correlation is the correlation plane, which consists of highly localized intensities known as correlation peaks. The intensity of the peaks provides a measure of similarity, and their positions indicate how the images (traffic signs) are aligned relative to the input scene. Several experiments have been carried out with the proposed system, and the results and conclusions are discussed.

  12. Comparison of reconstruction methods for computed tomography with industrial robots using automatic object position recognition

    International Nuclear Information System (INIS)

    Klein, Philipp; Herold, Frank

    2016-01-01

    Computed Tomography (CT) is one of the main imaging techniques in the field of non-destructive testing. Recently, industrial robots have been used to manipulate the object during the whole CT scan, instead of simply placing the object on a standard turntable as was previously usual in industrial CT. Using industrial robots for object manipulation in CT systems increases the spatial freedom and therefore gives more flexibility for various applications. For example, complete CT trajectories in the sense of the Tuy-Smith theorem can be realized more easily than with conventional manipulators. These advantages are accompanied by a loss of positioning precision caused by mechanical limitations of the robotic systems. In this article we present a comparison of established reconstruction methods for CT with industrial robots using a so-called Automatic Object Position Recognition (AOPR). AOPR is a new automatic method that improves the position accuracy online by using a priori information about fixed markers in space. The markers are used to reconstruct the position of the object at each image acquisition; these more precise positions lead to a higher quality of the reconstructed volume. We study the image quality of several different reconstruction techniques: for example, we reconstruct real robot-CT datasets by filtered back-projection (FBP), the simultaneous algebraic reconstruction technique (SART) and Siemens's theoretically exact reconstruction (TXR). In each case we evaluate the datasets with and without AOPR and present the resulting image quality. Moreover, we measure the computation time of AOPR to prove that the real-time conditions are still fulfilled.
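
    A generic illustration of two of the reconstruction families named above, FBP and SART, on a software phantom with scikit-image. The robot geometry, TXR and the AOPR marker-based position refinement are not modelled; this only shows the basic reconstruction calls being compared.

        # Generic illustration of two reconstruction families mentioned above,
        # filtered back-projection (FBP) and SART, on a software phantom using
        # scikit-image. Robot geometry, TXR and the AOPR marker-based position
        # refinement are not modelled here.
        import numpy as np
        from skimage.data import shepp_logan_phantom
        from skimage.transform import iradon, iradon_sart, radon

        phantom = shepp_logan_phantom()
        theta = np.linspace(0.0, 180.0, 180, endpoint=False)
        sinogram = radon(phantom, theta=theta)

        fbp = iradon(sinogram, theta=theta)                     # filtered back-projection
        sart = iradon_sart(sinogram, theta=theta)               # first SART iteration
        sart = iradon_sart(sinogram, theta=theta, image=sart)   # second SART iteration

        for name, rec in [("FBP", fbp), ("SART", sart)]:
            err = np.sqrt(np.mean((rec - phantom) ** 2))
            print(f"{name}: RMS error vs. phantom = {err:.4f}")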

  13. Image segmentation for enhancing symbol recognition in prosthetic vision.

    Science.gov (United States)

    Horne, Lachlan; Barnes, Nick; McCarthy, Chris; He, Xuming

    2012-01-01

    Current and near-term implantable prosthetic vision systems offer the potential to restore some visual function, but suffer from poor resolution and dynamic range of induced phosphenes. This can make it difficult for users of prosthetic vision systems to identify symbolic information (such as signs) except in controlled conditions. Using image segmentation techniques from computer vision, we show it is possible to improve the clarity of such symbolic information for users of prosthetic vision implants in uncontrolled conditions. We use image segmentation to automatically divide a natural image into regions, and using a fixation point controlled by the user, select a region to phosphenize. This technique improves the apparent contrast and clarity of symbolic information over traditional phosphenization approaches.
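
    A hedged sketch of the region-selection idea: segment the image, then keep only the region containing a user-controlled fixation point. SLIC superpixels are an illustrative choice of segmentation rather than the paper's algorithm, the phosphene rendering stage is omitted, and the file name and fixation point are placeholders.

        # Sketch of the region-selection idea: segment the image, then keep only
        # the region containing a user-controlled fixation point. SLIC superpixels
        # are used here as an illustrative choice of segmentation; the paper does
        # not necessarily use this particular algorithm, and the phosphene
        # rendering stage is omitted.
        import numpy as np
        from skimage import io
        from skimage.segmentation import slic

        image = io.imread("scene.jpg")               # placeholder file name
        fixation = (120, 200)                        # assumed (row, col) fixation point

        segments = slic(image, n_segments=150, compactness=10)
        selected_label = segments[fixation]

        mask = segments == selected_label
        highlighted = image.copy()
        highlighted[~mask] = 0                       # suppress everything outside the region
        print("selected region covers", mask.mean() * 100, "% of the image")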

  14. Computational studies of protein-ligand molecular recognition

    NARCIS (Netherlands)

    Gillies, M.B.

    2001-01-01

    Structure-based drug design is made possible by our understanding of molecular recognition. The utility of this approach was apparent in the development of the clinically effective HIV-1 PR inhibitors, where crystal structures of complexes of HIV-1 protease and inhibitors gave pivotal information.

  15. Teach Your Computer to Read: Scanners and Optical Character Recognition.

    Science.gov (United States)

    Marsden, Jim

    1993-01-01

    Desktop scanners can be used with a software technology called optical character recognition (OCR) to convert the text on virtually any paper document into an electronic form. OCR offers educators new flexibility in incorporating text into tests, lesson plans, and other materials. (MLF)

  16. A Rhythm Recognition Computer Program to Advocate Interactivist Perception

    Science.gov (United States)

    Buisson, Jean-Christophe

    2004-01-01

    This paper advocates the main ideas of the interactive model of representation of Mark Bickhard and the assimilation/accommodation framework of Jean Piaget, through a rhythm recognition demonstration program. Although completely unsupervised, the program progressively learns to recognize more and more complex rhythms struck on the user's keyboard.…

  17. The Impact of Image Quality on the Performance of Face Recognition

    NARCIS (Netherlands)

    Dutta, A.; Veldhuis, Raymond N.J.; Spreeuwers, Lieuwe Jan

    The performance of a face recognition system depends on the quality of both test and reference images participating in the face comparison process. In a forensic evaluation case involving face recognition, we do not have any control over the quality of the trace (image captured by a CCTV at a crime

  18. Customized Computer Vision and Sensor System for Colony Recognition and Live Bacteria Counting in Agriculture

    Directory of Open Access Journals (Sweden)

    Gabriel M. ALVES

    2016-06-01

    Full Text Available This paper presents an arrangement based on a dedicated computer and a charge-coupled device (CCD) sensor system to intelligently allow the counting and recognition of colony formation. Microbes in agricultural environments are important catalysts of global carbon and nitrogen cycles, including the production and consumption of greenhouse gases in soil. Some microbes produce greenhouse gases such as carbon dioxide and nitrous oxide while decomposing organic matter in soil. Others consume methane from the atmosphere, helping to mitigate climate change. The magnitude of each of these processes is influenced by human activities and impacts the warming potential of Earth's atmosphere. In this context, bacterial colony counting is important and requires sophisticated analysis methods. The method implemented in this study uses digital image processing techniques, including the Hough Transform for circular objects. The visual environment Borland Builder C++ was used for development, and a model for decision making was incorporated to aggregate intelligence. For calibration of the method, a prepared illuminated chamber was used to enable analyses of the bacteria Escherichia coli and Acidithiobacillus ferrooxidans. For validation, a set of comparisons was established between this smart method and expert analyses. The results show the potential of this method for laboratory applications that involve the quantification and pattern recognition of bacterial colonies in solid culture environments.
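
    A compact sketch of circular colony detection with the Hough Transform, using OpenCV rather than the Borland C++ environment of the paper. Parameter values and the file name are illustrative assumptions, and the decision-making model is not reproduced.

        # Compact sketch of circular colony detection with the Hough Transform
        # (OpenCV). Parameter values and the file name are illustrative
        # assumptions; the paper's decision-making model is not reproduced.
        import cv2

        gray = cv2.imread("petri_dish.png", cv2.IMREAD_GRAYSCALE)
        gray = cv2.medianBlur(gray, 5)               # suppress sensor noise first

        circles = cv2.HoughCircles(
            gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=15,
            param1=100, param2=25, minRadius=3, maxRadius=40)

        count = 0 if circles is None else circles.shape[1]
        print(f"detected colonies: {count}")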

  19. Poka Yoke system based on image analysis and object recognition

    Science.gov (United States)

    Belu, N.; Ionescu, L. M.; Misztal, A.; Mazăre, A.

    2015-11-01

    Poka Yoke is a quality management method aimed at preventing faults from arising during production processes; it deals with "fail-safing" or "mistake-proofing". The Poka-Yoke concept was created and developed by Shigeo Shingo for the Toyota Production System. Poka Yoke is used in many fields, especially in monitoring production processes. In many cases, identifying faults in a production process involves a higher cost than the cost of disposal. Usually, poka yoke solutions are based on multiple sensors that identify certain nonconformities, which means the presence of additional equipment (mechanical, electronic) on the production line. As a consequence, and because the method itself is invasive and affects the production process, the cost of diagnostics increases, and the machines by means of which a Poka Yoke system can be implemented become bulkier and more sophisticated. In this paper we propose a Poka Yoke solution based on image analysis and fault identification. The solution consists of a module for image acquisition, mid-level processing, and an object recognition module using an associative memory (a Hopfield-type network). All are integrated into an embedded system with an AD (Analog to Digital) converter and a Zynq 7000 (22 nm technology).
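
    A small NumPy sketch of a Hopfield-type associative memory (Hebbian storage plus synchronous sign updates), the kind of network named above as the recognition module. It is a software illustration only, not the embedded Zynq/FPGA implementation described in the paper, and the patterns are random placeholders.

        # Small NumPy sketch of a Hopfield-type associative memory (Hebbian
        # storage and synchronous sign updates), the kind of network named as
        # the recognition module above. It is not the embedded Zynq/FPGA
        # implementation described in the paper.
        import numpy as np

        def train(patterns: np.ndarray) -> np.ndarray:
            """Hebbian weight matrix for bipolar (+1/-1) patterns, zero diagonal."""
            n = patterns.shape[1]
            w = patterns.T @ patterns / n
            np.fill_diagonal(w, 0.0)
            return w

        def recall(w: np.ndarray, probe: np.ndarray, steps: int = 10) -> np.ndarray:
            state = probe.copy()
            for _ in range(steps):
                state = np.sign(w @ state)
                state[state == 0] = 1
            return state

        rng = np.random.default_rng(1)
        stored = np.sign(rng.normal(size=(3, 64)))   # three reference part patterns
        w = train(stored)

        noisy = stored[0].copy()
        flip = rng.choice(64, size=8, replace=False)
        noisy[flip] *= -1                            # corrupt 8 of 64 bits
        restored = recall(w, noisy)
        print("recovered stored pattern 0:", np.array_equal(restored, stored[0]))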

  20. Person Recognition System Based on a Combination of Body Images from Visible Light and Thermal Cameras.

    Science.gov (United States)

    Nguyen, Dat Tien; Hong, Hyung Gil; Kim, Ki Wan; Park, Kang Ryoung

    2017-03-16

    The human body contains identity information that can be used for the person recognition (verification/recognition) problem. In this paper, we propose a person recognition method using the information extracted from body images. Our research is novel in the following three ways compared to previous studies. First, we use images of the human body for recognizing individuals. To overcome the limitations of previous studies on body-based person recognition that use only visible light images for recognition, we use human body images captured by two different kinds of camera, a visible light camera and a thermal camera. The use of two different kinds of body image helps us to reduce the effects of noise, background, and variation in the appearance of a human body. Second, we apply a state-of-the-art method, the convolutional neural network (CNN), among various available methods, for image feature extraction in order to overcome the limitations of traditional hand-designed image feature extraction methods. Finally, with the image features extracted from body images, the recognition task is performed by measuring the distance between the input and enrolled samples. The experimental results show that the proposed method is efficient for enhancing recognition accuracy compared to systems that use only visible light or thermal images of the human body.
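
    A sketch of the final matching step only: the nearest enrolled identity is chosen by Euclidean distance between feature vectors. The CNN feature extraction and the visible/thermal combination are abstracted away; the vectors below are random placeholders standing in for CNN embeddings.

        # Sketch of the final matching step: nearest enrolled identity by
        # Euclidean distance between feature vectors. The CNN feature extraction
        # and the visible/thermal combination are abstracted away; the vectors
        # below are random placeholders standing in for CNN embeddings.
        import numpy as np

        rng = np.random.default_rng(0)
        enrolled = {f"person_{i}": rng.normal(size=256) for i in range(5)}

        probe = enrolled["person_3"] + 0.05 * rng.normal(size=256)  # noisy input sample

        def identify(probe_vec, gallery):
            distances = {pid: np.linalg.norm(probe_vec - vec) for pid, vec in gallery.items()}
            return min(distances, key=distances.get), distances

        best_id, dists = identify(probe, enrolled)
        print("identified as:", best_id)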

  1. Parallel processing and VLSI architectures for syntactic pattern recognition and image analysis

    Energy Technology Data Exchange (ETDEWEB)

    Chiang, Y.T.P.

    1982-01-01

    The computation speed of syntactic pattern recognition and image analysis algorithms has always been regarded as slow. Several parallel processing techniques are proposed, especially for the syntactic analyzer, to speed up the computation. The distance calculations between strings and trees have been implemented on three different parallel processing systems, namely the SIMD system, the dedicated SIMD system and the MIMD system. The results show that distance calculation can be sped up when implemented on a parallel computer. Earley's algorithm has wide applications in many fields. A parallel Earley's algorithm is proposed; the recognition algorithm is implemented on a VLSI architecture, and the parse extraction algorithm and the complete algorithm on a processor array. This parallel execution takes only linear time. Simulation results prove the correctness of this design. The same Earley's algorithm has been extended to process erroneous input data, and this error-correcting syntactic recognizer has also been implemented on a VLSI system. The simulation results not only prove the correctness of this design but also indicate that the recognizer can be used to classify patterns.

  2. Orange Recognition on Tree Using Image Processing Method Based on Lighting Density Pattern

    Directory of Open Access Journals (Sweden)

    H. R Ahmadi

    2015-03-01

    Full Text Available Within the last few years, a new trend has emerged toward the robotic harvesting of oranges and some other citrus fruits. The first step in robotic harvesting is accurate recognition and positioning of the fruit. Detection through image processing with color cameras and a computer is currently the most common method. Obviously, a harvesting robot operates under natural conditions, and detection must therefore work in various lighting conditions and environments. In this study, we attempted to provide a suitable algorithm for recognizing orange fruits on the tree. To evaluate the proposed algorithm, 500 images were taken under different conditions of canopy, lighting and distance to the tree. The algorithm includes sub-routines for optimization, segmentation, size filtering, separation of fruits based on the lighting density method, and coordinate determination. An MLP neural network (with 3 hidden layers) was used for segmentation and achieved an accuracy of 88.2% in correct detection. Since a high percentage of the oranges in the images appear in clusters, any algorithm aiming to detect oranges on trees successfully should first offer a solution for separating them. A new method based on light and shade density was applied and evaluated in this research. Finally, the accuracies for differentiation and recognition were 89.5% and 88.2%, respectively.

  3. POAC (programmable optical array computer) applied for target recognition and tracking

    Science.gov (United States)

    Tokes, Szabolcs; Orzo, Laszlo; Ayoub, Ahmed E.; Roska, Tamas

    2004-12-01

    A portable programmable opto-electronic analogic CNN computer (Laptop-POAC) has been built and used to recognize and track targets. Its kernel processor is a novel type of high-performance optical correlator based on the use of bacteriorhodopsin (BR) as a dynamic holographic material. This optical CNN implementation combines the optical computer's high speed, high parallelism (~10^6 channels) and large applicable template sizes with the flexible programmability of CNN devices. A unique feature of this optical array computer is that programming templates can be applied either incoherently by a 2D acousto-optical deflector (templates up to 64x64 pixels) or coherently by an LCD-SLM (templates up to 128x128 pixels). It can therefore work in both a totally coherent and a partially incoherent way, exploiting the advantages of the mode of operation in use. Input images are fed in by a second LCD-SLM with 600x800 pixel resolution. An evaluation of the trade-off between speed and resolution is given. Novel and effective target recognition and multiple-target-tracking algorithms have been developed for the POAC. Tracking experiments are demonstrated, and collision avoidance experiments are being conducted. In the present model a CCD camera records the correlograms; later, a CNN-UM chip and a high-speed CMOS camera will be applied for post-processing.

  4. Recognition

    DEFF Research Database (Denmark)

    Gimmler, Antje

    2017-01-01

    In this article, I shall examine the cognitive, heuristic and theoretical functions of the concept of recognition. To evaluate both the explanatory power and the limitations of a sociological concept, the theory construction must be analysed and its actual productivity for sociological theory must be evaluated. In the first section, I will introduce the concept of recognition as a travelling concept playing a role both on the intellectual stage and in real life. In the second section, I will concentrate on the presentation of Honneth’s theory of recognition, emphasizing the construction of the concept and its explanatory power. Finally, I will discuss Honneth’s concept in relation to the critique that has been raised, addressing the debate between Honneth and Fraser. In a short conclusion, I will return to the question of the explanatory power of the concept of recognition.

  5. Realization for Chinese vehicle license plate recognition based on computer vision and fuzzy neural network

    Science.gov (United States)

    Yang, Yun; Zhang, Weigang; Guo, Pan

    2010-07-01

    The approach proposed in this paper is divided into three steps, namely plate location, character segmentation and character recognition. The location step first uses two video captures to obtain high-quality images and estimates the size of the vehicle plate in these images via a parallel binocular stereo vision algorithm. The segmentation step then extracts the edges of the vehicle plate using a second-generation non-orthogonal Haar wavelet transform and locates the plate according to the estimate from the first step. Finally, the recognition step is realized with a Radial Basis Function Fuzzy Neural Network. Experiments have been conducted on real images; the results show that this method can decrease the error rate of Chinese license plate recognition.

  6. Robust Face Recognition using Voting by Bit-plane Images based on Sparse Representation

    Directory of Open Access Journals (Sweden)

    Dongmei Wei

    2015-08-01

    Full Text Available Plurality voting is widely employed as a combination strategy in pattern recognition. Sparse representation based classification, a recently proposed technique, codes the query image as a sparse linear combination of all training images and classifies the query sample class by class by exploiting the class representation error. In this paper, an improved face recognition approach using sparse representation and plurality voting based on binary bit-plane images is proposed. After equalization, gray images are decomposed into eight bit-plane images, and sparse representation based classification is applied separately to the five bit-plane images that carry the most discriminative information. Finally, the true identity of the query image is decided by a vote over the five identities obtained. Experimental results show that the proposed approach is preferable in both recognition accuracy and recognition speed.
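
    A hedged sketch of the bit-plane decomposition step described above: an 8-bit grey image is split into eight binary planes and the most significant five are kept. The sparse-representation classifier and the plurality vote across planes are omitted, and the input image is a synthetic placeholder.

        # Sketch of the bit-plane decomposition step described above: an 8-bit
        # grey image is split into eight binary planes and the five most
        # significant ones are kept. The sparse-representation classifier and
        # the plurality voting across planes are omitted here.
        import numpy as np

        def bit_planes(gray: np.ndarray) -> list:
            """Return the 8 binary bit-plane images of an 8-bit grey image."""
            return [((gray >> b) & 1).astype(np.uint8) for b in range(8)]

        rng = np.random.default_rng(0)
        gray = rng.integers(0, 256, size=(112, 92), dtype=np.uint8)   # placeholder face

        planes = bit_planes(gray)
        informative = planes[3:]       # the five most significant planes (bits 3..7)
        print("kept planes:", len(informative), "of", len(planes))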

  7. FPGA IMPLEMENTATION OF ADAPTIVE INTEGRATED SPIKING NEURAL NETWORK FOR EFFICIENT IMAGE RECOGNITION SYSTEM

    Directory of Open Access Journals (Sweden)

    T. Pasupathi

    2014-05-01

    Full Text Available Image recognition is a technology that can be used in various applications such as medical image recognition systems, security, defense video tracking, and factory automation. In this paper we present a novel pipelined architecture of an adaptive integrated artificial neural network (ANN) for image recognition. In our proposed work we combine the spiking-neuron concept with an ANN to achieve an efficient architecture for image recognition. The set of training images is learned by the ANN and the target outputs are identified. Real-time videos are captured and converted into frames for testing, and the images are recognized. The machine can operate at up to 40 frames/s using images acquired from the camera. The system has been implemented on an XC3S400 SPARTAN-3 Field Programmable Gate Array.

  8. USE OF IMAGE ENHANCEMENT TECHNIQUES FOR IMPROVING REAL TIME FACE RECOGNITION EFFICIENCY ON WEARABLE GADGETS

    Directory of Open Access Journals (Sweden)

    MUHAMMAD EHSAN RANA

    2017-01-01

    Full Text Available The objective of this research is to study the effects of image enhancement techniques on the face recognition performance of wearable gadgets, with an emphasis on recognition rate. In this research, a number of image enhancement techniques are selected, including brightness normalization, contrast normalization, sharpening, smoothing, and various combinations of these. Test images are then obtained from the AT&T database and Yale Face Database B to investigate the effect of these image enhancement techniques under various conditions, such as changes of illumination and of face orientation and expression. The evaluation of the data collected during this research revealed that the effect of image pre-processing techniques on face recognition depends strongly on the illumination conditions under which the images are taken. The benefit of applying image enhancement techniques to face images is best seen when there is high variation of illumination among images. Results also indicate that the highest recognition rate is achieved when images are taken under low-light conditions and image contrast is enhanced using the histogram equalization technique, after which image noise is reduced using a median smoothing filter. Additionally, the combination of contrast normalization and a mean smoothing filter shows good results in all scenarios. Results obtained from the test cases illustrate up to a 75% improvement in face recognition rate when image enhancement is applied to images in the given scenarios.
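
    The combination reported above to work best under low light, histogram equalization followed by median smoothing, expressed as a short OpenCV sketch; the input file name is a placeholder.

        # The combination reported to work best under low light: histogram
        # equalization followed by median smoothing, expressed with OpenCV.
        # The input file name is a placeholder.
        import cv2

        gray = cv2.imread("face.png", cv2.IMREAD_GRAYSCALE)
        equalized = cv2.equalizeHist(gray)           # contrast normalization
        smoothed = cv2.medianBlur(equalized, 3)      # noise reduction before recognition
        cv2.imwrite("face_preprocessed.png", smoothed)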

  9. Computational ghost imaging using deep learning

    Science.gov (United States)

    Shimobaba, Tomoyoshi; Endo, Yutaka; Nishitsuji, Takashi; Takahashi, Takayuki; Nagahama, Yuki; Hasegawa, Satoki; Sano, Marie; Hirayama, Ryuji; Kakue, Takashi; Shiraki, Atsushi; Ito, Tomoyoshi

    2018-04-01

    Computational ghost imaging (CGI) is a single-pixel imaging technique that exploits the correlation between known random patterns and the measured intensity of light transmitted (or reflected) by an object. Although CGI can obtain two- or three-dimensional images with a single or a few bucket detectors, the quality of the reconstructed images is reduced by noise due to the reconstruction of images from random patterns. In this study, we improve the quality of CGI images using deep learning. A deep neural network is used to automatically learn the features of noise-contaminated CGI images. After training, the network is able to predict low-noise images from new noise-contaminated CGI images.

  10. Computational acceleration for MR image reconstruction in partially parallel imaging.

    Science.gov (United States)

    Ye, Xiaojing; Chen, Yunmei; Huang, Feng

    2011-05-01

    In this paper, we present a fast numerical algorithm for solving total variation and l(1) (TVL1) based image reconstruction with application in partially parallel magnetic resonance imaging. Our algorithm uses a variable splitting method to reduce computational cost. Moreover, the Barzilai-Borwein step size selection method is adopted in our algorithm for much faster convergence. Experimental results on clinical partially parallel imaging data demonstrate that the proposed algorithm requires much fewer iterations and/or less computational cost than recently developed operator splitting and Bregman operator splitting methods, which can deal with a general sensing matrix in the reconstruction framework, to obtain similar or even better quality of reconstructed images.
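
    For orientation, a generic form of the TVL1 reconstruction objective and the standard Barzilai-Borwein step size are given below; the paper's exact formulation, weights and notation may differ.

        % Generic TVL1 reconstruction objective and the standard Barzilai-Borwein
        % step size; the paper's exact formulation and notation may differ.
        \[
          \min_{u}\; \alpha\,\mathrm{TV}(u) + \beta\,\|\Psi u\|_{1}
                   + \tfrac{1}{2}\,\|A u - f\|_{2}^{2},
        \]
        \[
          \text{BB step:}\quad
          \alpha_k = \frac{s_{k-1}^{\top} s_{k-1}}{s_{k-1}^{\top} y_{k-1}},
          \qquad s_{k-1} = u_k - u_{k-1},\;\;
          y_{k-1} = \nabla g(u_k) - \nabla g(u_{k-1}),
        \]
        % where A is the (partially parallel) sensing operator, f the measured
        % data, and g the smooth data-fidelity term.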

  11. Investigating an Innovative Computer Application to Improve L2 Word Recognition from Speech

    Science.gov (United States)

    Matthews, Joshua; O'Toole, John Mitchell

    2015-01-01

    The ability to recognise words from the aural modality is a critical aspect of successful second language (L2) listening comprehension. However, little research has been reported on computer-mediated development of L2 word recognition from speech in L2 learning contexts. This report describes the development of an innovative computer application…

  12. Why the long face? The importance of vertical image structure for biological "barcodes" underlying face recognition.

    Science.gov (United States)

    Spence, Morgan L; Storrs, Katherine R; Arnold, Derek H

    2014-07-29

    Humans are experts at face recognition. The mechanisms underlying this complex capacity are not fully understood. Recently, it has been proposed that face recognition is supported by a coarse-scale analysis of visual information contained in horizontal bands of contrast distributed along the vertical image axis: a biological facial "barcode" (Dakin & Watt, 2009). A critical prediction of the facial barcode hypothesis is that the distribution of image contrast along the vertical axis will be more important for face recognition than image distributions along the horizontal axis. Using a novel paradigm involving dynamic image distortions, a series of experiments are presented examining famous face recognition impairments from selectively disrupting image distributions along the vertical or horizontal image axes. Results show that disrupting the image distribution along the vertical image axis is more disruptive for recognition than matched distortions along the horizontal axis. Consistent with the facial barcode hypothesis, these results suggest that human face recognition relies disproportionately on appropriately scaled distributions of image contrast along the vertical image axis. © 2014 ARVO.

  13. Gesture Recognition by Computer Vision : An Integral Approach

    NARCIS (Netherlands)

    Lichtenauer, J.F.

    2009-01-01

    The fundamental objective of this Ph.D. thesis is to gain more insight into what is involved in the practical application of a computer vision system, when the conditions of use cannot be controlled completely. The basic assumption is that research on isolated aspects of computer vision often leads

  14. Biomedical Imaging and Computational Modeling in Biomechanics

    CERN Document Server

    Iacoviello, Daniela

    2013-01-01

    This book collects the state-of-the-art and new trends in image analysis and biomechanics. It covers a wide field of scientific and cultural topics, ranging from remodeling of bone tissue under mechanical stimulus up to optimizing the performance of sports equipment, through patient-specific modeling in orthopedics, microtomography and its application in oral and implant research, computational modeling in the field of hip prostheses, image-based model development and analysis of the human knee joint, kinematics of the hip joint, micro-scale analysis of compositional and mechanical properties of dentin, automated techniques for cervical cell image analysis, and biomedical imaging and computational modeling in cardiovascular disease. The book will be of interest to researchers, Ph.D. students, and graduate students with multidisciplinary interests related to image analysis and understanding, medical imaging, biomechanics, simulation and modeling, and experimental analysis.

  15. Computational biomechanics for medicine imaging, modeling and computing

    CERN Document Server

    Doyle, Barry; Wittek, Adam; Nielsen, Poul; Miller, Karol

    2016-01-01

    The Computational Biomechanics for Medicine titles provide an opportunity for specialists in computational biomechanics to present their latest methodologies and advancements. This volume comprises eighteen of the newest approaches and applications of computational biomechanics, from researchers in Australia, New Zealand, USA, UK, Switzerland, Scotland, France and Russia. Some of the interesting topics discussed are: tailored computational models; traumatic brain injury; soft-tissue mechanics; medical image analysis; and clinically-relevant simulations. One of the greatest challenges facing the computational engineering community is to extend the success of computational mechanics to fields outside traditional engineering, in particular to biology, the biomedical sciences, and medicine. We hope the research presented within this book series will contribute to overcoming this grand challenge.

  16. Exploring the feasibility of iris recognition for visible spectrum iris images obtained using smartphone camera

    Science.gov (United States)

    Trokielewicz, Mateusz; Bartuzi, Ewelina; Michowska, Katarzyna; Andrzejewska, Antonina; Selegrat, Monika

    2015-09-01

    In the age of a modern, hyperconnected society that increasingly relies on mobile devices and solutions, implementing a reliable and accurate biometric system employing iris recognition presents new challenges. Typical biometric systems employing iris analysis require expensive and complicated hardware. We therefore explore an alternative way using visible spectrum iris imaging. This paper aims at answering several questions related to applying iris biometrics to images obtained in the visible spectrum using a smartphone camera. Can irides be successfully and effortlessly imaged using a smartphone's built-in camera? Can existing iris recognition methods perform well when presented with such images? The main advantage of using near-infrared (NIR) illumination in dedicated iris recognition cameras is good performance almost independent of the iris color and pigmentation. Are the images obtained from a smartphone's camera of sufficient quality even for dark irides? We present experiments incorporating simple image preprocessing to find the best visibility of iris texture, followed by a performance study to assess whether iris recognition methods originally aimed at NIR iris images perform well with visible light images. To the best of our knowledge, this is the first comprehensive analysis of iris recognition performance using a database of high-quality images collected in visible light using the smartphone's flashlight, together with the application of commercial off-the-shelf (COTS) iris recognition methods.

  17. Structural characterisation of semiconductors by computer methods of image analysis

    Science.gov (United States)

    Hernández-Fenollosa, M. A.; Cuesta-Frau, D.; Damonte, L. C.; Satorre Aznar, M. A.

    2005-08-01

    Analysis of microscopic images for automatic particle detection and extraction is an area of growing interest in many scientific fields such as biology, medicine and physics. In this paper we present a method to analyze microscopic images of semiconductors in order to obtain, in a non-supervised way, the main characteristics of the sample under test: growing regions, grain sizes, dendrite morphology and homogenization. In particular, nanocrystalline semiconductors with dimensions of less than 100 nm represent a relatively new class of materials. Their short-range structures are essentially the same as those of bulk semiconductors, but their optical and electronic properties are dramatically different. The images are obtained by scanning electron microscopy (SEM) and processed by the computer methods presented. Traditionally these tasks have been performed manually, which is time-consuming and subjective, in contrast to our computer analysis. The acquired images are first pre-processed in order to improve the signal-to-noise ratio and therefore the detection rate. Images are filtered by a weighted-median filter, and contrast is enhanced using histogram equalization. Then, images are thresholded using a binarization algorithm so that growing regions are segmented. This segmentation is based on the different grey levels due to the different sample heights of the growing areas. Next, the resulting image is further processed to eliminate the holes and spots left by the previous stage, and this image is used to compute the percentage of such growing areas. Finally, using pattern recognition techniques (contour following and raster-to-vector transformation), single crystals are extracted to obtain their characteristics.
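
    A hedged sketch of the preprocessing chain described above (median filtering, histogram equalization, binarization, removal of small holes and spots, and computation of the growing-area percentage), using scikit-image and SciPy. A plain median filter is substituted for the paper's weighted-median filter, and the file name and sizes are placeholders.

        # Sketch of the preprocessing chain described above: median filtering,
        # histogram equalization, binarization, removal of small holes/spots and
        # computation of the growing-area percentage. A plain median filter is
        # substituted for the weighted-median filter of the paper, and the file
        # name is a placeholder.
        import numpy as np
        from scipy.ndimage import median_filter
        from skimage import io, img_as_float
        from skimage.exposure import equalize_hist
        from skimage.filters import threshold_otsu
        from skimage.morphology import remove_small_holes, remove_small_objects

        sem = img_as_float(io.imread("sem_sample.png", as_gray=True))

        filtered = median_filter(sem, size=3)        # noise suppression
        enhanced = equalize_hist(filtered)           # contrast enhancement

        growing = enhanced > threshold_otsu(enhanced)          # brighter = higher regions
        growing = remove_small_holes(remove_small_objects(growing, 64), 64)

        print(f"growing regions cover {100 * growing.mean():.1f}% of the image")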

  18. RRAM-based parallel computing architecture using k-nearest neighbor classification for pattern recognition.

    Science.gov (United States)

    Jiang, Yuning; Kang, Jinfeng; Wang, Xinan

    2017-03-24

    Resistive switching memory (RRAM) is considered as one of the most promising devices for parallel computing solutions that may overcome the von Neumann bottleneck of today's electronic systems. However, the existing RRAM-based parallel computing architectures suffer from practical problems such as device variations and extra computing circuits. In this work, we propose a novel parallel computing architecture for pattern recognition by implementing k-nearest neighbor classification on metal-oxide RRAM crossbar arrays. Metal-oxide RRAM with gradual RESET behaviors is chosen as both the storage and computing components. The proposed architecture is tested by the MNIST database. High speed (~100 ns per example) and high recognition accuracy (97.05%) are obtained. The influence of several non-ideal device properties is also discussed, and it turns out that the proposed architecture shows great tolerance to device variations. This work paves a new way to achieve RRAM-based parallel computing hardware systems with high performance.
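
    A plain software sketch of the k-nearest-neighbor rule that the RRAM crossbar implements in hardware; the single vectorized distance computation loosely mimics the parallel in-memory evaluation. Device physics and variations are not simulated, and the data are synthetic stand-ins for MNIST.

        # Plain software sketch of the k-nearest-neighbor rule that the RRAM
        # crossbar implements in hardware; the single vectorized distance
        # computation loosely mimics the parallel in-memory evaluation. Device
        # physics and variations are not simulated, and the data are synthetic
        # stand-ins for MNIST.
        import numpy as np

        rng = np.random.default_rng(0)
        train_x = rng.normal(size=(1000, 784))       # stored patterns (one per row)
        train_y = rng.integers(0, 10, size=1000)     # their class labels
        query = rng.normal(size=784)

        k = 5
        dists = np.linalg.norm(train_x - query, axis=1)      # all distances at once
        nearest = train_y[np.argsort(dists)[:k]]
        predicted = np.bincount(nearest, minlength=10).argmax()
        print("predicted class:", predicted)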

  19. Algorithms for image processing and computer vision

    CERN Document Server

    Parker, J R

    2010-01-01

    A cookbook of algorithms for common image processing applications Thanks to advances in computer hardware and software, algorithms have been developed that support sophisticated image processing without requiring an extensive background in mathematics. This bestselling book has been fully updated with the newest of these, including 2D vision methods in content-based searches and the use of graphics cards as image processing computational aids. It's an ideal reference for software engineers and developers, advanced programmers, graphics programmers, scientists, and other specialists wh

  20. Enhanced Gender Recognition System Using an Improved Histogram of Oriented Gradient (HOG) Feature from Quality Assessment of Visible Light and Thermal Images of the Human Body.

    Science.gov (United States)

    Nguyen, Dat Tien; Park, Kang Ryoung

    2016-07-21

    With higher demand from users, surveillance systems are currently being designed to provide more information about the observed scene, such as the appearance of objects, types of objects, and other information extracted from detected objects. Although the recognition of gender of an observed human can be easily performed using human perception, it remains a difficult task when using computer vision system images. In this paper, we propose a new human gender recognition method that can be applied to surveillance systems based on quality assessment of human areas in visible light and thermal camera images. Our research is novel in the following two ways: First, we utilize the combination of visible light and thermal images of the human body for a recognition task based on quality assessment. We propose a quality measurement method to assess the quality of image regions so as to remove the effects of background regions in the recognition system. Second, by combining the features extracted using the histogram of oriented gradient (HOG) method and the measured qualities of image regions, we form a new image features, called the weighted HOG (wHOG), which is used for efficient gender recognition. Experimental results show that our method produces more accurate estimation results than the state-of-the-art recognition method that uses human body images.
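
    A baseline (unweighted) HOG extraction with scikit-image, the starting point of the wHOG feature described above. The quality assessment, region weighting and visible/thermal combination are the paper's contribution and are not reproduced; the file name and window size are assumptions.

        # Baseline (unweighted) HOG extraction with scikit-image, the starting
        # point of the wHOG feature described above. The quality assessment,
        # region weighting and visible/thermal combination are the paper's
        # contribution and are not reproduced; the file name is a placeholder.
        from skimage import io
        from skimage.feature import hog
        from skimage.transform import resize

        body = io.imread("body_region.png", as_gray=True)
        body = resize(body, (128, 64))               # canonical pedestrian window size

        features = hog(body, orientations=9, pixels_per_cell=(8, 8),
                       cells_per_block=(2, 2), block_norm="L2-Hys",
                       feature_vector=True)
        print("HOG descriptor length:", features.shape[0])   # 3780 for a 128x64 window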

  1. Enhanced Gender Recognition System Using an Improved Histogram of Oriented Gradient (HOG) Feature from Quality Assessment of Visible Light and Thermal Images of the Human Body

    Directory of Open Access Journals (Sweden)

    Dat Tien Nguyen

    2016-07-01

    Full Text Available With higher demand from users, surveillance systems are currently being designed to provide more information about the observed scene, such as the appearance of objects, types of objects, and other information extracted from detected objects. Although the recognition of gender of an observed human can be easily performed using human perception, it remains a difficult task when using computer vision system images. In this paper, we propose a new human gender recognition method that can be applied to surveillance systems based on quality assessment of human areas in visible light and thermal camera images. Our research is novel in the following two ways: First, we utilize the combination of visible light and thermal images of the human body for a recognition task based on quality assessment. We propose a quality measurement method to assess the quality of image regions so as to remove the effects of background regions in the recognition system. Second, by combining the features extracted using the histogram of oriented gradient (HOG) method and the measured qualities of image regions, we form a new image feature, called the weighted HOG (wHOG), which is used for efficient gender recognition. Experimental results show that our method produces more accurate estimation results than the state-of-the-art recognition method that uses human body images.

  2. Study of check image using computed radiography

    International Nuclear Information System (INIS)

    Sato, Hiroshi

    2002-01-01

    In linacography there are two image forming methods, the check image and the portal image. An image forming method for the check image using computed radiography (CR) has been established; an image forming method for the portal image using CR, however, has not yet been established. Usually the electronic portal imaging device (EPID) is mainly used just before the start of radiotherapy. The usefulness of a CR-based portal image forming method in place of the EPID is that it makes it possible to confirm the precision with which the specific position of the irradiated part and the irradiation method for the human organs are determined. A long-standing technical problem is that linac graphy (LG) images have low resolving power. In order to improve the resolving power of LG images, CR imaging technologies were introduced into the check image forming method. A heavy metallic sheet (HMS) is used on the front side of the CR imaging plate cassette, and a high-contact sponge is used on the back side of the cassette. The improved contact between the HMS and the imaging plate (IP) provided by the high-contact sponge contributed to improving the resolving power of the check images. Many papers related to this topic have been reported. The ST-III imaging plate should be used to maintain high sensitivity in the check film image forming method. The same CR image forming method established for the check image was introduced into the portal image forming method in order to improve the resolving power. However, high-resolution image formation could not be achieved in the portal images because of the combination of ST-III and the radiotherapy dose. After several trials it was recognized that the HR-V imaging plate for mammography is the most useful for maintaining high resolving power in the portal images. It is also possible to modify the image quality by changing the GS parameter, one of the image processing parameters in CR. Furthermore, in case

  3. A CNN Computing Algorithm for Image Correlation

    Directory of Open Access Journals (Sweden)

    ŢEPELEA Laviniu

    2010-10-01

    Full Text Available To compute the correlation coefficients between two images, this paper proposes an algorithm based on the use of cellular neural networks (CNNs), in which most operations (calculations) are achieved by parallel processing. Thus, on the one hand, we can reduce computing time; on the other hand, the computing time will not increase proportionally with the size of the template images. By integrating the CNN algorithm on an emulated digital CNN-Universal Machine implemented on an FPGA (Field Programmable Gate Array), it will be possible to perform some tasks in real time, for example in a system developed to assist people with visual impairments or in a medical diagnosis assistance system for the processing and analysis of computed tomography images.
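
    The image correlation coefficient itself, in plain NumPy, as a serial reference computation against which a parallel CNN/FPGA implementation such as the one described would be compared; the arrays are synthetic placeholders.

        # The image correlation coefficient in plain NumPy, as a serial reference
        # computation against which a parallel CNN/FPGA implementation such as
        # the one described would be compared. The arrays are synthetic.
        import numpy as np

        def correlation_coefficient(a: np.ndarray, b: np.ndarray) -> float:
            a = a.astype(float).ravel() - a.mean()
            b = b.astype(float).ravel() - b.mean()
            return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

        rng = np.random.default_rng(0)
        template = rng.normal(size=(32, 32))
        noisy_copy = template + 0.1 * rng.normal(size=(32, 32))

        print("r =", round(correlation_coefficient(template, noisy_copy), 3))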

  4. Computers in Public Schools: Changing the Image with Image Processing.

    Science.gov (United States)

    Raphael, Jacqueline; Greenberg, Richard

    1995-01-01

    The kinds of educational technologies selected can make the difference between uninspired, rote computer use and challenging learning experiences. University of Arizona's Image Processing for Teaching Project has worked with over 1,000 teachers to develop image-processing techniques that provide students with exciting, open-ended opportunities for…

  5. Computational efficiency improvements for image colorization

    Science.gov (United States)

    Yu, Chao; Sharma, Gaurav; Aly, Hussein

    2013-03-01

    We propose an efficient algorithm for colorization of greyscale images. As in prior work, colorization is posed as an optimization problem: a user specifies the color for a few scribbles drawn on the greyscale image and the color image is obtained by propagating color information from the scribbles to surrounding regions, while maximizing the local smoothness of colors. In this formulation, colorization is obtained by solving a large sparse linear system, which normally requires substantial computation and memory resources. Our algorithm improves the computational performance through three innovations over prior colorization implementations. First, the linear system is solved iteratively without explicitly constructing the sparse matrix, which significantly reduces the required memory. Second, we formulate each iteration in terms of integral images obtained by dynamic programming, reducing repetitive computation. Third, we use a coarse-to-fine framework, where a lower resolution subsampled image is first colorized and this low resolution color image is upsampled to initialize the colorization process for the fine level. The improvements we develop provide significant speedup and memory savings compared to the conventional approach of solving the linear system directly using off-the-shelf sparse solvers, and allow us to colorize images with typical sizes encountered in realistic applications on typical commodity computing platforms.
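
    The first innovation, solving the linear system iteratively without ever forming the sparse matrix, can be sketched with a matrix-free conjugate-gradient solve. The version below assumes uniform 4-neighbour smoothness weights and a simple scribble-penalty formulation; the paper's intensity-adaptive weights and integral-image acceleration are not reproduced.

```python
# Matrix-free sketch of scribble-based colorization: solve (lam*M + L) u = lam*M*s
# with conjugate gradients, never building the sparse matrix explicitly.
# Uniform 4-neighbour weights and the penalty value are assumptions for illustration.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

H, W = 64, 64
mask = np.zeros((H, W)); mask[8, :] = 1; mask[-8, :] = 1        # two scribble rows
scribble = np.zeros((H, W)); scribble[8, :] = 0.9; scribble[-8, :] = 0.1
lam = 100.0                                                     # scribble penalty weight

def laplacian(u):
    """4-neighbour graph Laplacian with replicated borders."""
    p = np.pad(u, 1, mode="edge")
    return 4 * u - (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:])

def matvec(x):
    u = x.reshape(H, W)
    return (lam * mask * u + laplacian(u)).ravel()

A = LinearOperator((H * W, H * W), matvec=matvec, dtype=float)
b = (lam * mask * scribble).ravel()
u, info = cg(A, b, maxiter=500)
chroma = u.reshape(H, W)        # one colour channel propagated from the scribbles
```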

  6. Computational Imaging in Demanding Conditions

    Science.gov (United States)

    2015-11-18

    Students, 2 M.S. Students. Presented many plenary and keynote speeches, including: (2015) Plenary, International Conference on Multimedia and... hybrid imaging systems, Paris; (2014) Plenary, SPIE Optics and Photonics Symposium, San Diego; (2014) Plenary, Technion's TCE Symposium, Haifa; (2013) Plenary, Picture Coding Symposium; (2012) Plenary, Mathematics and Image Analysis Conference, Paris; (2010) Keynote, Pacific Rim

  7. Combining Deep and Handcrafted Image Features for Presentation Attack Detection in Face Recognition Systems Using Visible-Light Camera Sensors

    Directory of Open Access Journals (Sweden)

    Dat Tien Nguyen

    2018-02-01

    Full Text Available Although face recognition systems have wide application, they are vulnerable to presentation attack samples (fake samples). Therefore, a presentation attack detection (PAD) method is required to enhance the security level of face recognition systems. Most of the previously proposed PAD methods for face recognition systems have focused on using handcrafted image features, which are designed by expert knowledge of designers, such as Gabor filter, local binary pattern (LBP), local ternary pattern (LTP), and histogram of oriented gradients (HOG). As a result, the extracted features reflect limited aspects of the problem, yielding a detection accuracy that is low and varies with the characteristics of presentation attack face images. The deep learning method has been developed in the computer vision research community, which is proven to be suitable for automatically training a feature extractor that can be used to enhance the ability of handcrafted features. To overcome the limitations of previously proposed PAD methods, we propose a new PAD method that uses a combination of deep and handcrafted features extracted from the images by visible-light camera sensor. Our proposed method uses the convolutional neural network (CNN) method to extract deep image features and the multi-level local binary pattern (MLBP) method to extract skin detail features from face images to discriminate the real and presentation attack face images. By combining the two types of image features, we form a new type of image features, called hybrid features, which has stronger discrimination ability than single image features. Finally, we use the support vector machine (SVM) method to classify the image features into real or presentation attack class. Our experimental results indicate that our proposed method outperforms previous PAD methods by yielding the smallest error rates on the same image databases.

  8. Combining Deep and Handcrafted Image Features for Presentation Attack Detection in Face Recognition Systems Using Visible-Light Camera Sensors

    Science.gov (United States)

    Nguyen, Dat Tien; Pham, Tuyen Danh; Baek, Na Rae; Park, Kang Ryoung

    2018-01-01

    Although face recognition systems have wide application, they are vulnerable to presentation attack samples (fake samples). Therefore, a presentation attack detection (PAD) method is required to enhance the security level of face recognition systems. Most of the previously proposed PAD methods for face recognition systems have focused on using handcrafted image features, which are designed by expert knowledge of designers, such as Gabor filter, local binary pattern (LBP), local ternary pattern (LTP), and histogram of oriented gradients (HOG). As a result, the extracted features reflect limited aspects of the problem, yielding a detection accuracy that is low and varies with the characteristics of presentation attack face images. The deep learning method has been developed in the computer vision research community, which is proven to be suitable for automatically training a feature extractor that can be used to enhance the ability of handcrafted features. To overcome the limitations of previously proposed PAD methods, we propose a new PAD method that uses a combination of deep and handcrafted features extracted from the images by visible-light camera sensor. Our proposed method uses the convolutional neural network (CNN) method to extract deep image features and the multi-level local binary pattern (MLBP) method to extract skin detail features from face images to discriminate the real and presentation attack face images. By combining the two types of image features, we form a new type of image features, called hybrid features, which has stronger discrimination ability than single image features. Finally, we use the support vector machine (SVM) method to classify the image features into real or presentation attack class. Our experimental results indicate that our proposed method outperforms previous PAD methods by yielding the smallest error rates on the same image databases. PMID:29495417
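
    A hedged sketch of the hybrid-feature idea described above: CNN features and multi-scale LBP histograms are concatenated and fed to an SVM. The backbone, LBP radii, SVM settings and toy data are placeholders, not the authors' exact configuration.

```python
# Hybrid features = deep CNN features + multi-level LBP histograms, classified by an SVM.
# resnet18 is an untrained stand-in for the paper's CNN; all sizes are illustrative.
import numpy as np
import torch
import torchvision.models as models
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

backbone = models.resnet18(weights=None)          # stand-in backbone, no pretrained weights
backbone.fc = torch.nn.Identity()                 # expose the 512-d feature vector
backbone.eval()

def deep_features(img_rgb):                       # img_rgb: HxWx3 float in [0, 1]
    x = torch.from_numpy(img_rgb).permute(2, 0, 1).unsqueeze(0).float()
    with torch.no_grad():
        return backbone(x).squeeze(0).numpy()

def mlbp_features(img_grey_u8, radii=(1, 2, 3)):  # multi-level LBP histograms
    feats = []
    for r in radii:
        codes = local_binary_pattern(img_grey_u8, 8 * r, r, method="uniform")
        hist, _ = np.histogram(codes, bins=8 * r + 2, range=(0, 8 * r + 2), density=True)
        feats.append(hist)
    return np.concatenate(feats)

def hybrid_features(img_rgb):
    grey = (img_rgb.mean(axis=2) * 255).astype("uint8")
    return np.concatenate([deep_features(img_rgb), mlbp_features(grey)])

# toy training loop on random stand-in images (0 = real face, 1 = presentation attack)
rng = np.random.default_rng(0)
X = np.stack([hybrid_features(rng.random((64, 64, 3))) for _ in range(20)])
y = rng.integers(0, 2, size=20)
clf = SVC(kernel="rbf").fit(X, y)
```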

  9. Computer assisted visualization of digital mammography images

    International Nuclear Information System (INIS)

    Funke, M.; Breiter, N.; Grabbe, E.; Netsch, T.; Biehl, M.; Peitgen, H.O.

    1999-01-01

    Purpose: In a clinical study, the feasibility of using a mammography workstation for the display and interpretation of digital mammography images was evaluated and the results were compared with the corresponding laser film hard copies. Materials and Methods: Digital phosphorous plate radiographs of the entire breast were obtained in 30 patients using a direct magnification mammography system. The images were displayed for interpretation on the computer monitor of a dedicated mammography workstation and also presented as laser film hard copies on a film view box for comparison. The images were evaluated with respect to image handling, image quality and the visualization of relevant structures by 3 readers. Results: Handling and contrast of the monitor-displayed images were found to be superior compared with the film hard copies. Image noise was found in some cases but did not compromise the interpretation of the monitor images. The visualization of relevant structures was equal with both modalities. Altogether, image interpretation with the mammography workstation was considered to be easy, quick and confident. Conclusions: Computer-assisted visualization and interpretation of digital mammography images using a dedicated workstation can be performed with sufficiently high diagnostic accuracy. (orig.) [de

  10. Studies on computer analysis for radioisotope images

    International Nuclear Information System (INIS)

    Takizawa, Masaomi

    1977-01-01

    A hybrid-type image file and processing system was devised by the author for filing and processing radioisotope images with analog display. The system has the following functions: ten thousand images can be stored on a 60-foot video tape recorder (VTR) tape; the maximum time to access an image on the VTR tape is within 15 seconds; image display is enabled by the analog memory, which provides more than 15 grey levels of brightness. By using the analog memories, effective image processing can be done with a small computer. Many signal sources can be input into the hybrid system. This system can be applied in many fields, both for routine work and for multi-purpose radioisotope image processing. (auth.)

  11. An evaluation of open set recognition for FLIR images

    Science.gov (United States)

    Scherreik, Matthew; Rigling, Brian

    2015-05-01

    Typical supervised classification algorithms label inputs according to what was learned in a training phase. Thus, test inputs that were not seen in training are always given incorrect labels. Open set recognition algorithms address this issue by accounting for inputs that are not present in training and providing the classifier with an option to "reject" unknown samples. A number of such techniques have been developed in the literature, many of which are based on support vector machines (SVMs). One approach, the 1-vs-set machine, constructs a "slab" in feature space using the SVM hyperplane. Inputs falling on one side of the slab or within the slab belong to a training class, while inputs falling on the far side of the slab are rejected. We note that rejection of unknown inputs can be achieved by thresholding class posterior probabilities. Another recently developed approach, the Probabilistic Open Set SVM (POS-SVM), empirically determines good probability thresholds. We apply the 1-vs-set machine, POS-SVM, and closed set SVMs to FLIR images taken from the Comanche SIG dataset. Vehicles in the dataset are divided into three general classes: wheeled, armored personnel carrier (APC), and tank. For each class, a coarse pose estimate (front, rear, left, right) is taken. In a closed set sense, we analyze these algorithms for prediction of vehicle class and pose. To test open set performance, one or more vehicle classes are held out from training. By considering closed and open set performance separately, we may closely analyze both inter-class discrimination and threshold effectiveness.
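
    The posterior-thresholding idea mentioned above can be illustrated with a few lines of scikit-learn: a closed-set SVM is trained on the known classes and a test input is rejected whenever no class posterior exceeds a threshold. The features, threshold value and toy data below are placeholders, not the POS-SVM calibration procedure.

```python
# Open-set rejection by thresholding SVM class posteriors (illustrative only).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# toy 2-D features for three known vehicle classes
X = np.vstack([rng.normal(c, 0.3, size=(50, 2)) for c in ((0, 0), (3, 0), (0, 3))])
y = np.repeat([0, 1, 2], 50)                       # wheeled, APC, tank
clf = SVC(kernel="rbf", probability=True).fit(X, y)

def classify_open_set(x, threshold=0.7):
    """Return a known class label, or -1 ("reject") if no posterior is confident."""
    p = clf.predict_proba(x.reshape(1, -1))[0]
    return int(np.argmax(p)) if p.max() >= threshold else -1

print(classify_open_set(np.array([0.1, 0.0])))     # near class 0: likely accepted
print(classify_open_set(np.array([10.0, 10.0])))   # far from all classes: likely rejected
```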

  12. Sparse Representations of Image Gradient Orientations for Visual Recognition and Tracking

    NARCIS (Netherlands)

    Tzimiropoulos, Georgios; Zafeiriou, Stefanos; Pantic, Maja

    Recent results have shown that sparse linear representations of a query object with respect to an overcomplete basis formed by the entire gallery of objects of interest can result in powerful image-based object recognition schemes. In this paper, we propose a framework for visual recognition and

  13. The effect of image resolution on the performance of a face recognition system

    NARCIS (Netherlands)

    Boom, B.J.; Beumer, G.M.; Spreeuwers, Lieuwe Jan; Veldhuis, Raymond N.J.

    2006-01-01

    In this paper we investigate the effect of image resolution on the error rates of a face verification system. We do not restrict ourselves to the face recognition algorithm only, but we also consider the face registration. In our face recognition system, the face registration is done by finding

  14. Eyes on emergence : Fast detection yet slow recognition of emerging images

    NARCIS (Netherlands)

    Nordhjem, Barbara; Petrozzelli, Constanza I. Kurman; Gravel, Nicolas; Renken, Remco J.; Cornelissen, Frans W.

    Visual object recognition occurs at the intersection of visual perception and visual cognition. It typically occurs very fast and it has therefore been difficult to disentangle its constituent processes. Recognition time can be extended when using images with emergent properties, suggesting they may

  15. Sparse Image Reconstruction in Computed Tomography

    DEFF Research Database (Denmark)

    Jørgensen, Jakob Sauer

    In recent years, increased focus on the potentially harmful effects of x-ray computed tomography (CT) scans, such as radiation-induced cancer, has motivated research on new low-dose imaging techniques. Sparse image reconstruction methods, as studied for instance in the field of compressed sensing...... and limitations of sparse reconstruction methods in CT, in particular in a quantitative sense. For example, relations between image properties such as contrast, structure and sparsity, tolerable noise levels, sufficient sampling levels, the choice of sparse reconstruction formulation and the achievable image...

  16. Proceedings of the workshop. Recognition of DNA damage as onset of successful repair. Computational and experimental approaches

    International Nuclear Information System (INIS)

    Pinak, Miroslav

    2002-03-01

    This workshop was held at the Tokai Research Establishment, Japan Atomic Energy Research Institute, on the 18th and 19th of December 2001, and was organized by the Laboratory of Radiation Risk Analysis of JAERI. The main subject of the workshop was DNA damage and its repair. The presented works described the leading experimental as well as computational approaches, focusing mainly on the formation of DNA damage, its proliferation, enzymatic recognition and repair, and finally imaging and detection of lesions on a DNA molecule. The 19 presented papers are indexed individually. (J.P.N.)

  17. Fish species recognition using computer vision and a neural network

    NARCIS (Netherlands)

    Storbeck, F.; Daan, B.

    2001-01-01

    A system is described to recognize fish species by computer vision and a neural network program. The vision system measures a number of features of fish as seen by a camera perpendicular to a conveyor belt. The features used here are the widths and heights at various locations along the fish. First

  18. Determination of the Image Complexity Feature in Pattern Recognition

    Directory of Open Access Journals (Sweden)

    Veacheslav L. Perju

    2003-11-01

    Full Text Available A new image complexity informative feature is proposed. An experimental estimation of the image complexity is carried out. Two optical-electronic processors for image complexity calculation are elaborated. The necessary number of the image's digitization elements, depending on the image complexity, is determined, and the accuracy of the image complexity feature calculation is estimated.

  19. MO-F-CAMPUS-J-02: Automatic Recognition of Patient Treatment Site in Portal Images Using Machine Learning

    International Nuclear Information System (INIS)

    Chang, X; Yang, D

    2015-01-01

    Purpose: To investigate a method to automatically recognize the treatment site in X-Ray portal images. It could be useful to detect potential treatment errors, and to provide guidance to sequential tasks, e.g. automatically verifying the patient daily setup. Methods: The portal images were exported from MOSAIQ as DICOM files, and were 1) processed with a threshold-based intensity transformation algorithm to enhance contrast, and 2) then down-sampled (from 1024×768 to 128×96) using a bi-cubic interpolation algorithm. An appearance-based vector space model (VSM) was used to rearrange the images into vectors. A principal component analysis (PCA) method was used to reduce the vector dimensions. A multi-class support vector machine (SVM), with radial basis function kernel, was used to build the treatment site recognition models. These models were then used to recognize the treatment sites in the portal images. Portal images of 120 patients were included in the study. The images were selected to cover six treatment sites: brain, head and neck, breast, lung, abdomen and pelvis. Each site had images of twenty patients. Cross-validation experiments were performed to evaluate the performance. Results: The MATLAB Image Processing Toolbox and scikit-learn (a machine learning library in Python) were used to implement the proposed method. The average accuracies using the AP and RT images separately were 95% and 94% respectively. The average accuracy using AP and RT images together was 98%. Computation time was ∼0.16 seconds per patient with an AP or RT image, ∼0.33 seconds per patient with both AP and RT images. Conclusion: The proposed method of treatment site recognition is efficient and accurate. It is not sensitive to differences in image intensity, size and positions of patients in the portal images. It could be useful for patient safety assurance. The work was partially supported by a research grant from Varian Medical System.

  20. MO-F-CAMPUS-J-02: Automatic Recognition of Patient Treatment Site in Portal Images Using Machine Learning

    Energy Technology Data Exchange (ETDEWEB)

    Chang, X; Yang, D [Washington University in St Louis, St Louis, MO (United States)

    2015-06-15

    Purpose: To investigate a method to automatically recognize the treatment site in X-Ray portal images. It could be useful to detect potential treatment errors, and to provide guidance to sequential tasks, e.g. automatically verifying the patient daily setup. Methods: The portal images were exported from MOSAIQ as DICOM files, and were 1) processed with a threshold-based intensity transformation algorithm to enhance contrast, and 2) then down-sampled (from 1024×768 to 128×96) using a bi-cubic interpolation algorithm. An appearance-based vector space model (VSM) was used to rearrange the images into vectors. A principal component analysis (PCA) method was used to reduce the vector dimensions. A multi-class support vector machine (SVM), with radial basis function kernel, was used to build the treatment site recognition models. These models were then used to recognize the treatment sites in the portal images. Portal images of 120 patients were included in the study. The images were selected to cover six treatment sites: brain, head and neck, breast, lung, abdomen and pelvis. Each site had images of twenty patients. Cross-validation experiments were performed to evaluate the performance. Results: The MATLAB Image Processing Toolbox and scikit-learn (a machine learning library in Python) were used to implement the proposed method. The average accuracies using the AP and RT images separately were 95% and 94% respectively. The average accuracy using AP and RT images together was 98%. Computation time was ∼0.16 seconds per patient with an AP or RT image, ∼0.33 seconds per patient with both AP and RT images. Conclusion: The proposed method of treatment site recognition is efficient and accurate. It is not sensitive to differences in image intensity, size and positions of patients in the portal images. It could be useful for patient safety assurance. The work was partially supported by a research grant from Varian Medical System.
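
    The processing chain described in the abstract (vectorised down-sampled images, PCA dimensionality reduction, multi-class RBF SVM, cross-validation) can be sketched as follows; the image size, number of principal components and random stand-in data are assumptions, not the authors' settings.

```python
# Vectorise down-sampled portal images, reduce with PCA, classify with a multi-class RBF SVM.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_patients, h, w = 120, 96, 128                    # down-sampled portal image size
images = rng.random((n_patients, h, w))            # stand-in for contrast-enhanced images
sites = rng.integers(0, 6, size=n_patients)        # brain, H&N, breast, lung, abdomen, pelvis

X = images.reshape(n_patients, -1)                 # appearance-based vector space model
model = make_pipeline(PCA(n_components=40), SVC(kernel="rbf"))
scores = cross_val_score(model, X, sites, cv=5)    # cross-validation as in the abstract
print(scores.mean())
```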

  1. Proceedings of the NASA Symposium on Mathematical Pattern Recognition and Image Analysis

    Science.gov (United States)

    Guseman, L. F., Jr.

    1983-01-01

    The application of mathematical and statistical analyses techniques to imagery obtained by remote sensors is described by Principal Investigators. Scene-to-map registration, geometric rectification, and image matching are among the pattern recognition aspects discussed.

  2. Pornographic image recognition and filtering using incremental learning in compressed domain

    Science.gov (United States)

    Zhang, Jing; Wang, Chao; Zhuo, Li; Geng, Wenhao

    2015-11-01

    With the rapid development and popularity of the network, the openness, anonymity, and interactivity of networks have led to the spread and proliferation of pornographic images on the Internet, which have done great harm to adolescents' physical and mental health. With the establishment of image compression standards, pornographic images are mainly stored in compressed formats. Therefore, how to efficiently filter pornographic images is one of the challenging issues for information security. A pornographic image recognition and filtering method in the compressed domain is proposed by using incremental learning, which includes the following steps: (1) low-resolution (LR) images are first reconstructed from the compressed stream of pornographic images, (2) visual words are created from the LR image to represent the pornographic image, and (3) incremental learning is adopted to continuously adjust the classification rules to recognize new pornographic image samples after the covering algorithm is utilized to train and recognize the visual words in order to build the initial classification model of pornographic images. The experimental results show that the proposed pornographic image recognition method using incremental learning achieves a higher recognition rate and requires less recognition time in the compressed domain.
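
    The incremental-learning step can be illustrated with a classifier that supports partial_fit over visual-word histograms. The paper itself uses a covering algorithm, so the SGD classifier below is only a stand-in to show how classification rules are updated as new labelled samples arrive; the feature dimensions and data are placeholders.

```python
# Incrementally updating a classifier over visual-word histograms (illustrative stand-in).
import numpy as np
from sklearn.linear_model import SGDClassifier

n_visual_words = 500
classes = np.array([0, 1])                          # 0 = benign, 1 = pornographic
clf = SGDClassifier()

rng = np.random.default_rng(0)
for batch in range(10):                             # new labelled samples arriving over time
    X_batch = rng.random((32, n_visual_words))      # visual-word histograms of LR images
    y_batch = rng.integers(0, 2, size=32)
    clf.partial_fit(X_batch, y_batch, classes=classes)   # adjust the model incrementally

print(clf.predict(rng.random((1, n_visual_words))))
```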

  3. Speeding up image reconstruction in computed tomography

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    Computed tomography (CT) is a technique for imaging cross-sections of an object using X-ray measurements taken from different angles. In the last decades significant progress has happened there: today advanced algorithms allow fast image reconstruction and obtaining high-quality images even with missing or dirty data, modern detectors provide high resolution without increasing radiation dose, and high-performance multi-core computing devices are there to help us solve such tasks even faster. I will start with CT basics, then briefly present existing classes of reconstruction algorithms and their differences. After that I will proceed to employing distinctive architectural features of modern multi-core devices (CPUs and GPUs) and popular program interfaces (OpenMP, MPI, CUDA, OpenCL) for developing effective parallel realizations of image reconstruction algorithms. Decreasing full reconstruction time from long hours down to minutes or even seconds has a revolutionary impact in diagnostic medicine and industria...

  4. DISC: Deep Image Saliency Computing via Progressive Representation Learning.

    Science.gov (United States)

    Chen, Tianshui; Lin, Liang; Liu, Lingbo; Luo, Xiaonan; Li, Xuelong

    2016-06-01

    Salient object detection increasingly receives attention as an important component or step in several pattern recognition and image processing tasks. Although a variety of powerful saliency models have been intensively proposed, they usually involve heavy feature (or model) engineering based on priors (or assumptions) about the properties of objects and backgrounds. Inspired by the effectiveness of recently developed feature learning, we provide a novel deep image saliency computing (DISC) framework for fine-grained image saliency computing. In particular, we model the image saliency from both coarse- and fine-level observations, and utilize the deep convolutional neural network (CNN) to learn the saliency representation in a progressive manner. Specifically, our saliency model is built upon two stacked CNNs. The first CNN generates a coarse-level saliency map by taking the overall image as the input, roughly identifying saliency regions in the global context. Furthermore, we integrate superpixel-based local context information in the first CNN to refine the coarse-level saliency map. Guided by the coarse saliency map, the second CNN focuses on the local context to produce fine-grained and accurate saliency maps while preserving object details. For a testing image, the two CNNs collaboratively conduct the saliency computing in one shot. Our DISC framework is capable of uniformly highlighting the objects of interest from complex background while preserving object details well. Extensive experiments on several standard benchmarks suggest that DISC outperforms other state-of-the-art methods and also generalizes well across data sets without additional training. The executable version of DISC is available online: http://vision.sysu.edu.cn/projects/DISC.
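
    The coarse-to-fine, two-stacked-CNN structure can be sketched as follows: a first network predicts saliency from a downsampled view of the whole image, and a second refines it from the full-resolution image concatenated with the upsampled coarse map. The layer sizes are illustrative and the superpixel refinement is omitted; this is a structural sketch, not the DISC architecture.

```python
# Coarse-to-fine saliency with two stacked CNNs (structural sketch only).
import torch
import torch.nn as nn
import torch.nn.functional as F

def small_cnn(in_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 1, 1), nn.Sigmoid(),          # per-pixel saliency in [0, 1]
    )

class CoarseToFineSaliency(nn.Module):
    def __init__(self):
        super().__init__()
        self.coarse = small_cnn(3)                  # sees the global (downsampled) context
        self.fine = small_cnn(4)                    # sees image + upsampled coarse prediction

    def forward(self, img):                         # img: (B, 3, H, W)
        low = F.interpolate(img, scale_factor=0.25, mode="bilinear", align_corners=False)
        coarse = self.coarse(low)
        coarse_up = F.interpolate(coarse, size=img.shape[-2:], mode="bilinear",
                                  align_corners=False)
        fine = self.fine(torch.cat([img, coarse_up], dim=1))
        return coarse_up, fine

model = CoarseToFineSaliency()
coarse_map, fine_map = model(torch.rand(1, 3, 128, 128))
```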

  5. Visual memory for fixated regions of natural images dissociates attraction and recognition.

    Science.gov (United States)

    van der Linde, Ian; Rajashekar, Umesh; Bovik, Alan C; Cormack, Lawrence K

    2009-01-01

    Recognition memory for fixated regions from briefly viewed full-screen natural images is examined. Low-level image statistics reveal that observers fixated, on average (pooled across images and observers), image regions that possessed greater visual saliency than non-fixated regions, a finding that is robust across multiple fixation indices. Recognition-memory performance indicates that, of the fixation loci tested, observers were adept at recognising those with a particular profile of image statistics; visual saliency was found to be attenuated for unrecognised loci, despite that all regions were freely fixated. Furthermore, although elevated luminance was the local image statistic found to discriminate least between human and random image locations, it was the greatest predictor of recognition-memory performance, demonstrating a dissociation between image features that draw fixations and those that support visual memory. An analysis of corresponding eye movements indicates that image regions fixated via short-distance saccades enjoyed better recognition-memory performance, alluding to a focal rather than ambient mode of processing. Recognised image regions were more likely to have originated from areas evaluated (a posteriori) to have higher fixation density, a numerical metric of local interest. Surprisingly, memory for image regions fixated later in the viewing period exhibited no recency advantage, despite (typically) also being longer in duration, a finding for which a number of explanations are posited.

  6. Affective Computing used in an imaging interaction paradigm

    DEFF Research Database (Denmark)

    Schultz, Nette

    2003-01-01

    This paper combines affective computing with an imaging interaction paradigm. An imaging interaction paradigm means that human and computer communicates primarily by images. Images evoke emotions in humans, so the computer must be able to behave emotionally intelligent. An affective image selection...

  7. A self-teaching image processing and voice-recognition-based, intelligent and interactive system to educate visually impaired children

    Science.gov (United States)

    Iqbal, Asim; Farooq, Umar; Mahmood, Hassan; Asad, Muhammad Usman; Khan, Akrama; Atiq, Hafiz Muhammad

    2010-02-01

    A self-teaching image processing and voice recognition based system is developed to educate visually impaired children, chiefly in their primary education. The system comprises a computer, a vision camera, an ear speaker and a microphone. The camera, attached to the computer system, is mounted on the ceiling opposite (at the required angle to) the desk on which the book is placed. Sample images and voices, in the form of instructions and commands for English and Urdu alphabets, numeric digits, operators and shapes, are already stored in the database. A blind child first reads the embossed character (object) with the help of fingers, then speaks the answer, the name of the character, shape, etc. into the microphone. On the voice command of the blind child received by the microphone, an image is taken by the camera and processed by a MATLAB® program developed with the help of the Image Acquisition and Image Processing toolboxes, which generates a response or the required set of instructions to the child via the ear speaker, resulting in the self-education of a visually impaired child. A speech recognition program is also developed in MATLAB® with the help of the Data Acquisition and Signal Processing toolboxes, which records and processes the commands of the blind child.

  8. Computational surgery and dual training computing, robotics and imaging

    CERN Document Server

    Bass, Barbara; Berceli, Scott; Collet, Christophe; Cerveri, Pietro

    2014-01-01

    This critical volume focuses on the use of medical imaging, medical robotics, simulation, and information technology in surgery. It offers a road map for computational surgery success, discusses the computer-assisted management of disease and surgery, and provides a rationale for image processing and diagnostics. This book also presents some advances in image-driven intervention and robotics, as well as evaluating models and simulations for a broad spectrum of cancers as well as cardiovascular, neurological, and bone diseases. Training and performance analysis in surgery assisted by robotic systems is also covered. This book also: · Provides a comprehensive overview of the use of computational surgery and disease management · Discusses the design and use of medical robotic tools for orthopedic surgery, endoscopic surgery, and prostate surgery · Provides practical examples and case studies in the areas of image processing, virtual surgery, and simulation traini...

  9. Computer-Aided Authoring of Programmed Instruction for Teaching Symbol Recognition. Final Report.

    Science.gov (United States)

    Braby, Richard; And Others

    This description of AUTHOR, a computer program for the automated authoring of programmed texts designed to teach symbol recognition, includes discussions of the learning strategies incorporated in the design of the instructional materials, hardware description and the algorithm for the software, and current and future developments. Appendices…

  10. Dense image correspondences for computer vision

    CERN Document Server

    Liu, Ce

    2016-01-01

    This book describes the fundamental building-block of many new computer vision systems: dense and robust correspondence estimation. Dense correspondence estimation techniques are now successfully being used to solve a wide range of computer vision problems, very different from the traditional applications such techniques were originally developed to solve. This book introduces the techniques used for establishing correspondences between challenging image pairs, the novel features used to make these techniques robust, and the many problems dense correspondences are now being used to solve. The book provides information to anyone attempting to utilize dense correspondences in order to solve new or existing computer vision problems. The editors describe how to solve many computer vision problems by using dense correspondence estimation. Finally, it surveys resources, code, and data necessary for expediting the development of effective correspondence-based computer vision systems. · Provides i...

  11. Image Quality Enhancement Using the Direction and Thickness of Vein Lines for Finger-Vein Recognition

    Directory of Open Access Journals (Sweden)

    Young Ho Park

    2012-10-01

    Full Text Available On the basis of the increased emphasis placed on the protection of privacy, biometric recognition systems using physical or behavioural characteristics such as fingerprints, facial characteristics, iris and finger-vein patterns or the voice have been introduced in applications including door access control, personal certification, Internet banking and ATM machines. Among these, finger-vein recognition is advantageous in that it involves the use of inexpensive and small devices that are difficult to counterfeit. In general, finger-vein recognition systems capture images by using near infrared (NIR) illumination in conjunction with a camera. However, such systems can face operational difficulties, since the scattering of light from the skin can make capturing a clear image difficult. To solve this problem, we propose a new image quality enhancement method that measures the direction and thickness of vein lines. This effort represents novel research in four respects. First, since vein lines are detected in input images based on eight directional profiles of a grey image instead of binarized images, the detection error owing to the non-uniform illumination of the finger area can be reduced. Second, our method adaptively determines a Gabor filter for the optimal direction and width on the basis of the estimated direction and thickness of a detected vein line. Third, by applying this optimized Gabor filter, a clear vein image can be obtained. Finally, further processing with a morphological operation is applied to the Gabor-filtered image and the resulting image is combined with the original one, through which a finger-vein image of higher quality is obtained. Experimental results from application of our proposed image enhancement method show that the equal error rate (EER) of finger-vein recognition decreases to approximately 0.4% in the case of a local binary pattern-based recognition and to approximately 0.3% in the case of a wavelet transform
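
    The direction- and thickness-adaptive Gabor filtering can be approximated with a small filter bank: several orientations and frequencies are applied and, per pixel, the strongest dark-line response is retained. The frequencies, orientation count and pooling rule below are placeholders for the paper's per-pixel adaptive estimates.

```python
# Vein enhancement with a small Gabor filter bank (parameters are illustrative).
import numpy as np
from skimage.filters import gabor

def enhance_veins(img, frequencies=(0.08, 0.12), n_orientations=8):
    responses = []
    for f in frequencies:                            # proxy for estimated vein thickness
        for k in range(n_orientations):              # proxy for estimated vein direction
            real, _ = gabor(img, frequency=f, theta=np.pi * k / n_orientations)
            responses.append(real)
    return np.min(np.stack(responses), axis=0)       # veins are dark ridges: keep minimum

rng = np.random.default_rng(0)
finger_roi = rng.random((120, 240))                  # stand-in for an NIR finger image
enhanced = enhance_veins(finger_roi)
```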

  12. Patient Dose From Megavoltage Computed Tomography Imaging

    International Nuclear Information System (INIS)

    Shah, Amish P.; Langen, Katja M.; Ruchala, Kenneth J.; Cox, Andrea; Kupelian, Patrick A.; Meeks, Sanford L.

    2008-01-01

    Purpose: Megavoltage computed tomography (MVCT) can be used daily for imaging with a helical tomotherapy unit for patient alignment before treatment delivery. The purpose of this investigation was to show that the MVCT dose can be computed in phantoms, and further, that the dose can be reported for actual patients from MVCT on a helical tomotherapy unit. Methods and Materials: An MVCT beam model was commissioned and verified through a series of absorbed dose measurements in phantoms. This model was then used to retrospectively calculate the imaging doses to the patients. The MVCT dose was computed for five clinical cases: prostate, breast, head/neck, lung, and craniospinal axis. Results: Validation measurements in phantoms verified that the computed dose can be reported to within 5% of the measured dose delivered at the helical tomotherapy unit. The imaging dose scaled inversely with changes to the CT pitch. Relative to a normal pitch of 2.0, the organ dose can be scaled by 0.67 and 2.0 for scans done with a pitch of 3.0 and 1.0, respectively. Typical doses were in the range of 1.0-2.0 cGy, if imaged with a normal pitch. The maximal organ dose calculated was 3.6 cGy in the neck region of the craniospinal patient, if imaged with a pitch of 1.0. Conclusion: Calculation of the MVCT dose has shown that the typical imaging dose is approximately 1.5 cGy per image. The uniform MVCT dose delivered using helical tomotherapy is greatest when the anatomic thickness is the smallest and the pitch is set to the lowest value

  13. Mouse brain imaging using photoacoustic computed tomography

    Science.gov (United States)

    Lou, Yang; Xia, Jun; Wang, Lihong V.

    2014-03-01

    Photoacoustic computed tomography (PACT) provides structural and functional information when used in small animal brain imaging. Acoustic distortion caused by bone structures largely limits the deep brain image quality. In our work, we present ex vivo PACT images of freshly excised mouse brain, intending that they can serve as a gold standard for future PACT in vivo studies on small animal brain imaging. Our results show that structures such as the striatum, hippocampus, ventricles, and cerebellum can be clearly differentiated. An artery feature called the Circle of Willis, located at the bottom of the brain, can also be seen. These results indicate that if acoustic distortion can be accurately accounted for, PACT should be able to image the entire mouse brain with rich structural information.

  14. Learning through hand- or typewriting influences visual recognition of new graphic shapes: behavioral and functional imaging evidence.

    Science.gov (United States)

    Longcamp, Marieke; Boucard, Céline; Gilhodes, Jean-Claude; Anton, Jean-Luc; Roth, Muriel; Nazarian, Bruno; Velay, Jean-Luc

    2008-05-01

    Fast and accurate visual recognition of single characters is crucial for efficient reading. We explored the possible contribution of writing memory to character recognition processes. We evaluated the ability of adults to discriminate new characters from their mirror images after being taught how to produce the characters either by traditional pen-and-paper writing or with a computer keyboard. After training, we found stronger and longer lasting (several weeks) facilitation in recognizing the orientation of characters that had been written by hand compared to those typed. Functional magnetic resonance imaging recordings indicated that the response mode during learning is associated with distinct pathways during recognition of graphic shapes. Greater activity related to handwriting learning and normal letter identification was observed in several brain regions known to be involved in the execution, imagery, and observation of actions, in particular, the left Broca's area and bilateral inferior parietal lobules. Taken together, these results provide strong arguments in favor of the view that the specific movements memorized when learning how to write participate in the visual recognition of graphic shapes and letters.

  15. Soil structure characterized using computed tomographic images

    Science.gov (United States)

    Zhanqi Cheng; Stephen H. Anderson; Clark J. Gantzer; J. W. Van Sambeek

    2003-01-01

    Fractal analysis of soil structure is a relatively new method for quantifying the effects of management systems on soil properties and quality. The objective of this work was to explore several methods of studying images to describe and quantify structure of soils under forest management. This research uses computed tomography and a topological method called Multiple...

  16. Parotid lymphomas - clinical and computed tomographic imaging ...

    African Journals Online (AJOL)

    Objective. To review the clinical presentation and computed tomography (CT) imaging characteristics of all parotid lymphomas diagnosed at the study institution over a 7-year period. Design. Retrospective chart review of parotid lymphomas diagnosed between 1997 and 2004. Subjects. A total of 121 patients with parotid ...

  17. Nuclear imaging using Fuji Computed Radiography

    International Nuclear Information System (INIS)

    Yodono, Hiraku; Tarusawa, Nobuko; Katto, Keiichi; Miyakawa, Takayoshi; Watanabe, Sadao; Shinozaki, Tatsuyo

    1988-01-01

    We studied the feasibility of the Fuji Computed Radiography system (FCR) in nuclear medicine. The basic principle of the system is the conversion of the X-ray energy pattern into digital signals utilizing scanning laser-stimulated luminescence. A Rollo phantom filled with 12 mCi of Tc-99m pertechnetate was used in this study. For imaging by the FCR, a low-energy high-resolution parallel-hole collimator for a gamma camera was placed over the phantom, and photons passing through the collimator were stored on a single imaging plate (IP) or on 3 IPs covered by a lead plate 0.3 mm in thickness. Imaging took 30 minutes with a single IP and 20 minutes with 3 IPs with the lead plate. Each image of the phantom obtained by the FCR was compared with that obtained by a gamma camera. The image from a single IP was inferior in quality to that from a gamma camera. However, using 3 IPs with the lead plate, an image of the same quality as that from a gamma camera was obtained. The image from 3 IPs was similar to that from 3 IPs with the lead plate. Based on these results, we performed liver and lung imaging by FCR using 3 IPs, with an imaging time of twenty minutes. The images obtained with FCR are as good as the scinticamera images. However, the method has two major flaws in that the sensitivity is poor and the imaging time is long. Furthermore, at present this method can only be employed for static imaging. We feel, however, that future improvements in the FCR system will overcome these problems. (author)

  18. Computer-aided classification of lung nodules on computed tomography images via deep learning technique

    Directory of Open Access Journals (Sweden)

    Hua KL

    2015-08-01

    Full Text Available Kai-Lung Hua,1 Che-Hao Hsu,1 Shintami Chusnul Hidayati,1 Wen-Huang Cheng,2 Yu-Jen Chen3 1Department of Computer Science and Information Engineering, National Taiwan University of Science and Technology, 2Research Center for Information Technology Innovation, Academia Sinica, 3Department of Radiation Oncology, MacKay Memorial Hospital, Taipei, Taiwan Abstract: Lung cancer has a poor prognosis when not diagnosed early and unresectable lesions are present. The management of small lung nodules noted on computed tomography scan is controversial due to uncertain tumor characteristics. A conventional computer-aided diagnosis (CAD) scheme requires several image processing and pattern recognition steps to accomplish a quantitative tumor differentiation result. In such an ad hoc image analysis pipeline, every step depends heavily on the performance of the previous step. Accordingly, tuning of classification performance in a conventional CAD scheme is very complicated and arduous. Deep learning techniques, on the other hand, have the intrinsic advantage of automatic feature exploitation and tuning of performance in a seamless fashion. In this study, we attempted to simplify the image analysis pipeline of conventional CAD with deep learning techniques. Specifically, we introduced models of a deep belief network and a convolutional neural network in the context of nodule classification in computed tomography images. Two baseline methods with feature computing steps were implemented for comparison. The experimental results suggest that deep learning methods could achieve better discriminative results and hold promise in the CAD application domain. Keywords: nodule classification, deep learning, deep belief network, convolutional neural network

  19. NutriNet: A Deep Learning Food and Drink Image Recognition System for Dietary Assessment.

    Science.gov (United States)

    Mezgec, Simon; Koroušić Seljak, Barbara

    2017-06-27

    Automatic food image recognition systems are alleviating the process of food-intake estimation and dietary assessment. However, due to the nature of food images, their recognition is a particularly challenging task, which is why traditional approaches in the field have achieved a low classification accuracy. Deep neural networks have outperformed such solutions, and we present a novel approach to the problem of food and drink image detection and recognition that uses a newly-defined deep convolutional neural network architecture, called NutriNet. This architecture was tuned on a recognition dataset containing 225,953 512 × 512 pixel images of 520 different food and drink items from a broad spectrum of food groups, on which we achieved a classification accuracy of 86.72%, along with an accuracy of 94.47% on a detection dataset containing 130,517 images. We also performed a real-world test on a dataset of self-acquired images, combined with images from Parkinson's disease patients, all taken using a smartphone camera, achieving a top-five accuracy of 55%, which is an encouraging result for real-world images. Additionally, we tested NutriNet on the University of Milano-Bicocca 2016 (UNIMIB2016) food image dataset, on which we improved upon the provided baseline recognition result. An online training component was implemented to continually fine-tune the food and drink recognition model on new images. The model is being used in practice as part of a mobile app for the dietary assessment of Parkinson's disease patients.

  20. Supervised dimensionality reduction and contextual pattern recognition in medical image processing

    OpenAIRE

    Loog, Marco

    2004-01-01

    The past few years have witnessed a significant increase in the number of supervised methods employed in diverse image processing tasks. Especially in medical image analysis the use of, for example, supervised shape and appearance modelling has increased considerably and has proven to be successful. This thesis focuses on applying supervised pattern recognition methods in medical image processing. We consider a local, pixel-based approach in which image segmentation, regression, and filtering...

  1. Computed tomography with image intensifier: imaging and characterization of materials

    International Nuclear Information System (INIS)

    Rao, D.V.; Cesareo, R.; Brunetti, A.

    1999-01-01

    Computed tomographic images and nondestructive evaluations of ceramics, electrical insulators, wood and other samples were obtained using a new tomographic system based on an image intensifier, replacing an earlier system (Cesareo et al., Nucl. Instr. and Meth. A 356 (1995) 573). It consists of a charge coupled device camera and an acquisition board. The charge coupled device and the acquisition board allow image processing, filtration and restoration. A reconstruction programme written in PASCAL is able to give the reconstruction matrix of the linear attenuation coefficients, and simulates the matrix and the related tomography. The flux emitted by the tube is filtered using appropriate filters at the chosen energy, and reasonable monochromacy is achieved for all the images. The effect of collimators is also studied at various energies with filters, and the optimum value is used for better image quality

  2. Pediatric computed tomographic angiography: imaging the cardiovascular system gently.

    Science.gov (United States)

    Hellinger, Jeffrey C; Pena, Andres; Poon, Michael; Chan, Frandics P; Epelman, Monica

    2010-03-01

    Whether congenital or acquired, timely recognition and management of disease is imperative, as hemodynamic alterations in blood flow, tissue perfusion, and cellular oxygenation can have profound effects on organ function, growth and development, and quality of life for the pediatric patient. Ensuring safe computed tomographic angiography (CTA) practice and "gentle" pediatric imaging requires the cardiovascular imager to have sound understanding of CTA advantages, limitations, and appropriate indications as well as strong working knowledge of acquisition principles and image post processing. From this vantage point, CTA can be used as a useful adjunct along with the other modalities. This article presents a summary of dose reduction CTA methodologies along with techniques the authors have employed in clinical practice to achieve low-dose and ultralow-dose exposure in pediatric CTA. CTA technical principles are discussed with an emphasis on the low-dose methodologies and safe contrast medium delivery strategies. Recommended parameters for currently available multidetector-row computed tomography scanners are summarized alongside recommended radiation and contrast medium parameters. In the second part of the article an overview of pediatric CTA clinical applications is presented, illustrating low-dose and ultra-low dose techniques, with an emphasis on the specific protocols. Copyright 2010 Elsevier Inc. All rights reserved.

  3. Visual computing scientific visualization and imaging systems

    CERN Document Server

    2014-01-01

    This volume aims to stimulate discussions on research involving the use of data and digital images as an approach to understanding, analysing and visualizing phenomena and experiments. The emphasis is put not only on graphically representing data as a way of increasing its visual analysis, but also on the imaging systems which contribute greatly to the comprehension of real cases. Scientific Visualization and Imaging Systems encompass multidisciplinary areas, with applications in many knowledge fields such as Engineering, Medicine, Material Science, Physics, Geology, Geographic Information Systems, among others. This book is a selection of 13 revised and extended research papers presented at the International Conference on Advanced Computational Engineering and Experimenting (ACE-X) conferences 2010 (Paris), 2011 (Algarve), 2012 (Istanbul) and 2013 (Madrid). The examples were particularly chosen from materials research, medical applications, general concepts applied in simulations and image analysis and ot...

  4. A microprocessor-based single board computer for high energy physics event pattern recognition

    International Nuclear Information System (INIS)

    Bernstein, H.; Gould, J.J.; Imossi, R.; Kopp, J.K.; Love, W.A.; Ozaki, S.; Platner, E.D.; Kramer, M.A.

    1981-01-01

    A single board MC 68000 based computer has been assembled and bench marked against the CDC 7600 running portions of the pattern recognition code used at the MPS. This computer has a floating coprocessor to achieve throughputs equivalent to several percent that of the 7600. A major part of this work was the construction of a FORTRAN compiler including assembler, linker and library. The intention of this work is to assemble a large number of these single board computers in a parallel FASTBUS environment to act as an on-line and off-line filter for the raw data from MPS II and ISABELLE experiments. (orig.)

  5. MoCog1: A computer simulation of recognition-primed human decision making

    Science.gov (United States)

    Gevarter, William B.

    1991-01-01

    The results of the first stage of a research effort to develop a 'sophisticated' computer model of human cognitive behavior are described. Most human decision making is an experience-based, relatively straight-forward, largely automatic response to internal goals and drives, utilizing cues and opportunities perceived from the current environment. The development of the architecture and computer program (MoCog1) associated with such 'recognition-primed' decision making is discussed. The resultant computer program was successfully utilized as a vehicle to simulate earlier findings that relate how an individual's implicit theories orient the individual toward particular goals, with resultant cognitions, affects, and behavior in response to their environment.

  6. Microprocessor-based single board computer for high energy physics event pattern recognition

    International Nuclear Information System (INIS)

    Bernstein, H.; Gould, J.J.; Imossi, R.; Kopp, J.K.; Love, W.A.; Ozaki, S.; Platner, E.D.; Kramer, M.A.

    1981-01-01

    A single board MC 68000 based computer has been assembled and bench marked against the CDC 7600 running portions of the pattern recognition code used at the MPS. This computer has a floating coprocessor to achieve throughputs equivalent to several percent that of the 7600. A major part of this work was the construction of a FORTRAN compiler including assembler, linker and library. The intention of this work is to assemble a large number of these single board computers in a parallel FASTBUS environment to act as an on-line and off-line filter for the raw data from MPS II and ISABELLE experiments

  7. Pattern recognition with machine learning on optical microscopy images of typical metallurgical microstructures.

    Science.gov (United States)

    Bulgarevich, Dmitry S; Tsukamoto, Susumu; Kasuya, Tadashi; Demura, Masahiko; Watanabe, Makoto

    2018-02-01

    For advanced materials characterization, a novel and extremely effective approach to pattern recognition in optical microscopic images of steels is demonstrated. It is based on the fast Random Forest statistical algorithm of machine learning for reliable and automated segmentation of typical steel microstructures. The percentages and locations of the segmented areas agreed excellently between machine learning and manual examination results. The accurate microstructure pattern recognition/segmentation technique, in combination with other suitable mathematical methods of image processing and analysis, can help to handle large volumes of image data in a short time for quality control and for the quest for new steels with desirable properties.
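
    A minimal sketch of Random-Forest pixel classification for micrograph segmentation follows: simple per-pixel filter responses serve as features and a sparsely labelled mask provides the training pixels. The feature set, label source and toy data are assumptions, not the published workflow.

```python
# Random-Forest pixel classification for microstructure segmentation (illustrative).
import numpy as np
from scipy import ndimage as ndi
from sklearn.ensemble import RandomForestClassifier

def pixel_features(img):
    """Per-pixel features: intensity, smoothed intensity, gradient magnitude, Laplacian."""
    feats = [img,
             ndi.gaussian_filter(img, 2),
             ndi.gaussian_gradient_magnitude(img, 2),
             ndi.gaussian_laplace(img, 2)]
    return np.stack(feats, axis=-1).reshape(-1, len(feats))

rng = np.random.default_rng(0)
micrograph = rng.random((128, 128))                  # stand-in optical micrograph
labels = (micrograph > 0.5).astype(int).ravel()      # stand-in manual annotation (2 phases)

X = pixel_features(micrograph)
train = rng.choice(X.shape[0], size=2000, replace=False)   # sparsely labelled pixels
clf = RandomForestClassifier(n_estimators=100).fit(X[train], labels[train])
segmentation = clf.predict(X).reshape(micrograph.shape)
```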

  8. Close the Loop: Joint Blind Image Restoration and Recognition with Sparse Representation Prior

    Science.gov (United States)

    2011-11-06

    x − y‖²₂ + λ‖D⊤x‖₁ + γ‖k‖₂, (5) where D⊤ is some sparse transformation (such as Wavelet, Curvelet, among others [1]) or sparsity-inducing operator... on joint image restoration and recognition for face images under various blind degradation settings. In our JRR algorithm, the tasks of image

  9. Automatic detection and recognition of multiple macular lesions in retinal optical coherence tomography images with multi-instance multilabel learning.

    Science.gov (United States)

    Fang, Leyuan; Yang, Liumao; Li, Shutao; Rabbani, Hossein; Liu, Zhimin; Peng, Qinghua; Chen, Xiangdong

    2017-06-01

    Detection and recognition of macular lesions in optical coherence tomography (OCT) are very important for retinal diseases diagnosis and treatment. As one kind of retinal disease (e.g., diabetic retinopathy) may contain multiple lesions (e.g., edema, exudates, and microaneurysms) and eye patients may suffer from multiple retinal diseases, multiple lesions often coexist within one retinal image. Therefore, one single-lesion-based detector may not support the diagnosis of clinical eye diseases. To address this issue, we propose a multi-instance multilabel-based lesions recognition (MIML-LR) method for the simultaneous detection and recognition of multiple lesions. The proposed MIML-LR method consists of the following steps: (1) segment the regions of interest (ROIs) for different lesions, (2) compute descriptive instances (features) for each lesion region, (3) construct multilabel detectors, and (4) recognize each ROI with the detectors. The proposed MIML-LR method was tested on 823 clinically labeled OCT images with normal macular and macular with three common lesions: epiretinal membrane, edema, and drusen. For each input OCT image, our MIML-LR method can automatically identify the number of lesions and assign the class labels, achieving the average accuracy of 88.72% for the cases with multiple lesions, which better assists macular disease diagnosis and treatment.
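
    The multilabel recognition step (step 3 above) can be sketched with a one-vs-rest classifier over pooled ROI descriptors, so that several lesion labels may be assigned to one image. The descriptors, classifier choice and random data below are illustrative; the paper's multi-instance multilabel formulation is richer than this.

```python
# Multilabel lesion recognition via one-vs-rest classification (illustrative only).
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC

lesions = ["epiretinal membrane", "edema", "drusen"]
rng = np.random.default_rng(0)
X = rng.random((200, 64))                     # pooled descriptors of segmented ROIs
Y = rng.integers(0, 2, size=(200, 3))         # one column per lesion (labels may co-occur)

clf = OneVsRestClassifier(LinearSVC()).fit(X, Y)
pred = clf.predict(rng.random((1, 64)))[0]
print([name for name, present in zip(lesions, pred) if present])
```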

  10. Document image recognition and retrieval: where are we?

    Science.gov (United States)

    Garris, Michael D.

    1999-01-01

    This paper discusses survey data collected as a result of planning a project to evaluate document recognition and information retrieval technologies. In the process of establishing the project, a Request for Comment (RFC) was widely distributed throughout the document recognition and information retrieval research and development (R&D) communities, and based on the responses, the project was discontinued. The purpose of this paper is to present `real' data collected from the R&D communities in regards to a `real' project, so that we may all form our own conclusions about where we are, where we are heading, and how we are going to get there. Background on the project is provided and responses to the RFC are summarized.

  11. Early Recognition of Chronic Traumatic Encephalopathy Through FDDNP PET Imaging

    Science.gov (United States)

    2017-10-01

    Principal Investigator: Charles Bernick, MD, MPH. Contracting organization: Cleveland Clinic Foundation, 9500 Euclid Ave, Cleveland, Ohio 44195. Report date: October 2017; report type: Annual; dates covered: 30 Sept 2016 - 29 Sept 2017.

  12. Computed image analysis of neutron radiographs

    International Nuclear Information System (INIS)

    Dinca, M.; Anghel, E.; Preda, M.; Pavelescu, M.

    2008-01-01

    Similar to X-radiography, but using neutrons as the penetrating particle, there is in practice a nondestructive technique named neutron radiology. When the information is registered on a film with the help of a conversion foil (with a high cross section for neutrons) that emits secondary radiation (β, γ) and creates a latent image, the technique is named neutron radiography. A radiographic industrial film that contains the image of the internal structure of an object, obtained by neutron radiography, must subsequently be analyzed to obtain qualitative and quantitative information about the structural integrity of that object. It is possible to perform a computed analysis of a film using a facility with the following main components: an illuminator for the film, a CCD video camera and a computer (PC) with suitable software. The qualitative analysis is intended to reveal possible anomalies of the structure due to manufacturing processes or induced by working processes (for example, irradiation in the case of nuclear fuel). The quantitative determination is based on measurements of some image parameters: dimensions and optical densities. The illuminator was built specially for this application but can also be used for simple visual observation. The illuminated area is 9x40 cm. The frame of the system is an Abbe comparator of the Carl Zeiss Jena type, which has been adapted for this application. The video camera captures the image, which is stored and processed by the computer. A special program, SIMAG-NG, developed at INR Pitesti, can analyze the images of a film together with the program SMTV II of the special acquisition module SM 5010. The major application of the system was the quantitative analysis of a film containing the images of nuclear fuel pins beside a dimensional standard. The system was used to measure the length of the pellets of the TRIGA nuclear fuel. (authors)

  13. Computer vision for image-based transcriptomics.

    Science.gov (United States)

    Stoeger, Thomas; Battich, Nico; Herrmann, Markus D; Yakimovich, Yauhen; Pelkmans, Lucas

    2015-09-01

    Single-cell transcriptomics has recently emerged as one of the most promising tools for understanding the diversity of the transcriptome among single cells. Image-based transcriptomics is unique compared to other methods as it does not require conversion of RNA to cDNA prior to signal amplification and transcript quantification. Thus, its efficiency in transcript detection is unmatched by other methods. In addition, image-based transcriptomics allows the study of the spatial organization of the transcriptome in single cells at single-molecule, and, when combined with superresolution microscopy, nanometer resolution. However, in order to unlock the full power of image-based transcriptomics, robust computer vision of single molecules and cells is required. Here, we shortly discuss the setup of the experimental pipeline for image-based transcriptomics, and then describe in detail the algorithms that we developed to extract, at high-throughput, robust multivariate feature sets of transcript molecule abundance, localization and patterning in tens of thousands of single cells across the transcriptome. These computer vision algorithms and pipelines can be downloaded from: https://github.com/pelkmanslab/ImageBasedTranscriptomics. Copyright © 2015. Published by Elsevier Inc.

  14. Pancreatitis: computed tomography and magnetic resonance imaging

    Energy Technology Data Exchange (ETDEWEB)

    Robinson, P.J.A.; Sheridan, M.B. [Dept. of Clinical Radiology, St. James's University Hospital, Leeds (United Kingdom)

    2000-03-01

    The value of CT in management of severe acute pancreatitis is well established. Some, but not all, experimental studies suggest a detrimental effect of intravenous iodinated contrast agents in acute pancreatitis, but although initial clinical data tends to support this, the positive advantages of enhanced CT outweigh the possible risks. Magnetic resonance imaging has been shown to be as effective as CT in demonstrating the presence and extent of pancreatic necrosis and fluid collections, and probably superior in indicating the suitability of such collections for percutaneous drainage. Image-guided intervention remains a key approach in the management of severely ill patients, and the indications, techniques and results of radiological intervention are reviewed herein. Both CT and MRI can be used to diagnose advanced chronic pancreatitis, with the recent addition of MRCP as a viable alternative to diagnostic endoscopic retrograde cholangiopancreatography (ERCP). Both MRCP and CT/MR imaging of the pancreatic parenchyma still have limitations in the recognition of the earliest changes of chronic pancreatitis - for which ERCP and tests of pancreatic function remain more sensitive - but the clinical significance of these minor changes remains contentious. (orig.)

  15. Auto-associative segmentation for real-time object recognition in realistic outdoor images

    Science.gov (United States)

    Estevez, Leonardo W.; Kehtarnavaz, Nasser D.

    1998-04-01

    As digital signal processors (DSPs) become more advanced, many real-time recognition problems will be solved with completely integrated solutions. In this paper a methodology which is designed for today's DSP architectures and is capable of addressing applications in real-time color object recognition is presented. The methodology is integrated into a processing structure called raster scan video processing which requires a small amount of memory. The small amount of memory required enables the entire recognition system to be implemented on a single DSP. This auto-associative segmentation approach provides a means for desaturated color images to be segmented. The system is applied to the problem of stop sign recognition in realistically captured outdoor images.

  16. Proton computed tomography images with algebraic reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Bruzzi, M. [Physics and Astronomy Department, University of Florence, Florence (Italy); Civinini, C.; Scaringella, M. [INFN - Florence Division, Florence (Italy); Bonanno, D. [INFN - Catania Division, Catania (Italy); Brianzi, M. [INFN - Florence Division, Florence (Italy); Carpinelli, M. [INFN - Laboratori Nazionali del Sud, Catania (Italy); Chemistry and Pharmacy Department, University of Sassari, Sassari (Italy); Cirrone, G.A.P.; Cuttone, G. [INFN - Laboratori Nazionali del Sud, Catania (Italy); Presti, D. Lo [INFN - Catania Division, Catania (Italy); Physics and Astronomy Department, University of Catania, Catania (Italy); Maccioni, G. [INFN – Cagliari Division, Cagliari (Italy); Pallotta, S. [INFN - Florence Division, Florence (Italy); Department of Biomedical, Experimental and Clinical Sciences, University of Florence, Florence (Italy); SOD Fisica Medica, Azienda Ospedaliero-Universitaria Careggi, Firenze (Italy); Randazzo, N. [INFN - Catania Division, Catania (Italy); Romano, F. [INFN - Laboratori Nazionali del Sud, Catania (Italy); Sipala, V. [INFN - Laboratori Nazionali del Sud, Catania (Italy); Chemistry and Pharmacy Department, University of Sassari, Sassari (Italy); Talamonti, C. [INFN - Florence Division, Florence (Italy); Department of Biomedical, Experimental and Clinical Sciences, University of Florence, Florence (Italy); SOD Fisica Medica, Azienda Ospedaliero-Universitaria Careggi, Firenze (Italy); Vanzi, E. [Fisica Sanitaria, Azienda Ospedaliero-Universitaria Senese, Siena (Italy)

    2017-02-11

    A prototype proton Computed Tomography (pCT) system for hadron-therapy has been manufactured and tested in a 175 MeV proton beam with a non-homogeneous phantom designed to simulate high-contrast material. BI-SART reconstruction algorithms have been implemented with GPU parallelism, taking into account the most likely paths of protons in matter. Reconstructed tomography images with density resolutions r.m.s. down to ~1% and spatial resolutions <1 mm, achieved within processing times of ~15′ for a 512×512 pixel image, prove that this technique will be beneficial if used instead of X-CT in hadron-therapy.
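
    The record above uses BI-SART with GPU parallelism and most-likely-path modelling; as a rough illustration of the algebraic reconstruction family it belongs to, the sketch below runs plain Kaczmarz/ART sweeps on a toy linear system. The relaxation factor, iteration count and random test data are assumptions for the example, not the authors' implementation.

      import numpy as np

      def art_reconstruct(A, b, n_iter=10, relax=0.5):
          """Plain Kaczmarz/ART: sweep the rows of A and project the current
          estimate onto each measurement hyperplane."""
          x = np.zeros(A.shape[1])
          row_norms = (A * A).sum(axis=1)
          for _ in range(n_iter):
              for i in range(A.shape[0]):
                  if row_norms[i] == 0:
                      continue
                  residual = b[i] - A[i] @ x
                  x += relax * residual / row_norms[i] * A[i]
          return x

      # Toy example: recover a small "image" from random projections.
      rng = np.random.default_rng(0)
      x_true = rng.random(16)            # 4x4 image, flattened
      A = rng.random((40, 16))           # toy system matrix (ray weights)
      b = A @ x_true                     # noiseless projection data
      x_rec = art_reconstruct(A, b, n_iter=50)
      print("max reconstruction error:", np.abs(x_rec - x_true).max())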

  17. Colour vision and computer-generated images

    International Nuclear Information System (INIS)

    Ramek, Michael

    2010-01-01

    Colour vision deficiencies affect approximately 8% of the male and approximately 0.4% of the female population. In this work, it is demonstrated that computer generated images oftentimes pose unnecessary problems for colour deficient viewers. Three examples, the visualization of molecular structures, graphs of mathematical functions, and colour coded images from numerical data are used to identify problematic colour combinations: red/black, green/black, red/yellow, yellow/white, fuchsia/white, and aqua/white. Alternatives for these combinations are discussed.

  18. FUZZY BASED IMAGE DIMENSIONALITY REDUCTION USING SHAPE PRIMITIVES FOR EFFICIENT FACE RECOGNITION

    Directory of Open Access Journals (Sweden)

    P. Chandra Sekhar Reddy

    2013-11-01

    Today the face recognition capability of the human visual system plays a significant role in day-to-day life, owing to the numerous important applications of automatic face recognition. One problem with recent image classification and recognition approaches is that they extract features on the entire image and over its full grey-level range. The present paper overcomes this by deriving an approach that reduces the dimensionality of the image using shape primitives and reduces the grey-level range using fuzzy logic, while preserving the significant attributes of the texture. The present paper proposes an Image Dimensionality Reduction using Shape Primitives (IDRSP) model for efficient face recognition. Fuzzy logic is applied to the IDRSP facial model to reduce the grey-level range to 0 to 4. This makes the proposed fuzzy-based IDRSP (FIDRSP) model suitable for grey-level co-occurrence matrices. The proposed FIDRSP model with GLCM features is compared with an existing face recognition algorithm. The results indicate the efficacy of the proposed method.
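
    Since the FIDRSP pipeline ends with grey-level co-occurrence matrix (GLCM) features on the reduced 0-4 grey range, a minimal NumPy illustration of a GLCM and two Haralick-style features is sketched below; the horizontal offset and the toy 5-level image are assumptions for the example, not the paper's settings.

      import numpy as np

      def glcm(img, levels=5, dx=1, dy=0):
          """Count co-occurrences of grey levels at offset (dy, dx) and normalise."""
          mat = np.zeros((levels, levels), dtype=np.float64)
          h, w = img.shape
          for y in range(h - dy):
              for x in range(w - dx):
                  mat[img[y, x], img[y + dy, x + dx]] += 1
          return mat / mat.sum()

      # Toy image already quantised to the 0..4 range used by FIDRSP.
      rng = np.random.default_rng(1)
      img = rng.integers(0, 5, size=(32, 32))
      p = glcm(img)

      # Two classic Haralick-style features derived from the GLCM.
      i, j = np.indices(p.shape)
      contrast = ((i - j) ** 2 * p).sum()
      energy = (p ** 2).sum()
      print(contrast, energy)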

  19. Diagnostic Imaging of the Lower Respiratory Tract in Neonatal Foals: Radiography and Computed Tomography.

    Science.gov (United States)

    Lascola, Kara M; Joslyn, Stephen

    2015-12-01

    Diagnostic imaging plays an essential role in the diagnosis and monitoring of lower respiratory disease in neonatal foals. Radiography is most widely available to equine practitioners and is the primary modality that has been used for the characterization of respiratory disease in foals. Computed tomography imaging, although still limited in availability to the general practitioner, offers advantages over radiography and has been used diagnostically in neonatal foals with respiratory disease. Recognition of appropriate imaging protocols and patient-associated artifacts is critical for accurate image interpretation regardless of the modality used. Copyright © 2015 Elsevier Inc. All rights reserved.

  20. Single-pixel non-imaging object recognition by means of Fourier spectrum acquisition

    Science.gov (United States)

    Chen, Huichao; Shi, Jianhong; Liu, Xialin; Niu, Zhouzhou; Zeng, Guihua

    2018-04-01

    Single-pixel imaging has emerged over recent years as a novel imaging technique, which has significant application prospects. In this paper, we propose and experimentally demonstrate a scheme that can achieve single-pixel non-imaging object recognition by acquiring the Fourier spectrum. In an experiment, four-step phase-shifting sinusoidal illumination is used to irradiate the object image, the value of the light intensity is measured with a single-pixel detection unit, and the Fourier coefficients of the object image are obtained by a differential measurement. The Fourier coefficients are first cast into binary numbers to obtain the hash value. We propose a new perceptual hashing method, combined with the discrete Fourier transform, to calculate the hash value. The hash distance is obtained by calculating the difference between the hash values of the object image and of the comparison images. By setting an appropriate threshold, the object image can be quickly and accurately recognized. The proposed scheme realizes single-pixel non-imaging perceptual hashing object recognition by using fewer measurements. Our result might open a new path for realizing object recognition without imaging.
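
    As a rough illustration of the DFT-based perceptual-hashing idea (binarise low-frequency Fourier coefficients, then compare hashes by Hamming distance), here is a minimal NumPy sketch; the 8x8 low-frequency block, the median threshold and the random test images are assumptions for the example and do not reproduce the single-pixel measurement scheme.

      import numpy as np

      def dft_hash(img, block=8):
          """Binarise the low-frequency DFT magnitudes into a bit string."""
          spectrum = np.abs(np.fft.fft2(img))
          low = spectrum[:block, :block].ravel()     # low-frequency corner (incl. DC)
          return (low > np.median(low)).astype(np.uint8)

      def hamming(h1, h2):
          return int(np.count_nonzero(h1 != h2))

      rng = np.random.default_rng(2)
      obj = rng.random((64, 64))
      noisy = obj + 0.05 * rng.standard_normal((64, 64))   # same object, noisy copy
      other = rng.random((64, 64))                         # different object

      h_obj = dft_hash(obj)
      print(hamming(h_obj, dft_hash(noisy)))   # small distance: recognised
      print(hamming(h_obj, dft_hash(other)))   # large distance: rejected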

  1. Study on municipal road cracking and surface deformation based on image recognition

    Science.gov (United States)

    Yuan, Haitao; Wang, Shuai; Tan, Jizong

    2017-05-01

    In recent years, digital image recognition of cracks in concrete structures and binocular-vision detection of deformation in civil engineering structures have developed substantially. As a result, the understanding of cracking and surface-deformation recognition for road engineering structures has entered a new stage. For research on digital image recognition of concrete structure cracking and masonry structure surface deformation, the key is to make methodological breakthroughs and to improve traditional recognition techniques and modes; only in this way can the safety level of highways be continuously improved, to meet the new requirements of urbanization and modernization. This thesis focuses on and systematically analyzes the key technologies of digital image recognition of road engineering structure cracking and surface deformation and their engineering applications. In addition, we change the recognition pattern for concrete structure cracking and masonry structure surface deformation, and achieve a breakthrough and innovation in the means and methods of road structure safety testing.

  2. Face recognition in simulated prosthetic vision: face detection-based image processing strategies.

    Science.gov (United States)

    Wang, Jing; Wu, Xiaobei; Lu, Yanyu; Wu, Hao; Kan, Han; Chai, Xinyu

    2014-08-01

    Given the limited visual percepts elicited by current prosthetic devices, it is essential to optimize image content in order to assist implant wearers to achieve better performance of visual tasks. This study focuses on recognition of familiar faces using simulated prosthetic vision. Combined with region-of-interest (ROI) magnification, three face extraction strategies based on a face detection technique were used: the Viola-Jones face region, the statistical face region (SFR) and the matting face region. These strategies significantly enhanced recognition performance compared to directly lowering resolution (DLR) with Gaussian dots. The inclusion of certain external features, such as hairstyle, was beneficial for face recognition. Given the high recognition accuracy achieved and applicable processing speed, SFR-ROI was the preferred strategy. DLR processing resulted in significant face gender recognition differences (i.e. females were more easily recognized than males), but these differences were not apparent with other strategies. Face detection-based image processing strategies improved visual perception by highlighting useful information. Their use is advisable for face recognition when using low-resolution prosthetic vision. These results provide information for the continued design of image processing modules for use in visual prosthetics, thus maximizing the benefits for future prosthesis wearers.

  3. Modes of Visual Recognition and Perceptually Relevant Sketch-based Coding for Images

    Science.gov (United States)

    Jobson, Daniel J.

    1991-01-01

    A review of visual recognition studies is used to define two levels of information requirements. These two levels are related to two primary subdivisions of the spatial frequency domain of images and reflect two distinct different physical properties of arbitrary scenes. In particular, pathologies in recognition due to cerebral dysfunction point to a more complete split into two major types of processing: high spatial frequency edge based recognition vs. low spatial frequency lightness (and color) based recognition. The former is more central and general while the latter is more specific and is necessary for certain special tasks. The two modes of recognition can also be distinguished on the basis of physical scene properties: the highly localized edges associated with reflectance and sharp topographic transitions vs. smooth topographic undulation. The extreme case of heavily abstracted images is pursued to gain an understanding of the minimal information required to support both modes of recognition. Here the intention is to define the semantic core of transmission. This central core of processing can then be fleshed out with additional image information and coding and rendering techniques.

  4. Eliminating chromatic aberration of lens and recognition of thermal images with artificial intelligence applications

    Science.gov (United States)

    Fang, Yi-Chin; Wu, Bo-Wen; Lin, Wei-Tang; Jon, Jen-Liung

    2007-11-01

    Resolution and color are the two main directions for assessing an optical digital image, but improving the overall image quality of an optical system is difficult, because optical system design faces many limits such as size, materials and operating environment. It is therefore important to raise the capability of recognizing blurred images, whether degraded by aberrations and noise or by characteristics of human vision such as distant and small targets, using artificial intelligence techniques such as genetic algorithms and neural networks, while reducing the chromatic aberration of the optical system and without adding complex calculation to the image processing. This study aims to improve recognition and classification of low-quality images from the optical system and its environment in an integrated, economical and effective way.

  5. A pattern recognition system for locating small volvanoes in Magellan SAR images of Venus

    Science.gov (United States)

    Burl, M. C.; Fayyad, U. M.; Smyth, P.; Aubele, J. C.; Crumpler, L. S.

    1993-01-01

    The Magellan data set constitutes an example of the large volumes of data that today's instruments can collect, providing more detail of Venus than was previously available from Pioneer Venus, Venera 15/16, or ground-based radar observations put together. However, data analysis technology has not kept pace with data collection and storage technology. Due to the sheer size of the data, complete and comprehensive scientific analysis of such large volumes of image data is no longer feasible without the use of computational aids. Our progress towards developing a pattern recognition system for aiding in the detection and cataloging of small-scale natural features in large collections of images is reported. Combining classical image processing, machine learning, and a graphical user interface, the detection of the 'small-shield' volcanoes (less than 15 km in diameter) that constitute the most abundant visible geologic feature in the more than 30,000 synthetic aperture radar (SAR) images of the surface of Venus is initially targeted. Our eventual goal is to provide a general, trainable tool for locating small-scale features where scientists specify what to look for simply by providing examples and attributes of interest to measure. This contrasts with the traditional approach of developing problem-specific programs for detecting specific patterns. The approach and initial results in the specific context of locating small volcanoes are reported. It is estimated, based on extrapolating from previous studies and knowledge of the underlying geologic processes, that there should be on the order of 10^5 to 10^6 of these volcanoes visible in the Magellan data. Identifying and studying these volcanoes is fundamental to a proper understanding of the geologic evolution of Venus. However, locating and parameterizing them in a manual manner is prohibitively time-consuming. Hence, the development of techniques to partially automate this task was undertaken. The primary

  6. Prior image constrained image reconstruction in emerging computed tomography applications

    Science.gov (United States)

    Brunner, Stephen T.

    Advances have been made in computed tomography (CT), especially in the past five years, by incorporating prior images into the image reconstruction process. In this dissertation, we investigate prior image constrained image reconstruction in three emerging CT applications: dual-energy CT, multi-energy photon-counting CT, and cone-beam CT in image-guided radiation therapy. First, we investigate the application of Prior Image Constrained Compressed Sensing (PICCS) in dual-energy CT, which has been called "one of the hottest research areas in CT." Phantom and animal studies are conducted using a state-of-the-art 64-slice GE Discovery 750 HD CT scanner to investigate the extent to which PICCS can enable radiation dose reduction in material density and virtual monochromatic imaging. Second, we extend the application of PICCS from dual-energy CT to multi-energy photon-counting CT, which has been called "one of the 12 topics in CT to be critical in the next decade." Numerical simulations are conducted to generate multiple energy bin images for a photon-counting CT acquisition and to investigate the extent to which PICCS can enable radiation dose efficiency improvement. Third, we investigate the performance of a newly proposed prior image constrained scatter correction technique to correct scatter-induced shading artifacts in cone-beam CT, which, when used in image-guided radiation therapy procedures, can assist in patient localization, and potentially, dose verification and adaptive radiation therapy. Phantom studies are conducted using a Varian 2100 EX system with an on-board imager to investigate the extent to which the prior image constrained scatter correction technique can mitigate scatter-induced shading artifacts in cone-beam CT. Results show that these prior image constrained image reconstruction techniques can reduce radiation dose in dual-energy CT by 50% in phantom and animal studies in material density and virtual monochromatic imaging, can lead to radiation

  7. Data management in pattern recognition and image processing systems

    Science.gov (United States)

    Zobrist, A. L.; Bryant, N. A.

    1976-01-01

    Data management considerations are important to any system which handles large volumes of data or where the manipulation of data is technically sophisticated. A particular problem is the introduction of image-formatted files into the mainstream of data processing application. This report describes a comprehensive system for the manipulation of image, tabular, and graphical data sets which involve conversions between the various data types. A key characteristic is the use of image processing technology to accomplish data management tasks. Because of this, the term 'image-based information system' has been adopted.

  8. Joint Segmentation and Recognition of Categorized Objects from Noisy Web Image Collection.

    Science.gov (United States)

    Wang, Le; Hua, Gang; Xue, Jianru; Gao, Zhanning; Zheng, Nanning

    2014-07-14

    The segmentation of categorized objects addresses the problem of joint segmentation of a single category of object across a collection of images, where categorized objects refers to objects in the same category. Most existing methods of segmentation of categorized objects made the assumption that all images in the given image collection contain the target object. In other words, the given image collection is noise free. Therefore, they may not work well when there are some noisy images which are not in the same category, such as those image collections gathered by a text query from modern image search engines. To overcome this limitation, we propose a method for automatic segmentation and recognition of categorized objects from noisy Web image collections. This is achieved by cotraining an automatic object segmentation algorithm that operates directly on a collection of images, and an object category recognition algorithm that identifies which images contain the target object. The object segmentation algorithm is trained on a subset of images from the given image collection which are recognized to contain the target object with high confidence, while training the object category recognition model is guided by the intermediate segmentation results obtained from the object segmentation algorithm. This way, our co-training algorithm automatically identifies the set of true positives in the noisy Web image collection, and simultaneously extracts the target objects from all the identified images. Extensive experiments validated the efficacy of our proposed approach on four datasets: 1) the Weizmann horse dataset, 2) the MSRC object category dataset, 3) the iCoseg dataset, and 4) a new 30-categories dataset including 15,634 Web images with both hand-annotated category labels and ground truth segmentation labels. It is shown that our method compares favorably with the state-of-the-art, and has the ability to deal with noisy image collections.

  9. Using speech recognition to enhance the Tongue Drive System functionality in computer access.

    Science.gov (United States)

    Huo, Xueliang; Ghovanloo, Maysam

    2011-01-01

    Tongue Drive System (TDS) is a wireless tongue operated assistive technology (AT), which can enable people with severe physical disabilities to access computers and drive powered wheelchairs using their volitional tongue movements. TDS offers six discrete commands, simultaneously available to the users, for pointing and typing as a substitute for mouse and keyboard in computer access, respectively. To enhance the TDS performance in typing, we have added a microphone, an audio codec, and a wireless audio link to its readily available 3-axial magnetic sensor array, and combined it with a commercially available speech recognition software, the Dragon Naturally Speaking, which is regarded as one of the most efficient ways for text entry. Our preliminary evaluations indicate that the combined TDS and speech recognition technologies can provide end users with significantly higher performance than using each technology alone, particularly in completing tasks that require both pointing and text entry, such as web surfing.

  10. Pattern recognition algorithms for data mining scalability, knowledge discovery and soft granular computing

    CERN Document Server

    Pal, Sankar K

    2004-01-01

    Pattern Recognition Algorithms for Data Mining addresses different pattern recognition (PR) tasks in a unified framework with both theoretical and experimental results. Tasks covered include data condensation, feature selection, case generation, clustering/classification, and rule generation and evaluation. This volume presents various theories, methodologies, and algorithms, using both classical approaches and hybrid paradigms. The authors emphasize large datasets with overlapping, intractable, or nonlinear boundary classes, and datasets that demonstrate granular computing in soft frameworks.Organized into eight chapters, the book begins with an introduction to PR, data mining, and knowledge discovery concepts. The authors analyze the tasks of multi-scale data condensation and dimensionality reduction, then explore the problem of learning with support vector machine (SVM). They conclude by highlighting the significance of granular computing for different mining tasks in a soft paradigm.

  11. Fluorescence microscope by using computational ghost imaging

    Directory of Open Access Journals (Sweden)

    Mizutani Yasuhiro

    2015-01-01

    We propose a fluorescence microscope based on computational ghost imaging (CGI) for observing a living cell over a long duration of more than an hour. Light-induced photobleaching is a problem for long-term observation of a cell. To overcome this problem, we exploit the sensitivity advantage of CGI, which uses second-order correlation, for imaging with weak-intensity excitation light. In the CGI setup, a DMD projector is installed at the eyepiece port of a microscope and the fluorescent light is detected with a photomultiplier tube used as a bucket detector. As a result, we have shown the imaging advantage of CGI under weak light intensity and, in addition, we have demonstrated the detection of fluorescence images of biological samples over one day.
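
    Computational ghost imaging recovers the object from the second-order correlation between the known illumination patterns and the bucket (single-pixel) intensities; the NumPy sketch below illustrates that correlation on synthetic data and is not a model of the authors' DMD/photomultiplier setup.

      import numpy as np

      rng = np.random.default_rng(3)
      h, w, n_patterns = 32, 32, 4000

      # Ground-truth fluorescent object (a bright rectangle on a dark field).
      obj = np.zeros((h, w))
      obj[10:22, 12:20] = 1.0

      # Random binary illumination patterns, as projected by a DMD.
      patterns = rng.integers(0, 2, size=(n_patterns, h, w)).astype(float)

      # Bucket detector: one total-intensity value per pattern.
      bucket = (patterns * obj).sum(axis=(1, 2))

      # Second-order correlation <(B - <B>) * I(x, y)> recovers the image.
      ghost = np.tensordot(bucket - bucket.mean(), patterns, axes=(0, 0)) / n_patterns

      # Correlation is much higher inside the object region than outside.
      print(ghost[obj > 0].mean(), ghost[obj == 0].mean())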

  12. CT Image Sequence Processing For Wood Defect Recognition

    Science.gov (United States)

    Dongping Zhu; R.W. Conners; Philip A. Araman

    1991-01-01

    The research reported in this paper explores a non-destructive testing application of x-ray computed tomography (CT) in the forest products industry. This application involves a computer vision system that uses CT to locate and identify internal defects in hardwood logs. The knowledge of log defects is critical in deciding whether to veneer or to saw up a log, and how...

  13. Image based Monument Recognition using Graph based Visual Saliency

    DEFF Research Database (Denmark)

    Kalliatakis, Grigorios; Triantafyllidis, Georgios

    2013-01-01

    This article presents an image-based application aiming at simple image classification of well-known monuments in the area of Heraklion, Crete, Greece. This classification takes place by utilizing Graph Based Visual Saliency (GBVS) and employing Scale Invariant Feature Transform (SIFT) or Speeded...

  14. Image based measurement systems: object recognition and parameter estimation

    NARCIS (Netherlands)

    van der Heijden, Ferdinand

    1994-01-01

    What makes this book unique is that besides information on image processing of objects to yield knowledge, the author has devoted a lot of thought to the measurement factor of image processing. This is of direct practical use in numerous sectors from industrial quality and robotics to medicine and

  15. Principal component analysis of image gradient orientations for face recognition

    NARCIS (Netherlands)

    Tzimiropoulos, Georgios; Zafeiriou, Stefanos; Pantic, Maja

    We introduce the notion of Principal Component Analysis (PCA) of image gradient orientations. As image data is typically noisy, but noise is substantially different from Gaussian, traditional PCA of pixel intensities very often fails to estimate reliably the low-dimensional subspace of a given data

  17. Occlusion removal method of partially occluded 3D object using sub-image block matching in computational integral imaging.

    Science.gov (United States)

    Shin, Dong-Hak; Lee, Byung-Gook; Lee, Joon-Jae

    2008-10-13

    In this paper, we propose an occlusion removal method using sub-image block matching for improved recognition of partially occluded 3D objects in computational integral imaging (CII). When 3D plane images are reconstructed in CII, occlusion degrades the resolution of reconstructed images. To overcome this problem, we apply the sub-image transform to the elemental image array (EIA) and use block matching on these sub-images for depth estimation. Based on the estimated depth information, we remove the unknown occlusion. After completing the occlusion removal for all sub-images, we obtain the modified EIA without occlusion information through the inverse sub-image transform. Finally, the 3D plane images are reconstructed by using a computational integral imaging reconstruction method with the modified EIA. The proposed method can provide a substantial gain in terms of the visual quality of 3D reconstructed images. To show the usefulness of the proposed method we carry out some experiments and the results are presented.
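
    Block matching for depth estimation, as applied here to sub-images, amounts to searching for the shift that minimises a dissimilarity measure such as the sum of absolute differences (SAD); the function below is a generic illustration of that search on a synthetic image pair, not the authors' CII implementation.

      import numpy as np

      def best_disparity(left, right, y, x, block=4, max_d=10):
          """Return the horizontal shift of the block around (y, x) in `left`
          that best matches `right`, using the sum of absolute differences."""
          ref = left[y - block:y + block + 1, x - block:x + block + 1]
          best, best_d = np.inf, 0
          for d in range(max_d + 1):
              if x - block - d < 0:
                  break
              cand = right[y - block:y + block + 1, x - block - d:x + block + 1 - d]
              sad = np.abs(ref - cand).sum()
              if sad < best:
                  best, best_d = sad, d
          return best_d

      rng = np.random.default_rng(4)
      right = rng.random((64, 64))
      left = np.roll(right, 3, axis=1)             # synthetic 3-pixel disparity
      print(best_disparity(left, right, 32, 32))   # -> 3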

  18. Target recognition in passive terahertz image of human body

    Science.gov (United States)

    Zhao, Ran; Zhao, Yuan-meng; Deng, Chao; Zhang, Cun-lin; Li, Yue

    2014-11-01

    THz radiation can penetrate through many nonpolar dielectric materials and can be used for nondestructive/noninvasive sensing and imaging of targets under nonpolar, nonmetallic covers or containers. Thus using THz systems to "see through" concealing barriers (i.e. packaging, corrugated cardboard, clothing) has been proposed as a new security screening method. Objects that can be detected by THz include concealed weapons, explosives, and chemical agents under clothing. A passive THz imaging system can detect THz waves from the human body without transmitting any electromagnetic wave, and suspicious objects become visible because they block the THz waves. From this image we can find out whether or not someone is carrying dangerous objects. In this paper, THz image enhancement, segmentation and contour extraction algorithms were studied to achieve effective target image detection. First, the terahertz images are enhanced and their grayscales are stretched. Then we apply global threshold segmentation to extract the target, and finally the targets are marked on the image. Experimental results showed that the algorithm proposed in this paper can extract and mark targets effectively, so that people can identify suspicious objects under clothing quickly. The algorithm can significantly improve the usefulness of the terahertz security apparatus.
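
    A much-simplified OpenCV version of the described chain (grayscale stretching, global thresholding, contour extraction and marking) might look like the sketch below; it assumes OpenCV 4.x, an 8-bit single-channel input, Otsu's method as the global threshold, and a hypothetical file name.

      import cv2
      import numpy as np

      img = cv2.imread("thz_frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file

      # Contrast stretch to the full 8-bit range, then a global (Otsu) threshold.
      stretched = cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX)
      _, mask = cv2.threshold(stretched, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

      # Extract contours of candidate concealed objects and mark them.
      contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
      marked = cv2.cvtColor(stretched, cv2.COLOR_GRAY2BGR)
      for c in contours:
          if cv2.contourArea(c) > 50:              # ignore small noise blobs
              x, y, w, h = cv2.boundingRect(c)
              cv2.rectangle(marked, (x, y), (x + w, y + h), (0, 0, 255), 2)
      cv2.imwrite("thz_marked.png", marked)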

  19. Functional recognition imaging using artificial neural networks: applications to rapid cellular identification via broadband electromechanical response

    Energy Technology Data Exchange (ETDEWEB)

    Nikiforov, M P; Guo, S; Kalinin, S V; Jesse, S [Oak Ridge National Laboratory (ORNL), Oak Ridge, TN 37831 (United States); Reukov, V V; Thompson, G L; Vertegel, A A, E-mail: sergei2@ornl.go [Department of Bioengineering, Clemson University, Clemson, SC 29634 (United States)

    2009-10-07

    Functional recognition imaging in scanning probe microscopy (SPM) using artificial neural network identification is demonstrated. This approach utilizes statistical analysis of complex SPM responses at a single spatial location to identify the target behavior, which is reminiscent of associative thinking in the human brain, obviating the need for analytical models. We demonstrate, as an example of recognition imaging, rapid identification of cellular organisms using the difference in electromechanical activity over a broad frequency range. Single-pixel identification of model Micrococcus lysodeikticus and Pseudomonas fluorescens bacteria is achieved, demonstrating the viability of the method.

  20. Original and Mirror Face Images and Minimum Squared Error Classification for Visible Light Face Recognition

    Directory of Open Access Journals (Sweden)

    Rong Wang

    2015-01-01

    In real-world applications, the images of faces vary with illumination, facial expression, and pose. It seems that more training samples are able to reveal more of the possible appearances of a face. Though minimum squared error classification (MSEC) is a widely used method, its application to face recognition usually suffers from the problem of a limited number of training samples. In this paper, we improve MSEC by using mirror faces as virtual training samples. We obtained the mirror faces generated from the original training samples and put these two kinds of samples into a new set. The face recognition experiments show that our method obtains high classification accuracy.
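
    The core idea, augmenting each training face with its horizontal mirror and then fitting a minimum-squared-error (regularised least-squares) classifier, can be sketched in a few lines of NumPy; the toy data shapes and the ridge parameter are assumptions for the example.

      import numpy as np

      def train_mse_classifier(X, y, n_classes, lam=1e-2):
          """Least-squares fit from image vectors to one-hot class targets."""
          T = np.eye(n_classes)[y]                       # one-hot targets
          W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ T)
          return W

      rng = np.random.default_rng(5)
      faces = rng.random((20, 16, 16))                   # 20 toy "face" images
      labels = np.repeat(np.arange(4), 5)                # 4 subjects, 5 images each

      mirrored = faces[:, :, ::-1]                       # virtual mirror samples
      X = np.concatenate([faces, mirrored]).reshape(40, -1)
      y = np.concatenate([labels, labels])

      W = train_mse_classifier(X, y, n_classes=4)
      pred = np.argmax(faces.reshape(20, -1) @ W, axis=1)
      print((pred == labels).mean())                     # training accuracy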

  1. Automatic Target Recognition in Synthetic Aperture Sonar Images Based on Geometrical Feature Extraction

    Directory of Open Access Journals (Sweden)

    J. Del Rio Vera

    2009-01-01

    This paper presents a new supervised classification approach for automated target recognition (ATR) in SAS images. The recognition procedure starts with a novel segmentation stage based on the Hilbert transform. A number of geometrical features are then extracted and used to classify observed objects against a previously compiled database of target and non-target features. The proposed approach has been tested on a set of 1528 simulated images created by the NURC SIGMAS sonar model, achieving up to 95% classification accuracy.

  2. Recognition imaging of chromatin and chromatin-remodeling complexes in the atomic force microscope.

    Science.gov (United States)

    Lohr, Dennis; Wang, Hongda; Bash, Ralph; Lindsay, Stuart M

    2009-01-01

    Atomic force microscopy (AFM) can directly visualize single molecules in solution, which makes it an extremely powerful technique for carrying out studies of biological complexes and the processes in which they are involved. A recent development, called Recognition Imaging, allows the identification of a specific type of protein in solution AFM images, a capability that greatly enhances the power of the AFM approach for studies of complex biological materials. In this technique, an antibody against the protein of interest is attached to an AFM tip. Scanning a sample with this tip generates a typical topographic image simultaneously and in exact spatial registration with a "recognition image." The latter identifies the locations of antibody-antigen binding events and thus the locations of the protein of interest in the image field. The recognition image can be electronically superimposed on the topographic image, providing a very accurate map of specific protein locations in the topographic image. This technique has been mainly used in in vitro studies of biological complexes and reconstituted chromatin, but has great potential for studying chromatin and protein complexes isolated from nuclei.

  3. Image and Sensor Data Processing for Target Acquisition and Recognition.

    Science.gov (United States)

    1980-11-01

    AGARD Conference Proceedings No. 290, Advisory Group for Aerospace Research and Development (NATO / Organisation du Traité de l'Atlantique Nord). Only fragments of the scanned abstract are legible; the readable French passages describe selecting points on the target border where the local image contrast is strong, extracting characteristics from the first photograph, and drawing up a table of candidate translation vectors with, for each, the number of matching point pairs.

  4. Image processing tool for automatic feature recognition and quantification

    Science.gov (United States)

    Chen, Xing; Stoddard, Ryan J.

    2017-05-02

    A system for defining structures within an image is described. The system includes reading of an input file, preprocessing the input file while preserving metadata such as scale information and then detecting features of the input file. In one version the detection first uses an edge detector followed by identification of features using a Hough transform. The output of the process is identified elements within the image.
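
    For the version that chains an edge detector with a Hough transform, a generic OpenCV sketch (Canny edges followed by a probabilistic Hough line transform) is shown below; the thresholds and the input file name are illustrative assumptions rather than the described system's parameters.

      import cv2
      import numpy as np

      img = cv2.imread("micrograph.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file

      # Step 1: edge detection.
      edges = cv2.Canny(img, 50, 150)

      # Step 2: identify line-like features with a probabilistic Hough transform.
      lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=60,
                              minLineLength=30, maxLineGap=5)

      out = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
      if lines is not None:
          for x1, y1, x2, y2 in lines[:, 0]:
              cv2.line(out, (x1, y1), (x2, y2), (0, 255, 0), 1)
      cv2.imwrite("features.png", out)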

  5. Lateral and medial ventral occipitotemporal regions interact during the recognition of images revealed from noise

    Directory of Open Access Journals (Sweden)

    Barbara eNordhjem

    2016-01-01

    Several studies suggest different functional roles for the medial and the lateral ventral sections in object recognition. Texture and surface information is processed in medial regions, while shape information is processed in lateral sections. This begs the question whether and how these functionally specialized sections interact with each other and with early visual cortex to facilitate object recognition. In the current research, we set out to answer this question. In an fMRI study, thirteen subjects viewed and recognized images of objects and animals that were gradually revealed from noise while their brains were being scanned. We applied dynamic causal modeling (DCM) – a method to characterize network interactions – to determine the modulatory effect of object recognition on a network comprising the primary visual cortex (V1), the lingual gyrus (LG) in medial ventral cortex and the lateral occipital cortex (LO). We found that object recognition modulated the bilateral connectivity between LG and LO. Moreover, the feed-forward connectivity from V1 to LG and LO was modulated, while there was no evidence for feedback from these regions to V1 during object recognition. In particular, the interaction between medial and lateral areas supports a framework in which visual recognition of objects is achieved by networked regions that integrate information on image statistics, scene content and shape – rather than by a single categorically specialized region – within the ventral visual cortex.

  6. Multisensor integration and image recognition using Fuzzy Adaptive Resonance Theory

    Science.gov (United States)

    Singer, Steven M.

    1997-04-01

    The main objective of this work was to investigate the use of 'sensor based real time decision and control technology' applied to actively control the arrestment of aircraft (manned or unmanned). The proposed method is to develop an adaptively controlled system that would locate the aircraft's extended tailhook, predict its position and speed at the time of arrestment, adjust an arresting end effector to actively mate with the arresting hook and remove the aircraft's kinetic energy, thus minimizing the arresting distance and impact stresses. The focus of the work presented in this paper was to explore the use of the fuzzy adaptive resonance theory (fuzzy ART) neural network to form a multisensor integration (MSI) scheme which reduces image data to recognize the incoming aircraft and extended tailhook. Using inputs from several image sources, a single fused image was generated to give details about range and tailhook characteristics for an F18 naval aircraft. The idea is to partition an image into cells and evaluate each using fuzzy ART. Once the incoming aircraft is located in a cell, that subimage is again divided into smaller cells. This image is evaluated to locate various parts of the aircraft (i.e., wings, tail, tailhook, etc.). The cell that contains the tailhook provides resolved position information. Multiple images from separate sensors provide the opportunity to generate range details over time.

  7. Improved localization of cellular membrane receptors using combined fluorescence microscopy and simultaneous topography and recognition imaging

    Energy Technology Data Exchange (ETDEWEB)

    Duman, M; Pfleger, M; Chtcheglova, L A; Neundlinger, I; Bozna, B L; Ebner, A; Schuetz, G J; Hinterdorfer, P [Institute for Biophysics, University of Linz, Altenbergerstrasse 69, A-4040 Linz (Austria); Zhu, R; Mayer, B [Christian Doppler Laboratory for Nanoscopic Methods in Biophysics, Institute for Biophysics, University of Linz, Altenbergerstrasse 69, A-4040 Linz (Austria); Rankl, C; Moertelmaier, M; Kada, G; Kienberger, F [Agilent Technologies Austria GmbH, Aubrunnerweg 11, A-4040 Linz (Austria); Salio, M; Shepherd, D; Polzella, P; Cerundolo, V [Cancer Research UK Tumor Immunology Group, Weatherall Institute of Molecular Medicine, Nuffield Department of Medicine, University of Oxford, Oxford OX3 9DS (United Kingdom); Dieudonne, M, E-mail: ferry_kienberger@agilent.com [Agilent Technologies Belgium, Wingepark 51, Rotselaar, AN B-3110 (Belgium)

    2010-03-19

    The combination of fluorescence microscopy and atomic force microscopy has a great potential in single-molecule-detection applications, overcoming many of the limitations coming from each individual technique. Here we present a new platform of combined fluorescence and simultaneous topography and recognition imaging (TREC) for improved localization of cellular receptors. Green fluorescent protein (GFP)-labeled human sodium-glucose cotransporter (hSGLT1)-expressing Chinese Hamster Ovary (CHO) cells and endothelial cells (MyEnd) from mouse myocardium stained with phalloidin-rhodamine were used as cell systems to study AFM topography and fluorescence microscopy on the same surface area. Topographical AFM images revealed membrane features such as lamellipodia, cytoskeleton fibers, F-actin filaments and small globular structures with heights ranging from 20 to 30 nm. Combined fluorescence and TREC imaging was applied to detect density, distribution and localization of YFP-labeled CD1d molecules on α-galactosylceramide (αGalCer)-loaded THP1 cells. While the expression level, distribution and localization of CD1d molecules on THP1 cells were detected with fluorescence microscopy, the nanoscale distribution of binding sites was investigated with molecular recognition imaging by using a chemically modified AFM tip. Using TREC on the inverted light microscope, the recognition sites of cell receptors were detected in recognition images with domain sizes ranging from ~25 to ~160 nm, with the smaller domains corresponding to a single CD1d molecule.

  8. [Advantages and Application Prospects of Deep Learning in Image Recognition and Bone Age Assessment].

    Science.gov (United States)

    Hu, T H; Wan, L; Liu, T A; Wang, M W; Chen, T; Wang, Y H

    2017-12-01

    Deep learning and neural network models have been new research directions and hot issues in the fields of machine learning and artificial intelligence in recent years. Deep learning has made a breakthrough in the applications of image and speech recognition, and has also been extensively used in the fields of face recognition and information retrieval because of its special superiority. Bone X-ray images show variations in black-white-gray gradation, with image features of black-and-white contrast and gray-level differences. Based on these advantages of deep learning in image recognition, we combine it with research on bone age assessment to provide basic data for constructing a forensic automatic system of bone age assessment. This paper reviews the basic concept and network architectures of deep learning, describes its recent research progress on image recognition in different research fields at home and abroad, and explores its advantages and application prospects in bone age assessment. Copyright© by the Editorial Department of Journal of Forensic Medicine.

  9. Impaired visual processing preceding image recognition in Parkinson's disease patients with visual hallucinations.

    Science.gov (United States)

    Meppelink, Anne Marthe; de Jong, Bauke M; Renken, Remco; Leenders, Klaus L; Cornelissen, Frans W; van Laar, Teus

    2009-11-01

    Impaired visual processing may play a role in the pathophysiology of visual hallucinations in Parkinson's disease. In order to study involved neuronal circuitry, we assessed cerebral activation patterns both before and during recognition of gradually revealed images in Parkinson's disease patients with visual hallucinations (PDwithVHs), Parkinson's disease patients without visual hallucinations (PDnonVHs) and healthy controls. We hypothesized that, before image recognition, PDwithVHs would show reduced bottom-up visual activation in occipital-temporal areas and increased (pre)frontal activation, reflecting increased top-down demand. Overshoot of the latter has been proposed to play a role in generating visual hallucinations. Nine non-demented PDwithVHs, 14 PDnonVHs and 13 healthy controls were scanned on a 3 Tesla magnetic resonance imaging scanner. Static images of animals and objects gradually appearing out of random visual noise were used in an event-related design paradigm. Analyses were time-locked on the moment of image recognition, indicated by the subjects' button-press. Subjects were asked to press an additional button on a colour-changing fixation dot, to keep attention and motor action constant and to assess reaction times. Data pre-processing and statistical analysis were performed with statistical parametric mapping-5 software. Bilateral activation of the fusiform and lingual gyri was seen during image recognition in all groups. Before image recognition, PDwithVHs showed reduced activation of the lateral occipital cortex, compared with both PDnonVHs and healthy controls. In addition, reduced activation of extrastriate temporal visual cortices was seen just before image recognition in PDwithVHs. The association between increased vulnerability for visual hallucinations in Parkinson's disease and impaired visual object processing in occipital and temporal extrastriate visual cortices supported the hypothesis of impaired bottom-up visual processing in PDwithVHs.

  10. Image processing strategies based on saliency segmentation for object recognition under simulated prosthetic vision.

    Science.gov (United States)

    Li, Heng; Su, Xiaofan; Wang, Jing; Kan, Han; Han, Tingting; Zeng, Yajie; Chai, Xinyu

    2018-01-01

    Current retinal prostheses can only generate low-resolution visual percepts made up of a limited number of phosphenes elicited by an electrode array, with uncontrollable color and restricted grayscale. Under this visual perception, prosthetic recipients can only complete some simple visual tasks, but more complex tasks like face identification/object recognition are extremely difficult. Therefore, it is necessary to investigate and apply image processing strategies for optimizing the visual perception of the recipients. This study focuses on recognition of the object of interest employing simulated prosthetic vision. We used a saliency segmentation method based on a biologically plausible graph-based visual saliency model and a GrabCut-based self-adaptive-iterative optimization framework to automatically extract foreground objects. Based on this, two image processing strategies, Addition of Separate Pixelization and Background Pixel Shrink, were further utilized to enhance the extracted foreground objects. i) Psychophysical experiments verified that, under simulated prosthetic vision, both strategies had marked advantages over Direct Pixelization in terms of recognition accuracy and efficiency. ii) We also found that recognition performance under the two strategies was tied to the segmentation results and was affected positively by paired, interrelated objects in the scene. The use of the saliency segmentation method and image processing strategies can automatically extract and enhance foreground objects, and significantly improve object recognition performance for recipients implanted with a high-density implant. Copyright © 2017 Elsevier B.V. All rights reserved.
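
    The segmentation stage here couples a graph-based saliency model with a GrabCut-based refinement; as a much-simplified illustration, the OpenCV snippet below runs GrabCut from a rectangular initialisation that stands in for the saliency-derived region of interest (the saliency model itself is not reproduced, and the file names are assumptions).

      import cv2
      import numpy as np

      img = cv2.imread("scene.jpg")                      # hypothetical input image
      mask = np.zeros(img.shape[:2], np.uint8)
      bgd = np.zeros((1, 65), np.float64)
      fgd = np.zeros((1, 65), np.float64)

      # Rectangle standing in for the saliency-derived region of interest.
      h, w = img.shape[:2]
      rect = (w // 4, h // 4, w // 2, h // 2)

      cv2.grabCut(img, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
      fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype(np.uint8)

      # Keep only the extracted foreground object, as input to later enhancement.
      proto_object = img * fg[:, :, None]
      cv2.imwrite("proto_object.png", proto_object)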

  11. Bayesian Action–Perception Computational Model: Interaction of Production and Recognition of Cursive Letters

    Science.gov (United States)

    Gilet, Estelle; Diard, Julien; Bessière, Pierre

    2011-01-01

    In this paper, we study the collaboration of perception and action representations involved in cursive letter recognition and production. We propose a mathematical formulation for the whole perception–action loop, based on probabilistic modeling and Bayesian inference, which we call the Bayesian Action–Perception (BAP) model. Being a model of both perception and action processes, the purpose of this model is to study the interaction of these processes. More precisely, the model includes a feedback loop from motor production, which implements an internal simulation of movement. Motor knowledge can therefore be involved during perception tasks. In this paper, we formally define the BAP model and show how it solves the following six varied cognitive tasks using Bayesian inference: i) letter recognition (purely sensory), ii) writer recognition, iii) letter production (with different effectors), iv) copying of trajectories, v) copying of letters, and vi) letter recognition (with internal simulation of movements). We present computer simulations of each of these cognitive tasks, and discuss experimental predictions and theoretical developments. PMID:21674043

  12. Multiresolution, Geometric, and Learning Methods in Statistical Image Processing, Object Recognition, and Sensor Fusion

    National Research Council Canada - National Science Library

    Willsky, Alan

    2004-01-01

    .... Our research blends methods from several fields: statistics and probability, signal and image processing, mathematical physics, scientific computing, statistical learning theory, and differential...

  13. Image Processing Strategies Based on a Visual Saliency Model for Object Recognition Under Simulated Prosthetic Vision.

    Science.gov (United States)

    Wang, Jing; Li, Heng; Fu, Weizhen; Chen, Yao; Li, Liming; Lyu, Qing; Han, Tingting; Chai, Xinyu

    2016-01-01

    Retinal prostheses have the potential to restore partial vision. Object recognition in scenes of daily life is one of the essential tasks for implant wearers. Since wearers are still limited by the low-resolution visual percepts provided by retinal prostheses, it is important to investigate and apply image processing methods to convey more useful visual information to them. We proposed two image processing strategies based on Itti's visual saliency map, region of interest (ROI) extraction, and image segmentation. Itti's saliency model generated a saliency map from the original image, in which salient regions were grouped into an ROI by fuzzy c-means clustering. Then GrabCut generated a proto-object from the ROI-labeled image, which was recombined with the background and enhanced in two ways: 8-4 separated pixelization (8-4 SP) and background edge extraction (BEE). Results showed that both 8-4 SP and BEE had significantly higher recognition accuracy in comparison with direct pixelization (DP). The performance of each saliency-based image processing strategy depended on the quality of image segmentation. Under good and perfect segmentation conditions, BEE and 8-4 SP obtained noticeably higher recognition accuracy than DP, and under a bad segmentation condition, only BEE boosted the performance. The application of saliency-based image processing strategies was verified to be beneficial to object recognition in daily scenes under simulated prosthetic vision. They are hoped to help the development of the image processing module for future retinal prostheses, and thus provide more benefit for the patients. Copyright © 2015 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.

  14. Advanced proton imaging in computed tomography

    CERN Document Server

    Mattiazzo, S; Giubilato, P; Pantano, D; Pozzobon, N; Snoeys, W; Wyss, J

    2015-01-01

    In recent years the use of hadrons for cancer radiation treatment has grown in importance, and many facilities are currently operational or under construction worldwide. To fully exploit the therapeutic advantages offered by hadron therapy, precise body imaging for accurate beam delivery is decisive. Proton computed tomography (pCT) scanners, currently in their R&D phase, provide the ultimate 3D imaging for hadron treatment guidance. A key component of a pCT scanner is the detector used to track the protons, which has a great impact on the scanner performance and ultimately limits its maximum speed. In this article, a novel proton-tracking detector is presented that offers higher scanning speed, better spatial resolution and a lower material budget than present state-of-the-art detectors, leading to enhanced performance. This advancement in performance is achieved by employing the very latest developments in monolithic active pixel detectors (to build high granularity, low material budget, ...

  15. Feature extraction & image processing for computer vision

    CERN Document Server

    Nixon, Mark

    2012-01-01

    This book is an essential guide to the implementation of image processing and computer vision techniques, with tutorial introductions and sample code in Matlab. Algorithms are presented and fully explained to enable complete understanding of the methods and techniques demonstrated. As one reviewer noted, ""The main strength of the proposed book is the exemplar code of the algorithms."" Fully updated with the latest developments in feature extraction, including expanded tutorials and new techniques, this new edition contains extensive new material on Haar wavelets, Viola-Jones, bilateral filt

  16. Computing Challenges in Coded Mask Imaging

    Science.gov (United States)

    Skinner, Gerald

    2009-01-01

    This slide presentation reviews the complications and challenges in developing computer systems for Coded Mask Imaging telescopes. The coded mask technique is used when there is no other way to create the telescope (i.e., when there are wide fields of view, energies too high for focusing or too low for the Compton/tracker techniques, and very good angular resolution). The coded mask telescope is described, and the mask is reviewed. The coded masks for the INTErnational Gamma-Ray Astrophysics Laboratory (INTEGRAL) instruments are shown, and a chart showing the types of position sensitive detectors used for the coded mask telescopes is also reviewed. Slides describe the mechanism of recovering an image from the masked pattern. The correlation with the mask pattern is described. The matrix approach is reviewed, and other approaches to image reconstruction are described. Included in the presentation is a review of the Energetic X-ray Imaging Survey Telescope (EXIST) / High Energy Telescope (HET), with information about the mission, the operation of the telescope, comparison of the EXIST/HET with the SWIFT/BAT and details of the design of the EXIST/HET.

  17. Image quality in coronary computed tomography angiography

    DEFF Research Database (Denmark)

    Precht, Helle; Gerke, Oke; Thygesen, Jesper

    2018-01-01

    Background: Computed tomography (CT) technology is rapidly evolving, and software solutions are developed to optimize image quality and/or lower radiation dose. Purpose: To investigate the influence of adaptive statistical iterative reconstruction (ASIR) at different radiation doses on detailed image quality in coronary CT angiography (CCTA). Material and Methods: A total of 160 CCTA scans were reconstructed as follows: 55 scans with filtered back projection (FBP) (650 mA), 51 scans (455 mA) with 30% ASIR (ASIR30), and 54 scans (295 mA) with 60% ASIR (ASIR60). For each reconstruction, subjective image quality was assessed by five independent certified cardiologists using a visual grading analysis (VGA) with five predefined image quality criteria on a 5-point scale. Objective measures were contrast, noise, and contrast-to-noise ratio (CNR). Results: The CTDIvol was 10.3 mGy, 7.4 mGy, and 4.6 mGy

  18. Automatic recognition of lactating sow behaviors through depth image processing

    Science.gov (United States)

    Manual observation and classification of animal behaviors is laborious, time-consuming, and of limited ability to process large amounts of data. A computer vision-based system was developed that automatically recognizes sow behaviors (lying, sitting, standing, kneeling, feeding, drinking, and shiftin...

  19. Efficient leukocyte segmentation and recognition in peripheral blood image.

    Science.gov (United States)

    Shirazi, Syed H; Umar, Arif Iqbal; Naz, Saeeda; Razzak, Muhammad I

    2016-05-18

    Blood cell count, also known as the differential count of various types of blood cells, provides valuable information for assessing a variety of diseases such as AIDS, leukemia and blood cancer. Manual techniques are still used in disease diagnosis, which is a slow and tedious process. Machine-based automatic analysis of leukocytes, in contrast, is a powerful tool that could reduce human error, improve accuracy, and minimize the time required for blood cell analysis. However, leukocyte segmentation is challenging due to the complexity of blood cell images, and this task remains an unresolved issue in blood cell segmentation. The aim of this work is to develop an efficient leukocyte cell segmentation and classification system. This paper presents an efficient strategy to segment cell images. This has been achieved by using a Wiener filter along with the Curvelet transform for image enhancement and noise elimination in order to avoid false edges. We have also used a combination of an entropy filter, thresholding and mathematical morphology for image segmentation and boundary detection, whereas a back-propagation neural network is used for leukocyte classification into its subclasses. The generated segmentation results are fruitful in the sense that we have overcome the problem of overlapping cells. We obtained 100%, 96.15%, 92.30%, 92.30% and 96.15% accuracy for basophil, eosinophil, monocyte, lymphocyte and neutrophil respectively.
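
    A minimal sketch of a segmentation pipeline in this spirit is given below (Python with scipy and scikit-image). It is not the authors' implementation: the Curvelet step is omitted, and the filter sizes, the entropy-based thresholding and the function name segment_leukocytes are illustrative assumptions.

        # Hypothetical sketch of a leukocyte-style segmentation pipeline:
        # Wiener denoising, entropy filtering, Otsu thresholding and
        # morphological cleanup (the Curvelet step of the paper is omitted).
        import numpy as np
        from scipy.signal import wiener
        from skimage import io, color, morphology
        from skimage.filters import threshold_otsu
        from skimage.filters.rank import entropy
        from skimage.util import img_as_ubyte

        def segment_leukocytes(path):
            rgb = io.imread(path)
            gray = color.rgb2gray(rgb)                       # work on intensity only
            denoised = wiener(gray, (5, 5))                  # suppress acquisition noise
            ent = entropy(img_as_ubyte(np.clip(denoised, 0, 1)),
                          morphology.disk(5))                # texture-rich nuclei stand out
            mask = ent > threshold_otsu(ent)                 # global threshold on the entropy map
            mask = morphology.binary_opening(mask, morphology.disk(3))
            mask = morphology.remove_small_objects(mask, min_size=200)
            return mask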

  20. Intelligent Image Recognition System for Marine Fouling Using Softmax Transfer Learning and Deep Convolutional Neural Networks

    Directory of Open Access Journals (Sweden)

    C. S. Chin

    2017-01-01

    Full Text Available The control of biofouling on marine vessels is challenging and costly. Early detection before hull performance is significantly affected is desirable, especially if “grooming” is an option. Here, a system is described to detect marine fouling at an early stage of development. In this study, an image of fouling can be transferred wirelessly via a mobile network for analysis. The proposed system utilizes transfer learning and a deep convolutional neural network (CNN) to perform image recognition on the fouling image by classifying the detected fouling species and the density of fouling on the surface. Transfer learning using Google’s Inception V3 model with a Softmax final layer was carried out on a fouling database of 10 categories and 1825 images. Experimental results gave acceptable accuracies for fouling detection and recognition.
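
    The transfer-learning step described above can be sketched roughly as follows (Python/Keras), as a plausible stand-in rather than the authors' code; the directory name fouling_images/ and the training settings are assumptions, while the 10 classes match the abstract.

        # Hypothetical sketch of Softmax transfer learning on a pretrained
        # Inception V3 backbone, in the spirit of the described system.
        import tensorflow as tf

        base = tf.keras.applications.InceptionV3(include_top=False, weights="imagenet",
                                                 pooling="avg")
        base.trainable = False                                   # keep pretrained features frozen

        model = tf.keras.Sequential([
            tf.keras.layers.Rescaling(1. / 127.5, offset=-1,     # Inception expects [-1, 1]
                                      input_shape=(299, 299, 3)),
            base,
            tf.keras.layers.Dense(10, activation="softmax"),     # new Softmax head, 10 categories
        ])
        model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])

        train_ds = tf.keras.utils.image_dataset_from_directory(
            "fouling_images/", image_size=(299, 299), batch_size=32)  # assumed layout
        model.fit(train_ds, epochs=5)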

  1. Modeling the Process of Color Image Recognition Using ART2 Neural Network

    Directory of Open Access Journals (Sweden)

    Todor Petkov

    2015-09-01

    Full Text Available This paper thoroughly describes the use of an unsupervised adaptive resonance theory (ART2) neural network for color recognition of X-ray images and images taken by nuclear magnetic resonance. In order to train the network, the pixel values of the RGB colors were regarded as learning vectors with three values, one for red, one for green and one for blue. At the end, the trained network was tested on the pixel values of the pictures and determined how to visualize the converted picture. As a result, we obtained the same pictures with colors assigned according to the network. Here, a generalized net is used to prepare a model that describes the process of color image recognition.

  2. Occlusion removal method of partially occluded object using variance in computational integral imaging

    Science.gov (United States)

    Lee, Byung-Gook; Kang, Ho-Hyun; Kim, Eun-Soo

    2010-06-01

    Computational integral imaging is a promising technique for recognizing partially occluded 3D objects. Given elemental images (EIs) of a partially occluded 3D object, the plane image of the object of interest is reconstructed at the location where the 3D object was originally located using a computational integral imaging reconstruction (CIIR) algorithm. However, the occlusion degrades the reconstructed image, because its defocused and blurred image is superimposed on the plane of interest. To overcome this problem, in this paper we propose a novel occlusion removal method for partially occluded 3D objects in computational integral imaging. In the proposed method, we use the variance of the ray intensity distribution emitted from the EIs; a series of variance plane images computed from the EIs is used to estimate the area and distance of the occlusion, since the intensity variance of the focused plane image for the occlusion is lowest at the occlusion location. On the basis of the extracted information, the occlusion in the EIs is simply eliminated. The plane images are then reconstructed with the CIIR algorithm from the occlusion-removed EIs, from which we can obtain an improved high-resolution plane image. To show the feasibility of the proposed scheme, some experiments were carried out and their results are presented.
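
    The variance cue can be illustrated with a small numpy sketch, assuming an array rays[z, k, y, x] that holds the ray intensities backprojected from the K elemental images to Z candidate depths; the array layout and the threshold are hypothetical and not taken from the paper.

        # Minimal numpy sketch of the variance cue described above.
        import numpy as np

        def estimate_occlusion(rays, var_threshold=5.0):
            # Per-pixel variance across the K elemental-image rays at each depth:
            # rays converging on a real surface agree, so their variance is low.
            var = rays.var(axis=1)                          # shape (Z, H, W)
            z_occ = int(var.mean(axis=(1, 2)).argmin())     # depth where agreement peaks
            occ_mask = var[z_occ] < var_threshold           # pixels belonging to the occluder
            return z_occ, occ_mask

        # occ_mask can then be used to zero out the occluder's contribution in the
        # elemental images before running the CIIR reconstruction again.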

  3. The 3-D image recognition based on fuzzy neural network technology

    Science.gov (United States)

    Hirota, Kaoru; Yamauchi, Kenichi; Murakami, Jun; Tanaka, Kei

    1993-01-01

    A three-dimensional stereoscopic image recognition system based on fuzzy neural network technology was developed. The system consists of three parts: a preprocessing part, a feature extraction part, and a matching part. Two CCD color camera images are fed to the preprocessing part, where several operations including an RGB-HSV transformation are performed. A multi-layer perceptron is used for line detection in the feature extraction part. A fuzzy matching technique is then introduced in the matching part. The system is realized on a SUN SPARCstation and a special image input hardware system. An experimental result on bottle images is also presented.

  4. Object Attention Patches for Text Detection and Recognition in Scene Images using SIFT

    NARCIS (Netherlands)

    Sriman, Bowornrat; Schomaker, Lambertus; De Marsico, Maria; Figueiredo, Mário; Fred, Ana

    2015-01-01

    Natural urban scene images contain many problems for character recognition such as luminance noise, varying font styles or cluttered backgrounds. Detecting and recognizing text in a natural scene is a difficult problem. Several techniques have been proposed to overcome these problems. These are,

  5. Spatiotemporal Analysis of RGB-D-T Facial Images for Multimodal Pain Level Recognition

    DEFF Research Database (Denmark)

    Irani, Ramin; Nasrollahi, Kamal; Oliu Simon, Marc

    2015-01-01

    facial images for pain detection and pain intensity level recognition. For this purpose, we extract energies released by facial pixels using a spatiotemporal filter. Experiments on a group of 12 elderly people applying the multimodal approach show that the proposed method successfully detects pain...

  6. Predicting Performance of a Face Recognition System Based on Image Quality

    NARCIS (Netherlands)

    Dutta, A.

    2015-01-01

    In this dissertation, we focus on several aspects of models that aim to predict performance of a face recognition system. Performance prediction models are commonly based on the following two types of performance predictor features: a) image quality features; and b) features derived solely from

  7. Multimodality Imaging of Right-Sided (Tricuspid Valve) Papillary Fibroelastoma: Recognition of a Surgically Remediable Disease

    Directory of Open Access Journals (Sweden)

    Shantanu V. Srivatsa

    2013-09-01

    Full Text Available Presentation of an increasingly recognized right-sided primary valve tumor of clinical importance: the tricuspid valve papillary fibroelastoma (PF). Early recognition and surgical intervention are emphasized for valvular PF, which carries a significant risk of morbidity and mortality. Newer imaging techniques, including CT and MRI, assist in localizing and differentiating PF from alternative cardiac pathology.

  8. Machine recognition of navel orange worm damage in x-ray images of pistachio nuts

    Science.gov (United States)

    Keagy, Pamela M.; Parvin, Bahram; Schatzki, Thomas F.

    1995-01-01

    Insect infestation increases the probability of aflatoxin contamination in pistachio nuts. A non-destructive test is currently not available to determine the insect content of pistachio nuts. This paper uses film X-ray images of various types of pistachio nuts to assess the possibility of machine recognition of insect infested nuts. Histogram parameters of four derived images are used in discriminant functions to select insect infested nuts from specific processing streams.

  9. Machine recognition of navel orange worm damage in X-ray images of pistachio nuts

    Energy Technology Data Exchange (ETDEWEB)

    Keagy, P.M.; Schatzki, T.F. [USDA-ARS Western Regional Research Center, Albany, CA (United States); Parvin, B. [Lawrence Berkeley Lab., CA (United States)

    1994-11-01

    Insect infestation increases the probability of aflatoxin contamination in pistachio nuts. A non-destructive test is currently not available to determine the insect content of pistachio nuts. This paper presents the use of film X-ray images of various types of pistachio nuts to assess the possibility of machine recognition of insect infested nuts. Histogram parameters of four derived images are used in discriminant functions to select insect infested nuts from specific processing streams.

  10. The use of global image characteristics for neural network pattern recognitions

    Science.gov (United States)

    Kulyas, Maksim O.; Kulyas, Oleg L.; Loshkarev, Aleksey S.

    2017-04-01

    A recognition system is considered in which information is conveyed by images of symbols generated by a television camera. The coefficients of a two-dimensional Fourier transform, generated in a special way, are used as object descriptors. A one-layer neural network trained on reference images is used to solve the classification task. Fast learning of the neural network, with a single-neuron calculation of the coefficients, is applied.

  11. MR imaging tissue characterization by means of pattern recognition

    International Nuclear Information System (INIS)

    Ranade, S.S.; Lindon, J.C.; Livingston, D.J.

    1990-01-01

    The purpose of this paper is to explore the role of trace metal profiles as factors influencing tissue relaxation times, with the aim of better tissue discrimination and predictability of the neoplastic state. Proton spin-lattice relaxation times and the trace metal contents of iron, copper, zinc, and magnesium were estimated from surgically resected, histopathologically diagnosed neoplasms from 10 body sites. Computer-based analysis with dimension reduction, mapping techniques, and supervised learning methods was used.

  12. BIOCAT: a pattern recognition platform for customizable biological image classification and annotation.

    Science.gov (United States)

    Zhou, Jie; Lamichhane, Santosh; Sterne, Gabriella; Ye, Bing; Peng, Hanchuan

    2013-10-04

    Pattern recognition algorithms are useful in bioimage informatics applications such as quantifying cellular and subcellular objects, annotating gene expression, and classifying phenotypes. To provide effective and efficient image classification and annotation for the ever-increasing volume of microscopic images, it is desirable to have tools that can combine and compare various algorithms and build customizable solutions for different biological problems. However, current tools often offer only limited support for generating user-friendly and extensible tools for annotating higher-dimensional images that correspond to multiple complicated categories. We develop the BIOimage Classification and Annotation Tool (BIOCAT). It is able to apply pattern recognition algorithms to two- and three-dimensional biological image sets as well as regions of interest (ROIs) in individual images for automatic classification and annotation. We also propose a 3D anisotropic wavelet feature extractor for extracting textural features from 3D images with xy-z resolution disparity. The extractor is one of about 20 built-in algorithms of feature extractors, selectors and classifiers in BIOCAT. The algorithms are modularized so that they can be "chained" in a customizable way to form adaptive solutions for various problems, and the plugin-based extensibility gives the tool an open architecture for incorporating future algorithms. We have applied BIOCAT to the classification and annotation of images and ROIs of different properties, with applications in cell biology and neuroscience. BIOCAT provides a user-friendly, portable platform for pattern recognition based biological image classification of two- and three-dimensional images and ROIs. We show, via diverse case studies, that different algorithms and their combinations have different suitability for various problems. The customizability of BIOCAT is thus expected to be useful for providing effective and efficient solutions for a variety of biological

  13. Computational Intelligence for Medical Imaging Simulations.

    Science.gov (United States)

    Chang, Victor

    2017-11-25

    This paper describes how to simulate medical imaging by computational intelligence to explore areas that cannot be easily achieved by traditional ways, including genes and proteins simulations related to cancer development and immunity. This paper has presented simulations and virtual inspections of BIRC3, BIRC6, CCL4, KLKB1 and CYP2A6 with their outputs and explanations, as well as brain segment intensity due to dancing. Our proposed MapReduce framework with the fusion algorithm can simulate medical imaging. The concept is very similar to the digital surface theories to simulate how biological units can get together to form bigger units, until the formation of the entire unit of biological subject. The M-Fusion and M-Update function by the fusion algorithm can achieve a good performance evaluation which can process and visualize up to 40 GB of data within 600 s. We conclude that computational intelligence can provide effective and efficient healthcare research offered by simulations and visualization.

  14. Clustering of Farsi sub-word images for whole-book recognition

    Science.gov (United States)

    Soheili, Mohammad Reza; Kabir, Ehsanollah; Stricker, Didier

    2015-01-01

    Redundancy of word and sub-word occurrences in large documents can be effectively utilized in an OCR system to improve recognition results. Most OCR systems employ language modeling techniques as a post-processing step; however, these techniques do not use the important pictorial information that exists in the text image. In the case of large-scale recognition of degraded documents, this information is even more valuable. In our previous work, we proposed a sub-word image clustering method for applications dealing with large printed documents. In our clustering method, the ideal case is when all equivalent sub-word images lie in one cluster. To overcome the issues of low print quality, the clustering method uses an image matching algorithm for measuring the distance between two sub-word images. The measured distance, together with a set of simple shape features, was used to cluster all sub-word images. In this paper, we analyze the effects of adding more shape features on processing time, purity of clustering, and the final recognition rate. Previously published experiments have shown the efficiency of our method on a book. Here we present extended experimental results and evaluate our method on another book with a totally different typeface. We also show that the number of newly created clusters in a page can be used as a criterion for assessing the print quality and evaluating the preprocessing phases.

  15. MoCog1: A computer simulation of recognition-primed human decision making, considering emotions

    Science.gov (United States)

    Gevarter, William B.

    1992-01-01

    The successful results of the first stage of a research effort to develop a versatile computer model of motivated human cognitive behavior are reported. Most human decision making appears to be an experience-based, relatively straightforward, largely automatic response to situations, utilizing cues and opportunities perceived from the current environment. The development, considering emotions, of the architecture and computer program associated with such 'recognition-primed' decision-making is described. The resultant computer program (MoCog1) was successfully utilized as a vehicle to simulate earlier findings that relate how an individual's implicit theories orient the individual toward particular goals, with resultant cognitions, affects, and behavior in response to their environment.

  16. Computed tomography and magnetic resonance imaging of unusual causes of ankle pain

    International Nuclear Information System (INIS)

    Kaushik, S.

    2006-01-01

    Computed tomography and MRI are frequently utilized to evaluate ankle pain that remains unexplained by radiography. The most common causes of ankle pain are related to trauma, and the imaging appearances of these entities are well established in the radiologic and orthopedic literature. A smaller percentage comprises non-traumatic disorders. Our goal is to emphasize the value of CT and MRI in the recognition of these less common and unusual causes of ankle pain. Copyright (2006) Blackwell Science Pty Ltd

  17. Automatic recognition of fundamental tissues on histology images of the human cardiovascular system.

    Science.gov (United States)

    Mazo, Claudia; Trujillo, Maria; Alegre, Enrique; Salazar, Liliana

    2016-10-01

    Cardiovascular disease is the leading cause of death worldwide. Therefore, techniques for improving diagnosis and treatment in this field have become key areas for research. In particular, approaches to tissue image processing may support medical education and practice. In this paper, an approach to automatic recognition and classification of fundamental tissues using morphological information is presented. Taking a 40× or 10× histological image as input, three clusters are created with the k-means algorithm using a structural tensor and the red and green channels. Loose connective tissue, light regions and cell nuclei are recognised in 40× images. Then, the cell nuclei's features (shape and spatial projection) and the light regions are used to recognise epithelial cells and to classify epithelial tissue as flat, cubic or cylindrical. In a similar way, light regions, loose connective tissue and muscle tissue are recognised in 10× images. Finally, the tissue's function and composition are used to refine muscle tissue recognition. Experimental validation was carried out by histologists following expert criteria, along with manually annotated images used as ground truth. The results revealed that the proposed approach classified the fundamental tissues in a similar way to the conventional method employed by histologists. The proposed automatic recognition approach achieves, for epithelial tissues, a sensitivity of 0.79 for cubic, 0.85 for cylindrical and 0.91 for flat. Furthermore, the experts gave our method an average score of 4.85 out of 5 in the recognition of loose connective tissue and 4.82 out of 5 for muscle tissue recognition. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. A computational imaging target specific detectivity metric

    Science.gov (United States)

    Preece, Bradley L.; Nehmetallah, George

    2017-05-01

    Due to the large quantity of low-cost, high-speed computational processing available today, computational imaging (CI) systems are expected to have a major role in next-generation multifunctional cameras. The purpose of this work is to quantify the performance of these CI systems in a standardized manner. Due to the diversity of CI system designs that are available today or proposed for the near future, there are significant challenges in modeling and calculating a standardized detection signal-to-noise ratio (SNR) to measure the performance of these systems. In this paper, we develop a path forward for a standardized detectivity metric for CI systems. The detectivity metric is designed to evaluate the performance of a CI system searching for a specific known target or signal of interest, and is defined as the optimal linear matched filter SNR, similar to the Hotelling SNR, calculated in computational space with special considerations for standardization. The detectivity metric is therefore designed to be flexible, in order to handle various types of CI systems and specific targets, while keeping the complexity and assumptions about the systems to a minimum.
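
    As a rough illustration of the underlying quantity, a linear matched-filter (Hotelling-style) SNR can be computed as sqrt(s^T C^-1 s); the numpy sketch below uses made-up signal and noise values and is not the paper's standardized procedure.

        # Illustrative sketch of an optimal linear matched-filter (Hotelling) SNR.
        import numpy as np

        def hotelling_snr(signal, noise_samples):
            """signal: expected target response (flattened); noise_samples: (N, D) draws."""
            cov = np.cov(noise_samples, rowvar=False)       # background/noise covariance
            cov += 1e-6 * np.eye(cov.shape[0])              # regularize for invertibility
            w = np.linalg.solve(cov, signal)                # matched-filter weights C^-1 s
            return float(np.sqrt(signal @ w))               # SNR = sqrt(s^T C^-1 s)

        rng = np.random.default_rng(0)
        s = np.ones(16) * 0.3                               # hypothetical target signature
        noise = rng.normal(scale=1.0, size=(500, 16))
        print(hotelling_snr(s, noise))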

  19. Audio-visual gender recognition

    Science.gov (United States)

    Liu, Ming; Xu, Xun; Huang, Thomas S.

    2007-11-01

    Combining different modalities for a pattern recognition task is a very promising field. Humans routinely fuse information from different modalities to recognize objects, perform inference, and so on. Audio-visual gender recognition is one of the most common tasks in human social communication. Humans can identify gender by facial appearance, by speech and also by body gait. Indeed, human gender recognition is a multi-modal data acquisition and processing procedure. However, computational multi-modal gender recognition has not been extensively investigated in the literature. In this paper, speech and facial images are fused to perform multi-modal gender recognition and to explore the improvement gained by combining different modalities.

  20. Color Histograms Adapted to Query-Target Images for Object Recognition across Illumination Changes

    Directory of Open Access Journals (Sweden)

    Jack-Gérard Postaire

    2005-08-01

    Full Text Available Most object recognition schemes fail in the case of illumination changes between the color image acquisitions. One of the most widely used solutions to cope with this problem is to compare the images by means of the intersection between invariant color histograms. The main originality of our approach is to cope with the problem of illumination changes by analyzing each pair of query and target images constructed during the retrieval, instead of considering each image of the database independently of the others. In this paper, we propose a new approach which determines color histograms adapted to each pair of images. These adapted color histograms are obtained so that their intersection is higher when the two images are similar than when they are different. The adapted color histogram processing is based on an original model of illumination changes that uses rank measures of the pixels within the color component images.
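
    For reference, the histogram-intersection comparison that such adapted histograms feed into can be sketched as follows (numpy); the 8-bin joint RGB histogram is an illustrative choice, not the adaptation scheme of the paper.

        # Minimal sketch of color-histogram intersection between two images.
        import numpy as np

        def color_histogram(img, bins=8):
            # Joint RGB histogram, normalized so intersection lies in [0, 1].
            hist, _ = np.histogramdd(img.reshape(-1, 3), bins=(bins,) * 3,
                                     range=((0, 256),) * 3)
            return hist / hist.sum()

        def intersection(h1, h2):
            return float(np.minimum(h1, h2).sum())   # higher means more similar

        # similarity = intersection(color_histogram(query), color_histogram(target))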

  1. Computer tomographic imaging of rabbit bulbourethral glands

    International Nuclear Information System (INIS)

    Dimitrov, R.

    2010-01-01

    The aim of the study was to obtain data for the differentiation of normal and pathologically altered bulbourethral glands in rabbits, with regard to using this animal species as a model for studying diseases of this organ in humans. MATERIAL AND METHODS: Ten sexually mature, healthy male white New Zealand rabbits, 12 months old and weighing 2.8-3.2 kg, were investigated. The animals were anesthetized. Scans were done at 2 mm intervals and the image reconstruction was three-dimensional. RESULTS: The rabbit bulbourethral glands were observed as transversely oval, homogeneous, relatively hyperdense structures against the surrounding soft tissues. They are visualized in the transverse cut of the pelvic outlet in the plane through the cranial part of cg2, the body of the ischium, cranially to the tuber ischiadicum and dorsally to the caudal part of the symphysis pubis (sciatic arch). The glandular margins are adequately distinguished from the adjacent soft tissue structures. The density of the rabbit bulbourethral glands was similar to that of the soft tissues. CONCLUSION: The data obtained by computed tomographic imaging of the rabbit bulbourethral glands could be used as an anatomical reference in the diagnosis and interpretation of imaging findings of various pathological states of the gland in this species, as well as in the utilization of the rabbit as an animal model for studying diseases of this organ in humans, particularly diverticula, stenosis, lithiasis and valves

  2. Detection and recognition of road markings in panoramic images

    Science.gov (United States)

    Li, Cheng; Creusen, Ivo; Hazelhoff, Lykele; de With, Peter H. N.

    2015-03-01

    Detection of road lane markings is attractive for practical applications such as advanced driver assistance systems and road maintenance. This paper proposes a system to detect and recognize road lane markings in panoramic images. The system can be divided into four stages. First, an inverse perspective mapping is applied to the original panoramic image to generate a top view of the road, in which the potential road markings are segmented based on their intensity difference compared to the surrounding pixels. Second, a feature vector of each potential road marking segment is extracted by calculating the Euclidean distance between the center and the boundary at regular angular steps. Third, the shape of each segment is classified using a Support Vector Machine (SVM). Finally, by modeling the lane markings, previously falsely detected segments can be rejected based on their orientation and position relative to the lane markings. Our experiments show that the system is promising and is capable of recognizing 93%, 95% and 91% of striped line segments, blocks and arrows respectively, as well as 94% of the lane markings.
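
    The centre-to-boundary shape feature of the second stage can be sketched as below (Python with OpenCV and scikit-learn); the inverse perspective mapping is skipped, and the contour handling, the 36 angular bins and the SVM call are assumptions used only for illustration.

        # Hypothetical sketch of the radial shape signature fed to an SVM.
        import cv2
        import numpy as np
        from sklearn.svm import SVC

        def radial_signature(binary_segment, n_angles=36):
            cnts, _ = cv2.findContours(binary_segment.astype(np.uint8),
                                       cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
            cnt = max(cnts, key=cv2.contourArea).reshape(-1, 2).astype(float)
            cx, cy = cnt.mean(axis=0)                        # segment centre
            ang = np.arctan2(cnt[:, 1] - cy, cnt[:, 0] - cx)
            dist = np.hypot(cnt[:, 0] - cx, cnt[:, 1] - cy)
            feat = np.zeros(n_angles)
            for a, d in zip(ang, dist):                      # max distance per angular bin
                b = int((a + np.pi) / (2 * np.pi) * n_angles) % n_angles
                feat[b] = max(feat[b], d)
            return feat / (feat.max() + 1e-9)                # rough scale invariance

        # clf = SVC(kernel="rbf").fit(train_features, train_labels)  # shape classes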

  3. Recognition of skin melanoma through dermoscopic image analysis

    Science.gov (United States)

    Gómez, Catalina; Herrera, Diana Sofia

    2017-11-01

    Melanoma diagnosis can be challenging due to the similarity of early-stage symptoms to regular moles. Standardized visual parameters can be determined and characterized to raise suspicion of melanoma. Automating this diagnosis could have an impact in the medical field by providing a high-accuracy tool to support specialists. The objective of this study is to develop an algorithm trained to distinguish a highly probable melanoma from a non-dangerous mole by the segmentation and classification of dermoscopic mole images. We evaluate our approach on the dataset provided by the International Skin Imaging Collaboration used in the International Challenge Skin Lesion Analysis Towards Melanoma Detection. For the segmentation task, we apply a preprocessing algorithm and use Otsu's thresholding in the best performing color space; the average Jaccard Index on the test dataset is 70.05%. For the subsequent classification stage, we use joint histograms in the YCbCr color space, an RBF Gaussian SVM trained with five features concerning circularity and irregularity of the segmented lesion, and Gray Level Co-occurrence Matrix features for texture analysis. These features are combined to obtain an average classification accuracy of 63.3% on the test dataset.
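
    A hedged sketch of the Otsu-plus-GLCM-plus-SVM pipeline is given below (Python with scikit-image and scikit-learn); the colour channel used for thresholding and the feature list are simplified assumptions, and graycomatrix/graycoprops are the newer scikit-image names (older releases spell them greycomatrix/greycoprops).

        # Illustrative lesion features: Otsu segmentation, GLCM texture,
        # and a simple circularity measure, to be fed to an RBF SVM.
        import numpy as np
        from skimage import color
        from skimage.filters import threshold_otsu
        from skimage.feature import graycomatrix, graycoprops
        from skimage.measure import perimeter
        from sklearn.svm import SVC

        def lesion_features(rgb):
            ycbcr = color.rgb2ycbcr(rgb)
            cb = ycbcr[..., 1]                                 # channel choice is an assumption
            mask = cb > threshold_otsu(cb)                     # rough lesion segmentation
            gray = (color.rgb2gray(rgb) * 255).astype(np.uint8)
            glcm = graycomatrix(gray, distances=[1], angles=[0, np.pi / 2],
                                levels=256, symmetric=True, normed=True)
            texture = [float(graycoprops(glcm, p).mean())
                       for p in ("contrast", "homogeneity", "energy", "correlation")]
            area = mask.sum()
            circ = 4 * np.pi * area / (perimeter(mask) ** 2 + 1e-9)
            return np.array(texture + [circ])

        # clf = SVC(kernel="rbf", gamma="scale").fit(X_train, y_train)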

  4. The computational measurement of apparent motion: a recurrent pattern recognition strategy as an approach to solve the correspondence problem.

    Science.gov (United States)

    Schuling, F H; Altena, P; Mastebroek, H A

    1990-01-01

    In short, the model consists of a two-dimensional set of edge detecting units, modelled according to the zero-crossing detectors first introduced by Marr and Ullman (1981). These detectors are located peripherally in our synthetic vision system and are the input elements for an intelligent recurrent network. The purpose of that network is to recognize and categorize the previously detected contrast changes in a multi-resolution representation of the original image in such a manner that the original information is decomposed into a relatively small number N of well-defined edge primitives. The advantage of such a construction is that time-consuming pattern recognition no longer has to be done on the originally complex motion-blurred images of moving objects, but on a limited number of categorized forms. Based on a number M of elementary feature attributes for each individual edge primitive, the model is then able to decompose each edge pattern into certain features. In this way an M-dimensional vector can be constructed for each edge. For each sequence of two successive frames a tensor can be calculated containing the distances (measured in M-dimensional feature space) between all features in both images. This procedure yields a set of K-1 tensors for a sequence of K images. After cross-correlation of all N x M feature attributes from image (i) with those from image (i + 1), where i = 1,...,K-1, probability distributions can be computed. The final step is to search for maxima in these probability functions and then to construct from these extremes an optimal motion field. A number of simulation examples will be presented.

  5. Computer approach to recognition of Fuhrman grade of cells in clear-cell renal cell carcinoma.

    Science.gov (United States)

    Kruk, Michal; Osowski, Stanislaw; Markiewicz, Tomasz; Slodkowska, Janina; Koktysz, Robert; Kozlowski, Wojciech; Swiderski, Bartosz

    2014-06-01

    To present a computerized system for recognition of the Fuhrman grade of cells in clear-cell renal cell carcinoma on the basis of microscopic images of the neoplasm cells with hematoxylin and eosin staining. The applied methods use a combined gradient and mathematical morphology approach to extract the nuclei, and classifiers in the form of support vector machines to estimate their Fuhrman grade. The starting point is a microscopic kidney image, which is subjected to advanced preprocessing methods, leading finally to the estimation of the Fuhrman grade of the cells and of the whole analyzed image. The results of the numerical experiments have shown that the proposed nuclei descriptors, based on different principles of generation, are well correlated with the Fuhrman grade. These descriptors have been used as the diagnostic features forming the inputs to the classifier, which performs the final recognition of the cells. The average discrepancy rate between the score of our system and the human expert results, estimated on the basis of over 3,000 nuclei, is below 10%. The obtained results have shown that the system is able to recognize the 4 Fuhrman grades of the cells with high statistical accuracy and agreement with different expert scores. This result gives a good perspective for applying the system to support and accelerate research on kidney cancer.

  6. Face recognition across makeup and plastic surgery from real-world images

    Science.gov (United States)

    Moeini, Ali; Faez, Karim; Moeini, Hossein

    2015-09-01

    A feature extraction approach is proposed to handle the problem of facial appearance changes, including facial makeup and plastic surgery, in face recognition. To make a face recognition method robust to facial appearance changes, features are also extracted from the facial depth, on which facial makeup and plastic surgery have no effect, and these depth features are added to the facial texture features. Accordingly, a three-dimensional (3-D) face is reconstructed from only a single two-dimensional (2-D) frontal image in real-world scenarios, and the facial depth is extracted from the reconstructed model. Afterward, the dual-tree complex wavelet transform (DT-CWT) is applied to both the texture and the reconstructed depth images to extract the feature vectors. Finally, the final feature vectors are generated by combining the 2-D and 3-D feature vectors and are classified by a support vector machine. Promising results have been achieved for makeup-invariant face recognition on two available image databases (YouTube makeup and virtual makeup) and for plastic surgery-invariant face recognition on a plastic surgery face database, in comparison with several state-of-the-art feature extraction methods. Several real-world scenarios are also planned to evaluate the performance of the proposed method on a combination of these three databases with 1102 subjects.

  7. Memristive Computational Architecture of an Echo State Network for Real-Time Speech Emotion Recognition

    Science.gov (United States)

    2015-05-28

    recognition, the emotional status of a human such as anger, fear, happiness, etc. is determined based on the speech signals. Human-computer interaction... Actors (five male and five female) recorded 800 utterances. Ten different everyday German sentences were recorded in seven different emotional... [The remainder of this excerpt is a truncated piecewise filter-bank equation (5) defining the response Hi of the i-th filter in terms of the band boundaries bi, ci and di.]

  8. Image Classification Using Biomimetic Pattern Recognition with Convolutional Neural Networks Features.

    Science.gov (United States)

    Zhou, Liangji; Li, Qingwu; Huo, Guanying; Zhou, Yan

    2017-01-01

    As a typical deep-learning model, Convolutional Neural Networks (CNNs) can be exploited to automatically extract features from images using the hierarchical structure inspired by mammalian visual system. For image classification tasks, traditional CNN models employ the softmax function for classification. However, owing to the limited capacity of the softmax function, there are some shortcomings of traditional CNN models in image classification. To deal with this problem, a new method combining Biomimetic Pattern Recognition (BPR) with CNNs is proposed for image classification. BPR performs class recognition by a union of geometrical cover sets in a high-dimensional feature space and therefore can overcome some disadvantages of traditional pattern recognition. The proposed method is evaluated on three famous image classification benchmarks, that is, MNIST, AR, and CIFAR-10. The classification accuracies of the proposed method for the three datasets are 99.01%, 98.40%, and 87.11%, respectively, which are much higher in comparison with the other four methods in most cases.

  9. Score Fusion and Decision Fusion for the Performance Improvement of Face Recognition

    Science.gov (United States)

    2013-07-01

    and verify the current findings by using other biometric modalities (like fingerprint, iris, more spectral images) and assess computational... stereo face imaging. I. INTRODUCTION: Face recognition has relatively low accuracy compared to fingerprint recognition and iris recognition. To... biometric scores used in score fusion, the higher the fusion recognition performance achieved. Zheng et al. [6] recently gave a brief survey on the

  10. Probability-Based Recognition Framework for Underwater Landmarks Using Sonar Images †.

    Science.gov (United States)

    Lee, Yeongjun; Choi, Jinwoo; Ko, Nak Yong; Choi, Hyun-Taek

    2017-08-24

    This paper proposes a probability-based framework for recognizing underwater landmarks using sonar images. Current recognition methods use a single image, which does not provide reliable results because of weaknesses of the sonar image such as an unstable acoustic source, heavy speckle noise, low resolution, a single channel, and so on. However, if the status of an object (i.e., its existence and identity, or name) is continuously evaluated by a stochastic method over consecutive sonar images, the result of the recognition method is available for calculating the uncertainty, and it is more suitable for various applications. Our proposed framework consists of three steps: (1) candidate selection, (2) continuity evaluation, and (3) Bayesian feature estimation. Two probabilistic methods, particle filtering and Bayesian feature estimation, are used to repeatedly estimate the continuity and features of objects in consecutive images. Thus, the status of the object is repeatedly predicted and updated by a stochastic method. Furthermore, we develop an artificial landmark with increased detectability by an imaging sonar, exploiting the characteristics of acoustic waves such as instability and reflection depending on the roughness of the reflector surface. The proposed method is verified by basin experiments, and the results are presented.

  11. Automatic solar panel recognition and defect detection using infrared imaging

    Science.gov (United States)

    Gao, Xiang; Munson, Eric; Abousleman, Glen P.; Si, Jennie

    2015-05-01

    Failure-free operation of solar panels is of fundamental importance for modern commercial solar power plants. To achieve higher power generation efficiency and longer panel life, a simple and reliable panel evaluation method is required. By using thermal infrared imaging, anomalies can be detected without having to incorporate expensive electrical detection circuitry. In this paper, we propose a solar panel defect detection system which automates the inspection process and mitigates the need for manual panel inspection in a large solar farm. Infrared video sequences of each array of solar panels are first collected by an infrared camera mounted to a moving cart, which is driven from array to array in a solar farm. The image processing algorithm segments the solar panels from the background in real time, with only the height of the array (specified as the number of rows of panels in the array) being given as prior information to aid in the segmentation process. In order to "count" the number of panels within any given array, frame-to-frame panel association is established using optical flow. Local anomalies in a single panel, such as hotspots and cracks, are immediately detected and labeled as soon as the panel is recognized in the field of view. After the data from an entire array is collected, hot panels are detected using DBSCAN clustering. On real-world test data containing over 12,000 solar panels, over 98% of all panels are recognized and correctly counted, with 92% of all types of defects being identified by the system.
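
    The final clustering step can be illustrated with a few lines of scikit-learn; the temperature values below are invented placeholders, and treating DBSCAN noise points as defect candidates is one plausible reading of the approach.

        # Small sketch: panels whose mean temperature stands apart are flagged.
        import numpy as np
        from sklearn.cluster import DBSCAN

        panel_temps = np.array([41.2, 40.8, 41.5, 40.9, 55.3, 41.1, 54.8, 41.0]).reshape(-1, 1)
        labels = DBSCAN(eps=1.5, min_samples=3).fit_predict(panel_temps)
        hot_panels = np.where(labels == -1)[0]    # outliers (label -1) are candidate defects
        print("suspect panel indices:", hot_panels)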

  12. Elucidating Mechanisms of Molecular Recognition Between Human Argonaute and miRNA Using Computational Approaches

    KAUST Repository

    Jiang, Hanlun

    2016-12-06

    MicroRNA (miRNA) and Argonaute (AGO) protein together form the RNA-induced silencing complex (RISC) that plays an essential role in the regulation of gene expression. Elucidating the underlying mechanism of AGO-miRNA recognition is thus of great importance not only for the in-depth understanding of miRNA function but also for inspiring new drugs targeting miRNAs. In this chapter we introduce a combined computational approach of molecular dynamics (MD) simulations, Markov state models (MSMs), and protein-RNA docking to investigate AGO-miRNA recognition. Constructed from MD simulations, MSMs can elucidate the conformational dynamics of AGO at biologically relevant timescales. Protein-RNA docking can then efficiently identify the AGO conformations that are geometrically accessible to miRNA. Using our recent work on human AGO2 as an example, we explain the rationale and the workflow of our method in detail. This combined approach holds great promise to complement experiments in unraveling the mechanisms of molecular recognition between large, flexible, and complex biomolecules.

  13. A computer model of auditory efferent suppression: implications for the recognition of speech in noise.

    Science.gov (United States)

    Brown, Guy J; Ferry, Robert T; Meddis, Ray

    2010-02-01

    The neural mechanisms underlying the ability of human listeners to recognize speech in the presence of background noise are still imperfectly understood. However, there is mounting evidence that the medial olivocochlear system plays an important role, via efferents that exert a suppressive effect on the response of the basilar membrane. The current paper presents a computer modeling study that investigates the possible role of this activity on speech intelligibility in noise. A model of auditory efferent processing [Ferry, R. T., and Meddis, R. (2007). J. Acoust. Soc. Am. 122, 3519-3526] is used to provide acoustic features for a statistical automatic speech recognition system, thus allowing the effects of efferent activity on speech intelligibility to be quantified. Performance of the "basic" model (without efferent activity) on a connected digit recognition task is good when the speech is uncorrupted by noise but falls when noise is present. However, recognition performance is much improved when efferent activity is applied. Furthermore, optimal performance is obtained when the amount of efferent activity is proportional to the noise level. The results obtained are consistent with the suggestion that efferent suppression causes a "release from adaptation" in the auditory-nerve response to noisy speech, which enhances its intelligibility.

  14. Human activity recognition and prediction

    CERN Document Server

    2016-01-01

    This book provides a unique view of human activity recognition, especially fine-grained human activity structure learning, human-interaction recognition, RGB-D data based action recognition, temporal decomposition, and causality learning in unconstrained human activity videos. The techniques discussed give readers tools that provide a significant improvement over existing methodologies of video content understanding by taking advantage of activity recognition. It links multiple popular research fields in computer vision, machine learning, human-centered computing, human-computer interaction, image classification, and pattern recognition. In addition, the book includes several key chapters covering multiple emerging topics in the field. Contributed by top experts and practitioners, the chapters present key topics from different angles and blend both methodology and application, composing a solid overview of the human activity recognition techniques. .

  15. The study on the method of image recognition and processing for digital nuclear signals

    International Nuclear Information System (INIS)

    Wang Dongyang; Zhang Ruanyu; Wang Peng; Yan Yangyang; Hao Dejian

    2012-01-01

    Since the traditional DSP approach has many limitations, a new method of digital nuclear signal processing based on digital image recognition is presented in this paper. This method converts the time-series digital nuclear signal into a pulse image with adjustable pixels. A new principle and method are adopted to improve the SNR of the digital nuclear signal using the theory and methods of digital image processing. A method called ISC is presented, by which it is convenient to extract the template parameters. (authors)

  16. The recognition of extended targets - SAR images for level and hilly terrain

    Science.gov (United States)

    Stiles, J. A.; Frost, V. S.; Holtzman, J. C.; Shanmugam, K. S.

    1982-01-01

    Radar image simulation techniques are used to determine the character of SAR images of area-extensive targets, for the cases of flat underlying terrain and of moderate relief, with a view to assessing the severity of elevation-change effects on the detection and recognition of boundaries and shapes. The experiment, which demonstrated these effects on shapes and boundaries, was performed in order to establish Seasat-A SAR performance. Attention is given to the geometry/propagation effects in range-perspective imaging that must be known for information extraction.

  17. Advances in image compression and automatic target recognition; Proceedings of the Meeting, Orlando, FL, Mar. 30, 31, 1989

    Science.gov (United States)

    Tescher, Andrew G. (Editor)

    1989-01-01

    Various papers on image compression and automatic target recognition are presented. Individual topics addressed include: target cluster detection in cluttered SAR imagery, model-based target recognition using laser radar imagery, Smart Sensor front-end processor for feature extraction of images, object attitude estimation and tracking from a single video sensor, symmetry detection in human vision, analysis of high resolution aerial images for object detection, obscured object recognition for an ATR application, neural networks for adaptive shape tracking, statistical mechanics and pattern recognition, detection of cylinders in aerial range images, moving object tracking using local windows, new transform method for image data compression, quad-tree product vector quantization of images, predictive trellis encoding of imagery, reduced generalized chain code for contour description, compact architecture for a real-time vision system, use of human visibility functions in segmentation coding, color texture analysis and synthesis using Gibbs random fields.

  18. Renal imaging diagnosis by computed tomography

    International Nuclear Information System (INIS)

    Nishitani, Hiromu

    1984-01-01

    The sizes of the kidneys of 96 persons without known renal diseases were measured using computed tomography. The average renal length consisted of 10 transverse sections, each 10 mm thick, with a standard deviation of 1 such section. The mean renal width was 61 ± 6.8 mm on the left, and 64 ± 6.4 mm on the right. The mean renal thickness was 51 ± 6.1 mm on the left, and 49 ± 6.9 mm on the right. The renal parenchyma averaged 14 ± 2.2 mm in thickness, regardless of side or sex. Measurement errors were estimated to be approximately 10 percent. There were no significant differences in renal length according to CT and angiography. Renal measurements determined by CT are useful in predicting vital kidney sizes. The CT findings among 114 patients with various renal diseases were compared with results of their excretory urographic and/or angiographic studies. In nearly all instances, CT was superior to excretory urography in detecting renal diseases. It was unnecessary to confirm renal abnormalities detected by CT using excretory urography. CT compared favorably with angiography in the definitive diagnostic imaging and staging of renal cell carcinomas. CT is destined to play an important role in the diagnostic imaging of renal diseases. (author)

  19. Advanced proton imaging in computed tomography.

    Science.gov (United States)

    Mattiazzo, S; Bisello, D; Giubilato, P; Pantano, D; Pozzobon, N; Snoeys, W; Wyss, J

    2015-09-01

    In recent years the use of hadrons for cancer radiation treatment has grown in importance, and many facilities are currently operational or under construction worldwide. To fully exploit the therapeutic advantages offered by hadron therapy, precise body imaging for accurate beam delivery is decisive. Proton computed tomography (pCT) scanners, currently in their R&D phase, provide the ultimate 3D imaging for hadron treatment guidance. A key component of a pCT scanner is the detector used to track the protons, which has great impact on the scanner performance and ultimately limits its maximum speed. In this article, a novel proton-tracking detector is presented that would have higher scanning speed, better spatial resolution and lower material budget with respect to present state-of-the-art detectors, leading to enhanced performance. This advancement in performance is achieved by employing the very latest developments in monolithic active pixel detectors (to build high granularity, low material budget, large area silicon detectors) and a completely new proprietary architecture (to effectively compress the data). © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  20. Mobile Robot Aided Silhouette Imaging and Robust Body Pose Recognition for Elderly-Fall Detection

    Directory of Open Access Journals (Sweden)

    Tong Liu

    2014-03-01

    Full Text Available This article introduces mobile infrared silhouette imaging and sparse representation-based pose recognition for building an elderly-fall detection system. The proposed imaging paradigm exploits the novel use of the pyroelectric infrared (PIR) sensor for body silhouette imaging. A mobile robot carrying a vertical column of multi-PIR detectors is organized for the silhouette acquisition. We then express the fall detection problem as silhouette image-based pose recognition. For the pose recognition, we use a robust sparse representation-based method for fall detection. The normal and fall poses are sparsely represented in the basis space spanned by the combination of a pose training template and an error template. The ℓ1-norm minimizations with linear programming (LP) and orthogonal matching pursuit (OMP) are used for finding the sparsest solution, and the entity with the largest amplitude encodes the class of the testing sample. The application of the proposed sensing paradigm to fall detection is addressed in the context of three scenarios: ideal non-obstruction, simulated random pixel obstruction and simulated random block obstruction. Experimental studies are conducted to validate the effectiveness of the proposed method for nursing and homeland healthcare.
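
    A minimal sparse-representation classification sketch using OMP is shown below (Python with scikit-learn); the template matrix layout (one column per training pose) and the class-energy decision rule are assumptions standing in for the paper's LP/OMP formulation.

        # Hypothetical sketch of sparse-representation pose classification with OMP:
        # a test pose vector is coded over the stacked training templates and
        # assigned to the class whose coefficients carry the most energy.
        import numpy as np
        from sklearn.linear_model import OrthogonalMatchingPursuit

        def src_classify(test_vec, train_matrix, train_labels, n_nonzero=10):
            # train_matrix: shape (d, n_templates), one column per training pose template;
            # test_vec: shape (d,); train_labels: length n_templates.
            labels = np.asarray(train_labels)
            omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero)
            omp.fit(train_matrix, test_vec)
            coefs = omp.coef_
            classes = np.unique(labels)
            energy = [np.sum(coefs[labels == c] ** 2) for c in classes]
            return classes[int(np.argmax(energy))]    # e.g. "fall" vs. "normal"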

  1. Face recognition via edge-based Gabor feature representation for plastic surgery-altered images

    Science.gov (United States)

    Chude-Olisah, Chollette C.; Sulong, Ghazali; Chude-Okonkwo, Uche A. K.; Hashim, Siti Z. M.

    2014-12-01

    Plastic surgery procedures on the face introduce skin texture variations between images of the same person (intra-subject), thereby making the task of face recognition more difficult than in the normal scenario. Usually, in contemporary face recognition systems, the original gray-level face image is used as input to the Gabor descriptor, which translates to encoding some texture properties of the face image. The texture-encoding process significantly degrades the performance of such systems in the case of plastic surgery due to the presence of surgically induced intra-subject variations. Based on the proposition that the shape of significant facial components such as eyes, nose, eyebrow, and mouth remains unchanged after plastic surgery, this paper employs an edge-based Gabor feature representation approach for the recognition of surgically altered face images. We use the edge information, which is dependent on the shapes of the significant facial components, to address the plastic surgery-induced texture variation problems. To ensure that the significant facial components represent useful edge information with little or no false edges, a simple illumination normalization technique is proposed for preprocessing. A Gabor wavelet is applied to the edge image to accentuate the uniqueness of the significant facial components for discriminating among different subjects. The performance of the proposed method is evaluated on the Georgia Tech (GT) and the Labeled Faces in the Wild (LFW) databases with illumination and expression problems, and the plastic surgery database with texture changes. Results show that the proposed edge-based Gabor feature representation approach is robust against plastic surgery-induced face variations amidst expression and illumination problems and outperforms the existing plastic surgery face recognition methods reported in the literature.
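
    The edge-then-Gabor idea can be sketched roughly as follows (Python/OpenCV); the Canny thresholds, the 4-orientation Gabor bank and the pooled statistics are illustrative assumptions, not the descriptor used in the paper.

        # Rough sketch: extract an edge map that follows the facial component
        # shapes, then filter it with a small Gabor bank and pool simple stats.
        import cv2
        import numpy as np

        def edge_gabor_features(gray_face):
            # gray_face is assumed to be an 8-bit grayscale face crop.
            edges = cv2.Canny(gray_face, 80, 160)                 # shape-driven edge map
            feats = []
            for theta in np.arange(0, np.pi, np.pi / 4):          # 4 orientations
                # getGaborKernel(ksize, sigma, theta, lambd, gamma)
                kern = cv2.getGaborKernel((21, 21), 4.0, theta, 10.0, 0.5)
                resp = cv2.filter2D(edges.astype(np.float32), -1, kern)
                feats.extend([resp.mean(), resp.std()])           # simple pooled statistics
            return np.array(feats)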

  2. Color descriptors for object category recognition

    NARCIS (Netherlands)

    van de Sande, K.E.A.; Gevers, T.; Snoek, C.G.M.

    2008-01-01

    Category recognition is important to access visual information on the level of objects. A common approach is to compute image descriptors first and then to apply machine learning to achieve category recognition from annotated examples. As a consequence, the choice of image descriptors is of great

  3. Computational morphology of the lung and its virtual imaging

    International Nuclear Information System (INIS)

    Kitaoka, Hiroko

    2002-01-01

    The author proposes an entirely new approach called 'virtual imaging' of an organ, based on 'computational morphology'. Computational morphology mathematically describes the design principles of an organ's structure in order to generate the organ model via computer; the result can be called a virtual organ. Virtual imaging simulates image data using the virtual organ. The virtual organ is divided into cubic voxels, and the CT value or other intensity value for each voxel is calculated according to the tissue properties within the voxel. The validity of the model is examined by comparing virtual images with clinical images. Computational image analysis methods can be developed based on validated models. In this paper, the computational anatomy of the lung and its virtual X-ray imaging are introduced

  4. Image-Processing Software For A Hypercube Computer

    Science.gov (United States)

    Lee, Meemong; Mazer, Alan S.; Groom, Steven L.; Williams, Winifred I.

    1992-01-01

    Concurrent Image Processing Executive (CIPE) is a software system intended for developing and using image-processing application programs in a concurrent computing environment. Designed to shield the programmer from the complexities of concurrent-system architecture, it provides an interactive image-processing environment for the end user. CIPE utilizes the architectural characteristics of a particular concurrent system to maximize efficiency while preserving architectural independence from the user and programmer. CIPE runs on a Mark-IIIfp 8-node hypercube computer and an associated SUN-4 host computer.

  5. Computing Homology Group Generators of Images Using Irregular Graph Pyramids

    OpenAIRE

    Peltier, Samuel; Ion, Adrian; Haxhimusa, Yll; Kropatsch, Walter; Damiand, Guillaume

    2007-01-01

    International audience; We introduce a method for computing homology groups and their generators of a 2D image, using a hierarchical structure, i.e. an irregular graph pyramid. Starting from an image, a hierarchy of the image is built by two operations that preserve the homology of each region. Instead of computing homology generators in the base, where the number of entities (cells) is large, we first reduce the number of cells by a graph pyramid. Then homology generators are computed efficiently on...

  6. Analysis of image content recognition algorithm based on sparse coding and machine learning

    Science.gov (United States)

    Xiao, Yu

    2017-03-01

    This paper presents an image classification algorithm based on a spatial sparse coding model and random forests. First, SIFT features are extracted from the image; sparse coding is then used to generate a visual vocabulary from the SIFT features, and the SIFT features are encoded into sparse vectors over this vocabulary. Through a combination of regional integration and spatial pooling of the sparse vectors, a fixed-dimensional sparse vector is obtained to represent the image. Finally, a random forest classifier is trained and tested on the image sparse vectors, using the standard benchmark datasets Caltech-101 and Scene-15. The experimental results show that the proposed algorithm can effectively represent the features of the image and improve the classification accuracy. In this paper, we also propose an image recognition algorithm based on image segmentation, sparse coding and multiple-instance learning. This algorithm introduces the concept of multiple-instance learning: the image is treated as a multiple-instance bag, the sparsely transformed SIFT features serve as the instances, and the visual vocabulary generated by the sparse coding model defines the feature space; the bag is mapped into this feature space through statistics on the number of instances it contains. A 1-norm SVM is then used to classify the images and to generate sample weights for selecting important image features.
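
    A loose sketch of the first pipeline (SIFT, sparse coding over a learned dictionary, pooling, random forest) is given below in Python; the dictionary size, the OMP sparsity setting and the commented-out training calls are placeholders, since no data from the paper is available.

        # Illustrative bag-of-visual-words style pipeline with sparse coding.
        import cv2
        import numpy as np
        from sklearn.decomposition import MiniBatchDictionaryLearning
        from sklearn.ensemble import RandomForestClassifier

        sift = cv2.SIFT_create()

        def sift_descriptors(gray):
            _, desc = sift.detectAndCompute(gray, None)
            return desc if desc is not None else np.zeros((1, 128), np.float32)

        def build_coder(all_train_descriptors, n_words=256):
            # Learn a visual vocabulary (dictionary) from descriptors pooled
            # over the training images; transform() then acts as a sparse coder.
            return MiniBatchDictionaryLearning(
                n_components=n_words, transform_algorithm="omp",
                transform_n_nonzero_coefs=5, random_state=0).fit(all_train_descriptors)

        def image_vector(gray, coder):
            codes = coder.transform(sift_descriptors(gray))   # sparse code per keypoint
            return np.abs(codes).max(axis=0)                  # max pooling -> fixed length

        # clf = RandomForestClassifier(n_estimators=200).fit(X_train, y_train)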

  7. Dentomaxillofacial imaging with computed-radiography techniques: a preliminary study

    Science.gov (United States)

    Shaw, Chris C.; Kapa, Stanley F.; Furkart, Audrey J.; Gur, David

    1993-09-01

    A preliminary study was conducted to investigate the feasibility of using high resolution computed radiography techniques for dentomaxillofacial imaging. Storage phosphors were cut into various sizes and used with an experimental laser scanning reader for three different imaging procedures: intraoral, cephalometric and panoramic. Both phantom and patient images were obtained for comparing the computed radiography technique with the conventional screen/film or dental film techniques. It has been found that current computed radiography techniques are largely adequate for cephalometric and panoramic imaging but need further improvement on their spatial resolution capability for intraoral imaging. In this paper, the methods of applying the computer radiography techniques to dentomaxillofacial imaging are described and discussed. Images of phantoms, resolution bar patterns and patients are presented and compared. Issues on image quality and cost are discussed.

  8. Automatic analysis of digitized TV-images by a computer-driven optical microscope

    International Nuclear Information System (INIS)

    Rosa, G.; Di Bartolomeo, A.; Grella, G.; Romano, G.

    1997-01-01

    New methods of image analysis and three-dimensional pattern recognition were developed in order to perform the automatic scan of nuclear emulsion pellicles. An optical microscope, with a motorized stage, was equipped with a CCD camera and an image digitizer, and interfaced to a personal computer. Selected software routines inspired the design of a dedicated hardware processor. Fast operation, high efficiency and accuracy were achieved. First applications to high-energy physics experiments are reported. Further improvements are in progress, based on a high-resolution fast CCD camera and on programmable digital signal processors. Applications to other research fields are envisaged. (orig.)

  9. Medical Imaging Informatics: Towards a Personalized Computational Patient.

    Science.gov (United States)

    Ayache, N

    2016-05-20

    Medical Imaging Informatics has become a fast evolving discipline at the crossing of Informatics, Computational Sciences, and Medicine that is profoundly changing medical practices, for the patients' benefit.

  10. Legal issues of computer imaging in plastic surgery: a primer.

    Science.gov (United States)

    Chávez, A E; Dagum, P; Koch, R J; Newman, J P

    1997-11-01

    Although plastic surgeons are increasingly incorporating computer imaging techniques into their practices, many fear the possibility of legally binding themselves to achieve surgical results identical to those reflected in computer images. Computer imaging allows surgeons to manipulate digital photographs of patients to project possible surgical outcomes. Some of the many benefits imaging techniques pose include improving doctor-patient communication, facilitating the education and training of residents, and reducing administrative and storage costs. Despite the many advantages computer imaging systems offer, however, surgeons understandably worry that imaging systems expose them to immense legal liability. The possible exploitation of computer imaging by novice surgeons as a marketing tool, coupled with the lack of consensus regarding the treatment of computer images, adds to the concern of surgeons. A careful analysis of the law, however, reveals that surgeons who use computer imaging carefully and conservatively, and adopt a few simple precautions, substantially reduce their vulnerability to legal claims. In particular, surgeons face possible claims of implied contract, failure to instruct, and malpractice from their use or failure to use computer imaging. Nevertheless, legal and practical obstacles frustrate each of those causes of actions. Moreover, surgeons who incorporate a few simple safeguards into their practice may further reduce their legal susceptibility.

  11. Audio-Visual Speech Recognition Using Lip Information Extracted from Side-Face Images

    Directory of Open Access Journals (Sweden)

    Koji Iwano

    2007-03-01

    Full Text Available This paper proposes an audio-visual speech recognition method using lip information extracted from side-face images as an attempt to increase noise robustness in mobile environments. Our proposed method assumes that lip images can be captured using a small camera installed in a handset. Two different kinds of lip features, lip-contour geometric features and lip-motion velocity features, are used individually or jointly, in combination with audio features. Phoneme HMMs modeling the audio and visual features are built based on the multistream HMM technique. Experiments conducted using Japanese connected digit speech contaminated with white noise in various SNR conditions show effectiveness of the proposed method. Recognition accuracy is improved by using the visual information in all SNR conditions. These visual features were confirmed to be effective even when the audio HMM was adapted to noise by the MLLR method.

  12. Chinese Traffic Panels Detection and Recognition From Street-Level Images

    Directory of Open Access Journals (Sweden)

    Chen Yajie

    2016-01-01

    Full Text Available Traffic sign detection and recognition has been an active research topic due to its potential applications in intelligent transportation. However, the detection and recognition of traffic panels, which carry much richer information, remains a challenging problem. This paper proposes a method to detect and recognize traffic panels from street-level images of urban scenes and to analyze the information on them. The traffic panels are detected based on histograms of oriented gradients and linear support vector machines. The text strings and symbols on traffic panels are segmented using a connected component analysis method. Finally, the symbols on traffic panels are recognized by means of a model named bag of spatial visual words. Experimental results on images from Baidu Panorama Map prove the effectiveness of the proposed method.
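
    A minimal sketch of the detection stage described above (HOG features scored by a linear SVM over a sliding window); the window size, HOG parameters and training patches are assumptions, not the paper's settings.

    ```python
    # Illustrative HOG + linear SVM detector sketch; parameters are assumptions.
    import numpy as np
    from skimage.feature import hog
    from sklearn.svm import LinearSVC

    WIN = (64, 64)   # assumed detection window (rows, cols)

    def hog_vector(patch):
        return hog(patch, orientations=9, pixels_per_cell=(8, 8),
                   cells_per_block=(2, 2), feature_vector=True)

    def train_detector(pos_patches, neg_patches):
        X = [hog_vector(p) for p in pos_patches] + [hog_vector(n) for n in neg_patches]
        y = [1] * len(pos_patches) + [0] * len(neg_patches)
        return LinearSVC(C=1.0).fit(X, y)

    def detect(gray_img, clf, step=16):
        """Slide a window over the image and return top-left corners scored as panels."""
        hits = []
        for r in range(0, gray_img.shape[0] - WIN[0], step):
            for c in range(0, gray_img.shape[1] - WIN[1], step):
                patch = gray_img[r:r + WIN[0], c:c + WIN[1]]
                if clf.decision_function([hog_vector(patch)])[0] > 0:
                    hits.append((r, c))
        return hits
    ```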

  13. A rapid automatic analyzer and its methodology for effective bentonite content based on image recognition technology

    Directory of Open Access Journals (Sweden)

    Wei Long

    2016-09-01

    Full Text Available Fast and accurate determination of the effective bentonite content in used clay-bonded sand is very important for selecting the correct mixing ratio and mixing process to obtain high-performance molding sand. Currently, the effective bentonite content is determined by testing the methylene blue absorbed in the used clay-bonded sand, which is usually a manual operation with several disadvantages, including a complicated process, long testing time and low accuracy. A rapid automatic analyzer of the effective bentonite content in used clay-bonded sand was developed based on image recognition technology. The instrument consists of auto-stirring, auto liquid removal, auto-titration, step-rotation and image acquisition components, and a processor. The principle of the image recognition method is first to decompose the color image into three single-channel gray images, exploiting the different responses of light blue and dark blue in the red, green and blue channels; then to perform gray-value subtraction and gray-level transformation on these images; and finally to extract the outer light blue halo and the inner blue spot and calculate their area ratio. The titration is judged to have reached its end-point when the area ratio exceeds the set value.
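
    The halo/spot area-ratio idea can be sketched roughly as below. The channel arithmetic, the thresholds and the end-point ratio are placeholder assumptions made for illustration; the analyzer's actual decomposition and end-point value are not reproduced here.

    ```python
    # Hedged sketch of the channel-subtraction / area-ratio idea; all values are assumed.
    import cv2

    def halo_spot_area_ratio(bgr_img, end_point_ratio=1.5):
        b, g, r = cv2.split(bgr_img)
        # The light blue outer halo and dark blue inner spot respond differently in the
        # red/green channels, so a channel subtraction can help separate them.
        diff = cv2.subtract(g, r)
        _, halo = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)            # assumed threshold
        _, spot = cv2.threshold(b, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        spot = cv2.bitwise_and(spot, cv2.bitwise_not(halo))
        halo_area = cv2.countNonZero(halo)
        spot_area = max(cv2.countNonZero(spot), 1)
        ratio = halo_area / spot_area
        return ratio, ratio > end_point_ratio   # True once the assumed end-point is reached
    ```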

  14. Architecture of top down, parallel pattern recognition system TOPS and its application to the MR head images

    International Nuclear Information System (INIS)

    Matsunoshita, Jun-ichi; Akamatsu, Shigeo; Yamamoto, Shinji.

    1993-01-01

    This paper describes the system architecture of a new image recognition system, TOPS (top-down parallel pattern recognition system), and its application to the automatic extraction of brain organs (cerebrum, cerebellum, brain stem) from 3D-MRI images. The main concepts of TOPS are as follows: (1) TOPS is a top-down recognition system that allows parallel models at each level of the hierarchical structure. (2) TOPS allows parallel image processing algorithms for one purpose (for example, for the extraction of one specific organ). This results in multiple candidates for that purpose, and the judgment that yields a unique solution is made at an upper level of the hierarchical structure. (author)

  15. 16th International Conference on Medical Image Computing and Computer Assisted Intervention

    CERN Document Server

    Klinder, Tobias; Li, Shuo

    2014-01-01

    This book contains the full papers presented at the MICCAI 2013 workshop Computational Methods and Clinical Applications for Spine Imaging. The workshop brought together researchers representing several fields, such as Biomechanics, Engineering, Medicine, Mathematics, Physics and Statistics. The works included in this book present and discuss new trends in those fields, using several methods and techniques to address more efficiently different and timely applications involving signal and image acquisition, image processing and analysis, image segmentation, image registration and fusion, computer simulation, image based modelling, simulation and surgical planning, image-guided robot-assisted surgery, and image based diagnosis.

  16. A Compact Graph Model of Handwritten Images: Integration into Authentification and Recognition

    OpenAIRE

    Popel, Denis V.

    2002-01-01

    A novel algorithm for creating a mathematical model of curved shapes is introduced. The core of the algorithm is based on building a graph representation of the contoured image, which occupies less storage space than produced by raster compression techniques. Different advanced applications of the mathematical model are discussed: recognition of handwritten characters and verification of handwritten text and signatures for authentification purposes. Reducing the storage requirements due to th...

  17. Computer-implemented land use classification with pattern recognition software and ERTS digital data. [Mississippi coastal plains

    Science.gov (United States)

    Joyce, A. T.

    1974-01-01

    Significant progress has been made in the classification of surface conditions (land uses) with computer-implemented techniques based on the use of ERTS digital data and pattern recognition software. The supervised technique presently used at the NASA Earth Resources Laboratory is based on maximum likelihood ratioing with a digital table look-up approach to classification. After classification, colors are assigned to the various surface conditions (land uses) classified, and the color-coded classification is film recorded on either positive or negative 9 1/2 in. film at the scale desired. Prints of the film strips are then mosaicked and photographed to produce a land use map in the format desired. Computer extraction of statistical information is performed to show the extent of each surface condition (land use) within any given land unit that can be identified in the image. Evaluations of the product indicate that classification accuracy is well within the limits for use by land resource managers and administrators. Classifications performed with digital data acquired during different seasons indicate that the combination of two or more classifications offer even better accuracy.

  18. Quantitative computed tomography (QCT) as a radiology reporting tool by using optical character recognition (OCR) and macro program.

    Science.gov (United States)

    Lee, Young Han; Song, Ho-Taek; Suh, Jin-Suck

    2012-12-01

    The objectives are (1) to introduce a new concept for building a quantitative computed tomography (QCT) reporting system by using optical character recognition (OCR) and a macro program, and (2) to illustrate the practical use of the QCT reporting system in the radiology reading environment. The reporting system was created as a development tool by using open-source OCR software and an open-source macro program. The main module was designed to report QCT images via OCR during the radiology reading process. The principal steps are as follows: (1) save a QCT report as a graphic file, (2) recognize the characters in the image as text, (3) extract the T-scores from the text, (4) perform error correction, (5) reformat the values into the QCT radiology reporting template, and (6) paste the report into the electronic medical record (EMR) or picture archiving and communication system (PACS). The accuracy test of the OCR was performed on randomly selected QCTs; the system also determines a diagnosis of normal, osteopenia, or osteoporosis. Error correction of the OCR is done with an AutoHotkey-coded module. The T-scores of the femoral neck and lumbar vertebrae had accuracies of 100% and 95.4%, respectively. A convenient QCT reporting system could be established by utilizing open-source OCR software and an open-source macro program. This method can easily be adapted for other QCT applications and PACS/EMR.
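
    The OCR-and-parse workflow (steps 1-4 above) could be sketched as follows. The authors used different open-source tools, so pytesseract, the report wording matched by the regular expression, and the helper names are all assumptions made for illustration; the T-score cut-offs follow the standard WHO densitometric categories.

    ```python
    # Illustrative sketch of OCR-based T-score extraction; file names and report
    # wording are assumptions, not the authors' setup.
    import re
    import pytesseract
    from PIL import Image

    def read_t_scores(report_png):
        """OCR a saved QCT report image and pull out T-scores per site."""
        text = pytesseract.image_to_string(Image.open(report_png))
        # Assumed report layout: lines such as "Femoral neck T-score: -1.8"
        return {m.group(1).lower(): float(m.group(2))
                for m in re.finditer(r"(Femoral neck|L1-L4)\s*T-score[:\s]+(-?\d+\.\d+)",
                                     text, re.IGNORECASE)}

    def classify(t_score):
        """WHO densitometric categories based on the T-score."""
        if t_score <= -2.5:
            return "osteoporosis"
        if t_score < -1.0:
            return "osteopenia"
        return "normal"
    ```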

  19. Deep Neural Networks Based Recognition of Plant Diseases by Leaf Image Classification.

    Science.gov (United States)

    Sladojevic, Srdjan; Arsenovic, Marko; Anderla, Andras; Culibrk, Dubravko; Stefanovic, Darko

    2016-01-01

    The latest generation of convolutional neural networks (CNNs) has achieved impressive results in the field of image classification. This paper is concerned with a new approach to the development of a plant disease recognition model, based on leaf image classification, by the use of deep convolutional networks. The novel way of training and the methodology used facilitate a quick and easy system implementation in practice. The developed model is able to recognize 13 different types of plant disease and distinguish them from healthy leaves, with the ability to distinguish plant leaves from their surroundings. To our knowledge, this method for plant disease recognition has been proposed for the first time. All essential steps required for implementing this disease recognition model are fully described throughout the paper, starting from gathering images in order to create a database, assessed by agricultural experts. Caffe, a deep learning framework developed by the Berkeley Vision and Learning Center, was used to perform the deep CNN training. The experimental results on the developed model achieved precision between 91% and 98%, for separate class tests, on average 96.3%.
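
    The paper fine-tuned a network in Caffe; as an illustrative stand-in, a transfer-learning sketch of the same idea in PyTorch is given below. The class count, the choice of AlexNet and all hyperparameters are assumptions, not the authors' configuration.

    ```python
    # Not the authors' Caffe setup; a minimal transfer-learning sketch in PyTorch.
    import torch
    import torch.nn as nn
    from torchvision import models, transforms, datasets

    NUM_CLASSES = 15  # assumed: 13 diseases + healthy leaves + background

    def build_model():
        model = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
        model.classifier[6] = nn.Linear(4096, NUM_CLASSES)   # replace the final layer
        return model

    def train(data_dir, epochs=5):
        tf = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
        loader = torch.utils.data.DataLoader(datasets.ImageFolder(data_dir, tf),
                                             batch_size=32, shuffle=True)
        model = build_model()
        opt = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
        loss_fn = nn.CrossEntropyLoss()
        for _ in range(epochs):
            for images, labels in loader:
                opt.zero_grad()
                loss = loss_fn(model(images), labels)
                loss.backward()
                opt.step()
        return model
    ```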

  20. Deep Neural Networks Based Recognition of Plant Diseases by Leaf Image Classification

    Science.gov (United States)

    Sladojevic, Srdjan; Arsenovic, Marko; Culibrk, Dubravko; Stefanovic, Darko

    2016-01-01

    The latest generation of convolutional neural networks (CNNs) has achieved impressive results in the field of image classification. This paper is concerned with a new approach to the development of a plant disease recognition model, based on leaf image classification, by the use of deep convolutional networks. The novel way of training and the methodology used facilitate a quick and easy system implementation in practice. The developed model is able to recognize 13 different types of plant disease and distinguish them from healthy leaves, with the ability to distinguish plant leaves from their surroundings. To our knowledge, this method for plant disease recognition has been proposed for the first time. All essential steps required for implementing this disease recognition model are fully described throughout the paper, starting from gathering images in order to create a database, assessed by agricultural experts. Caffe, a deep learning framework developed by the Berkeley Vision and Learning Center, was used to perform the deep CNN training. The experimental results on the developed model achieved precision between 91% and 98%, for separate class tests, on average 96.3%. PMID:27418923

  1. Deep Neural Networks Based Recognition of Plant Diseases by Leaf Image Classification

    Directory of Open Access Journals (Sweden)

    Srdjan Sladojevic

    2016-01-01

    Full Text Available The latest generation of convolutional neural networks (CNNs) has achieved impressive results in the field of image classification. This paper is concerned with a new approach to the development of a plant disease recognition model, based on leaf image classification, by the use of deep convolutional networks. The novel way of training and the methodology used facilitate a quick and easy system implementation in practice. The developed model is able to recognize 13 different types of plant disease and distinguish them from healthy leaves, with the ability to distinguish plant leaves from their surroundings. To our knowledge, this method for plant disease recognition has been proposed for the first time. All essential steps required for implementing this disease recognition model are fully described throughout the paper, starting from gathering images in order to create a database, assessed by agricultural experts. Caffe, a deep learning framework developed by the Berkeley Vision and Learning Center, was used to perform the deep CNN training. The experimental results on the developed model achieved precision between 91% and 98%, for separate class tests, on average 96.3%.

  2. Simulated Prosthetic Vision: The Benefits of Computer-Based Object Recognition and Localization.

    Science.gov (United States)

    Macé, Marc J-M; Guivarch, Valérian; Denis, Grégoire; Jouffrais, Christophe

    2015-07-01

    Clinical trials with blind patients implanted with a visual neuroprosthesis showed that even the simplest tasks were difficult to perform with the limited vision restored with current implants. Simulated prosthetic vision (SPV) is a powerful tool to investigate the putative functions of the upcoming generations of visual neuroprostheses. Recent studies based on SPV showed that several generations of implants will be required before usable vision is restored. However, none of these studies relied on advanced image processing. High-level image processing could significantly reduce the amount of information required to perform visual tasks and help restore visuomotor behaviors, even with current low-resolution implants. In this study, we simulated a prosthetic vision device based on object localization in the scene. We evaluated the usability of this device for object recognition, localization, and reaching. We showed that a very low number of electrodes (e.g., nine) are sufficient to restore visually guided reaching movements with fair timing (10 s) and high accuracy. In addition, performance, both in terms of accuracy and speed, was comparable with 9 and 100 electrodes. Extraction of high level information (object recognition and localization) from video images could drastically enhance the usability of current visual neuroprosthesis. We suggest that this method-that is, localization of targets of interest in the scene-may restore various visuomotor behaviors. This method could prove functional on current low-resolution implants. The main limitation resides in the reliability of the vision algorithms, which are improving rapidly. Copyright © 2015 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.

  3. Advances in computer imaging/applications in facial plastic surgery.

    Science.gov (United States)

    Papel, I D; Jiannetto, D F

    1999-01-01

    Rapidly progressing computer technology, ever-increasing expectations of patients, and a confusing medicolegal environment require a clarification of the role of computer imaging/applications. Advances in computer technology and its applications are reviewed. A brief historical discussion is included for perspective. Improvements in both hardware and software with the advent of digital imaging have allowed great increases in speed and accuracy in patient imaging. This facilitates doctor-patient communication and possibly realistic patient expectations. Patients seeking cosmetic surgery now often expect preoperative imaging. Although society in general has become more litigious, a literature search up to 1998 reveals no lawsuits directly involving computer imaging. It appears that conservative utilization of computer imaging by the facial plastic surgeon may actually reduce liability and promote communication. Recent advances have significantly enhanced the value of computer imaging in the practice of facial plastic surgery. These technological advances in computer imaging appear to contribute a useful technique for the practice of facial plastic surgery. Inclusion of computer imaging should be given serious consideration as an adjunct to clinical practice.

  4. Development of a lateral migration radiography image generation and object recognition system

    Science.gov (United States)

    Wehlburg, Joseph Cornelis

    Compton Backscatter Imaging (CBI) has always been impeded by inefficient sensing of information-carrying photons and by extensive structured noise due to object surface features and heterogeneity. In this research project, a new variant of CBI, which substantially resolves both impediments, is suggested, developed and rigorously tested by application to a difficult imaging problem. The new approach is termed Lateral Migration Radiography (LMR), which aptly describes the specific photon history process giving rise to the resulting image contrast. The photons employed in this research are conventionally generated x rays. A pencil x-ray beam with a typical filtered-bremsstrahlung photon energy spectrum is perpendicularly incident upon, and systematically rastered over, the object to be imaged. Efficient sensing of information-carrying photons is achieved by employing large-area detectors with sensitive planes perpendicular to the incident beam. A geometric array of a group of such detectors, along with varying degrees of detector collimation to discriminate singly-scattered from multiply-scattered detected x rays, is developed. The direct output of the detector-array components is algebraically combined to eliminate image cloaking by surface features and heterogeneity. Image contrast is generated by the variation of x-ray interaction probabilities in the internal details relative to the surrounding material. These major improvements to conventional CBI have allowed the detection of internals with clarity such that recognition of the internal features via the image details is possible in cases where ordinary CBI cannot even detect the presence of the internal structure. The test application is the detection and recognition of all-plastic antitank landmines buried in soil at depths of up to three inches. In the military application of clearing 12 inch diameter mines from 14-foot-wide tank-lanes, the spatial resolution requirement of one inch and the speed of 3 to 5 mph over

  5. Computer versus paper system for recognition and management of sepsis in surgical intensive care.

    Science.gov (United States)

    Croft, Chasen A; Moore, Frederick A; Efron, Philip A; Marker, Peggy S; Gabrielli, Andrea; Westhoff, Lynn S; Lottenberg, Lawrence; Jordan, Janeen; Klink, Victoria; Sailors, R Matthew; McKinley, Bruce A

    2014-02-01

    A system to provide surveillance, diagnosis, and protocolized management of surgical intensive care unit (SICU) sepsis was undertaken as a performance improvement project. A system for sepsis management was implemented for SICU patients using paper followed by a computerized system. The hypothesis was that the computerized system would be associated with improved process and outcomes. A system was designed to provide early recognition and guide patient-specific management of sepsis including (1) modified early warning signs-sepsis recognition score (MEWS-SRS; a summative point score of ranges of vital signs, mental status, and white blood cell count; recorded every 4 hours) by the bedside nurse; (2) suspected site assessment (vascular access, lung, abdomen, urinary tract, soft tissue, other) at the bedside by a physician or extender; (3) a sepsis management protocol (replicable, point-of-care decisions) at the bedside by nurse, physician, and extender. The system was implemented first using paper and then a computerized system. Sepsis severity was defined using standard criteria. In January to May 2012, a paper system was used to manage 77 consecutive sepsis encounters (3.9 ± 0.5 cases per week) in 65 patients (77% male; age, 53 ± 2 years). In June to December 2012, a computerized system was used to manage 132 consecutive sepsis encounters (4.4 ± 0.4 cases per week) in 119 patients (63% male; age, 58 ± 2 years). MEWS-SRS elicited 683 site assessments, and 201 had a sepsis diagnosis and protocol management. The predominant site of infection was the abdomen (paper, 58%; computer, 53%). Recognition of early sepsis tended to occur more often with the computerized system (paper, 23%; computer, 35%). The hospital mortality rate for surgical ICU sepsis (paper, 20%; computer, 14%) was lower with the computerized system. A computerized sepsis management system improves care process and outcome. Early sepsis is recognized and managed with greater frequency compared with severe sepsis or septic shock. The system

  6. Computational chemical imaging for cardiovascular pathology: chemical microscopic imaging accurately determines cardiac transplant rejection.

    Directory of Open Access Journals (Sweden)

    Saumya Tiwari

    Full Text Available Rejection is a common problem after cardiac transplants leading to significant number of adverse events and deaths, particularly in the first year of transplantation. The gold standard to identify rejection is endomyocardial biopsy. This technique is complex, cumbersome and requires a lot of expertise in the correct interpretation of stained biopsy sections. Traditional histopathology cannot be used actively or quickly during cardiac interventions or surgery. Our objective was to develop a stain-less approach using an emerging technology, Fourier transform infrared (FT-IR) spectroscopic imaging to identify different components of cardiac tissue by their chemical and molecular basis aided by computer recognition, rather than by visual examination using optical microscopy. We studied this technique in assessment of cardiac transplant rejection to evaluate efficacy in an example of complex cardiovascular pathology. We recorded data from human cardiac transplant patients' biopsies, used a Bayesian classification protocol and developed a visualization scheme to observe chemical differences without the need of stains or human supervision. Using receiver operating characteristic curves, we observed probabilities of detection greater than 95% for four out of five histological classes at 10% probability of false alarm at the cellular level while correctly identifying samples with the hallmarks of the immune response in all cases. The efficacy of manual examination can be significantly increased by observing the inherent biochemical changes in tissues, which enables us to achieve greater diagnostic confidence in an automated, label-free manner. We developed a computational pathology system that gives high contrast images and seems superior to traditional staining procedures. This study is a prelude to the development of real time in situ imaging systems, which can assist interventionists and surgeons actively during procedures.

  7. Computational chemical imaging for cardiovascular pathology: chemical microscopic imaging accurately determines cardiac transplant rejection.

    Science.gov (United States)

    Tiwari, Saumya; Reddy, Vijaya B; Bhargava, Rohit; Raman, Jaishankar

    2015-01-01

    Rejection is a common problem after cardiac transplants leading to significant number of adverse events and deaths, particularly in the first year of transplantation. The gold standard to identify rejection is endomyocardial biopsy. This technique is complex, cumbersome and requires a lot of expertise in the correct interpretation of stained biopsy sections. Traditional histopathology cannot be used actively or quickly during cardiac interventions or surgery. Our objective was to develop a stain-less approach using an emerging technology, Fourier transform infrared (FT-IR) spectroscopic imaging to identify different components of cardiac tissue by their chemical and molecular basis aided by computer recognition, rather than by visual examination using optical microscopy. We studied this technique in assessment of cardiac transplant rejection to evaluate efficacy in an example of complex cardiovascular pathology. We recorded data from human cardiac transplant patients' biopsies, used a Bayesian classification protocol and developed a visualization scheme to observe chemical differences without the need of stains or human supervision. Using receiver operating characteristic curves, we observed probabilities of detection greater than 95% for four out of five histological classes at 10% probability of false alarm at the cellular level while correctly identifying samples with the hallmarks of the immune response in all cases. The efficacy of manual examination can be significantly increased by observing the inherent biochemical changes in tissues, which enables us to achieve greater diagnostic confidence in an automated, label-free manner. We developed a computational pathology system that gives high contrast images and seems superior to traditional staining procedures. This study is a prelude to the development of real time in situ imaging systems, which can assist interventionists and surgeons actively during procedures.
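
    A minimal sketch of per-pixel Bayesian classification of spectral image data, in the spirit of the protocol above; the published classifier and visualization scheme differ in detail, and the input arrays and labels here are assumed.

    ```python
    # Hedged sketch: Gaussian naive Bayes over FT-IR spectra, pixel by pixel.
    import numpy as np
    from sklearn.naive_bayes import GaussianNB
    from sklearn.metrics import roc_curve, auc

    def classify_pixels(train_spectra, train_labels, image_cube):
        """train_spectra: (n_pixels, n_wavenumbers); image_cube: (H, W, n_wavenumbers)."""
        clf = GaussianNB().fit(train_spectra, train_labels)
        h, w, b = image_cube.shape
        label_map = clf.predict(image_cube.reshape(-1, b)).reshape(h, w)
        return label_map, clf

    def roc_for_class(clf, test_spectra, test_labels, target_class):
        """ROC curve and AUC for one histological class, one-vs-rest."""
        scores = clf.predict_proba(test_spectra)[:, list(clf.classes_).index(target_class)]
        fpr, tpr, _ = roc_curve(test_labels == target_class, scores)
        return fpr, tpr, auc(fpr, tpr)
    ```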

  8. Automated segmentation and recognition of abdominal wall muscles in X-ray torso CT images and its application in abdominal CAD

    International Nuclear Information System (INIS)

    Zhou, X.; Kamiya, N.; Hara, T.; Fujita, H.; Chen, H.; Yokoyama, R.; Hoshi, H.

    2007-01-01

    The information of the abdominal wall is very important for the planning of surgical operations and abdominal organ recognition. In the research fields of computer assisted radiology and surgery and computer-aided diagnosis, the segmentation and recognition of the abdominal wall muscles in CT images is a necessary pre-processing step. Due to the complexity of the abdominal wall structure and its indistinct appearance in CT images, the automated segmentation of abdominal wall muscles is a difficult issue and has not been solved completely. We propose an approach to segment the abdominal wall muscles and divide them automatically into three categories (front abdominal muscles, including the rectus abdominis; left and right side abdominal muscles, including the external oblique, internal oblique and transversus abdominis muscles). The approach first makes an initial classification of bone, fat, and muscles and organs based on the CT number. Then a layer structure is generated to describe the 3-D anatomical structures of the human torso by stretching the torso region onto a thin plate for easy recognition. The abdominal wall muscles are recognized on the layer structures using their spatial relations to the skeletal structure and CT numbers. Finally, the recognized regions are mapped back to the 3-D CT images using an inverse transformation of the stretching process. This method is applied to 20 cases of torso CT images, and evaluations are based on visual comparison of the recognition results and the original CT images by an expert in anatomy. The results show that our approach can segment and recognize abdominal wall muscle regions effectively. (orig.)
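
    The initial CT-number classification step might be sketched as below; the Hounsfield-unit ranges are common approximations and not the thresholds used in the paper.

    ```python
    # Minimal sketch of CT-number (Hounsfield unit) tissue labelling; the HU ranges
    # are rough, commonly quoted approximations, not the paper's values.
    import numpy as np

    def initial_tissue_map(hu_volume):
        """Label each voxel of a CT volume (in HU) as fat, muscle/organ, or bone."""
        labels = np.zeros(hu_volume.shape, dtype=np.uint8)    # 0 = background/air
        labels[(hu_volume >= -190) & (hu_volume <= -30)] = 1  # fat
        labels[(hu_volume > -30) & (hu_volume < 150)] = 2     # muscle and soft organs
        labels[hu_volume >= 150] = 3                          # bone
        return labels
    ```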

  9. Automated segmentation and recognition of abdominal wall muscles in X-ray torso CT images and its application in abdominal CAD

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, X.; Kamiya, N.; Hara, T.; Fujita, H. [Dept. of Intelligent Image Information, Div. of Regeneration and Advanced Medical Sciences, Graduate School of Medicine, Gifu Univ., Gifu (Japan); Chen, H. [Dept. of Anatomy, Graduate School of Medicine, Gifu Univ., Gifu (Japan); Yokoyama, R.; Hoshi, H. [Dept. of Radiology, Gifu Univ. Graduate School of Medicine and Univ. Hospital, Gifu (Japan)

    2007-06-15

    The information of the abdominal wall is very important for the planning of surgical operations and abdominal organ recognition. In the research fields of computer assisted radiology and surgery and computer-aided diagnosis, the segmentation and recognition of the abdominal wall muscles in CT images is a necessary pre-processing step. Due to the complexity of the abdominal wall structure and its indistinct appearance in CT images, the automated segmentation of abdominal wall muscles is a difficult issue and has not been solved completely. We propose an approach to segment the abdominal wall muscles and divide them automatically into three categories (front abdominal muscles, including the rectus abdominis; left and right side abdominal muscles, including the external oblique, internal oblique and transversus abdominis muscles). The approach first makes an initial classification of bone, fat, and muscles and organs based on the CT number. Then a layer structure is generated to describe the 3-D anatomical structures of the human torso by stretching the torso region onto a thin plate for easy recognition. The abdominal wall muscles are recognized on the layer structures using their spatial relations to the skeletal structure and CT numbers. Finally, the recognized regions are mapped back to the 3-D CT images using an inverse transformation of the stretching process. This method is applied to 20 cases of torso CT images, and evaluations are based on visual comparison of the recognition results and the original CT images by an expert in anatomy. The results show that our approach can segment and recognize abdominal wall muscle regions effectively. (orig.)

  10. The location and recognition of anti-counterfeiting code image with complex background

    Science.gov (United States)

    Ni, Jing; Liu, Quan; Lou, Ping; Han, Ping

    2017-07-01

    The order of the cigarette market is a key issue in the tobacco business system. The anti-counterfeiting code, as an effective anti-counterfeiting technology, can identify counterfeit goods and help maintain the normal order of the market and consumers' rights and interests. Anti-counterfeiting code images obtained by the tobacco recognizer suffer from complex backgrounds, light interference and other problems. To solve these problems, the paper proposes a locating method based on the SUSAN operator, combined with a sliding window and line scanning. In order to reduce the interference of background and noise, we extract the red component of the image and convert the color image into a gray image. For easily confused characters, recognition-result correction based on template matching is adopted to improve the recognition rate. With this method, the anti-counterfeiting code can be located and recognized correctly in images with complex backgrounds. The experimental results show the effectiveness and feasibility of the approach.
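
    The red-component conversion and the template-matching correction could be sketched as follows; the template dictionary and helper names are illustrative assumptions, not the paper's implementation.

    ```python
    # Sketch of red-channel conversion and template-matching correction; assumptions only.
    import cv2

    def to_gray_via_red(bgr_img):
        """Use the red component to suppress the complex background before binarization."""
        return bgr_img[:, :, 2]          # OpenCV stores images as B, G, R

    def correct_character(char_img, templates):
        """Replace a low-confidence recognition result by the best-matching template."""
        best_char, best_score = None, -1.0
        for ch, tmpl in templates.items():
            resized = cv2.resize(char_img, (tmpl.shape[1], tmpl.shape[0]))
            score = cv2.matchTemplate(resized, tmpl, cv2.TM_CCOEFF_NORMED)[0, 0]
            if score > best_score:
                best_char, best_score = ch, score
        return best_char
    ```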

  11. Performance Evaluation of Machine Learning Algorithms for Urban Pattern Recognition from Multi-spectral Satellite Images

    Directory of Open Access Journals (Sweden)

    Marc Wieland

    2014-03-01

    Full Text Available In this study, a classification and performance evaluation framework for the recognition of urban patterns in medium (Landsat ETM, TM and MSS) and very high resolution (WorldView-2, Quickbird, Ikonos) multi-spectral satellite images is presented. The study aims at exploring the potential of machine learning algorithms in the context of an object-based image analysis and to thoroughly test the algorithms' performance under varying conditions to optimize their usage for urban pattern recognition tasks. Four classification algorithms, Normal Bayes, K Nearest Neighbors, Random Trees and Support Vector Machines, which represent different concepts in machine learning (probabilistic, nearest neighbor, tree-based, function-based), have been selected and implemented on a free and open-source basis. Particular focus is given to assess the generalization ability of machine learning algorithms and the transferability of trained learning machines between different image types and image scenes. Moreover, the influence of the number and choice of training data, the influence of the size and composition of the feature vector and the effect of image segmentation on the classification accuracy is evaluated.
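
    The study compares OpenCV implementations of the four algorithms; the sketch below swaps in scikit-learn equivalents purely to illustrate the comparison loop, with the feature matrix, labels and cross-validation setup assumed.

    ```python
    # Illustrative comparison loop using scikit-learn stand-ins for the four classifiers.
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    CLASSIFIERS = {
        "Normal Bayes": GaussianNB(),
        "K Nearest Neighbors": KNeighborsClassifier(n_neighbors=5),
        "Random Trees": RandomForestClassifier(n_estimators=100),
        "Support Vector Machine": SVC(kernel="rbf", C=1.0),
    }

    def compare(X, y, folds=5):
        """Return the mean cross-validated accuracy of each classifier on features X, labels y."""
        return {name: cross_val_score(clf, X, y, cv=folds).mean()
                for name, clf in CLASSIFIERS.items()}
    ```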

  12. Blue Laser Imaging-Bright Improves Endoscopic Recognition of Superficial Esophageal Squamous Cell Carcinoma

    Directory of Open Access Journals (Sweden)

    Akira Tomie

    2016-01-01

    Full Text Available Background/Aims. The aim of this study was to evaluate the endoscopic recognition of esophageal squamous cell carcinoma (ESCC) using four different methods: Olympus white light imaging (O-WLI), Fujifilm white light imaging (F-WLI), narrow band imaging (NBI), and blue laser imaging-bright (BLI-bright). Methods. We retrospectively analyzed 25 superficial ESCCs that had been examined using the four different methods. Subjective evaluation was provided by three endoscopists as a ranking score (RS) of each image based on the ease of detection of the cancerous area. For the objective evaluation we calculated the color difference scores (CDS) between the cancerous and noncancerous areas with each of the four methods. Results. There was no difference between the mean RS of O-WLI and F-WLI. The mean RS of NBI was significantly higher than that of O-WLI and that of BLI-bright was significantly higher than that of F-WLI. Moreover, the mean RS of BLI-bright was significantly higher than that of NBI. Furthermore, in the objective evaluation, the mean CDS of BLI-bright was significantly higher than that of O-WLI, F-WLI, and NBI. Conclusion. The recognition of superficial ESCC using BLI-bright was more efficacious than the other methods tested both subjectively and objectively.
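
    The abstract does not define the color difference score exactly; the sketch below assumes a CIELAB Delta-E between the mean colors of the cancerous and noncancerous regions, which is one plausible reading rather than the authors' exact measure.

    ```python
    # Assumed interpretation: CIEDE2000 color difference between region mean colors.
    from skimage.color import rgb2lab, deltaE_ciede2000

    def color_difference_score(rgb_img, cancer_mask, normal_mask):
        """Delta-E between the mean L*a*b* color inside each boolean region mask."""
        lab = rgb2lab(rgb_img)                        # rgb_img as float RGB in [0, 1]
        mean_cancer = lab[cancer_mask].mean(axis=0)   # average L*a*b* over the region
        mean_normal = lab[normal_mask].mean(axis=0)
        return float(deltaE_ciede2000(mean_cancer, mean_normal))
    ```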

  13. Computational Recognition of RNA Splice Sites by Exact Algorithms for the Quadratic Traveling Salesman Problem

    Directory of Open Access Journals (Sweden)

    Anja Fischer

    2015-06-01

    Full Text Available One fundamental problem of bioinformatics is the computational recognition of DNA and RNA binding sites. Given a set of short DNA or RNA sequences of equal length such as transcription factor binding sites or RNA splice sites, the task is to learn a pattern from this set that allows the recognition of similar sites in another set of DNA or RNA sequences. Permuted Markov (PM) models and permuted variable length Markov (PVLM) models are two powerful models for this task, but the problem of finding an optimal PM model or PVLM model is NP-hard. While the problem of finding an optimal PM model or PVLM model of order one is equivalent to the traveling salesman problem (TSP), the problem of finding an optimal PM model or PVLM model of order two is equivalent to the quadratic TSP (QTSP). Several exact algorithms exist for solving the QTSP, but it is unclear if these algorithms are capable of solving QTSP instances resulting from RNA splice sites of at least 150 base pairs in a reasonable time frame. Here, we investigate the performance of three exact algorithms for solving the QTSP for ten datasets of splice acceptor sites and splice donor sites of five different species and find that one of these algorithms is capable of solving QTSP instances of up to 200 base pairs with a running time of less than two days.

  14. A Computationally Efficient Mel-Filter Bank VAD Algorithm for Distributed Speech Recognition Systems

    Directory of Open Access Journals (Sweden)

    Vlaj Damjan

    2005-01-01

    Full Text Available This paper presents a novel computationally efficient voice activity detection (VAD) algorithm and emphasizes the importance of such algorithms in distributed speech recognition (DSR) systems. When using VAD algorithms in telecommunication systems, the required capacity of the speech transmission channel can be reduced if only the speech parts of the signal are transmitted. A similar objective can be adopted in DSR systems, where the nonspeech parameters are not sent over the transmission channel. A novel approach is proposed for VAD decisions based on mel-filter bank (MFB) outputs with the so-called Hangover criterion. Comparative tests are presented between the presented MFB VAD algorithm and three VAD algorithms used in the G.729, G.723.1, and DSR (advanced front-end) Standards. These tests were made on the Aurora 2 database, with different signal-to-noise ratios (SNRs). In the speech recognition tests, the proposed MFB VAD outperformed all three VAD algorithms used in the standards (G.723.1 VAD, G.729 VAD, and DSR VAD) in all SNRs.

  15. A Computationally Efficient Mel-Filter Bank VAD Algorithm for Distributed Speech Recognition Systems

    Science.gov (United States)

    Vlaj, Damjan; Kotnik, Bojan; Horvat, Bogomir; Kačič, Zdravko

    2005-12-01

    This paper presents a novel computationally efficient voice activity detection (VAD) algorithm and emphasizes the importance of such algorithms in distributed speech recognition (DSR) systems. When using VAD algorithms in telecommunication systems, the required capacity of the speech transmission channel can be reduced if only the speech parts of the signal are transmitted. A similar objective can be adopted in DSR systems, where the nonspeech parameters are not sent over the transmission channel. A novel approach is proposed for VAD decisions based on mel-filter bank (MFB) outputs with the so-called Hangover criterion. Comparative tests are presented between the presented MFB VAD algorithm and three VAD algorithms used in the G.729, G.723.1, and DSR (advanced front-end) Standards. These tests were made on the Aurora 2 database, with different signal-to-noise ratios (SNRs). In the speech recognition tests, the proposed MFB VAD outperformed all three VAD algorithms used in the standards (G.723.1 VAD, G.729 VAD, and DSR VAD) in all SNRs.
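
    A much-simplified mel-filter-bank VAD with a hangover counter, to make the idea concrete; the frame settings, threshold and hangover length are illustrative and differ from the standardized algorithms compared in the paper.

    ```python
    # Simplified MFB-energy VAD with hangover; all parameter values are assumptions.
    import numpy as np
    import librosa

    def mfb_vad(wav, sr=8000, n_mels=23, thresh_db=12.0, hangover_frames=8):
        mel = librosa.feature.melspectrogram(y=wav, sr=sr, n_fft=256, hop_length=80,
                                             n_mels=n_mels)
        energy_db = librosa.power_to_db(mel).mean(axis=0)   # one value per frame
        noise_floor = np.percentile(energy_db, 10)          # crude noise estimate
        raw = energy_db > noise_floor + thresh_db
        decisions, hang = np.zeros_like(raw), 0
        for i, active in enumerate(raw):
            hang = hangover_frames if active else max(hang - 1, 0)
            decisions[i] = active or hang > 0               # hangover keeps speech tails
        return decisions
    ```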

  16. Can the Outputs of LGN Y-Cells Support Emotion Recognition? A Computational Study

    Directory of Open Access Journals (Sweden)

    Andrea De Cesarei

    2015-01-01

    Full Text Available It has been suggested that emotional visual input is processed along both a slower cortical pathway and a faster subcortical pathway which comprises the lateral geniculate nucleus (LGN), the superior colliculus, the pulvinar, and finally the amygdala. However, anatomical as well as functional evidence concerning the subcortical route is lacking. Here, we adopt a computational approach in order to investigate whether the visual representation that is achieved in the LGN may support emotion recognition and emotional response along the subcortical route. In four experiments, we show that the outputs of LGN Y-cells support neither facial expression categorization nor the same/different expression matching by an artificial classifier. However, the same classifier is able to perform at an above-chance level in a statistics-based categorization of scenes containing animals and scenes containing people and of light and dark patterns. It is concluded that the visual representation achieved in the LGN is insufficient to allow for the recognition of emotional facial expression.

  17. Class Energy Image Analysis for Video Sensor-Based Gait Recognition: A Review

    Directory of Open Access Journals (Sweden)

    Zhuowen Lv

    2015-01-01

    Full Text Available Gait is a unique perceptible biometric feature at larger distances, and the gait representation approach plays a key role in a video sensor-based gait recognition system. Class Energy Image is one of the most important gait representation methods based on appearance, which has received a lot of attention. In this paper, we reviewed the expressions and meanings of various Class Energy Image approaches, and analyzed the information in the Class Energy Images. Furthermore, the effectiveness and robustness of these approaches were compared on the benchmark gait databases. We outlined the research challenges and provided promising future directions for the field. To the best of our knowledge, this is the first review that focuses on Class Energy Image. It can provide a useful reference in the literature on video sensor-based gait representation approaches.

  18. Application of image recognition algorithms for statistical description of nano- and microstructured surfaces

    Energy Technology Data Exchange (ETDEWEB)

    Mărăscu, V.; Dinescu, G. [National Institute for Lasers, Plasma and Radiation Physics, 409 Atomistilor Street, Bucharest– Magurele (Romania); Faculty of Physics, University of Bucharest, 405 Atomistilor Street, Bucharest-Magurele (Romania); Chiţescu, I. [Faculty of Mathematics and Computer Science, University of Bucharest, 14 Academiei Street, Bucharest (Romania); Barna, V. [Faculty of Physics, University of Bucharest, 405 Atomistilor Street, Bucharest-Magurele (Romania); Ioniţă, M. D.; Lazea-Stoyanova, A.; Mitu, B., E-mail: mitub@infim.ro [National Institute for Lasers, Plasma and Radiation Physics, 409 Atomistilor Street, Bucharest– Magurele (Romania)

    2016-03-25

    In this paper we propose a statistical approach for describing the self-assembling of sub-micronic polystyrene beads on silicon surfaces, as well as the evolution of surface topography due to plasma treatments. Algorithms for image recognition are used in conjunction with Scanning Electron Microscopy (SEM) imaging of surfaces. In a first step, greyscale images of the surface covered by the polystyrene beads are obtained. An adaptive thresholding method is then applied to obtain binary images. The next step consists of automatic identification of the polystyrene bead dimensions by means of the Hough transform algorithm, according to bead radius. In order to analyze the uniformity of the self-assembled polystyrene beads, the squared modulus of the 2-dimensional Fast Fourier Transform (2-D FFT) is applied. By combining these algorithms we obtain a powerful and fast statistical tool for the analysis of micro- and nanomaterials whose features are regularly distributed on the surface, as revealed by SEM examination.
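
    The three processing steps (adaptive thresholding, Hough-based bead detection, 2-D FFT of the binary image) could be sketched as below; all parameter values are placeholders that would need tuning per SEM image.

    ```python
    # Sketch of the three-step SEM analysis described above; parameters are assumptions.
    import cv2
    import numpy as np

    def analyze_sem(gray_img, approx_radius_px=20):
        # 1. Adaptive thresholding to a binary image.
        binary = cv2.adaptiveThreshold(gray_img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                       cv2.THRESH_BINARY, 51, 2)
        # 2. Hough transform to find circular beads near the expected radius.
        circles = cv2.HoughCircles(gray_img, cv2.HOUGH_GRADIENT, dp=1,
                                   minDist=approx_radius_px,
                                   param1=100, param2=30,
                                   minRadius=int(0.7 * approx_radius_px),
                                   maxRadius=int(1.3 * approx_radius_px))
        # 3. Squared modulus of the 2-D FFT to quantify ordering of the bead lattice.
        power_spectrum = np.abs(np.fft.fftshift(np.fft.fft2(binary.astype(float)))) ** 2
        return binary, circles, power_spectrum
    ```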

  19. Class Energy Image Analysis for Video Sensor-Based Gait Recognition: A Review

    Science.gov (United States)

    Lv, Zhuowen; Xing, Xianglei; Wang, Kejun; Guan, Donghai

    2015-01-01

    Gait is a unique perceptible biometric feature at larger distances, and the gait representation approach plays a key role in a video sensor-based gait recognition system. Class Energy Image is one of the most important gait representation methods based on appearance, which has received lots of attentions. In this paper, we reviewed the expressions and meanings of various Class Energy Image approaches, and analyzed the information in the Class Energy Images. Furthermore, the effectiveness and robustness of these approaches were compared on the benchmark gait databases. We outlined the research challenges and provided promising future directions for the field. To the best of our knowledge, this is the first review that focuses on Class Energy Image. It can provide a useful reference in the literature of video sensor-based gait representation approach. PMID:25574935

  20. Metasurface optics for full-color computational imaging.

    Science.gov (United States)

    Colburn, Shane; Zhan, Alan; Majumdar, Arka

    2018-02-01

    Conventional imaging systems comprise large and expensive optical components that successively mitigate aberrations. Metasurface optics offers a route to miniaturize imaging systems by replacing bulky components with flat and compact implementations. The diffractive nature of these devices, however, induces severe chromatic aberrations, and current multiwavelength and narrowband achromatic metasurfaces cannot support full visible spectrum imaging (400 to 700 nm). We combine principles of both computational imaging and metasurface optics to build a system with a single metalens of numerical aperture ~0.45, which generates in-focus images under white light illumination. Our metalens exhibits a spectrally invariant point spread function that enables computational reconstruction of captured images with a single digital filter. This work connects computational imaging and metasurface optics and demonstrates the capabilities of combining these disciplines by simultaneously reducing aberrations and downsizing imaging systems using simpler optics.
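
    The paper reconstructs captured images with a single digital filter built around the spectrally invariant point spread function; a Wiener deconvolution is one standard filter of that kind and is sketched here purely for illustration, with the PSF and noise level as assumed inputs rather than the authors' calibrated values.

    ```python
    # Illustrative Wiener deconvolution with a shared PSF; not the paper's exact filter.
    import numpy as np

    def wiener_deconvolve(blurred, psf, noise_to_signal=1e-2):
        """Deconvolve a grayscale image with one PSF using a frequency-domain Wiener filter."""
        H = np.fft.fft2(psf, s=blurred.shape)                 # PSF transfer function
        wiener = np.conj(H) / (np.abs(H) ** 2 + noise_to_signal)
        restored = np.fft.ifft2(np.fft.fft2(blurred) * wiener)
        return np.real(restored)
    ```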

  1. Automatic solar feature detection using image processing and pattern recognition techniques

    Science.gov (United States)

    Qu, Ming

    The objective of the research in this dissertation is to develop a software system to automatically detect and characterize solar flares, filaments and Corona Mass Ejections (CMEs), the core of so-called solar activity. These tools will assist us to predict space weather caused by violent solar activity. Image processing and pattern recognition techniques are applied to this system. For automatic flare detection, the advanced pattern recognition techniques such as Multi-Layer Perceptron (MLP), Radial Basis Function (RBF), and Support Vector Machine (SVM) are used. By tracking the entire process of flares, the motion properties of two-ribbon flares are derived automatically. In the applications of the solar filament detection, the Stabilized Inverse Diffusion Equation (SIDE) is used to enhance and sharpen filaments; a new method for automatic threshold selection is proposed to extract filaments from background; an SVM classifier with nine input features is used to differentiate between sunspots and filaments. Once a filament is identified, morphological thinning, pruning, and adaptive edge linking methods are applied to determine filament properties. Furthermore, a filament matching method is proposed to detect filament disappearance. The automatic detection and characterization of flares and filaments have been successfully applied on Halpha full-disk images that are continuously obtained at Big Bear Solar Observatory (BBSO). For automatically detecting and classifying CMEs, the image enhancement, segmentation, and pattern recognition techniques are applied to Large Angle Spectrometric Coronagraph (LASCO) C2 and C3 images. The processed LASCO and BBSO images are saved to file archive, and the physical properties of detected solar features such as intensity and speed are recorded in our database. Researchers are able to access the solar feature database and analyze the solar data efficiently and effectively. The detection and characterization system greatly improves

  2. Facial Expression Recognition

    NARCIS (Netherlands)

    Pantic, Maja; Li, S.; Jain, A.

    2009-01-01

    Facial expression recognition is a process performed by humans or computers, which consists of: 1. Locating faces in the scene (e.g., in an image; this step is also referred to as face detection), 2. Extracting facial features from the detected face region (e.g., detecting the shape of facial

  3. An alternative to scale-space representation for extracting local features in image recognition

    DEFF Research Database (Denmark)

    Andersen, Hans Jørgen; Nguyen, Phuong Giang

    2012-01-01

    and compensation, and finally a descriptor is computed for the derived patch (i.e. a feature of the patch). To avoid the memory- and computation-intensive process of constructing the scale-space, we use a method where no scale-space is required. This is done by dividing the given image into a number of triangles...

  4. Intelligent and interactive computer image of a nuclear power plant: The ImagIn project

    International Nuclear Information System (INIS)

    Haubensack, D.; Malvache, P.; Valleix, P.

    1998-01-01

    The ImagIn project consists of a method and a set of computer tools intended to bring perceptible and assessable improvements to the operational safety of a nuclear plant. Its aim is to design an information system that maintains a highly detailed computerized representation of a nuclear plant in its initial state and throughout its in-service life. It is not a tool to drive or help drive the nuclear plant, but a tool that manages concurrent operations that modify the plant configuration in a very general way (maintenance, for example). The configuration of the plant, as well as rules and constraints about it, are described in an object-oriented knowledge database, which is built using a generic ImagIn meta-model based on semantic network theory. An inference engine works on this database and is connected to reality through interfaces to operators and to sensors on the installation; it constantly verifies, in real time, the consistency of the database according to its inner rules, and reports eventual problems to the concerned operators. A special effort is made on interfaces to provide natural and intuitive tools (using virtual reality, natural language, and voice recognition and synthesis). A laboratory application on a fictitious but realistic installation already exists and is used to simulate various tests and scenarios. A real application is being constructed on Siloe, an experimental reactor of the CEA. (author)

  5. Hepatocellular Carcinoma Post Embolotherapy: Imaging Appearances and Pitfalls on Computed Tomography and Magnetic Resonance Imaging.

    Science.gov (United States)

    Chiu, Rita Y W; Yap, Wan W; Patel, Roshni; Liu, David; Klass, Darren; Harris, Alison C

    2016-05-01

    Embolotherapies used in the treatment of hepatocellular carcinoma (HCC) include bland embolization, conventional transarterial chemoembolization (cTACE) using ethiodol as a carrier, TACE with drug-eluting beads and super absorbent polymer microspheres (DEB-TACE), and selective internal radiation therapy (SIRT). Successfully treated HCC lesions undergo coagulation necrosis, and appear as nonenhancing hypoattenuating or hypointense lesions in the embolized region on computed tomography (CT) and magnetic resonance. Residual or recurrent tumours demonstrate arterial enhancement with portal venous phase wash-out of contrast, features characteristic of HCC, in and/or around the embolized area. Certain imaging features that result from the procedure itself may limit assessment of response. In conventional TACE, the high-attenuating retained ethiodized oil may obscure arterially-enhancing tumours and limit detection of residual tumours; thus a noncontrast CT on follow-up imaging is important post-cTACE. Hyperenhancement within or around the treated zone can be seen after cTACE, DEB-TACE, or SIRT due to physiologic inflammatory response and may mimic residual tumour. Recognition of these pitfalls is important in the evaluation of embolotherapy response. Crown Copyright © 2016. Published by Elsevier Inc. All rights reserved.

  6. Pattern recognition

    CERN Document Server

    Theodoridis, Sergios

    2003-01-01

    Pattern recognition is a scientific discipline that is becoming increasingly important in the age of automation and information handling and retrieval. Pattern Recognition, 2e covers the entire spectrum of pattern recognition applications, from image analysis to speech recognition and communications. This book presents cutting-edge material on neural networks - a set of linked microprocessors that can form associations and use pattern recognition to "learn" - and enhances student motivation by approaching pattern recognition from the designer's point of view. A direct result of more than 10

  7. Image analysis and modeling in medical image computing. Recent developments and advances.

    Science.gov (United States)

    Handels, H; Deserno, T M; Meinzer, H-P; Tolxdorff, T

    2012-01-01

    Medical image computing is of growing importance in medical diagnostics and image-guided therapy. Nowadays, image analysis systems integrating advanced image computing methods are used in practice e.g. to extract quantitative image parameters or to support the surgeon during a navigated intervention. However, the grade of automation, accuracy, reproducibility and robustness of medical image computing methods has to be increased to meet the requirements in clinical routine. In the focus theme, recent developments and advances in the field of modeling and model-based image analysis are described. The introduction of models in the image analysis process enables improvements of image analysis algorithms in terms of automation, accuracy, reproducibility and robustness. Furthermore, model-based image computing techniques open up new perspectives for prediction of organ changes and risk analysis of patients. Selected contributions are assembled to present latest advances in the field. The authors were invited to present their recent work and results based on their outstanding contributions to the Conference on Medical Image Computing BVM 2011 held at the University of Lübeck, Germany. All manuscripts had to pass a comprehensive peer review. Modeling approaches and model-based image analysis methods showing new trends and perspectives in model-based medical image computing are described. Complex models are used in different medical applications and medical images like radiographic images, dual-energy CT images, MR images, diffusion tensor images as well as microscopic images are analyzed. The applications emphasize the high potential and the wide application range of these methods. The use of model-based image analysis methods can improve segmentation quality as well as the accuracy and reproducibility of quantitative image analysis. Furthermore, image-based models enable new insights and can lead to a deeper understanding of complex dynamic mechanisms in the human body

  8. Comparison of Image reformation Using Personal Computer with Dentascan Program

    International Nuclear Information System (INIS)

    Kim, Eun Kyung

    1997-01-01

    This study was performed to demonstrate a method of image reformation for dental implants using a personal computer with inexpensive software, and to compare the images reformatted with this method against those produced by Dentascan software. CT axial slices of 5 mandibles of 5 volunteers from a GE Highspeed Advantage scanner (GE Medical Systems, U.S.A.) were used. The personal computer used for image reformation was a PowerWave 604/120 (Power Computing Co., U.S.A.), and the software used were Osiris (Univ. Hospital of Geneva, Switzerland) and Import ACCESS V1.1 (Designed Access Co., U.S.A.) for importing CT images and NIH Image 1.58 (NIH, U.S.A.) for image processing. Seven images were selected among the serial reconstructed cross-sectional images produced by Dentascan (DS group). Seven resliced cross-sectional images at the same positions were obtained on the personal computer (PC group). Regression analysis of the measurements of the PC group was done against those of the DS group. Measurements of bone height and width on the cross-sectional images reformatted with the Mac-compatible computer were highly correlated with those obtained on the workstation with Dentascan software (height: r2=0.999, p<0.001; width: r2=0.991, p<0.001). Therefore, a personal computer with inexpensive software can be used for dental implant planning instead of the expensive software and workstation.

  9. Multimedia Image Technology and Computer Aided Manufacturing Engineering Analysis

    Science.gov (United States)

    Nan, Song

    2018-03-01

    Since the reform and opening up, science and technology in China have developed continuously, and increasingly advanced technologies have emerged under a trend of diversification. Multimedia imaging technology, for example, has had a significant and positive impact on computer-aided manufacturing engineering in China, both in the functions it provides and in how those functions are put to use. Therefore, starting from the concept of multimedia image technology, this paper analyzes its application in computer-aided manufacturing engineering.

  10. Food Image Recognition via Superpixel Based Low-Level and Mid-Level Distance Coding for Smart Home Applications

    OpenAIRE

    Jiannan Zheng; Z. Jane Wang; Chunsheng Zhu

    2017-01-01

    Food image recognition is a key enabler for many smart home applications such as smart kitchen and smart personal nutrition log. In order to improve living experience and life quality, smart home systems collect valuable insights of users’ preferences, nutrition intake and health conditions via accurate and robust food image recognition. In addition, efficiency is also a major concern since many smart home applications are deployed on mobile devices where high-end GPUs are not available. In t...

  11. Image Visual Realism: From Human Perception to Machine Computation.

    Science.gov (United States)

    Fan, Shaojing; Ng, Tian-Tsong; Koenig, Bryan L; Herberg, Jonathan S; Jiang, Ming; Shen, Zhiqi; Zhao, Qi

    2017-08-30

    Visual realism is defined as the extent to which an image appears to people as a photo rather than computer generated. Assessing visual realism is important in applications like computer graphics rendering and photo retouching. However, current realism evaluation approaches use either labor-intensive human judgments or automated algorithms largely dependent on comparing renderings to reference images. We develop a reference-free computational framework for visual realism prediction to overcome these constraints. First, we construct a benchmark dataset of 2520 images with comprehensive human annotated attributes. From statistical modeling on this data, we identify image attributes most relevant for visual realism. We propose both empirically-based (guided by our statistical modeling of human data) and CNN-learned features to predict visual realism of images. Our framework has the following advantages: (1) it creates an interpretable and concise empirical model that characterizes human perception of visual realism; (2) it links computational features to latent factors of human image perception.

  12. Visual recognition of mirror, video-recorded, and still images in rats.

    Science.gov (United States)

    Yakura, Tomiko; Yokota, Hiroki; Ohmichi, Yusuke; Ohmichi, Mika; Nakano, Takashi; Naito, Munekazu

    2018-01-01

    Several recent studies have claimed that rodents have good visual recognition abilities. However, the extent to which rats can recognize other rats and distinguish between males and females using visual information alone remains unclear. In the present study, we investigated the ability of rats to visually recognize mirror, video-recorded, and still images and to discriminate between images of males and females. Rats were tested in a place preference apparatus with a mirror, a video-recorded image of a rat, or a still image of a rat at one end. The data were assessed using t-tests with Bonferroni correction. Male and female rats spent significantly more time in the mirror chamber and the video-recorded image chamber than in their respective blank chambers, and spent more time in front of the mirror than in front of the blank chamber, in the mirror, moving-image, and still-image experiments. Identical results were obtained regardless of whether the rat in the image was the same or opposite sex. These results indicate that rats can process the differences in mirror, video-recorded, and still images as visual information, but are unable to use this information to distinguish between the sexes.

  13. Pattern recognition applied to infrared images for early alerts in fog

    Science.gov (United States)

    Boucher, Vincent; Marchetti, Mario; Dumoulin, Jean; Cord, Aurélien

    2014-09-01

    Fog causes severe car accidents in western countries because of the poor visibility it induces, and its occurrence and intensity remain very difficult for weather services to predict. Infrared cameras can detect and identify objects in fog when visibility is too low for the human eye, and in recent years the implementation of cost-effective infrared cameras on some vehicles has enabled such detection. Pattern recognition algorithms based on Canny filters and the Hough transform are a common tool applied to images. Based on these facts, a joint research program between IFSTTAR and Cerema was developed to study the benefit of infrared images obtained in a fog tunnel during natural fog dissipation. Pattern recognition algorithms were applied, specifically to road signs, whose shape is usually associated with a specific meaning (circular for a speed limit, triangle for an alert, …). Road signs were detected in infrared images earlier than in visible-spectrum images, early enough to trigger useful alerts for Advanced Driver Assistance Systems.
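
    As a rough illustration of the Canny-plus-Hough idea mentioned above, the sketch below runs the standard OpenCV operators on a single grayscale frame to look for circular sign candidates. The file name and all thresholds are placeholders, not values from the IFSTTAR/Cerema study.

```python
# Minimal sketch of a Canny + Hough pipeline for circular sign candidates in a
# single infrared frame. "ir_frame.png" is a placeholder path and the thresholds
# are illustrative, not the values used in the cited study.
import cv2
import numpy as np

frame = cv2.imread("ir_frame.png", cv2.IMREAD_GRAYSCALE)
if frame is None:
    raise FileNotFoundError("provide an infrared frame to process")

blurred = cv2.GaussianBlur(frame, (5, 5), 1.5)   # suppress sensor noise
edges = cv2.Canny(blurred, 50, 150)              # explicit edge map for inspection;
                                                 # HoughCircles runs its own Canny internally

# Hough transform for circles (e.g., speed-limit signs); parameters are guesses.
circles = cv2.HoughCircles(
    blurred, cv2.HOUGH_GRADIENT, dp=1.2, minDist=40,
    param1=150, param2=30, minRadius=8, maxRadius=60,
)

if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        cv2.circle(frame, (x, y), r, 255, 2)     # mark candidate sign
print("circular candidates:", 0 if circles is None else len(circles[0]))
```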

  14. Statistical-techniques-based computer-aided diagnosis (CAD) using texture feature analysis: application in computed tomography (CT) imaging to fatty liver disease

    Science.gov (United States)

    Chung, Woon-Kwan; Park, Hyong-Hu; Im, In-Chul; Lee, Jae-Seung; Goo, Eun-Hoe; Dong, Kyung-Rae

    2012-09-01

    This paper proposes a computer-aided diagnosis (CAD) system based on texture feature analysis and statistical wavelet transformation technology to diagnose fatty liver disease with computed tomography (CT) imaging. In the target image, a wavelet transformation was performed for each lesion area to set the region of analysis (ROA, window size: 50 × 50 pixels) and define the texture feature of a pixel. Based on the extracted texture feature values, six parameters (average gray level, average contrast, relative smoothness, skewness, uniformity, and entropy) were determined to calculate the recognition rate for a fatty liver. In addition, a multivariate analysis of the variance (MANOVA) method was used to perform a discriminant analysis to verify the significance of the extracted texture feature values and the recognition rate for a fatty liver. According to the results, each texture feature value was significant for a comparison of the recognition rate for a fatty liver (p < 0.05). Furthermore, the F-value, which was used as a scale for the difference in recognition rates, was highest in the average gray level, relatively high in the skewness and the entropy, and relatively low in the uniformity, the relative smoothness and the average contrast. The recognition rate for a fatty liver had the same scale as that for the F-value, showing 100% (average gray level) at the maximum and 80% (average contrast) at the minimum. Therefore, the recognition rate is believed to be a useful clinical value for the automatic detection and computer-aided diagnosis (CAD) using the texture feature value. Nevertheless, further study on various diseases and singular diseases will be needed in the future.
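
    The six first-order texture parameters listed above can be computed from the gray-level histogram of a region of analysis. The sketch below uses the common histogram-based definitions; the exact normalizations used by the authors are an assumption here.

```python
# First-order statistical texture descriptors of a grayscale ROA, in the spirit
# of the six parameters named in the abstract. The definitions follow the usual
# histogram-based forms, which may differ in detail from the authors' versions.
import numpy as np

def texture_features(roa: np.ndarray, levels: int = 256) -> dict:
    """Compute histogram-based texture measures for a 2-D uint8 region of analysis."""
    hist = np.bincount(roa.ravel(), minlength=levels).astype(float)
    p = hist / hist.sum()                        # gray-level probabilities
    g = np.arange(levels, dtype=float)

    mean = float((g * p).sum())                  # average gray level
    var = float(((g - mean) ** 2 * p).sum())
    contrast = float(np.sqrt(var))               # average contrast (standard deviation)
    smoothness = 1.0 - 1.0 / (1.0 + var / (levels - 1) ** 2)   # relative smoothness
    skewness = float(((g - mean) ** 3 * p).sum() / (levels - 1) ** 2)
    uniformity = float((p ** 2).sum())
    entropy = float(-(p[p > 0] * np.log2(p[p > 0])).sum())

    return {"mean": mean, "contrast": contrast, "smoothness": smoothness,
            "skewness": skewness, "uniformity": uniformity, "entropy": entropy}

# Example on a synthetic 50x50 ROA
roa = np.random.default_rng(0).normal(120, 25, (50, 50)).clip(0, 255).astype(np.uint8)
print(texture_features(roa))
```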

  15. Medical imaging technology reviews and computational applications

    CERN Document Server

    Dewi, Dyah

    2015-01-01

    This book presents the latest research findings and reviews in the field of medical imaging technology, covering ultrasound diagnostics approaches for detecting osteoarthritis, breast carcinoma and cardiovascular conditions, image guided biopsy and segmentation techniques for detecting lung cancer, image fusion, and simulating fluid flows for cardiovascular applications. It offers a useful guide for students, lecturers and professional researchers in the fields of biomedical engineering and image processing.

  16. Automated Melanoma Recognition in Dermoscopy Images via Very Deep Residual Networks.

    Science.gov (United States)

    Yu, Lequan; Chen, Hao; Dou, Qi; Qin, Jing; Heng, Pheng-Ann

    2017-04-01

    Automated melanoma recognition in dermoscopy images is a very challenging task due to the low contrast of skin lesions, the huge intraclass variation of melanomas, the high degree of visual similarity between melanoma and non-melanoma lesions, and the existence of many artifacts in the image. In order to meet these challenges, we propose a novel method for melanoma recognition by leveraging very deep convolutional neural networks (CNNs). Compared with existing methods employing either low-level hand-crafted features or CNNs with shallower architectures, our substantially deeper networks (more than 50 layers) can acquire richer and more discriminative features for more accurate recognition. To take full advantage of very deep networks, we propose a set of schemes to ensure effective training and learning under limited training data. First, we apply residual learning to cope with the degradation and overfitting problems when a network goes deeper. This technique can ensure that our networks benefit from the performance gains achieved by increasing network depth. Then, we construct a fully convolutional residual network (FCRN) for accurate skin lesion segmentation, and further enhance its capability by incorporating a multi-scale contextual information integration scheme. Finally, we seamlessly integrate the proposed FCRN (for segmentation) and other very deep residual networks (for classification) to form a two-stage framework. This framework enables the classification network to extract more representative and specific features based on segmented results instead of the whole dermoscopy images, further alleviating the insufficiency of training data. The proposed framework is extensively evaluated on the ISBI 2016 Skin Lesion Analysis Towards Melanoma Detection Challenge dataset. Experimental results demonstrate the significant performance gains of the proposed framework, ranking first in classification among 25 teams and second in segmentation among 28 teams.
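
    The key building block referenced above is residual learning. The following is a generic sketch of a residual block, assuming PyTorch is available; it is not the authors' FCRN or two-stage framework.

```python
# A minimal residual block illustrating the residual learning idea: an identity
# skip connection is added to a small convolutional stack, easing optimization
# as networks get deeper. Generic sketch assuming PyTorch; not the paper's model.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The stack learns a residual F(x); the block outputs F(x) + x.
        return self.relu(self.body(x) + x)

block = ResidualBlock(64)
y = block(torch.randn(1, 64, 56, 56))   # spatial shape and channels are preserved
print(y.shape)
```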

  17. Recognition of Time Stamps on Full-Disk Hα Images Using Machine Learning Methods

    Science.gov (United States)

    Xu, Y.; Huang, N.; Jing, J.; Liu, C.; Wang, H.; Fu, G.

    2016-12-01

    Observation and understanding of the physics of the 11-year solar activity cycle and 22-year magnetic cycle are among the most important research topics in solar physics. The solar cycle is responsible for magnetic field and particle fluctuations in the near-Earth environment that have been found to be increasingly important to human life in the modern era. A systematic study of large-scale solar activities, as made possible by our rich data archive, will further help us to understand the global-scale magnetic fields that are closely related to solar cycles. The long-time-span data archive includes both full-disk and high-resolution Hα images. Prior to the wide use of CCD cameras in the 1990s, 35-mm film was the major medium for storing images. The research group at NJIT recently finished the digitization of film data obtained by the National Solar Observatory (NSO) and Big Bear Solar Observatory (BBSO) covering the period of 1953 to 2000. The total volume of data exceeds 60 TB. To make this huge database scientifically valuable, some processing and calibration are required. One of the most important steps is to read the time stamps on all of the 14 million images, which is almost impossible to do manually. We implemented three different methods to recognize the time stamps automatically, including Optical Character Recognition (OCR), Classification Tree and TensorFlow. The latter two are machine learning algorithms that are currently very popular in the pattern recognition area. We will present some sample images and the results of clock recognition from all three methods.

  18. Rotation-robust math symbol recognition and retrieval using outer contours and image subsampling

    Science.gov (United States)

    Zhu, Siyu; Hu, Lei; Zanibbi, Richard

    2013-01-01

    This paper presents a unified recognition and retrieval system for isolated offline printed mathematical symbols for the first time. The system is based on a nearest-neighbor scheme and uses a modified Turning Function and Grid Features to calculate the distance between two symbols using the Sum of Squared Differences. An unwrap process and an alignment process are applied to modify the Turning Function to deal with the horizontal and vertical shifts caused by changes of starting point and by rotation. This modified Turning Function makes the system robust against rotation of the symbol image. The system obtains a top-1 recognition rate of 96.90% and 47.27% Area Under Curve (AUC) of the precision/recall plot on the InftyCDB-3 dataset. Experimental results show that the system with the modified Turning Function performs significantly better than the system with the original Turning Function on the rotated InftyCDB-3 dataset.
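
    A basic (unmodified) turning function plus a sum-of-squared-differences comparison can be sketched as below; the unwrap/alignment modifications described in the paper are not reproduced, and the sampling resolution is an arbitrary choice.

```python
# Basic turning function of a closed polygonal contour (cumulative edge direction
# as a function of normalized arc length), compared via sum of squared differences.
# This is the plain construction, without the paper's start-point/rotation handling.
import numpy as np

def turning_function(vertices: np.ndarray, samples: int = 128) -> np.ndarray:
    """Piecewise-constant turning function sampled at `samples` arc-length positions."""
    edges = np.roll(vertices, -1, axis=0) - vertices       # edge vectors of the polygon
    lengths = np.linalg.norm(edges, axis=1)
    angles = np.arctan2(edges[:, 1], edges[:, 0])          # absolute edge directions
    theta = np.unwrap(angles)                              # cumulative turning per edge
    s = np.cumsum(lengths) / lengths.sum()                 # arc length at the end of each edge
    grid = (np.arange(samples) + 0.5) / samples            # sample positions in (0, 1)
    idx = np.searchsorted(s, grid)                         # edge on which each sample falls
    return theta[idx]

def ssd(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.sum((a - b) ** 2))

square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
diamond = np.array([[0.5, 0], [1, 0.5], [0.5, 1], [0, 0.5]], float)
print("SSD distance:", round(ssd(turning_function(square), turning_function(diamond)), 3))
```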

  19. A multimodal imaging study of recognition memory in very preterm born adults

    Science.gov (United States)

    Froudist‐Walsh, Seán; Brittain, Philip J.; Karolis, Vyacheslav; Caldinelli, Chiara; Kroll, Jasmin; Counsell, Serena J.; Williams, Steven C.R.; Murray, Robin M.; Nosarti, Chiara

    2016-01-01

    We studied recognition memory with functional MRI in 49 very preterm-born adults and 50 controls (mean age: 30 years) during completion of a task involving visual encoding and recognition of abstract pictures. T1-weighted and diffusion-weighted images were also collected. Bilateral hippocampal volumes were calculated and tractography of the fornix and cingulum was performed and assessed in terms of volume and hindrance modulated orientational anisotropy (HMOA). Online recognition memory task performance, assessed with A scores, was poorer in the very preterm compared with the control group. Analysis of fMRI data focused on differences in neural activity between the recognition and encoding trials. Very preterm born adults showed decreased activation in the right middle frontal gyrus and posterior cingulate cortex/precuneus and increased activation in the left inferior frontal gyrus and bilateral lateral occipital cortex (LOC) compared with controls. Hippocampi, fornix and cingulum volume was significantly smaller and fornix HMOA was lower in very preterm adults. Among all the structural and functional brain metrics that showed statistically significant group differences, LOC activation was the best predictor of online task performance (P = 0.020). In terms of association between brain function and structure, LOC activation was predicted by fornix HMOA in the preterm group only (P = 0.020). These results suggest that neuroanatomical alterations in very preterm born individuals may underlie their poorer recognition memory performance. Hum Brain Mapp 38:644–655, 2017. © 2016 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc. PMID:27647705

  20. Recognition of Mould Colony on Unhulled Paddy Based on Computer Vision using Conventional Machine-learning and Deep Learning Techniques.

    Science.gov (United States)

    Sun, Ke; Wang, Zhengjie; Tu, Kang; Wang, Shaojin; Pan, Leiqing

    2016-11-29

    To investigate the potential of conventional and deep learning techniques to recognize the species and distribution of mould in unhulled paddy, samples were inoculated and cultivated with five species of mould, and sample images were captured. The mould recognition methods were built using support vector machine (SVM), back-propagation neural network (BPNN), convolutional neural network (CNN), and deep belief network (DBN) models. An accuracy rate of 100% was achieved by using the DBN model to identify the mould species in the sample images based on selected colour-histogram parameters, followed by the SVM and BPNN models. A pitch segmentation recognition method combined with different classification models was developed to recognize the mould colony areas in the image. The accuracy rates of the SVM and CNN models for pitch classification were approximately 90% and were higher than those of the BPNN and DBN models. The CNN and DBN models showed quicker calculation speeds for recognizing all of the pitches segmented from a single sample image. Finally, an efficient uniform CNN pitch classification model for all five types of sample images was built. This work compares multiple classification models and provides feasible recognition methods for mouldy unhulled paddy recognition.
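
    A minimal sketch of the colour-histogram-plus-classifier route described above, using an SVM on synthetic data; the feature layout and kernel settings are assumptions, not the authors' configuration.

```python
# Sketch: summarize each sample image by per-channel colour histograms and train
# an SVM to predict the mould species. The data here is synthetic, so the score
# is near chance; with real sample images the same pipeline applies.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

def colour_histogram(image: np.ndarray, bins: int = 16) -> np.ndarray:
    """Concatenated, normalized per-channel histograms of an RGB uint8 image."""
    feats = [np.histogram(image[..., c], bins=bins, range=(0, 256), density=True)[0]
             for c in range(3)]
    return np.concatenate(feats)

# Synthetic stand-in for captured sample images of 5 mould species
images = rng.integers(0, 256, size=(200, 64, 64, 3), dtype=np.uint8)
labels = rng.integers(0, 5, size=200)

X = np.array([colour_histogram(im) for im in images])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25, random_state=0)

clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```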

  1. Recognition of Mould Colony on Unhulled Paddy Based on Computer Vision using Conventional Machine-learning and Deep Learning Techniques

    Science.gov (United States)

    Sun, Ke; Wang, Zhengjie; Tu, Kang; Wang, Shaojin; Pan, Leiqing

    2016-11-01

    To investigate the potential of conventional and deep learning techniques to recognize the species and distribution of mould in unhulled paddy, samples were inoculated and cultivated with five species of mould, and sample images were captured. The mould recognition methods were built using support vector machine (SVM), back-propagation neural network (BPNN), convolutional neural network (CNN), and deep belief network (DBN) models. An accuracy rate of 100% was achieved by using the DBN model to identify the mould species in the sample images based on selected colour-histogram parameters, followed by the SVM and BPNN models. A pitch segmentation recognition method combined with different classification models was developed to recognize the mould colony areas in the image. The accuracy rates of the SVM and CNN models for pitch classification were approximately 90% and were higher than those of the BPNN and DBN models. The CNN and DBN models showed quicker calculation speeds for recognizing all of the pitches segmented from a single sample image. Finally, an efficient uniform CNN pitch classification model for all five types of sample images was built. This work compares multiple classification models and provides feasible recognition methods for mouldy unhulled paddy recognition.

  2. Learning and recognition of on-premise signs from weakly labeled street view images.

    Science.gov (United States)

    Tsai, Tsung-Hung; Cheng, Wen-Huang; You, Chuang-Wen; Hu, Min-Chun; Tsui, Arvin Wen; Chi, Heng-Yu

    2014-03-01

    Camera-enabled mobile devices are commonly used as interaction platforms for linking the user's virtual and physical worlds in numerous research and commercial applications, such as serving an augmented reality interface for mobile information retrieval. These application scenarios give rise to a key technique: visual recognition of objects encountered in daily life. On-premise signs (OPSs), a popular form of commercial advertising, are widely used in everyday life. OPSs often exhibit great visual diversity (e.g., appearing in arbitrary sizes), accompanied by complex environmental conditions (e.g., foreground and background clutter). Observing that such real-world characteristics are lacking in most of the existing image data sets, in this paper, we first propose an OPS data set, namely OPS-62, in which a total of 4649 OPS images of 62 different businesses were collected from Google's Street View. Further, to address the problem of real-world OPS learning and recognition, we developed a probabilistic framework based on distributional clustering, in which we propose to exploit the distributional information of each visual feature (the distribution of its associated OPS labels) as a reliable selection criterion for building discriminative OPS models. Experiments on the OPS-62 data set demonstrated that our approach outperforms state-of-the-art probabilistic latent semantic analysis models, with more accurate recognition and fewer false alarms and a significant 151.28% relative improvement in the average recognition rate. Meanwhile, our approach is simple, linear, and can be executed in a parallel fashion, making it practical and scalable for large-scale multimedia applications.

  3. Computational anatomy based on whole body imaging basic principles of computer-assisted diagnosis and therapy

    CERN Document Server

    Masutani, Yoshitaka

    2017-01-01

    This book deals with computational anatomy, an emerging discipline recognized in medical science as a derivative of conventional anatomy. It is also a completely new research area on the boundaries of several sciences and technologies, such as medical imaging, computer vision, and applied mathematics. Computational Anatomy Based on Whole Body Imaging highlights the underlying principles, basic theories, and fundamental techniques in computational anatomy, which are derived from conventional anatomy, medical imaging, computer vision, and applied mathematics, in addition to various examples of applications in clinical data. The book will cover topics on the basics and applications of the new discipline. Drawing from areas in multidisciplinary fields, it provides comprehensive, integrated coverage of innovative approaches to computational anatomy. As well, Computational Anatomy Based on Whole Body Imaging serves as a valuable resource for researchers including graduate students in the field and a connection with ...

  4. Facial recognition software success rates for the identification of 3D surface reconstructed facial images: implications for patient privacy and security.

    Science.gov (United States)

    Mazura, Jan C; Juluru, Krishna; Chen, Joseph J; Morgan, Tara A; John, Majnu; Siegel, Eliot L

    2012-06-01

    Image de-identification has focused on the removal of textual protected health information (PHI). Surface reconstructions of the face have the potential to reveal a subject's identity even when textual PHI is absent. This study assessed the ability of a computer application to match research subjects' 3D facial reconstructions with conventional photographs of their face. In a prospective study, 29 subjects underwent CT scans of the head and had frontal digital photographs of their face taken. Facial reconstructions of each CT dataset were generated on a 3D workstation. In phase 1, photographs of the 29 subjects undergoing CT scans were added to a digital directory and tested for recognition using facial recognition software. In phases 2-4, additional photographs were added in groups of 50 to increase the pool of possible matches and the test for recognition was repeated. As an internal control, photographs of all subjects were tested for recognition against an identical photograph. Of 3D reconstructions, 27.5% were matched correctly to corresponding photographs (95% upper CL, 40.1%). All study subject photographs were matched correctly to identical photographs (95% lower CL, 88.6%). Of 3D reconstructions, 96.6% were recognized simply as a face by the software (95% lower CL, 83.5%). Facial recognition software has the potential to recognize features on 3D CT surface reconstructions and match these with photographs, with implications for PHI.

  5. An integrated compact airborne multispectral imaging system using embedded computer

    Science.gov (United States)

    Zhang, Yuedong; Wang, Li; Zhang, Xuguo

    2015-08-01

    An integrated compact airborne multispectral imaging system using an embedded-computer-based control system was developed for small-aircraft multispectral imaging applications. The multispectral imaging system integrates a CMOS camera, a filter wheel with eight filters, a two-axis stabilized platform, a miniature POS (position and orientation system) and an embedded computer. The embedded computer offers excellent universality and expandability, and has advantages in volume and weight for an airborne platform, so it can meet the requirements of the control system of the integrated airborne multispectral imaging system. The embedded computer controls the camera parameter settings, the operation of the filter wheel and stabilized platform, and image and POS data acquisition, and stores the images and data. The airborne multispectral imaging system can connect peripheral devices through the ports of the embedded computer, so system operation and management of the stored image data are easy. This airborne multispectral imaging system has the advantages of small volume, multiple functions, and good expandability. The imaging experiment results show that this system has potential for multispectral remote sensing in applications such as resource investigation and environmental monitoring.

  6. Iris image recognition wavelet filter-banks based iris feature extraction schemes

    CERN Document Server

    Rahulkar, Amol D

    2014-01-01

    This book provides new results in wavelet filter-bank-based feature extraction and classification in the field of iris image recognition. It provides a broad treatment of the design of separable and non-separable wavelet filter banks and of the classifier. The design techniques presented in the book are applied to iris image analysis for person authentication. This book also brings together the three strands of research (wavelets, iris image analysis, and classification). It compares the performance of the presented techniques with state-of-the-art available schemes. The book compiles basic material on the design of wavelets, so readers do not need to consult many different books, providing an easier path for newcomers and researchers to master the contents. In addition, the designed filter banks and classifier can also be used more effectively than existing filter banks in many signal processing applications like pattern classification, data compression, watermarking, denoising etc. that will...

  7. An Underwater Image Enhancement Algorithm for Environment Recognition and Robot Navigation

    Directory of Open Access Journals (Sweden)

    Kun Xie

    2018-03-01

    There are many tasks that require clear and easily recognizable images in the field of underwater robotics and marine science, such as underwater target detection and identification, robot navigation and obstacle avoidance. However, water turbidity makes underwater image quality too low for reliable recognition. This paper proposes the use of the dark channel prior model for underwater environment recognition, in which underwater reflection models are used to obtain enhanced images. The proposed approach achieves very good performance and multi-scene robustness by combining the dark channel prior model with the underwater diffuse model. Experimental results are given to show the effectiveness of the dark channel prior model in underwater scenarios.
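
    The central quantity in this family of methods is the dark channel. A minimal computation is sketched below, assuming OpenCV, with an illustrative patch size and a placeholder file name; the full enhancement pipeline (transmission estimation and the underwater diffuse model) is not reproduced.

```python
# Minimal dark channel computation: per-pixel minimum over colour channels
# followed by a local minimum filter over a square patch.
import cv2
import numpy as np

def dark_channel(image_bgr: np.ndarray, patch: int = 15) -> np.ndarray:
    """Dark channel of a BGR image as a float array in [0, 1]."""
    img = image_bgr.astype(np.float32) / 255.0
    min_channels = img.min(axis=2)                       # per-pixel channel minimum
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(min_channels, kernel)               # local minimum over the patch

frame = cv2.imread("underwater_frame.png")               # placeholder path
if frame is not None:
    dc = dark_channel(frame)
    # Bright dark-channel values indicate haze/backscatter that the full
    # enhancement pipeline would remove via a transmission estimate.
    print("mean dark channel:", float(dc.mean()))
```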

  8. Statistical-techniques-based computer-aided diagnosis (CAD) using texture feature analysis: application in computed tomography (CT) imaging to fatty liver disease

    International Nuclear Information System (INIS)

    Chung, Woon-Kwan; Park, Hyong-Hu; Im, In-Chul; Lee, Jae-Seung; Goo, Eun-Hoe; Dong, Kyung-Rae

    2012-01-01

    This paper proposes a computer-aided diagnosis (CAD) system based on texture feature analysis and statistical wavelet transformation technology to diagnose fatty liver disease with computed tomography (CT) imaging. In the target image, a wavelet transformation was performed for each lesion area to set the region of analysis (ROA, window size: 50 x 50 pixels) and define the texture feature of a pixel. Based on the extracted texture feature values, six parameters (average gray level, average contrast, relative smoothness, skewness, uniformity, and entropy) were determined to calculate the recognition rate for a fatty liver. In addition, a multivariate analysis of the variance (MANOVA) method was used to perform a discriminant analysis to verify the significance of the extracted texture feature values and the recognition rate for a fatty liver. According to the results, each texture feature value was significant for a comparison of the recognition rate for a fatty liver (p < 0.05). Furthermore, the F-value, which was used as a scale for the difference in recognition rates, was highest in the average gray level, relatively high in the skewness and the entropy, and relatively low in the uniformity, the relative smoothness and the average contrast. The recognition rate for a fatty liver had the same scale as that for the F-value, showing 100% (average gray level) at the maximum and 80% (average contrast) at the minimum. Therefore, the recognition rate is believed to be a useful clinical value for the automatic detection and computer-aided diagnosis (CAD) using the texture feature value. Nevertheless, further study on various diseases and singular diseases will be needed in the future.

  9. Neuroscience-inspired computational systems for speech recognition under noisy conditions

    Science.gov (United States)

    Schafer, Phillip B.

    Humans routinely recognize speech in challenging acoustic environments with background music, engine sounds, competing talkers, and other acoustic noise. However, today's automatic speech recognition (ASR) systems perform poorly in such environments. In this dissertation, I present novel methods for ASR designed to approach human-level performance by emulating the brain's processing of sounds. I exploit recent advances in auditory neuroscience to compute neuron-based representations of speech, and design novel methods for decoding these representations to produce word transcriptions. I begin by considering speech representations modeled on the spectrotemporal receptive fields of auditory neurons. These representations can be tuned to optimize a variety of objective functions, which characterize the response properties of a neural population. I propose an objective function that explicitly optimizes the noise invariance of the neural responses, and find that it gives improved performance on an ASR task in noise compared to other objectives. The method as a whole, however, fails to significantly close the performance gap with humans. I next consider speech representations that make use of spiking model neurons. The neurons in this method are feature detectors that selectively respond to spectrotemporal patterns within short time windows in speech. I consider a number of methods for training the response properties of the neurons. In particular, I present a method using linear support vector machines (SVMs) and show that this method produces spikes that are robust to additive noise. I compute the spectrotemporal receptive fields of the neurons for comparison with previous physiological results. To decode the spike-based speech representations, I propose two methods designed to work on isolated word recordings. The first method uses a classical ASR technique based on the hidden Markov model. The second method is a novel template-based recognition scheme that takes

  10. A dynamic image recognition method for sleeper springs trouble of moving freight cars based on Haar features

    Science.gov (United States)

    Zhou, Fuqiang; Jiang, Yuan; Zhang, Guangjun

    2006-11-01

    A novel concept of automatic recognition of trouble-free sleeper springs is proposed, and the Adaboost algorithm based on Haar features is applied to sleeper spring recognition in the Trouble of moving Freight car Detection System (TFDS). In the recognition system, the feature set of sleeper springs is determined by Haar features and selected by the Adaboost algorithm. In order to recognize and select the trouble-free sleeper springs from all the captured dynamic images, a cascade of classifiers is established by searching the dynamic images. The amount of detected images is drastically reduced and the recognition efficiency is improved due to the concept of trouble-free recognition. Experiments show that the proposed method is characterized by simple features, high efficiency and robustness. It exhibits high robustness against noise as well as translation, rotation and scale transformations of objects, and high stability on images with poor quality such as low resolution, partial occlusion, poor illumination and overexposure. The recognition time for a 640×480 image is about 16 ms, and the Correct Detection Rate is as high as about 97%, while the Miss Detection Rate and Error Detection Rate are very low. The proposed method can recognize sleeper springs in all-weather conditions, which advances the engineering application of TFDS.
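
    Cascade detection with Haar features, as used above, is commonly run through OpenCV. The sketch below assumes a hypothetical trained cascade file for sleeper springs (OpenCV ships only generic cascades such as frontal-face models) and illustrative detection parameters.

```python
# Sketch of Haar-cascade detection on a single frame. "sleeper_spring_cascade.xml"
# is a hypothetical model (e.g., produced by opencv_traincascade); the image path
# and detectMultiScale parameters are placeholders.
import cv2

cascade = cv2.CascadeClassifier("sleeper_spring_cascade.xml")        # hypothetical model
frame = cv2.imread("tfds_frame_640x480.png", cv2.IMREAD_GRAYSCALE)   # placeholder frame

if not cascade.empty() and frame is not None:
    detections = cascade.detectMultiScale(
        frame, scaleFactor=1.1, minNeighbors=4, minSize=(24, 24)
    )
    for (x, y, w, h) in detections:
        cv2.rectangle(frame, (x, y), (x + w, y + h), 255, 2)          # mark candidates
    print("candidate regions:", len(detections))
```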

  11. Rapid and accurate developmental stage recognition of C. elegans from high-throughput image data

    Science.gov (United States)

    White, Amelia G.; Cipriani, Patricia G.; Kao, Huey-Ling; Lees, Brandon; Geiger, Davi; Sontag, Eduardo; Gunsalus, Kristin C.; Piano, Fabio

    2011-01-01

    We present a hierarchical principle for object recognition and its application to automatically classify developmental stages of C. elegans animals from a population of mixed stages. The object recognition machine consists of four hierarchical layers, each composed of units upon which evaluation functions output a label score, followed by a grouping mechanism that resolves ambiguities in the score by imposing local consistency constraints. Each layer then outputs groups of units, from which the units of the next layer are derived. Using this hierarchical principle, the machine builds up successively more sophisticated representations of the objects to be classified. The algorithm segments large and small objects, decomposes objects into parts, extracts features from these parts, and classifies them by SVM. We are using this system to analyze phenotypic data from C. elegans high-throughput genetic screens, and our system overcomes a previous bottleneck in image analysis by achieving near real-time scoring of image data. The system is in current use in a functioning C. elegans laboratory and has processed over two hundred thousand images for lab users. PMID:22053146

  12. Computer-Mediated Input, Output and Feedback in the Development of L2 Word Recognition from Speech

    Science.gov (United States)

    Matthews, Joshua; Cheng, Junyu; O'Toole, John Mitchell

    2015-01-01

    This paper reports on the impact of computer-mediated input, output and feedback on the development of second language (L2) word recognition from speech (WRS). A quasi-experimental pre-test/treatment/post-test research design was used involving three intact tertiary level English as a Second Language (ESL) classes. Classes were either assigned to…

  13. Computer-Assisted Face Processing Instruction Improves Emotion Recognition, Mentalizing, and Social Skills in Students with ASD

    Science.gov (United States)

    Rice, Linda Marie; Wall, Carla Anne; Fogel, Adam; Shic, Frederick

    2015-01-01

    This study examined the extent to which a computer-based social skills intervention called "FaceSay"™ was associated with improvements in affect recognition, mentalizing, and social skills of school-aged children with Autism Spectrum Disorder (ASD). "FaceSay"™ offers students simulated practice with eye gaze, joint attention,…

  14. Automatic anatomy recognition in post-tonsillectomy MR images of obese children with OSAS

    Science.gov (United States)

    Tong, Yubing; Udupa, Jayaram K.; Odhner, Dewey; Sin, Sanghun; Arens, Raanan

    2015-03-01

    Automatic Anatomy Recognition (AAR) is a recently developed approach for automatic whole-body organ segmentation. We previously tested that methodology on image cases with some pathology in which the organs were not significantly distorted. In this paper, we present an advancement of AAR to handle organs which may have been modified or resected by surgical intervention. We focus on MRI of the neck in pediatric Obstructive Sleep Apnea Syndrome (OSAS). The proposed method consists of an AAR step followed by support vector machine techniques to detect the presence/absence of organs. The AAR step employs a hierarchical organization of the organs for model building. For each organ, a fuzzy model over a population is built. The model of the body region is then described in terms of the fuzzy models and a host of other descriptors which include parent-to-offspring relationships estimated over the population. Organs are recognized following the organ hierarchy by using an optimal threshold-based search. The SVM step subsequently checks for evidence of the presence of organs. Experimental results show that AAR techniques can be combined with machine learning strategies within the AAR recognition framework for good performance in recognizing missing organs, in our case missing tonsils in post-tonsillectomy images as well as in simulated tonsillectomy images. The previous recognition performance is maintained, achieving an organ localization accuracy within 1 voxel when the organ is not actually removed. To our knowledge, no methods have been reported to date for handling significantly deformed or missing organs, especially in neck MRI.

  15. Correction for polychromatic aberration in computed tomography images

    International Nuclear Information System (INIS)

    Naparstek, A.

    1979-01-01

    A method and apparatus for correcting a computed tomography image for polychromatic aberration caused by the non-linear interaction (i.e. the energy dependent attenuation characteristics) of different body constituents, such as bone and soft tissue, with a polychromatic X-ray beam are described in detail. An initial image is conventionally computed from path measurements made as source and detector assembly scan a body section. In the improvement, each image element of the initial computed image representing attenuation is recorded in a store and is compared with two thresholds, one representing bone and the other soft tissue. Depending on the element value relative to the thresholds, a proportion of the respective constituent is allocated to that element location and corresponding bone and soft tissue projections are determined and stored. An error projection generator calculates projections of polychromatic aberration errors in the raw image data from recalled bone and tissue projections using a multidimensional polynomial function which approximates the non-linear interaction involved. After filtering, these are supplied to an image reconstruction computer to compute image element correction values which are subtracted from raw image element values to provide a corrected reconstructed image for display. (author)
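
    The correction flow described above can be sketched schematically. The snippet below uses made-up thresholds, simple row sums in place of real scanner projection geometry, and placeholder polynomial coefficients, so it only illustrates the order of operations, not the patented apparatus.

```python
# Schematic of the polychromatic-aberration correction flow, under strong
# simplifying assumptions: pixels of an initial image are split into bone and
# soft-tissue fractions by two thresholds, per-row sums stand in for projections,
# and the polynomial error model uses invented coefficients.
import numpy as np

rng = np.random.default_rng(0)
image = rng.uniform(0.0, 2.0, size=(64, 64))      # stand-in initial reconstruction

T_TISSUE, T_BONE = 0.3, 1.2                       # illustrative thresholds
bone_frac = np.clip((image - T_TISSUE) / (T_BONE - T_TISSUE), 0.0, 1.0)
tissue_frac = 1.0 - bone_frac                     # remainder treated as soft tissue

# Stand-in "projections": simple row sums of each constituent image.
bone_proj = (image * bone_frac).sum(axis=1)
tissue_proj = (image * tissue_frac).sum(axis=1)

# Placeholder second-order polynomial error model in the two projections.
c_b2, c_bt, c_t2 = 1e-3, 5e-4, 1e-4
error_proj = c_b2 * bone_proj**2 + c_bt * bone_proj * tissue_proj + c_t2 * tissue_proj**2

raw_proj = image.sum(axis=1)                      # stand-in for the measured path data
corrected_proj = raw_proj - error_proj            # would then be reconstructed again
print("mean correction per projection:", float(error_proj.mean()))
```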

  16. A machine for neural computation of acoustical patterns with application to real time speech recognition

    Science.gov (United States)

    Mueller, P.; Lazzaro, J.

    1986-08-01

    400 analog electronic neurons have been assembled and connected for the analysis and recognition of acoustical patterns, including speech. Input to the net comes from a set of 18 band pass filters (Qmax 300 dB/octave; 180 to 6000 Hz, log scale). The net is organized into two parts: the first performs in real time the decomposition of the input patterns into their primitives of energy, space (frequency) and time relations. The other part decodes the set of primitives. 216 neurons are dedicated to pattern decomposition. The output of the individual filters is rectified and fed to two sets of 18 neurons in an opponent center-surround organization of synaptic connections ("on center" and "off center"). These units compute maxima and minima of energy at different frequencies. The next two sets of neurons compute the temporal boundaries ("on" and "off") and the following two the movement of the energy maxima (formants) up or down the frequency axis. There are in addition "hyperacuity" units which expand the frequency resolution to 36, other units tuned to a particular range of duration of the "on center" units and others tuned exclusively to very low energy sounds. In order to recognize speech sounds at the phoneme or diphone level, the set of primitives belonging to the phoneme is decoded such that only one neuron or a non-overlapping group of neurons fires when the sound pattern is present at the input. For display and translation into phonetic symbols the output from these neurons is fed into an EPROM decoder and computer which displays in real time a phonetic representation of the speech input.

  17. The Müller-Lyer Illusion in a computational model of biological object recognition.

    Directory of Open Access Journals (Sweden)

    Astrid Zeman

    Studying illusions provides insight into the way the brain processes information. The Müller-Lyer Illusion (MLI) is a classical geometrical illusion of size, in which perceived line length is decreased by arrowheads and increased by arrowtails. Many theories have been put forward to explain the MLI, such as misapplied size constancy scaling, the statistics of image-source relationships and the filtering properties of signal processing in primary visual areas. Artificial models of the ventral visual processing stream allow us to isolate factors hypothesised to cause the illusion and test how these affect classification performance. We trained a feed-forward feature hierarchical model, HMAX, to perform a dual category line length judgment task (short versus long) with over 90% accuracy. We then tested the system in its ability to judge relative line lengths for images in a control set versus images that induce the MLI in humans. Results from the computational model show an overall illusory effect similar to that experienced by human subjects. No natural images were used for training, implying that misapplied size constancy and image-source statistics are not necessary factors for generating the illusion. A post-hoc analysis of response weights within a representative trained network ruled out the possibility that the illusion is caused by a reliance on information at low spatial frequencies. Our results suggest that the MLI can be produced using only feed-forward, neurophysiological connections.

  18. Similarity measures for face recognition

    CERN Document Server

    Vezzetti, Enrico

    2015-01-01

    Face recognition has several applications, including security (authentication and identification of device users and criminal suspects) and medicine (corrective surgery and diagnosis). Facial recognition programs rely on algorithms that can compare and compute the similarity between two sets of images. This eBook explains some of the similarity measures used in facial recognition systems in a single volume. Readers will learn about various measures including Minkowski distances, Mahalanobis distances, Hausdorff distances, and cosine-based distances, among other methods. The book also summarizes errors that may occur in face recognition methods. Computer scientists "facing face" and looking to select and test different methods of computing similarities will benefit from this book. The book is also a useful tool for students undertaking computer vision courses.
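
    A few of the similarity measures named above are easy to state directly. The sketch below computes cosine, Minkowski, and Mahalanobis measures between illustrative descriptor vectors; it is not a real face-recognition pipeline, and the gallery data is synthetic.

```python
# Similarity/distance measures between two face-descriptor vectors in plain NumPy.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def minkowski_distance(a: np.ndarray, b: np.ndarray, p: float = 2.0) -> float:
    return float(np.sum(np.abs(a - b) ** p) ** (1.0 / p))

def mahalanobis_distance(a: np.ndarray, b: np.ndarray, cov: np.ndarray) -> float:
    diff = a - b
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

rng = np.random.default_rng(42)
gallery = rng.normal(size=(100, 8))               # pretend descriptors of enrolled faces
probe, candidate = gallery[0] + 0.1 * rng.normal(size=8), gallery[0]

cov = np.cov(gallery, rowvar=False)               # covariance estimated from the gallery
print("cosine:", cosine_similarity(probe, candidate))
print("euclidean (Minkowski p=2):", minkowski_distance(probe, candidate))
print("mahalanobis:", mahalanobis_distance(probe, candidate, cov))
```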

  19. Detection Efficiency of Microcalcification using Computer Aided Diagnosis in the Breast Ultrasonography Images

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jin Soo; Ko, Seong Jin; Kang, Se Sik; Kim, Jung Hoon; Choi, Seok Yoon; Kim, Chang Soo [Dept. of Radiological Science, Catholic University of Pusan, Pusan (Korea, Republic of); Park, Hyung Hu [Dept. of Health Science, Graduate School of Kosin University, Pusan (Korea, Republic of)

    2012-09-15

    Digital mammography makes it possible to reproduce the entire breast image and is used to detect microcalcifications and masses, the most important findings in nonpalpable early breast cancer, so it has been used as the primary screening test for breast disease. Microcalcification of a breast lesion is reported to be important in the diagnosis of early breast cancer. In this study, six texture feature algorithms were used to detect microcalcification on breast ultrasound (US) images, and the recognition rate of lesions was analyzed between normal US images and US images in which microcalcification is seen. As a result of the experiment, the computer-aided diagnosis recognition rate that distinguishes disease on mammography and breast US was considerably high, at 70-98%. The average contrast and entropy parameters were low in ROC analysis, but the sensitivity and specificity of four of the parameters were over 90%. Therefore it is possible to detect microcalcification on US images. If research on additional parameter algorithms continues beyond the six texture feature algorithms and a basis for practical use in CAD is prepared, these methods could be meaningful as a pre-reading aid. They are also considered very useful for the early diagnosis of breast cancer.

  20. A novel handwritten character recognition system using gradient ...

    Indian Academy of Sciences (India)

    ing aspect ratio, position of centroid and ratio of pixels on the vertical halves of a character image. The recognition accuracy of 99.78% was achieved with minimum computational and storage requirement. Keywords. Gradient; RLC; handwritten character recognition; quadratic classifiers; MLP. 1. Introduction. Recognition ...

  1. Recognition of human face images by the free flying wasp Vespula vulgaris

    Directory of Open Access Journals (Sweden)

    Aurore Avarguès-Weber

    2017-08-01

    The capacity to recognize perceptually similar complex visual stimuli such as human faces has classically been thought to require a large primate, and/or mammalian brain with neurobiological adaptations. However, recent work suggests that the relatively small brain of a paper wasp, Polistes fuscatus, possesses specialized face processing capabilities. In parallel, the honeybee, Apis mellifera, has been shown to be able to rely on configural learning for extensive visual learning, thus converging with primate visual processing. Therefore, the honeybee may be able to recognize human faces, and show sophisticated learning performance due to its foraging lifestyle involving visiting and memorizing many flowers. We investigated the visual capacities of the widespread invasive wasp Vespula vulgaris, which is unlikely to have any specialization for face processing. Freely flying individual wasps were trained in an appetitive-aversive differential conditioning procedure to discriminate between perceptually similar human face images from a standard face recognition test. The wasps could then recognize the target face from novel dissimilar or similar human faces, but showed a significant drop in performance when the stimuli were rotated by 180°, thus paralleling results acquired on a similar protocol with honeybees. This result confirms that a general visual system can likely solve complex recognition tasks, the first stage to evolve a visual expertise system to face recognition, even in the absence of neurobiological or behavioral specialization.

  2. Mobile Imaging and Computing for Intelligent Structural Damage Inspection

    Directory of Open Access Journals (Sweden)

    ZhiQiang Chen

    2014-01-01

    Optical imaging is a commonly used technique in civil engineering for aiding the archival of damage scenes and more recently for image analysis-based damage quantification. However, the limitations are evident when applying optical imaging in the field. The most significant one is the lack of real-time computing and processing capability. The advancement of mobile imaging and computing technologies provides a promising opportunity to change this norm. This paper first provides a timely introduction of the state-of-the-art mobile imaging and computing technologies for the purpose of engineering application development. Further, we propose a mobile imaging and computing (MIC) framework for conducting intelligent condition assessment for constructed objects, which features in situ imaging and real-time damage analysis. This framework synthesizes advanced mobile technologies with three innovative features: (i) context-enabled image collection, (ii) interactive image preprocessing, and (iii) real-time image analysis and analytics. Through performance evaluation and field experiments, this paper demonstrates the feasibility and efficiency of the proposed framework.

  3. COMPUTATION OF IMAGE SIMILARITY WITH TIME SERIES

    Directory of Open Access Journals (Sweden)

    V. Balamurugan

    2011-11-01

    Searching for similar sequences in large databases is an important task in temporal data mining. Similarity search is concerned with efficiently locating subsequences or whole sequences in large archives of sequences. It is useful in typical data mining applications and it can be easily extended to image retrieval. In this work, time series similarity analysis that involves dimensionality reduction and clustering is adapted to digital images to find the similarity between them. The dimensionality-reduced time series is represented as clusters by the use of K-Means clustering, and the similarity distance between two images is found by computing the distance between the signatures of their clusters. To quantify the extent of similarity between two sequences, Earth Mover's Distance (EMD) is used. From the experiments on different sets of images, it is found that this technique is well suited for measuring the subjective similarity between two images.
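
    A rough sketch of the described pipeline follows, under assumptions: piecewise aggregation stands in for the dimensionality reduction step, and SciPy's 1-D Wasserstein distance stands in for the full EMD between cluster signatures. All parameters are illustrative.

```python
# Treat each image as a 1-D series, reduce it, summarize it with K-Means cluster
# centers and weights, and compare two images by an earth mover's style distance
# between the resulting signatures.
import numpy as np
from scipy.stats import wasserstein_distance
from sklearn.cluster import KMeans

def signature(image: np.ndarray, segments: int = 64, k: int = 8):
    """Cluster a piecewise-aggregate reduction of the flattened image."""
    series = image.astype(float).ravel()
    usable = len(series) // segments * segments
    reduced = series[:usable].reshape(segments, -1).mean(axis=1)     # piecewise means
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(reduced.reshape(-1, 1))
    centers = km.cluster_centers_.ravel()
    weights = np.bincount(km.labels_, minlength=k).astype(float)
    return centers, weights / weights.sum()

def image_similarity(img_a: np.ndarray, img_b: np.ndarray) -> float:
    ca, wa = signature(img_a)
    cb, wb = signature(img_b)
    return wasserstein_distance(ca, cb, u_weights=wa, v_weights=wb)

rng = np.random.default_rng(0)
a = rng.integers(0, 256, size=(64, 64))
b = a + rng.integers(-5, 6, size=(64, 64))        # a slightly perturbed copy of a
print("EMD-style distance:", image_similarity(a, b))
```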

  4. SuperPixel based mid-level image description for image recognition

    NARCIS (Netherlands)

    Tasli, H.E.; Sicre, R.; Gevers, T.

    2015-01-01

    This study proposes a mid-level feature descriptor and aims to validate improvement on image classification and retrieval tasks. In this paper, we propose a method to explore the conventional feature extraction techniques in the image classification pipeline from a different perspective where

  5. Seventh Medical Image Computing and Computer Assisted Intervention Conference (MICCAI 2012)

    CERN Document Server

    Miller, Karol; Nielsen, Poul; Computational Biomechanics for Medicine : Models, Algorithms and Implementation

    2013-01-01

    One of the greatest challenges for mechanical engineers is to extend the success of computational mechanics to fields outside traditional engineering, in particular to biology, biomedical sciences, and medicine. This book is an opportunity for computational biomechanics specialists to present and exchange opinions on the opportunities of applying their techniques to computer-integrated medicine. Computational Biomechanics for Medicine: Models, Algorithms and Implementation collects the papers from the Seventh Computational Biomechanics for Medicine Workshop held in Nice in conjunction with the Medical Image Computing and Computer Assisted Intervention conference. The topics covered include: medical image analysis, image-guided surgery, surgical simulation, surgical intervention planning, disease prognosis and diagnostics, injury mechanism analysis, implant and prostheses design, and medical robotics.

  6. Accelerating Computer-Based Recognition of Fynbos Leaves Using a Graphics Processing Unit

    Directory of Open Access Journals (Sweden)

    Simon Lucas Winberg

    2017-12-01

    The Cape Floristic Kingdom (CFK) is the most diverse floristic kingdom in the world and has been declared an international heritage site. However, it is under threat from wild fires and invasive species. Much of the work of managing this natural resource, such as removing alien vegetation or fighting wild fires, is done by volunteers and casual workers. Many fynbos species, for which the Table Mountain National Park is known, are difficult to identify, particularly by non-expert volunteers. Accurate and fast identification of plant species would be beneficial in these contexts. The Fynbos Leaf Optical Recognition Application (FLORA) was thus developed to assist in the recognition of plants of the CFK. The first version of FLORA was developed as a rapid prototype in MATLAB; it utilized sequential algorithms to identify plant leaves, and much of this code consisted of interpreted M-files. The initial implementation suffered from slow performance, though, and could not run as a lightweight standalone executable, making it cumbersome. FLORA was thus re-developed as a standalone C++ version that was subsequently enhanced further by accelerating critical routines, by running them on a graphics processing unit (GPU). This paper presents the design and testing of both the C++ version and the GPU-accelerated version of FLORA. Comparative testing was done on all three versions of FLORA, viz., the original MATLAB prototype, the C++ non-accelerated version, and the C++ GPU-accelerated version to show the performance and accuracy of the different versions. The accuracy of the predictions remained consistent across versions. The C++ version was noticeably faster than the original prototype, achieving an average speed-up of 8.7 for high-resolution 3456x2304 pixel images. The GPU-accelerated version was even faster, saving 51.85 ms on average for high-resolution images. Such a time saving would be perceptible for batch processing, such as rebuilding feature descriptors for

  7. Real-time computer treatment of THz passive device images with the high image quality

    Science.gov (United States)

    Trofimov, Vyacheslav A.; Trofimov, Vladislav V.

    2012-06-01

    We demonstrate real-time computer code that significantly improves the quality of images captured by a passive THz imaging system. The code is not designed only for passive THz devices: it can be applied to any such device and to active THz imaging systems as well. We applied our code to computer processing of images captured by four passive THz imaging devices manufactured by different companies. It should be stressed that computer processing of images produced by different companies usually requires different spatial filters. The performance of the current version of the computer code is greater than one image per second for a THz image having more than 5000 pixels and 24-bit number representation. Processing of a single THz image produces about 20 images simultaneously, corresponding to various spatial filters. The computer code allows increasing the number of pixels of processed images without noticeable reduction of image quality, and its performance can be increased many times by using parallel algorithms for processing the image. We developed original spatial filters which allow one to see objects with sizes less than 2 cm. The imagery is produced by passive THz imaging devices which capture images of objects hidden under opaque clothes. For images with high noise, we developed an approach which suppresses the noise during computer processing and yields a good-quality image. To illustrate the efficiency of the developed approach, we demonstrate the detection of liquid explosives, ordinary explosives, a knife, a pistol, a metal plate, a CD, ceramics, chocolate and other objects hidden under opaque clothes. The results demonstrate the high efficiency of our approach for the detection of hidden objects and are very promising for solving the security problem.

  8. Integration of an industrial robot with the systems for image and voice recognition

    Directory of Open Access Journals (Sweden)

    Tasevski Jovica

    2013-01-01

    Full Text Available The paper reports a solution for the integration of the industrial robot ABB IRB140 with a system for automatic speech recognition (ASR) and a system for computer vision. The robot has the task of manipulating objects placed randomly on a pad lying on a table, and the computer vision system has to recognize their characteristics (shape, dimension, color, position, and orientation). The ASR system's task is to recognize human speech and use it as commands to the robot, so that the robot can manipulate the objects. [Project of the Ministry of Science of the Republic of Serbia, No. III44008: Design of Robots as Assistive Technology for the Treatment of Children with Developmental Disorders, and No. TR32035: Development of Dialogue Systems for Serbian and other South Slavic Languages]

  9. Fundamental remote sensing science research program. Part 1: Status report of the mathematical pattern recognition and image analysis project

    Science.gov (United States)

    Heydorn, R. D.

    1984-01-01

    The Mathematical Pattern Recognition and Image Analysis (MPRIA) Project is concerned with basic research problems related to the study of the Earth from remotely sensed measurements of its surface characteristics. The program goal is to better understand how to analyze the digital image that represents the spatial, spectral, and temporal arrangement of these measurements for the purpose of making selected inferences about the Earth.

  10. Advantages and disadvantages of computer imaging in cosmetic surgery.

    Science.gov (United States)

    Koch, R J; Chavez, A; Dagum, P; Newman, J P

    1998-02-01

    Despite the growing popularity of computer imaging systems, it is not clear whether the medical and legal advantages of using such a system outweigh the disadvantages. The purpose of this report is to evaluate these aspects and provide some protective guidelines for the use of computer imaging in cosmetic surgery. The positive and negative aspects of computer imaging from a medical and legal perspective are reviewed, and specific issues are examined by a legal panel. The greatest advantages are the potential to exclude problem patients and enhanced physician-patient communication. Disadvantages include cost, user learning curve, and potential liability. Careful use of computer imaging should actually reduce one's liability when all aspects are considered. Recommendations for such use and specific legal issues are discussed.

  11. X-ray Computed Tomography Image Quality Indicator (IQI) Development

    Data.gov (United States)

    National Aeronautics and Space Administration — Phase one of the program is to identify suitable x-ray Computed Tomography (CT) Image Quality Indicator (IQI) design(s) that can be used to adequately capture CT...

  12. Dictionary of computer vision and image processing

    National Research Council Canada - National Science Library

    Fisher, R. B

    2014-01-01

    ... been identified for inclusion since the current edition was published. Revised to include an additional 1000 new terms to reflect current updates, which includes a significantly increased focus on image processing terms, as well as machine learning terms...

  13. Recognition of a rare intrathoracic rib with computed tomography: a case report

    Science.gov (United States)

    Abdollahifar, Mohammad-Amin; Bayat, Mohammad; Masteri Farahani, Reza; Abbaszadeh, Hojjat-Allah

    2017-01-01

    One of the uncommon congenital variations is the intrathoracic rib, in which a normal, bifid, or accessory rib lies within the thoracic cavity and is found incidentally. Clinically, most cases are asymptomatic; however, an intrathoracic rib may cause intrathoracic problems, so it is important for radiologists and physicians to identify it in order to prevent excessive intervention and treatment during diagnostic imaging of thoracic problems. In this report, we present the case of a rare intrathoracic rib in a 3-year-old boy, arising from the inferior portion of a second rib, based on findings from computed tomography. To our knowledge, this is only the second reported case of this type of intrathoracic rib demonstrated with computed tomography. PMID:28417058

  14. Automated alignment system for optical wireless communication systems using image recognition.

    Science.gov (United States)

    Brandl, Paul; Weiss, Alexander; Zimmermann, Horst

    2014-07-01

    In this Letter, we describe the realization of a tracked line-of-sight optical wireless communication system for indoor data distribution. We built a laser-based transmitter with adaptive focus and ray steering by a microelectromechanical systems mirror. To execute the alignment procedure, we used a CMOS image sensor at the transmitter side and developed an algorithm for image recognition to localize the receiver's position. The receiver is based on a self-developed optoelectronic integrated chip with low requirements on the receiver optics to make the system economically attractive. With this system, we were able to set up the communication link automatically without any back channel and to perform error-free (bit error rate <10⁻⁹) data transmission over a distance of 3.5 m with a data rate of 3 Gbit/s.
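
    The Letter does not reproduce its image recognition algorithm here, so the following is only a toy sketch of one way to localize a bright receiver marker in a camera frame: threshold near the frame maximum and take the centroid of the bright pixels. The thresholding rule and the synthetic frame are assumptions, not the authors' method.

```python
# A toy sketch of locating a bright receiver marker in a CMOS camera frame
# by thresholding near the maximum intensity and returning the centroid of
# the bright pixels. Illustrative only; not the algorithm in the Letter.
import numpy as np

def locate_bright_marker(frame: np.ndarray, rel_threshold: float = 0.8):
    """Return the (row, col) centroid of pixels above rel_threshold * max."""
    mask = frame >= rel_threshold * frame.max()
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

# Synthetic frame: dark background with a bright spot centred at (120, 200).
frame = np.zeros((480, 640))
frame[118:123, 198:203] = 255.0
print(locate_bright_marker(frame))  # approximately (120.0, 200.0)
```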

  15. A Review of High-Performance Computational Strategies for Modeling and Imaging of Electromagnetic Induction Data

    Science.gov (United States)

    Newman, Gregory A.

    2014-01-01

    Many geoscientific applications exploit electrostatic and electromagnetic fields to interrogate and map subsurface electrical resistivity—an important geophysical attribute for characterizing mineral, energy, and water resources. In complex three-dimensional geologies, where many of these resources remain to be found, resistivity mapping requires large-scale modeling and imaging capabilities, as well as the ability to treat significant data volumes, which can easily overwhelm single-core and modest multicore computing hardware. Treating such problems requires large-scale parallel computational resources, necessary for reducing the time to solution to a time frame acceptable to the exploration process. The recognition that significant parallel computing processes must be brought to bear on these problems gives rise to choices that must be made in parallel computing hardware and software. In this review, some of these choices are presented, along with the resulting trade-offs. We also discuss future trends in high-performance computing and the anticipated impact on electromagnetic (EM) geophysics. Topics discussed in this review article include a survey of parallel computing platforms, from graphics processing units to multicore CPUs with a fast interconnect, along with effective parallel solvers and associated solver libraries for inductive EM modeling and imaging.

  16. Students’ needs of Computer Science: learning about image processing

    Directory of Open Access Journals (Sweden)

    Juana Marlen Tellez Reinoso

    2009-12-01

    Full Text Available Learning image processing, specifically in the Photoshop application, is one of the objectives of the Computer Science specialty of the Degree in Education, intended to guarantee the preparation of students as future professionals and to help every citizen of our country attain a comprehensive general culture. For that purpose, a tutorial-type computer application entitled "Learning Treatment to Image" is proposed.

  17. Computed Tomography - "The Changing Role of Maxillofacial Imaging"

    Directory of Open Access Journals (Sweden)

    Ambika Gupta

    2005-01-01

    Full Text Available The past three decades have witnessed great advances in the field of diagnostic imaging. Many of these advances have greatly facilitated the diagnosis and treatment of a number of maxillofacial disorders. These modalities, while employing different physical principles, are often complementary, providing valuable information about different aspects of a given disease process. Computed Tomography, 3-D Computed Tomography, and Magnetic Resonance Imaging are some such valuable adjuncts, which have opened new dimensions in the diagnosis of maxillofacial disorders.

  18. Human fatigue expression recognition through image-based dynamic multi-information and bimodal deep learning

    Science.gov (United States)

    Zhao, Lei; Wang, Zengcai; Wang, Xiaojin; Qi, Yazhou; Liu, Qing; Zhang, Guoxin

    2016-09-01

    Human fatigue is an important cause of traffic accidents. To improve the safety of transportation, we propose, in this paper, a framework for fatigue expression recognition using image-based facial dynamic multi-information and a bimodal deep neural network. First, the landmarks of the face region and the texture of the eye region, which complement each other in fatigue expression recognition, are extracted from facial image sequences captured by a single camera. Then, two stacked autoencoder neural networks are trained, one for landmarks and one for texture. Finally, the two trained neural networks are combined by learning a joint layer on top of them to construct a bimodal deep neural network. The model can be used to extract a unified representation that fuses the landmark and texture modalities together and to classify fatigue expressions accurately. The proposed system is tested on a human fatigue dataset obtained from an actual driving environment. The experimental results demonstrate that the proposed method performs stably and robustly, and that the average accuracy reaches 96.2%.
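
    As a minimal sketch of the fusion idea, the PyTorch snippet below encodes the two modalities (a landmark vector and an eye-texture vector) with separate branches and learns a joint layer on top of their concatenation. The layer sizes and the use of simple feed-forward encoders, rather than the paper's pretrained stacked autoencoders, are illustrative assumptions.

```python
# A minimal PyTorch sketch of a bimodal fusion network: one branch encodes
# facial-landmark features, the other encodes eye-region texture features,
# and a joint layer fuses them for fatigue/non-fatigue classification.
# Dimensions and encoder depth are illustrative assumptions.
import torch
import torch.nn as nn

class BimodalFatigueNet(nn.Module):
    def __init__(self, landmark_dim=136, texture_dim=256, hidden=64, n_classes=2):
        super().__init__()
        self.landmark_encoder = nn.Sequential(nn.Linear(landmark_dim, hidden), nn.ReLU())
        self.texture_encoder = nn.Sequential(nn.Linear(texture_dim, hidden), nn.ReLU())
        # Joint layer learns a shared representation over both modalities.
        self.joint = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(),
                                   nn.Linear(hidden, n_classes))

    def forward(self, landmarks, texture):
        z = torch.cat([self.landmark_encoder(landmarks),
                       self.texture_encoder(texture)], dim=1)
        return self.joint(z)

model = BimodalFatigueNet()
logits = model(torch.randn(8, 136), torch.randn(8, 256))  # batch of 8 samples
print(logits.shape)  # torch.Size([8, 2])
```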

  19. Use of personal computer image for processing a magnetic resonance image (MRI)

    International Nuclear Information System (INIS)

    Yamamoto, Tetsuo; Tanaka, Hitoshi

    1988-01-01

    Image processing of MR images was attempted using a popular 16-bit personal computer. The computer processed the images on a 256 x 256 matrix and a 512 x 512 matrix. The software language used for image processing was Macro-Assembler running under MS-DOS. The original images, acquired with a 0.5 T superconducting machine (VISTA MR 0.5 T, Picker International), were transferred to the computer on flexible diskette. Image-processing functions, including display of the image on a monitor, contrast enhancement, unsharp-mask contrast enhancement, various filtering processes, edge detection and the color histogram, were carried out in 1.6 sec to 67 sec, indicating that a commercial personal computer has the ability to handle routine clinical MRI processing. (author)
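
    One of the operations listed above, unsharp-mask contrast enhancement, is sketched below in a few lines of NumPy/SciPy; the blur radius and gain are illustrative choices, not values from the original report.

```python
# A brief sketch of unsharp-mask contrast enhancement on a 256x256 matrix:
# sharpen by adding back the difference between the image and a blurred copy.
# The sigma and amount parameters are illustrative assumptions.
import numpy as np
from scipy import ndimage

def unsharp_mask(image: np.ndarray, sigma: float = 2.0, amount: float = 1.0) -> np.ndarray:
    blurred = ndimage.gaussian_filter(image.astype(float), sigma=sigma)
    return image + amount * (image - blurred)

image = np.random.default_rng(1).random((256, 256))
sharpened = unsharp_mask(image)
print(sharpened.shape)
```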

  20. Application of abstract harmonic analysis to the high-speed recognition of images

    Science.gov (United States)

    Usikov, D. A.

    1979-01-01

    Methods are constructed for rapidly computing correlation functions using the theory of abstract harmonic analysis. The theory developed includes, as a particular case, the familiar Fourier-transform method for computing a correlation function, which makes it possible to find images independently of their translation in the plane. Two examples of the application of the general theory described are the search for images independent of their rotation and scale, and the search for images independent of their translation and rotation in the plane.
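
    The Fourier-transform special case mentioned above can be illustrated by computing a cross-correlation via FFTs: the location of the correlation peak recovers the template's translation in the plane. The test images and sizes below are illustrative only.

```python
# FFT-based circular cross-correlation (correlation theorem): the peak of
# the correlation plane gives the translation of the template in the scene.
import numpy as np

def fft_cross_correlation(scene: np.ndarray, template: np.ndarray) -> np.ndarray:
    S = np.fft.fft2(scene)
    T = np.fft.fft2(template, s=scene.shape)  # zero-pad template to scene size
    return np.real(np.fft.ifft2(S * np.conj(T)))

scene = np.zeros((128, 128))
template = np.random.default_rng(2).random((16, 16))
scene[40:56, 70:86] = template                      # plant the template at (40, 70)
corr = fft_cross_correlation(scene, template)
print(np.unravel_index(np.argmax(corr), corr.shape))  # expected: (40, 70)
```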

  1. Noise simulation in cone beam CT imaging with parallel computing

    International Nuclear Information System (INIS)

    Tu, S.-J.; Shaw, Chris C; Chen, Lingyun

    2006-01-01

    We developed a computer noise simulation model for cone beam computed tomography imaging using a general purpose PC cluster. This model uses a mono-energetic x-ray approximation and allows us to investigate three primary performance components, specifically quantum noise, detector blurring and additive system noise. A parallel random number generator based on the Weyl sequence was implemented in the noise simulation, and a visualization technique was accordingly developed to validate the quality of the parallel random number generator. In our computer simulation model, three-dimensional (3D) phantoms were mathematically modelled and used to create 450 analytical projections, which were then sampled into digital image data. Quantum noise was simulated and added to the analytical projection image data, which were then filtered to incorporate flat panel detector blurring. Additive system noise was generated and added to form the final projection images. The Feldkamp algorithm was implemented and used to reconstruct the 3D images of the phantoms. A cluster of 24 dual-Xeon PCs was used to compute the projections and reconstructed images in parallel, with each CPU processing 10 projection views for a total of 450 views. Based on this computer simulation system, simulated cone beam CT images were generated for various phantoms and technique settings. Noise power spectra for the flat panel x-ray detector and reconstructed images were then computed to characterize the noise properties. As an example among the potential applications of our noise simulation model, we showed that images of low contrast objects can be produced and used for image quality evaluation.
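
    The three noise components described above can be sketched on a single analytical projection as follows; the mono-energetic photon fluence, detector blur width and additive noise level are assumptions for illustration, not the paper's values.

```python
# Compact sketch of the three components applied to one analytical projection:
# Poisson quantum noise, Gaussian detector blurring, and additive system noise.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(3)

def simulate_projection(line_integrals: np.ndarray,
                        incident_photons: float = 1e5,
                        detector_sigma: float = 0.8,
                        additive_sigma: float = 5.0) -> np.ndarray:
    expected = incident_photons * np.exp(-line_integrals)          # Beer-Lambert attenuation
    quantum = rng.poisson(expected).astype(float)                  # quantum (photon) noise
    blurred = ndimage.gaussian_filter(quantum, detector_sigma)     # detector blurring
    return blurred + rng.normal(0.0, additive_sigma, blurred.shape)  # additive system noise

projection = simulate_projection(np.random.default_rng(4).random((256, 256)))
print(projection.shape, float(projection.mean()))
```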

  2. A space variant maximum average correlation height (MACH) filter for object recognition in real time thermal images for security applications

    Science.gov (United States)

    Gardezi, Akber; Alkandri, Ahmed; Birch, Philip; Young, Rupert; Chatwin, Chris

    2010-10-01

    We propose a space variant Maximum Average Correlation Height (MACH) filter which can be locally modified depending upon its position in the input frame. This can be used to detect targets in an environment from varying ranges and in unpredictable weather conditions using thermal images. It enables adaptation of the filter dependent on background heat signature variances and also enables the normalization of the filter energy levels. The kernel can be normalized to remove a non-uniform brightness distribution if this occurs in different regions of the image. The main constraint in this implementation is the dependence on the computational ability of the system. This can be minimized with recent advances in optical correlators using scanning holographic memory, as proposed by Birch et al. [1]. In this paper we describe the discrimination abilities of the MACH filter against background heat signature variances and its tolerance to changes in scale, and calculate the improvement in detection capabilities with the introduction of a nonlinearity. We propose a security detection system which exhibits a joint process where a human and an automated pattern recognition system contribute to the overall solution for the detection of pre-defined targets.
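
    A simplified, globally applied MACH-style correlation filter can be sketched as below: the filter is synthesized in the frequency domain from training chips and then correlated with an input frame. The space-variant adaptation and the nonlinearity discussed in the paper are not reproduced here, and the weighting terms are illustrative assumptions.

```python
# Simplified MACH-style filter synthesis: mean training spectrum divided by a
# weighted mix of a white-noise term, the average power spectrum and a
# similarity term, followed by frequency-domain correlation with a frame.
# Weights and the white-noise assumption are illustrative only.
import numpy as np

def mach_filter(train_chips, alpha=0.1, beta=1.0, gamma=1.0):
    X = np.array([np.fft.fft2(c) for c in train_chips])
    m = X.mean(axis=0)                     # mean training spectrum
    D = (np.abs(X) ** 2).mean(axis=0)      # average power spectral density
    S = (np.abs(X - m) ** 2).mean(axis=0)  # average similarity measure
    C = np.ones_like(D)                    # assumed white-noise power spectrum
    return m / (alpha * C + beta * D + gamma * S)

def correlate(frame, H):
    return np.real(np.fft.ifft2(np.fft.fft2(frame, s=H.shape) * np.conj(H)))

chips = [np.random.default_rng(i).random((32, 32)) for i in range(5)]
H = mach_filter(chips)
plane = correlate(np.random.default_rng(9).random((32, 32)), H)
print(np.unravel_index(np.argmax(plane), plane.shape))  # location of strongest response
```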

  3. Natural scene recognition with increasing time-on-task: the role of typicality and global image properties.

    Science.gov (United States)

    Csathó, Á; van der Linden, D; Gács, B

    2015-01-01

    Human observers can recognize natural images very effectively. Yet, in the literature there is a debate about the extent to which the recognition of natural images requires controlled attentional processing. In the present study we address this topic by testing whether natural scene recognition is affected by mental fatigue. Mental fatigue is known to particularly compromise high-level, controlled attentional processing of local features. Effortless, automatic processing of more global features of an image stays relatively intact, however. We conducted a natural image categorization experiment (N = 20) in which mental fatigue was induced by time-on-task (ToT). Stimuli were images from 5 natural scene categories. Semantic typicality (high or low) and the magnitude of 7 global image properties were determined for each image in separate rating experiments. Significant performance effects of typicality and global properties on scene recognition were found, but, despite a general decline in performance, these effects remained unchanged with increasing ToT. The findings support the importance of the global property processing in natural scene recognition and suggest that this process is insensitive to mental fatigue.

  4. [Filing and processing systems of ultrasonic images in personal computers].

    Science.gov (United States)

    Filatov, I A; Bakhtin, D A; Orlov, A V

    1994-01-01

    The paper covers the software design of an ultrasonic image filing and processing system. The system records images on a computer display in real time or as still frames, processes them with local filtration techniques, makes various measurements and stores the findings in a graphic database. It is stressed that the database should be implemented as a network version.

  5. 1st International Conference on Computer Vision and Image Processing

    CERN Document Server

    Kumar, Sanjeev; Roy, Partha; Sen, Debashis

    2017-01-01

    This edited volume contains technical contributions in the field of computer vision and image processing presented at the First International Conference on Computer Vision and Image Processing (CVIP 2016). The contributions are thematically divided based on their relation to operations at the lower, middle and higher levels of vision systems, and their applications. The technical contributions in the areas of sensors, acquisition, visualization and enhancement are classified as related to low-level operations. They discuss various modern topics – reconfigurable image system architecture, Scheimpflug camera calibration, real-time autofocusing, climate visualization, tone mapping, super-resolution and image resizing. The technical contributions in the areas of segmentation and retrieval are classified as related to mid-level operations. They discuss some state-of-the-art techniques – non-rigid image registration, iterative image partitioning, egocentric object detection and video shot boundary detection. Th...

  6. Computational optical biomedical spectroscopy and imaging

    CERN Document Server

    Musa, Sarhan M

    2015-01-01

    Applications of Vibrational Spectroscopic Imaging in Personal Care Studies; Guojin Zhang, Roger L. McMullen, Richard Mendelsohn, and Osama M. MusaFluorescence Bioimaging with Applications to Chemistry; Ufana Riaz and S.M. AshrafNew Trends in Immunohistochemical, Genome, and Metabolomics Imaging; G. Livanos, Aditi Deshpande, C. Narayan, Ying Na, T. Quang, T. Farrahi, R. Koglin, Suman Shrestha, M. Zervakis, and George C. GiakosDeveloping a Comprehensive Taxonomy for Human Cell Types; Richard Conroy and Vinay PaiFunctional Near-Infrared S

  7. Imaging and pattern recognition techniques applied to particulate solids material characterization in mineral processing

    International Nuclear Information System (INIS)

    Bonifazi, G.; La Marca, F.; Massacci, P.

    1999-01-01

    The characterization of particulate solids can be carried out by chemical and mineralogical analysis or, in some cases, by following a new approach based on the combined use of: i) imaging techniques, to detect the surface features of the particles, and ii) pattern recognition procedures, to identify and classify the mineralogical composition on the basis of the previously detected 'pictorial' features. The aim of this methodology is to establish a correlation between image parameters (texture and color) and physical-chemical parameters characterizing the set of particles to be evaluated. The technique was applied to characterize the raw ore coming from a deposit of mineral sands with three different lithotypes. An appropriate number of samples for each lithotype was collected. A vector of attributes (pattern vector), built from both texture and color parameters, was associated with each sample. Image analysis demonstrated that the selected parameters are quite sensitive to the conditions of image acquisition: optical properties may be strongly influenced by physical conditions, in terms of moisture content, optics set-up and lighting. Standard conditions for acquisition were selected according to the in situ conditions during sampling. To verify the reliability of the proposed methodology, images were acquired under different conditions of humidity, focusing and illumination. In order to evaluate the influence of these parameters on image pictorial properties, textural analysis procedures were applied to the images acquired from the different samples. Data resulting from the processing were used for remote control of the material fed to the mineral processing plant. (author)
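
    The idea of a "pattern vector" of color and texture attributes can be sketched as below; the specific attributes (channel means and standard deviations plus a gradient-energy texture measure) are illustrative assumptions, not the authors' feature set.

```python
# A hedged sketch of assembling a pattern vector of simple color and texture
# attributes for one particle image (H x W x 3, RGB values in [0, 1]).
import numpy as np

def pattern_vector(rgb: np.ndarray) -> np.ndarray:
    color_means = rgb.mean(axis=(0, 1))          # mean R, G, B
    color_stds = rgb.std(axis=(0, 1))            # spread of each channel
    gray = rgb.mean(axis=2)
    gy, gx = np.gradient(gray)
    texture_energy = np.mean(gx ** 2 + gy ** 2)  # crude texture descriptor
    return np.concatenate([color_means, color_stds, [texture_energy]])

sample = np.random.default_rng(5).random((64, 64, 3))
print(pattern_vector(sample).round(3))
```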

  8. High-accuracy and robust face recognition system based on optical parallel correlator using a temporal image sequence

    Science.gov (United States)

    Watanabe, Eriko; Ishikawa, Mami; Ohta, Maiko; Kodate, Kashiko

    2005-09-01

    Face recognition is used in a wide range of security systems, such as monitoring credit card use, searching for individuals with street cameras via the Internet and maintaining immigration control. There are still many technical subjects under study. For instance, the number of images that can be stored is limited under the current system, and the rate of recognition must be improved to account for photo shots taken at different angles under various conditions. We implemented a fully automatic Fast Face Recognition Optical Correlator (FARCO) system by using a 1000 frame/s optical parallel correlator designed and assembled by us. Operational speed for the 1:N (i.e. matching a pair of images among N, where N refers to the number of images in the database) identification experiment (4000 face images) amounts to less than 1.5 seconds, including the pre/post processing. From trial 1:N identification experiments using FARCO, we obtained low error rates: a 2.6% False Reject Rate and a 1.3% False Accept Rate. By making the most of the high-speed data-processing capability of this system, much more robustness can be achieved for various recognition conditions when large-category data are registered for a single person. We propose a face recognition algorithm for FARCO that employs a temporal sequence of moving images. Applying this algorithm to natural postures, a recognition rate twice as high as that of our conventional system was achieved. The system has high potential for future use in a variety of applications, such as searching for criminal suspects using street and airport video cameras, registering babies at hospitals or handling very large numbers of images in a database.
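
    The two error rates quoted above are computed from genuine and impostor match scores at a decision threshold, as in the short sketch below; the score distributions and threshold are synthetic illustrations, not FARCO data.

```python
# False Accept Rate (FAR) over impostor comparisons and False Reject Rate
# (FRR) over genuine comparisons at a chosen score threshold.
import numpy as np

def far_frr(genuine_scores, impostor_scores, threshold):
    genuine = np.asarray(genuine_scores)
    impostor = np.asarray(impostor_scores)
    frr = np.mean(genuine < threshold)     # genuine pairs wrongly rejected
    far = np.mean(impostor >= threshold)   # impostor pairs wrongly accepted
    return far, frr

rng = np.random.default_rng(6)
genuine = rng.normal(0.8, 0.1, 1000)    # higher scores for same-person pairs
impostor = rng.normal(0.4, 0.1, 4000)   # lower scores for different-person pairs
print(far_frr(genuine, impostor, threshold=0.6))
```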

  9. Images as a basis for computer modelling

    Science.gov (United States)

    Beaufils, D.; LeTouzé, J.-C.; Blondel, F.-M.

    1994-03-01

    New computer technologies, such as the graphics data tablet, video digitization and numerical methods, can be used for measurement and mathematical modelling in physics. Two programs dealing with Newtonian mechanics, and some related scientific activities for A-level students, are described.

  10. Image Analysis via Soft Computing: Prototype Applications at NASA KSC and Product Commercialization

    Science.gov (United States)

    Dominguez, Jesus A.; Klinko, Steve

    2011-01-01

    This slide presentation reviews the use of "soft computing", which differs from "hard computing" in that it is more tolerant of imprecision, partial truth, uncertainty, and approximation, and its use in image analysis. Soft computing provides flexible information processing to handle real-life ambiguous situations and to achieve tractability, robustness, low solution cost, and a closer resemblance to human decision making. Several systems are or have been developed: Fuzzy Reasoning Edge Detection (FRED), Fuzzy Reasoning Adaptive Thresholding (FRAT), image enhancement techniques, and visual/pattern recognition. These systems are compared with examples that show the effectiveness of each. The NASA applications reviewed are Real-Time (RT) Anomaly Detection, Real-Time (RT) Moving Debris Detection and the Columbia investigation. The RT anomaly detection reviewed the case of a damaged cable for the emergency egress system. The use of these techniques is further illustrated in the Columbia investigation with the location and detection of foam debris. There are several applications in commercial usage: image enhancement, human screening and privacy protection, visual inspection, 3D heart visualization, tumor detection and x-ray image enhancement.

  11. Image processing and computer graphics in radiology. Pt. A

    International Nuclear Information System (INIS)

    Toennies, K.D.

    1993-01-01

    The reports give a full review of all aspects of digital imaging in radiology which are of significance to image processing and the subsequent picture archiving and communication techniques. The review is strongly oriented toward practice and illustrates the various contributions from specialized areas of the computer sciences, such as computer vision, computer graphics, database systems, information and communication systems, man-machine interactions and software engineering. The methods and models available are explained and assessed for their respective performance and value, and basic principles are briefly explained. (DG) [de

  12. Image processing and computer graphics in radiology. Pt. B

    International Nuclear Information System (INIS)

    Toennies, K.D.

    1993-01-01

    The reports give a full review of all aspects of digital imaging in radiology which are of significance to image processing and the subsequent picture archiving and communication techniques. The review is strongly oriented toward practice and illustrates the various contributions from specialized areas of the computer sciences, such as computer vision, computer graphics, database systems, information and communication systems, man-machine interactions and software engineering. The methods and models available are explained and assessed for their respective performance and value, and basic principles are briefly explained. (DG) [de

  13. Research on installation quality inspection system of high voltage customer metering device based on image recognition

    Science.gov (United States)

    He, Bei; Yang, Fu-li; Tao, Xue-dan; Chang, Shi-liang; Wu, Kang

    2017-11-01

    With the rapid growth of the power grid, site construction and operating environments have become more widespread and more complex. The installation workload for high-voltage customer metering devices is heavy, and the work is not standardized. In addition, managers supervise site construction progress only through the person in charge of each work phase, which is inefficient and makes it difficult to coordinate cross work among multiple teams and units. Therefore, it is necessary to establish a scientific system for inspecting installation quality and management practices in order to standardize the installation of metering devices. Based on research into image recognition and target detection systems, this paper presents a high-voltage customer metering device installation quality inspection system based on digital image processing, image feature extraction and SVM classification decisions. The experimental results show that the proposed scheme is feasible: the metering components in the image can be accurately extracted and then quickly and accurately classified. Our method is of great significance for the implementation and monitoring of power system installation work and its standardization.
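
    The "feature extraction + SVM classification" step can be sketched with scikit-learn as below; the intensity-histogram features, crop size and synthetic labels are illustrative assumptions, not the features or data used in the paper.

```python
# Minimal sketch: simple intensity-histogram features from cropped component
# images fed to a scikit-learn SVM classifier. Feature choice and data are
# illustrative assumptions only.
import numpy as np
from sklearn.svm import SVC

def histogram_features(gray: np.ndarray, bins: int = 16) -> np.ndarray:
    hist, _ = np.histogram(gray, bins=bins, range=(0.0, 1.0), density=True)
    return hist

rng = np.random.default_rng(7)
# Synthetic training crops: class 0 darker on average, class 1 brighter.
X = np.array([histogram_features(np.clip(rng.normal(0.3 + 0.4 * lbl, 0.1, (32, 32)), 0, 1))
              for lbl in (0, 1) for _ in range(50)])
y = np.array([lbl for lbl in (0, 1) for _ in range(50)])

clf = SVC(kernel="rbf", C=1.0).fit(X, y)
test = histogram_features(np.clip(rng.normal(0.7, 0.1, (32, 32)), 0, 1))
print(clf.predict([test]))  # expected: [1]
```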

  14. Preface of 16th International conference on Defects, Recognition, Imaging and Physics in Semiconductors

    Science.gov (United States)

    Yang, Deren; Xu, Ke

    2016-11-01

    The 16th International Conference on Defects-Recognition, Imaging and Physics in Semiconductors (DRIP-XVI) was held at the Worldhotel Grand Dushulake in Suzhou, China from 6th to 10th September 2015, around the 30th anniversary of the first DRIP conference. It was hosted by the Suzhou Institute of Nano-tech and Nano-bionics (SINANO), Chinese Academy of Sciences. On this occasion, about one hundred participants from nineteen countries attended the event, and a wide range of subjects was addressed during the conference: physics of point and extended defects in semiconductors (origin, electrical, optical and magnetic properties of defects); diagnostic techniques for crystal growth and processing of semiconductor materials (in-situ and process control); device imaging and mapping to evaluate performance and reliability; defect analysis in degraded optoelectronic and electronic devices; imaging techniques and instruments (proximity probe, x-ray, electron beam, non-contact electrical, optical and thermal imaging techniques, etc.); new frontiers of atomic-scale defect assessment (STM, AFM, SNOM, ballistic electron energy microscopy, TEM, etc.); and new approaches for multi-physic-parameter characterization with nano-scale spatial resolution. Within these subjects, there were 58 talks, of which 18 were invited, and 50 posters.

  15. Image Quality Assessment for Fake Biometric Detection: Application to Iris, Fingerprint, and Face Recognition.

    Science.gov (United States)

    Galbally, Javier; Marcel, Sébastien; Fierrez, Julian

    2014-02-01

    To ensure the actual presence of a real legitimate trait in contrast to a fake self-manufactured synthetic or reconstructed sample is a significant problem in biometric authentication, which requires the development of new and efficient protection measures. In this paper, we present a novel software-based fake detection method that can be used in multiple biometric systems to detect different types of fraudulent access attempts. The objective of the proposed system is to enhance the security of biometric recognition frameworks, by adding liveness assessment in a fast, user-friendly, and non-intrusive manner, through the use of image quality assessment. The proposed approach presents a very low degree of complexity, which makes it suitable for real-time applications, using 25 general image quality features extracted from one image (i.e., the same acquired for authentication purposes) to distinguish between legitimate and impostor samples. The experimental results, obtained on publicly available data sets of fingerprint, iris, and 2D face, show that the proposed method is highly competitive compared with other state-of-the-art approaches and that the analysis of the general image quality of real biometric samples reveals highly valuable information that may be very efficiently used to discriminate them from fake traits.
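
    A toy version of the underlying idea is sketched below: derive a few no-reference image-quality measures from a single sample by comparing it with a smoothed copy of itself, producing a small feature vector that a classifier could use. These few measures merely stand in for the paper's 25 features; the real feature set and classifier are not reproduced here.

```python
# Toy no-reference image-quality features for one grayscale sample in [0, 1]:
# MSE and PSNR against a Gaussian-smoothed copy, plus a gradient-energy term.
# Illustrative stand-ins only, not the 25 features used in the paper.
import numpy as np
from scipy import ndimage

def quality_features(gray: np.ndarray) -> np.ndarray:
    smoothed = ndimage.gaussian_filter(gray, sigma=1.5)
    mse = np.mean((gray - smoothed) ** 2)
    psnr = 10.0 * np.log10(1.0 / (mse + 1e-12))   # assumes intensities in [0, 1]
    gy, gx = np.gradient(gray)
    grad_energy = np.mean(gx ** 2 + gy ** 2)
    return np.array([mse, psnr, grad_energy])

sample = np.random.default_rng(8).random((64, 64))
print(quality_features(sample).round(4))
```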

  16. The diffractive achromat full spectrum computational imaging with diffractive optics

    KAUST Repository

    Peng, Yifan

    2016-07-11

    Diffractive optical elements (DOEs) have recently drawn great attention in computational imaging because they can drastically reduce the size and weight of imaging devices compared to their refractive counterparts. However, the inherent strong dispersion is a tremendous obstacle that limits the use of DOEs in full spectrum imaging, causing unacceptable loss of color fidelity in the images. In particular, metamerism introduces a data dependency in the image blur, which has been neglected in computational imaging methods so far. We introduce a diffractive achromat based on computational optimization, as well as a corresponding algorithm for the correction of residual aberrations. Using this approach, we demonstrate high-fidelity color diffractive-only imaging over the full visible spectrum. In the optical design, the height profile of a diffractive lens is optimized to balance the focusing contributions of different wavelengths for a specific focal length. The spectral point spread functions (PSFs) become nearly identical to each other, creating approximately spectrally invariant blur kernels. This property guarantees good color preservation in the captured image and facilitates the correction of residual aberrations in our fast two-step deconvolution without additional color priors. We demonstrate our design of the diffractive achromat on a 0.5 mm ultrathin substrate by photolithography techniques. Experimental results show that our achromatic diffractive lens produces high color fidelity and better image quality over the full visible spectrum. © 2016 ACM.
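
    The kind of residual-blur correction discussed above can be illustrated with a generic, single-step Wiener deconvolution given a known PSF; the paper's own two-step method and its spectrally invariant PSF model are not reproduced, and the PSF and noise-to-signal ratio below are assumptions.

```python
# Generic Wiener deconvolution sketch with a known PSF and an assumed
# noise-to-signal ratio; not the paper's two-step algorithm.
import numpy as np
from scipy import ndimage

def wiener_deconvolve(blurred: np.ndarray, psf: np.ndarray, nsr: float = 1e-2) -> np.ndarray:
    H = np.fft.fft2(psf, s=blurred.shape)
    B = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)   # Wiener filter in the frequency domain
    return np.real(np.fft.ifft2(W * B))

# Build a normalized circular Gaussian PSF and a synthetically blurred test image.
psf = np.zeros((64, 64))
psf[0, 0] = 1.0
psf = ndimage.gaussian_filter(psf, sigma=2.0, mode="wrap")
psf /= psf.sum()
sharp = np.random.default_rng(10).random((64, 64))
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * np.fft.fft2(psf)))
restored = wiener_deconvolve(blurred, psf)
print(float(np.abs(restored - sharp).mean()))  # small residual error expected
```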

  17. Computer vision approaches to medical image analysis. Revised papers

    International Nuclear Information System (INIS)

    Beichel, R.R.; Sonka, M.

    2006-01-01

    This book constitutes the thoroughly refereed postproceedings of the international workshop Computer Vision Approaches to Medical Image Analysis, CVAMIA 2006, held in Graz, Austria in May 2006 as a satellite event of the 9th European Conference on Computer Vision, ECCV 2006. The 10 revised full papers and 11 revised poster papers presented together with 1 invited talk were carefully reviewed and selected from 38 submissions. The papers are organized in topical sections on clinical applications, image registration, image segmentation and analysis, and the poster session. (orig.)

  18. Application platform 'ICX' designed for computer assisted functional image analyzer

    International Nuclear Information System (INIS)

    Kinosada, Yasutomi; Hattori, Takao; Yonezawa, Kazuo; Tojo, Shigenori.

    1994-01-01

    Recent clinical imaging modalities such as X-ray CT, MRI and SPECT make it easy to obtain various functional images of the human body, owing to the rapid technical progress of these modalities. However, advances such as fast imaging techniques and 3D volume scanning have brought new problems for both medical doctors and technical staff: an increase in both the number of images and the amount of image processing required for 3D presentation. Furthermore, the analysis of these functional images has remained difficult and troublesome. In this study, we have developed the application platform ICX (Independent Console based on the X-window system), designed as a computer-assisted functional image analyzer under a concept different from that of conventional medical image processing workstations. ICX can manage clinical images from various imaging modalities via Ethernet LAN and assist users in analyzing or processing these images easily with ICX's application programs or commercial applications. ICX works as a diagnostic console, a personal PACS and a functional image analyzer, but works independently of the imaging modalities. Many object-oriented image analysis and processing tools are available and can be invoked by users in any situation. ICX is a new type of workstation and appears useful in current medical fields. (author)

  19. Robust Face Recognition by Computing Distances from Multiple Histograms of Oriented Gradients

    NARCIS (Netherlands)

    Karaaba, Mahir; Surinta, Olarik; Schomaker, Lambertus; Wiering, Marco

    2015-01-01

    The Single Sample per Person Problem is a challenging problem for face recognition algorithms. Patch-based methods have obtained some promising results for this problem. In this paper, we propose a new face recognition algorithm that is based on a combination of different histograms of oriented gradients.
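
    The basic ingredient, describing two face crops by HOG feature vectors and scoring their similarity with a distance, can be sketched with scikit-image as below; the paper's patch-based scheme and its way of combining multiple histogram distances are not reproduced, and the HOG parameters are illustrative.

```python
# Compare two face crops by Euclidean distance between HOG descriptors.
# Parameters and the simple single-descriptor comparison are illustrative;
# the paper's patch-based multi-histogram combination is not reproduced.
import numpy as np
from skimage.feature import hog

def hog_distance(face_a: np.ndarray, face_b: np.ndarray) -> float:
    params = dict(orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    return float(np.linalg.norm(hog(face_a, **params) - hog(face_b, **params)))

rng = np.random.default_rng(11)
face1 = rng.random((64, 64))
face2 = face1 + rng.normal(0.0, 0.05, (64, 64))   # slightly perturbed copy
face3 = rng.random((64, 64))                      # unrelated image
print(hog_distance(face1, face2) < hog_distance(face1, face3))  # expected: True
```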

  20. Modified computational integral imaging-based double image encryption using fractional Fourier transform

    Science.gov (United States)

    Li, Xiao-Wei; Lee, In-Kwon

    2015-03-01

    In this paper, we propose an image encryption technique to simultaneously encrypt double or multiple images into one encrypted image using computational integral imaging (CII) and the fractional Fourier transform (FrFT). In the encryption, each of the input plane images is located at a different position along a pickup plane and simultaneously recorded in the form of an elemental image array (EIA) through a lenslet array. The recorded EIA to be encrypted is multiplied by the FrFT with two different fractional orders. In order to mitigate the drawbacks of occlusion noise in computational integral imaging reconstruction (CIIR), the plane images can be reconstructed using a modified CIIR technique. To further improve the quality of the reconstructed plane images, a block matching algorithm is also introduced. Numerical simulation results verify the feasibility and effectiveness of the proposed method.
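
    As a toy illustration of the "transform-domain encryption with two keys" idea, the sketch below uses double random phase masks in the ordinary Fourier domain in place of the paper's CII pickup and fractional Fourier transforms of two orders; it is purely illustrative and not the proposed method.

```python
# Toy double-random-phase encryption/decryption using the ordinary Fourier
# transform instead of the FrFT. Masks act as the two keys; decryption
# reverses the transforms with the conjugate masks.
import numpy as np

rng = np.random.default_rng(12)

def encrypt(image, mask1, mask2):
    return np.fft.fft2(np.fft.fft2(image * mask1) * mask2)

def decrypt(cipher, mask1, mask2):
    return np.real(np.fft.ifft2(np.fft.ifft2(cipher) * np.conj(mask2)) * np.conj(mask1))

image = rng.random((64, 64))
mask1 = np.exp(2j * np.pi * rng.random((64, 64)))   # random phase key 1
mask2 = np.exp(2j * np.pi * rng.random((64, 64)))   # random phase key 2
cipher = encrypt(image, mask1, mask2)
recovered = decrypt(cipher, mask1, mask2)
print(float(np.abs(recovered - image).max()) < 1e-9)  # expected: True
```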