Sample records for sals image processing

  1. Stakeholder perceptions of decision-making process on marine biodiversity conservation on Sal Island (Cape Verde)

    Directory of Open Access Journals (Sweden)

    Jorge Ramos


    Full Text Available On Sal Island (Cape Verde) there is growing involvement, will and investment in the creation of tourism synergies. However, much of the economic potential of the island is found submerged in the sea: its intrinsic biodiversity. For this reason, and in order to balance environmental safety against human pressure, a strategy addressing both diving and fishing has been developed. That strategy includes the deployment of several artificial reefs (ARs) around the island. In order to allocate demand for diving and fishing, we developed a socio-economic research approach addressing the theme of biodiversity and reefs (both natural and artificial) and collected expectations from AR users by means of an inquiry method. A project is hypothesized in which several management measures aimed at marine biodiversity conservation are proposed. Using the analytic hierarchy process (AHP) methodology, stakeholders' perceptions of best practice for marine biodiversity conservation on Sal Island were scrutinized. The results showed that submerging obsolete structures in rocky or mixed areas has high potential but does not gather consensus. As an overall conclusion, limitation of activities seems to be the preferred management option to consider in the future.

  2. Using the SAL technique for spatial verification of cloud processes: A sensitivity analysis

    CERN Document Server

    Weniger, Michael


    The feature-based spatial verification method SAL is applied to cloud data, i.e. two-dimensional spatial fields of total cloud cover and spectral radiance. Model output is obtained from the COSMO-DE forward operator SynSat and compared to SEVIRI satellite data. The aim of this study is twofold: first, to assess the applicability of SAL to this kind of data, and second, to analyze the role of external object identification algorithms (OIA) and the effects of observational uncertainties on the resulting scores. As a feature-based method, SAL requires an external OIA. A comparison of three different algorithms shows that the threshold level, which is a fundamental part of all studied algorithms, induces high sensitivity and unstable behavior in object-dependent SAL scores (i.e. even very small changes in parameter values can lead to large changes in the resulting scores). An in-depth statistical analysis reveals significant effects on distributional quantities commonly used in the interpretation of SAL, e.g. median...
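The threshold sensitivity reported above can be illustrated with a toy sketch (data and threshold values invented, not from the study): identifying "objects" as runs of above-threshold values in a one-dimensional field, where a tiny change in the threshold splits one object into two.

```python
# Toy illustration of threshold-based object identification: count
# contiguous runs of values at or above the threshold.
def count_objects(field, thresh):
    count, inside = 0, False
    for v in field:
        if v >= thresh and not inside:
            count, inside = count + 1, True
        elif v < thresh:
            inside = False
    return count

field = [0.0, 0.8, 0.41, 0.9, 0.0]   # two peaks joined by a saddle at 0.41
print(count_objects(field, 0.40))    # → 1 (saddle above threshold: one object)
print(count_objects(field, 0.42))    # → 2 (saddle below: the object splits)
```

A 0.02 change in the threshold doubles the object count, which is the kind of instability in object-dependent scores the study describes.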

  3. Image processing

    NARCIS (Netherlands)

    van der Heijden, Ferdinand; Spreeuwers, Lieuwe Jan; Blanken, Henk; Vries de, A.P.; Blok, H.E.; Feng, L; Feng, L.


    The field of image processing addresses the handling and analysis of images for many purposes, using a large number of techniques and methods. The applications of image processing range from enhancement of the visibility of certain organs in medical images to object recognition for handling by

  4. Image Processing Research (United States)


    Fragments of a report on image processing research (USCEE Report No. 530, 1974, pp. 11-19), covering: the improvement of image fidelity and presentation format; image data extraction (the recognition of objects within pictures); image processing systems (the development of image processing hardware and software support systems); and spectral sensitivity estimation of a color image scanner (Clanton E. Mancill and William...). Key words: Image

  5. Hyperspectral image processing methods (United States)

    Hyperspectral image processing refers to the use of computer algorithms to extract, store and manipulate both spatial and spectral information contained in hyperspectral images across the visible and near-infrared portion of the electromagnetic spectrum. A typical hyperspectral image processing work...

  6. Medical image processing

    CERN Document Server

    Dougherty, Geoff


    This book is designed for end users in the field of digital imaging who wish to update their skills and understanding with the latest techniques in image analysis. It emphasizes the conceptual framework of image analysis and the effective use of image processing tools, and uses applications in a variety of fields to demonstrate and consolidate both specific and general concepts, and to build intuition, insight and understanding. Although the chapters are essentially self-contained, they reference other chapters to form an integrated whole. Each chapter employs a pedagogical approach to e

  7. Biomedical Image Processing

    CERN Document Server

    Deserno, Thomas Martin


    In modern medicine, imaging is the most effective tool for diagnostics, treatment planning and therapy. Almost all modalities have moved to direct digital acquisition techniques, and processing of this image data has become an important option for future health care. This book is written by a team of internationally recognized experts from all over the world. It provides a brief but complete overview of medical image processing and analysis, highlighting recent advances made in academia. Color figures are used extensively to illustrate the methods and help the reader understand the complex topics.

  8. The image processing handbook

    CERN Document Server

    Russ, John C


    Now in its fifth edition, John C. Russ's monumental image processing reference is an even more complete, modern, and hands-on tool than ever before. The Image Processing Handbook, Fifth Edition is fully updated and expanded to reflect the latest developments in the field. Written by an expert with unequalled experience and authority, it offers clear guidance on how to create, select, and use the most appropriate algorithms for a specific application. What's new in the Fifth Edition? A new chapter on the human visual process that explains which visual cues elicit a response from the vie

  9. Image processing occupancy sensor (United States)

    Brackney, Larry J.


    A system and method of detecting occupants in a building automation system environment using image based occupancy detection and position determinations. In one example, the system includes an image processing occupancy sensor that detects the number and position of occupants within a space that has controllable building elements such as lighting and ventilation diffusers. Based on the position and location of the occupants, the system can finely control the elements to optimize conditions for the occupants, optimize energy usage, among other advantages.

  10. Onboard image processing (United States)

    Martin, D. R.; Samulon, A. S.


    The possibility of onboard geometric correction of Thematic Mapper type imagery to make possible image registration is considered. Typically, image registration is performed by processing raw image data on the ground. The geometric distortion (e.g., due to variation in spacecraft location and viewing angle) is estimated by using a Kalman filter updated by correlating the received data with a small reference subimage, which has known location. Onboard image processing dictates minimizing the complexity of the distortion estimation while offering the advantages of a real time environment. In keeping with this, the distortion estimation can be replaced by information obtained from the Global Positioning System and from advanced star trackers. Although not as accurate as the conventional ground control point technique, this approach is capable of achieving subpixel registration. Appropriate attitude commands can be used in conjunction with image processing to achieve exact overlap of image frames. The magnitude of the various distortion contributions, the accuracy with which they can be measured in real time, and approaches to onboard correction are investigated.
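The correlation-based registration step described above can be sketched in one dimension (a minimal, invented example; the real system correlates 2-D subimages and feeds the result to a Kalman filter): locate a known reference pattern inside the received data by minimizing a sum-of-squared-differences error, which gives the geometric shift.

```python
# Toy sketch of registration against a reference subimage: slide the
# reference along the scan line and keep the offset with the lowest
# sum-of-squared-differences (equivalent to maximizing correlation
# for normalized data).
def find_shift(line, ref):
    best, best_err = 0, float("inf")
    for s in range(len(line) - len(ref) + 1):
        err = sum((line[s + k] - ref[k]) ** 2 for k in range(len(ref)))
        if err < best_err:
            best_err, best = err, s
    return best

line = [5, 5, 9, 2, 7, 5, 5]
ref  = [9, 2, 7]              # reference pattern with known ground location
print(find_shift(line, ref))  # → 2
```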

  11. Robots and image processing (United States)

    Peterson, C. E.


    Developments in integrated circuit manufacture are discussed, with attention given to the current expectations of industrial automation. It is shown that the growing emphasis on image processing is a natural consequence of production requirements, which have generated a small but significant range of vision applications. The state of the art in image processing is discussed and the main research areas are delineated. The main areas of application will be less in welding and diecasting than in assembly and machine tool loading, with vision becoming an ever more important facet of the installation. The two main approaches to processing images in a computer (depending on the aims of the project) are discussed: the first involves producing a system that does a specific task; the second aims to achieve an understanding of some basic issues in object recognition.

  12. Geology And Image Processing (United States)

    Daily, Mike


    The design of digital image processing systems for geological applications will be driven by the nature and complexity of the intended use, by the types and quantities of data, and by systems considerations. Image processing will be integrated with geographic information systems (GIS) and data base management systems (DBMS). Dense multiband data sets from radar and multispectral scanners (MSS) will tax memory, bus, and processor architectures. Array processors and dedicated-function chips (VLSI/VHSIC) will allow the routine use of FFT and classification algorithms. As this geoprocessing capability becomes available to a larger segment of the geological community, user friendliness and smooth interaction will become a major concern.

  13. Preclinical safety evaluation of intravenously administered SAL200 containing the recombinant phage endolysin SAL-1 as a pharmaceutical ingredient. (United States)

    Jun, Soo Youn; Jung, Gi Mo; Yoon, Seong Jun; Choi, Yun-Jaie; Koh, Woo Suk; Moon, Kyoung Sik; Kang, Sang Hyeon


    Phage endolysins have received increasing attention as potent antibacterial agents. However, although safety evaluation is a prerequisite for the drug development process, a good laboratory practice (GLP)-compliant safety evaluation has not been reported for phage endolysins. A safety evaluation of intravenously administered SAL200 (containing phage endolysin SAL-1) was conducted according to GLP standards. No animals died in any of the safety evaluation studies. In general toxicity studies, intravenously administered SAL200 showed no sign of toxicity in rodent single- and repeated-dose toxicity studies. In the dog repeated-dose toxicity test, there were no abnormal findings, with the exception of transient abnormal clinical signs that were observed in some dogs when daily injection of SAL200 was continued for more than 1 week. In safety pharmacology studies, there were also no signs of toxicity in the central nervous and respiratory system function tests. In the cardiovascular function test, there were no abnormal findings in all tested dogs after the first and second administrations, but transient abnormalities were observed after the third and fourth administrations (2 or 3 weeks after the initial administration). All abnormal findings observed in these safety evaluation studies were slight to mild, were apparent only transiently after injection, and resolved quickly. The safety evaluation results for SAL200 support the implementation of an exploratory phase I clinical trial and underscore the potential of SAL200 as a new drug. We have designed an appropriate phase I clinical trial based on the results of this study.

  14. Hyperspectral image processing

    CERN Document Server

    Wang, Liguo


    Based on the authors’ research, this book introduces the main processing techniques in hyperspectral imaging. In this context, SVM-based classification, distance comparison-based endmember extraction, SVM-based spectral unmixing, spatial attraction model-based sub-pixel mapping, and MAP/POCS-based super-resolution reconstruction are discussed in depth. Readers will gain a comprehensive understanding of these cutting-edge hyperspectral imaging techniques. Researchers and graduate students in fields such as remote sensing, surveying and mapping, geosciences and information systems will benefit from this valuable resource.

  15. Introduction to computer image processing (United States)

    Moik, J. G.


    Theoretical backgrounds and digital techniques for a class of image processing problems are presented. Image formation in the context of linear system theory, image evaluation, noise characteristics, and mathematical operations on images and their implementation are discussed. Various techniques for image restoration and image enhancement are presented. Methods for object extraction and the problem of pictorial pattern recognition and classification are discussed.
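As a minimal illustration of one of the enhancement techniques such introductions cover, here is a pure-Python sketch of linear contrast stretching; the function name and sample data are invented for this example.

```python
# Minimal sketch of linear contrast stretching: map pixel values so the
# darkest becomes out_min and the brightest becomes out_max.
def stretch_contrast(pixels, out_min=0, out_max=255):
    lo, hi = min(pixels), max(pixels)
    if hi == lo:                       # flat image: nothing to stretch
        return [out_min] * len(pixels)
    scale = (out_max - out_min) / (hi - lo)
    return [round(out_min + (p - lo) * scale) for p in pixels]

row = [50, 60, 70, 80, 90, 100]        # a low-contrast scan line
print(stretch_contrast(row))           # → [0, 51, 102, 153, 204, 255]
```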

  16. Salões de maio

    Directory of Open Access Journals (Sweden)

    Paulo Monteiro


    Full Text Available The text gives an overview of the documentation belonging to the Quirino da Silva Archive, held at the Centro Cultural São Paulo, focusing mainly on the part concerning the "Salões de Maio" and highlighting primary and secondary sources that may serve future research in the history of Brazilian art.

  17. Introduction to digital image processing

    CERN Document Server

    Pratt, William K


    CONTINUOUS IMAGE CHARACTERIZATION: Continuous Image Mathematical Characterization (Image Representation; Two-Dimensional Systems; Two-Dimensional Fourier Transform; Image Stochastic Characterization); Psychophysical Vision Properties (Light Perception; Eye Physiology; Visual Phenomena; Monochrome Vision Model; Color Vision Model); Photometry and Colorimetry (Photometry; Color Matching; Colorimetry Concepts; Color Spaces). DIGITAL IMAGE CHARACTERIZATION: Image Sampling and Reconstruction (Image Sampling and Reconstruction Concepts; Monochrome Image Sampling Systems; Monochrome Image Reconstruction Systems; Color Image Sampling Systems); Image Quantization (Scalar Quantization; Processing Quantized Variables; Monochrome and Color Image Quantization). DISCRETE TWO-DIMENSIONAL LINEAR PROCESSING: Discrete Image Mathematical Characterization (Vector-Space Image Representation; Generalized Two-Dimensional Linear Operator; Image Statistical Characterization; Image Probability Density Models; Linear Operator Statistical Representation; Superposition and Convolution; Finite-Area Superp...

  18. A Review on Image Processing


    Amandeep Kour; Vimal Kishore Yadav; Vikas Maheshwari; Deepak Prashar


    Image processing includes changing the nature of an image in order to improve its pictorial information for human interpretation, or for autonomous machine perception. Digital image processing is a subset of the electronic domain wherein the image is converted to an array of small integers, called pixels, representing a physical quantity such as scene radiance, stored in a digital memory, and processed by computer or other digital hardware. Interest in digital image processing methods stems from...
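The "array of small integers" view can be made concrete with a toy sketch (invented data): a tiny grayscale image stored as nested lists, with one pixel-wise operation applied to it.

```python
# A digital image as an array of small integers: a toy 2x3 grayscale
# image, with one pixel-wise operation (photographic negation).
image = [
    [  0,  64, 128],
    [192, 255,  32],
]

def invert(img, max_val=255):
    """Return the negative: each pixel p becomes max_val - p."""
    return [[max_val - p for p in row] for row in img]

print(invert(image))   # → [[255, 191, 127], [63, 0, 223]]
```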

  19. scikit-image: image processing in Python

    Directory of Open Access Journals (Sweden)

    Stéfan van der Walt


    Full Text Available scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image. More information can be found on the project homepage,

  20. scikit-image: image processing in Python. (United States)

    van der Walt, Stéfan; Schönberger, Johannes L; Nunez-Iglesias, Juan; Boulogne, François; Warner, Joshua D; Yager, Neil; Gouillart, Emmanuelle; Yu, Tony


    scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image. More information can be found on the project homepage,

  1. Image Processing Diagnostics: Emphysema (United States)

    McKenzie, Alex


    Currently the computerized tomography (CT) scan can detect emphysema sooner than traditional X-rays, but other tests are required to measure more accurately the amount of affected lung. CT scan images show clearly whether a patient has emphysema, but visual inspection alone cannot quantify the degree of the disease, as it appears merely as subtle, barely distinct, dark spots on the lung. Our goal is to create a software plug-in that interfaces with existing open-source medical imaging software to automate the process of accurately diagnosing and determining emphysema severity levels in patients. This will be accomplished by performing a number of statistical calculations using data taken from CT scan images of several patients representing a wide range of severity of the disease. These analyses include an examination of the deviation from a normal distribution curve to determine skewness, a commonly used statistical parameter. Our preliminary results show that this method of assessment appears to be more accurate and robust than currently utilized methods, which involve looking at percentages of radiodensities in the air passages of the lung.
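The skewness statistic mentioned above (a measure of deviation from a symmetric, normal-like distribution) can be sketched as follows; the function and the HU-like sample values are illustrative, not data from the study.

```python
# Skewness as the third standardized moment of the pixel-value
# distribution. A symmetric histogram has skewness 0; emphysematous
# regions shift CT radiodensities toward very low values, producing
# asymmetry (nonzero skew).
import math

def skewness(values):
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / n)  # population SD
    return sum(((v - mean) / sd) ** 3 for v in values) / n

symmetric = [-850, -840, -830, -820, -810]   # made-up HU-like values
print(round(skewness(symmetric), 3))         # → 0.0 (perfectly symmetric)
```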

  2. Smart Image Enhancement Process (United States)

    Jobson, Daniel J. (Inventor); Rahman, Zia-ur (Inventor); Woodell, Glenn A. (Inventor)


    Contrast and lightness measures are used to first classify the image as being one of non-turbid and turbid. If turbid, the original image is enhanced to generate a first enhanced image. If non-turbid, the original image is classified in terms of a merged contrast/lightness score based on the contrast and lightness measures. The non-turbid image is enhanced to generate a second enhanced image when a poor contrast/lightness score is associated therewith. When the second enhanced image has a poor contrast/lightness score associated therewith, this image is enhanced to generate a third enhanced image. A sharpness measure is computed for one image that is selected from (i) the non-turbid image, (ii) the first enhanced image, (iii) the second enhanced image when a good contrast/lightness score is associated therewith, and (iv) the third enhanced image. If the selected image is not-sharp, it is sharpened to generate a sharpened image. The final image is selected from the selected image and the sharpened image.
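The selection logic described in this abstract can be caricatured with a hedged sketch; all names, score scales and thresholds below are invented, and the real invention's classifiers and enhancement stages are far more involved.

```python
# Hypothetical sketch of the staged enhance-then-sharpen selection flow:
# classify as turbid vs non-turbid, enhance while the merged
# contrast/lightness score stays poor (at most twice), then sharpen
# the selected image only if it is not already sharp.
def enhance_pipeline(img, contrast, lightness, sharpness, enhance, sharpen,
                     turbid_cut=0.3, good_cut=0.6):
    if contrast(img) < turbid_cut:               # turbid: enhance once
        img = enhance(img)
    else:
        score = (contrast(img) + lightness(img)) / 2
        if score < good_cut:                     # poor: enhance, re-check
            img = enhance(img)
            score = (contrast(img) + lightness(img)) / 2
            if score < good_cut:                 # still poor: enhance again
                img = enhance(img)
    if sharpness(img) < good_cut:                # finally, sharpen if needed
        img = sharpen(img)
    return img

# Toy stand-ins: an "image" is just its (contrast, lightness, sharpness)
# scores, and "enhancement" simply raises the relevant scores.
out = enhance_pipeline(
    (0.4, 0.4, 0.2),
    contrast=lambda im: im[0], lightness=lambda im: im[1],
    sharpness=lambda im: im[2],
    enhance=lambda im: (im[0] + 0.3, im[1] + 0.3, im[2]),
    sharpen=lambda im: (im[0], im[1], im[2] + 0.5),
)
print(out)   # enhanced once, then sharpened (all scores end up near 0.7)
```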

  3. Image processing and recognition for biological images. (United States)

    Uchida, Seiichi


    This paper reviews image processing and pattern recognition techniques that are useful for analyzing bioimages. Although the paper does not provide their technical details, it makes it possible to grasp the main tasks and the typical tools used to handle them. Image processing is a large research area aimed at improving the visibility of an input image and acquiring valuable information from it. As the main tasks of image processing, this paper introduces gray-level transformation, binarization, image filtering, image segmentation, visual object tracking, optical flow and image registration. Image pattern recognition is the technique of classifying an input image into one of a set of predefined classes, and is also a large research area. This paper overviews its two main modules, that is, the feature extraction module and the classification module. Throughout the paper, it is emphasized that the bioimage is a very difficult target even for state-of-the-art image processing and pattern recognition techniques, due to noise, deformations, etc. This paper is intended as a tutorial guide to bridge biology and image processing researchers for further collaboration in tackling such a difficult target. © 2013 The Author. Development, Growth & Differentiation © 2013 Japanese Society of Developmental Biologists.
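One of the listed tasks, binarization, is commonly done with Otsu's classic threshold, which maximizes between-class variance; the following is an illustrative pure-Python sketch (invented data), not code from the paper.

```python
# Otsu's method: pick the gray-level threshold t that maximizes the
# between-class variance w0*w1*(mu0 - mu1)^2 of the two resulting groups.
def otsu_threshold(pixels, levels=256):
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    best_t, best_var = 0, -1.0
    for t in range(1, levels):
        w0 = sum(hist[:t])
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        mu0 = sum(i * hist[i] for i in range(t)) / w0
        mu1 = sum(i * hist[i] for i in range(t, levels)) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

pixels = [10, 12, 11, 200, 210, 205, 13, 198]   # two clear gray-level groups
t = otsu_threshold(pixels)
binary = [1 if p >= t else 0 for p in pixels]
print(binary)   # → [0, 0, 0, 1, 1, 1, 0, 1]
```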

  4. A modified SUnSAL-TV algorithm for hyperspectral unmixing based on spatial homogeneity analysis (United States)

    Yuqian, Wang; Zhenfeng, Shao; Lei, Zhang; Weixun, Zhou


    The sparse regression framework has been introduced in many works to solve the linear spectral unmixing problem, based on the knowledge that a pixel is usually mixed from fewer endmembers than are contained in spectral libraries or in the entire hyperspectral data set. Traditional sparse unmixing techniques focus on analyzing the spectral properties of hyperspectral imagery without incorporating spatial information, but integrating spatial information is beneficial to the performance of the linear unmixing process. An algorithm called sparse unmixing via variable splitting augmented Lagrangian and total variation (SUnSAL-TV) adds a total variation spatial regularizer, besides the sparsity-inducing regularizer, to the final unmixing objective function. The total variation spatial regularization helps promote fractional abundance smoothness. However, abundance smoothness varies across the image. In this paper, spatial smoothness is estimated through homogeneity analysis, and the spatial regularizer is then weighted for each pixel by a homogeneity index. The modified algorithm, called homogeneity analysis based SUnSAL-TV (SUnSAL-TVH), integrates spatial information with a finer modelling of spatial smoothness and is expected to be less sensitive to noise and more stable. Experiments on synthetic data sets are conducted and indicate the validity of our algorithm.
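The objective such an algorithm minimizes can be sketched as follows: a data-fidelity term, a sparsity-inducing L1 term, and a homogeneity-weighted total variation term. This is a simplified 1-D evaluation with invented names and toy data; the actual SUnSAL-TVH solver operates on full images via ADMM and is not shown.

```python
# Evaluate a SUnSAL-TVH-style objective on a 1-D "image" of pixels:
#   0.5 * sum_i ||A x_i - y_i||^2          (data fidelity)
#   + lam_sparse * sum |x|                 (sparsity-inducing L1)
#   + lam_tv * sum_i w_i * |x_i - x_{i+1}| (weighted total variation)
def objective(A, Y, X, lam_sparse, lam_tv, weights):
    fit = 0.0
    for x, y in zip(X, Y):
        for r in range(len(y)):
            resid = sum(A[r][k] * x[k] for k in range(len(x))) - y[r]
            fit += 0.5 * resid ** 2
    l1 = lam_sparse * sum(abs(v) for x in X for v in x)
    # TV between neighbouring pixels, weighted per pixel pair by a
    # homogeneity index (higher weight => enforce more smoothness there)
    tv = lam_tv * sum(
        weights[i] * sum(abs(a - b) for a, b in zip(X[i], X[i + 1]))
        for i in range(len(X) - 1)
    )
    return fit + l1 + tv

A = [[1.0, 0.0], [0.0, 1.0]]               # toy 2-band, 2-endmember library
Y = [[1.0, 0.0], [0.0, 1.0]]               # two pixels' observed spectra
X = [[1.0, 0.0], [0.0, 1.0]]               # candidate abundances (exact fit)
print(objective(A, Y, X, 0.1, 0.5, [1.0])) # → 1.2 (0 fit + 0.2 L1 + 1.0 TV)
```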

  5. Image processing with ImageJ

    CERN Document Server

    Pascau, Javier


    The book will help readers discover the various facilities of ImageJ through a tutorial-based approach. This book is targeted at scientists, engineers, technicians, managers, and anyone who wishes to master ImageJ for image viewing, processing, and analysis. If you are a developer, you will be able to code your own routines after you have finished reading this book. No prior knowledge of ImageJ is expected.

  6. Experimental formation of Pb, Sn, Ge and Sb sulfides, selenides and chlorides in the presence of sal ammoniac: A contribution to the understanding of the mineral formation processes in coal wastes self-burning (United States)

    Laufek, František; Veselovsky, František; Drábek, Milan; Kříbek, Bohdan; Klementová, Mariana


    The formation of sulfides, selenides and chlorides was experimentally studied at 800 or 900°C in the presence of sal ammoniac in a sealed silica glass tube. Synthetic PbS, PbSe, SnS, GeS, SnGeS2, PbSnS3, SnS and Sb2S3 or natural uraninite were used as a starting charge. Depending on the chemical composition of the sulfide/selenide charge, galena, an unnamed SnGeS3 phase, herzenbergite, berndtite, ottemannite, stibnite and unnamed SnSb2S4 and Sn2Sb3S6 phases were identified in sublimates, together with cotunnite and an unnamed (NH4)2SnCl6 phase. When natural uraninite mixed with sal ammoniac was used as a charge, the reaction product comprised abundant cotunnite and minor challacolloite, due to volatilization of radiogenic lead. When sulfur was introduced to the charge with uraninite and sal ammoniac, galena was found in the reaction products. The results of our experiments revealed that if sulfide or selenide phases and NH4Cl are placed in a thermal gradient, it is possible to accelerate their mobility through a process of hydrogen chloride vapor transport. Within the transport process, the new solid products are either isochemical or non-isochemical with the charge. An isochemical composition of the resulting phases probably represents simple sublimation of the original solid phase in the form of self-vapor, whereas non-isochemical phases are probably formed by a combination of sublimation and condensation of various gas components, including gaseous HCl. The valency change of metals (e.g. Sn2+ to Sn3+) in several reaction products indicates redox reactions in the gas mixture or during solidification of the resulting products. The role of ammonia is not clear; however, the formation of the unnamed (NH4)2SnCl6 compound identified in one of our experiments indicates the possible formation of ammonium complexes. In contrast to the experiments where sulfides or selenides were used as part of the charge, mobility of uraninite was not proved under the experimental conditions employed. It is consistent with an

  7. Fundamentals of electronic image processing

    CERN Document Server

    Weeks, Arthur R


    This book is directed to practicing engineers and scientists who need to understand the fundamentals of image processing theory and algorithms to perform their technical tasks. It is intended to fill the gap between existing high-level texts dedicated to specialists in the field and the need for a more practical, fundamental text on image processing. A variety of example images are used to enhance reader understanding of how particular image processing algorithms work.

  8. Eye Redness Image Processing Techniques (United States)

    Adnan, M. R. H. Mohd; Zain, Azlan Mohd; Haron, Habibollah; Alwee, Razana; Zulfaezal Che Azemin, Mohd; Osman Ibrahim, Ashraf


    The use of photographs for the assessment of ocular conditions has been suggested as a way to further standardize clinical procedures, but the selection of the photographs used as scale reference images was subjective. Numerous methods have been proposed to assign eye redness scores by computational methods, and image analysis techniques have been investigated over the last 20 years in an attempt to forgo subjective grading scales. Image segmentation is one of the most important and challenging problems in image processing. This paper briefly outlines image processing in general and the implementation of image segmentation for eye redness assessment.

  9. Cooperative processes in image segmentation (United States)

    Davis, L. S.


    Research into the role of cooperative, or relaxation, processes in image segmentation is surveyed. Cooperative processes can be employed at several levels of the segmentation process as a preprocessing enhancement step, during supervised or unsupervised pixel classification and, finally, for the interpretation of image segments based on segment properties and relations.
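A cooperative (relaxation) step of the kind surveyed here can be sketched on a 1-D scan line; this is an invented toy example, not a method from the survey. Each pixel's class probabilities are repeatedly mixed with its neighbours' average, so isolated inconsistent labels get pulled toward local consensus.

```python
# Toy relaxation step for pixel classification: blend each pixel's class
# probabilities with the average of its neighbours', then renormalize;
# iterate until the labelling stabilizes.
def relax(probs, rounds=10, alpha=0.5):
    """probs: list of [p_class0, p_class1] per pixel on a 1-D scan line."""
    for _ in range(rounds):
        new = []
        for i, p in enumerate(probs):
            nbrs = [probs[j] for j in (i - 1, i + 1) if 0 <= j < len(probs)]
            avg = [sum(q[c] for q in nbrs) / len(nbrs) for c in range(2)]
            mixed = [(1 - alpha) * p[c] + alpha * avg[c] for c in range(2)]
            s = sum(mixed)                      # renormalize to probabilities
            new.append([m / s for m in mixed])
        probs = new
    return probs

# A noisy lone pixel (index 2) disagrees with its neighbours...
line = [[0.9, 0.1], [0.9, 0.1], [0.4, 0.6], [0.9, 0.1], [0.9, 0.1]]
smoothed = relax(line)
print(smoothed[2][0] > 0.5)   # → True: neighbours pulled it to class 0
```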

  10. Industrial Applications of Image Processing (United States)

    Ciora, Radu Adrian; Simion, Carmen Mihaela


    The recent advances in sensor quality and processing power provide us with excellent tools for designing more complex image processing and pattern recognition tasks. In this paper we review the existing applications of image processing and pattern recognition in industrial engineering. First we define the role of vision in an industrial environment. Then an overview of some image processing techniques, feature extraction, object recognition and industrial robotic guidance is presented. Moreover, examples of implementations of such techniques in industry are presented, including automated visual inspection, process control, part identification and robot control. Finally, we present some conclusions regarding the investigated topics and directions for future investigation.

  11. [Imaging center - optimization of the imaging process]. (United States)

    Busch, H-P


    Hospitals around the world are under increasing pressure to optimize the economic efficiency of treatment processes. Imaging is responsible for a great part of the success but also of the costs of treatment. In routine work an excessive supply of imaging methods leads to an "as well as" strategy up to the limit of the capacity without critical reflection. Exams that have no predictable influence on the clinical outcome are an unjustified burden for the patient. They are useless and threaten the financial situation and existence of the hospital. In recent years the focus of process optimization was exclusively on the quality and efficiency of performed single examinations. In the future critical discussion of the effectiveness of single exams in relation to the clinical outcome will be more important. Unnecessary exams can be avoided, only if in addition to the optimization of single exams (efficiency) there is an optimization strategy for the total imaging process (efficiency and effectiveness). This requires a new definition of processes (Imaging Pathway), new structures for organization (Imaging Center) and a new kind of thinking on the part of the medical staff. Motivation has to be changed from gratification of performed exams to gratification of process quality (medical quality, service quality, economics), including the avoidance of additional (unnecessary) exams. © Georg Thieme Verlag KG Stuttgart · New York.

  12. Statistical Image Processing. (United States)


    Only fragments of this record are recoverable: spectral analysis; texture image analysis and classification; image software package; automatic spatial clustering.

  13. Building country image process

    Directory of Open Access Journals (Sweden)

    Zubović Jovan


    Full Text Available The same branding principles are used for countries as for products; only the methods are different. Countries compete among themselves in tourism, foreign investment and exports. A country's turnover is at the level of its reputation. Countries that begin as unknown, or with a bad image, will face limits in their operations or will be marginalized, and as a result will sit at the bottom of the international influence scale. On the other hand, countries with a good image, like Germany (despite two world wars), will have their products covered with a special "aura".

  14. Image Processing and Geographic Information (United States)

    McLeod, Ronald G.; Daily, Julie; Kiss, Kenneth


    A Geographic Information System, which is a product of System Development Corporation's Image Processing System and a commercially available Data Base Management System, is described. The architecture of the system allows raster (image) data type, graphics data type, and tabular data type input and provides for the convenient analysis and display of spatial information. A variety of functions are supported through the Geographic Information System including ingestion of foreign data formats, image polygon encoding, image overlay, image tabulation, costatistical modelling of image and tabular information, and tabular to image conversion. The report generator in the DBMS is utilized to prepare quantitative tabular output extracted from spatially referenced images. An application of the Geographic Information System to a variety of data sources and types is highlighted. The application utilizes sensor image data, graphically encoded map information available from government sources, and statistical tables.

  15. SWNT Imaging Using Multispectral Image Processing (United States)

    Blades, Michael; Pirbhai, Massooma; Rotkin, Slava V.


    A flexible optical system was developed to image carbon single-wall nanotube (SWNT) photoluminescence using the multispectral capabilities of a typical CCD camcorder. The built-in Bayer filter of the CCD camera was utilized, using OpenCV C++ libraries for image processing, to decompose the image generated in a high-magnification epifluorescence microscope setup into three pseudo-color channels. By carefully calibrating the filter beforehand, it was possible to extract spectral data from these channels and effectively isolate the SWNT signals from the background.
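    The channel decomposition described in this record can be sketched in a few lines of NumPy; this is a hedged illustration, not the authors' OpenCV C++ pipeline, and the helper names and toy image below are invented for the example.

    ```python
    import numpy as np

    def split_channels(rgb):
        """Split an H x W x 3 array into three single-channel images."""
        return rgb[..., 0], rgb[..., 1], rgb[..., 2]

    def channel_means(rgb, mask):
        """Mean intensity of each channel over a boolean region mask,
        a crude stand-in for per-channel spectral extraction."""
        return [float(rgb[..., c][mask].mean()) for c in range(3)]

    # toy 2x2 "image" with constant per-channel values in each row
    img = np.array([[[10, 20, 30], [10, 20, 30]],
                    [[50, 60, 70], [50, 60, 70]]], dtype=float)
    mask = np.array([[True, True], [False, False]])
    print(channel_means(img, mask))  # [10.0, 20.0, 30.0]
    ```

    Calibration in the real setup would map these raw channel responses to spectra; here the means only demonstrate the masked per-channel readout.
    
    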


    Directory of Open Access Journals (Sweden)

    Preuss Ryszard


    Full Text Available This article discusses the current capabilities of automate processing of the image data on the example of using PhotoScan software by Agisoft . At present, image data obtained by various registration systems (metric and non - metric cameras placed on airplanes , satellites , or more often on UAVs is used to create photogrammetric products. Multiple registrations of object or land area (large groups of photos are captured are usually performed in order to eliminate obscured area as well as to raise the final accuracy of the photogrammetric product. Because of such a situation t he geometry of the resulting image blocks is far from the typical configuration of images . For fast images georeferencing automatic image matching algorithms are currently applied . They can create a model of a block in the local coordinate system or using initial exterior orientation and measured control points can provide image georeference in an external reference frame. In the case of non - metric image application, it is also possible to carry out self - calibration process at this stage . Image matching algorithm is also used in generation of dense point clouds reconstructing spatial shape of the object ( area. In subsequent processing steps it is possible to obtain typical photogrammetric products such as orthomosaic , DSM or DTM and a photorealistic solid model of an object . All aforementioned processing steps are implemented in a single program in contrary to standard commercial software dividing all steps into dedicated modules . I mage processing leading to final geo referenced products can be fully automated including sequential implementation of the processing steps at predetermined control parameters . The paper presents the practical results of the application fully automatic generation of othomosaic for both images obtained by a metric Vexell camera and a block of images acquired by a non - metric UAV system.

  17. Image processing for optical mapping. (United States)

    Ravindran, Prabu; Gupta, Aditya


    Optical Mapping is an established single-molecule, whole-genome analysis system, which has been used to gain a comprehensive understanding of genomic structure and to study structural variation of complex genomes. A critical component of Optical Mapping system is the image processing module, which extracts single molecule restriction maps from image datasets of immobilized, restriction digested and fluorescently stained large DNA molecules. In this review, we describe robust and efficient image processing techniques to process these massive datasets and extract accurate restriction maps in the presence of noise, ambiguity and confounding artifacts. We also highlight a few applications of the Optical Mapping system.

  18. Biomedical signal and image processing

    CERN Document Server

    Najarian, Kayvan


    INTRODUCTION TO DIGITAL SIGNAL AND IMAGE PROCESSING: Signals and Biomedical Signal Processing; Introduction and Overview; What is a "Signal"?; Analog, Discrete, and Digital Signals; Processing and Transformation of Signals; Signal Processing for Feature Extraction; Some Characteristics of Digital Images; Summary; Problems. Fourier Transform: Introduction and Overview; One-Dimensional Continuous Fourier Transform; Sampling and Nyquist Rate; One-Dimensional Discrete Fourier Transform; Two-Dimensional Discrete Fourier Transform; Filter Design; Summary; Problems. Image Filtering, Enhancement, and Restoration: Introduction and Overview

  19. Study of the desphosphatization process. The impact of nitrilotriacetic acid trisodium salt (NTA). Estudio del proceso de defosfatacion. Impacto de la sal tisodica del acido nitrilotriacetico SNTA

    Energy Technology Data Exchange (ETDEWEB)

    Peisajovich, A.; El Falaki, K.; Martin, G.


    In this paper we examine the effects of NTA on the phosphorus removal process. The biological phosphorus removal process was studied in batch tests, which indicated that there is a perturbation level (DD) for phosphorus removal, situated at 40 mg/g MLVSS. A dynamic lab-scale study showed a reduction of nitrogen and phosphorus removal efficiency ten days after the introduction of NTA into the influent water at concentrations lower than DD. The precipitation of phosphate from wastewater was examined using the jar-test method. The phosphate precipitation results employing iron (III), aluminium (III) or lime did not reveal any differences in efficiency between the coagulants. The presence of NTA did not show a reduction of efficiency in the phosphorus removal process. (Author) 32 refs.

  20. Fuzzy image processing in sun sensor (United States)

    Mobasser, S.; Liebe, C. C.; Howard, A.


    This paper describes how fuzzy image processing is implemented in the instrument. A comparison of the fuzzy image processing and a more conventional image processing algorithm is provided, showing that the fuzzy image processing yields better accuracy than conventional image processing.
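    One way fuzzy image processing improves centroiding accuracy is by letting each pixel contribute in proportion to a graded membership value rather than a hard in/out threshold. The sketch below is a hypothetical illustration of that idea, not the instrument's actual algorithm; the `cut` parameter is invented.

    ```python
    import numpy as np

    def fuzzy_centroid(row, cut=0.2):
        """Centroid of a 1-D intensity profile using graded (fuzzy)
        membership: brightness is normalized to [0, 1] and clipped above
        a membership cut, then used as a per-pixel weight."""
        m = row.astype(float) / row.max()
        m = np.clip((m - cut) / (1 - cut), 0.0, 1.0)  # membership in [0, 1]
        return float((m * np.arange(len(row))).sum() / m.sum())

    profile = np.array([0, 1, 4, 9, 4, 1, 0])
    print(fuzzy_centroid(profile))  # 3.0 (symmetric about index 3)
    ```

    A hard threshold at the same cut would discard the wings entirely; the graded weights keep their sub-pixel information.
    
    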

  1. Differential morphology and image processing. (United States)

    Maragos, P


    Image processing via mathematical morphology has traditionally used geometry to intuitively understand morphological signal operators and set or lattice algebra to analyze them in the space domain. We provide a unified view and analytic tools for morphological image processing that is based on ideas from differential calculus and dynamical systems. This includes ideas on using partial differential or difference equations (PDEs) to model distance propagation or nonlinear multiscale processes in images. We briefly review some nonlinear difference equations that implement discrete distance transforms and relate them to numerical solutions of the eikonal equation of optics. We also review some nonlinear PDEs that model the evolution of multiscale morphological operators and use morphological derivatives. Among the new ideas presented, we develop some general 2-D max/min-sum difference equations that model the space dynamics of 2-D morphological systems (including the distance computations) and some nonlinear signal transforms, called slope transforms, that can analyze these systems in a transform domain in ways conceptually similar to the application of Fourier transforms to linear systems. Thus, distance transforms are shown to be bandpass slope filters. We view the analysis of the multiscale morphological PDEs and of the eikonal PDE solved via weighted distance transforms as a unified area in nonlinear image processing, which we call differential morphology, and briefly discuss its potential applications to image processing and computer vision.
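    The 2-D min-sum difference equations for distance computation can be illustrated with the classical two-pass city-block distance transform; this sketch uses unit weights and is only one instance of the family of operators the paper analyzes.

    ```python
    import numpy as np

    def chamfer_distance(binary):
        """City-block distance to the nearest True pixel, computed with
        two sequential min-sum passes (a discrete distance transform)."""
        h, w = binary.shape
        INF = h + w  # upper bound on any city-block distance in the grid
        d = np.where(binary, 0, INF).astype(int)
        # forward pass: propagate from top-left to bottom-right
        for i in range(h):
            for j in range(w):
                if i > 0:
                    d[i, j] = min(d[i, j], d[i - 1, j] + 1)
                if j > 0:
                    d[i, j] = min(d[i, j], d[i, j - 1] + 1)
        # backward pass: propagate from bottom-right to top-left
        for i in range(h - 1, -1, -1):
            for j in range(w - 1, -1, -1):
                if i < h - 1:
                    d[i, j] = min(d[i, j], d[i + 1, j] + 1)
                if j < w - 1:
                    d[i, j] = min(d[i, j], d[i, j + 1] + 1)
        return d
    ```

    Replacing the unit increments with local slowness values turns the same recursion into a weighted distance transform, i.e. a discrete solver for the eikonal equation discussed in the abstract.
    
    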

  2. Computational Intelligence in Image Processing

    CERN Document Server

    Siarry, Patrick


    Computational intelligence based techniques have firmly established themselves as viable, alternate, mathematical tools for more than a decade. They have been extensively employed in many systems and application domains, among these signal processing, automatic control, industrial and consumer electronics, robotics, finance, manufacturing systems, electric power systems, and power electronics. Image processing is also an extremely potent area which has attracted the attention of many researchers who are interested in the development of new computational intelligence-based techniques and their suitable applications, in both research problems and in real-world problems. Part I of the book discusses several image preprocessing algorithms; Part II broadly covers image compression algorithms; Part III demonstrates how computational intelligence-based techniques can be effectively utilized for image analysis purposes; and Part IV shows how pattern recognition, classification and clustering-based techniques can ...

  3. Image processing in medical ultrasound

    DEFF Research Database (Denmark)

    Hemmsen, Martin Christian

    This Ph.D. project addresses image processing in medical ultrasound and seeks to achieve two major scientific goals: first, to develop an understanding of the most significant factors influencing image quality in medical ultrasound, and secondly, to use this knowledge to develop image processing...... methods for enhancing the diagnostic value of medical ultrasound. The project is an industrial Ph.D. project co-sponsored by BK Medical ApS, with the commercial goal of improving the image quality of BK Medical's scanners. Currently BK Medical employs a simple conventional delay-and-sum beamformer to generate......-time data acquisition system. The system was implemented using the commercially available 2202 ProFocus BK Medical ultrasound scanner equipped with a research interface and a standard PC. The main feature of the system is the possibility of acquiring several seconds of interleaved data, switching between......

  4. Digital processing of radiographic images (United States)

    Bond, A. D.; Ramapriyan, H. K.


    Some techniques are presented and the software documentation for the digital enhancement of radiographs. Both image handling and image processing operations are considered. The image handling operations dealt with are: (1) conversion of format of data from packed to unpacked and vice versa; (2) automatic extraction of image data arrays; (3) transposition and 90 deg rotations of large data arrays; (4) translation of data arrays for registration; and (5) reduction of the dimensions of data arrays by integral factors. Both the frequency and the spatial domain approaches are presented for the design and implementation of the image processing operation. It is shown that spatial domain recursive implementation of filters is much faster than nonrecursive implementations using fast fourier transforms (FFT) for the cases of interest in this work. The recursive implementation of a class of matched filters for enhancing image signal to noise ratio is described. Test patterns are used to illustrate the filtering operations. The application of the techniques to radiographic images of metallic structures is demonstrated through several examples.
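    The report's speed argument for spatial-domain recursive filtering can be made concrete with a first-order recursive low-pass: the cost is one multiply-add per sample regardless of the effective smoothing width, whereas FFT-based convolution costs O(n log n) per transform. This is a minimal generic sketch, not the report's matched filter.

    ```python
    import numpy as np

    def recursive_smooth(x, a=0.5):
        """First-order recursive (IIR) low-pass filter:
        y[n] = a * x[n] + (1 - a) * y[n-1].
        One multiply-add per sample, independent of the effective
        kernel width (which grows as a shrinks)."""
        y = np.empty(len(x), dtype=float)
        acc = float(x[0])  # initialize state with the first sample
        for n, v in enumerate(x):
            acc = a * v + (1 - a) * acc
            y[n] = acc
        return y

    print(recursive_smooth(np.array([0., 0., 4., 0., 0.]), a=0.5))
    # [0.  0.  2.  1.  0.5] -- an impulse smeared into a decaying tail
    ```

    Running the same pass backwards and averaging gives a symmetric response, the usual trick for zero-phase recursive smoothing.
    
    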

  5. Image processing of galaxy photographs (United States)

    Arp, H.; Lorre, J.


    New computer techniques for analyzing and processing photographic images of galaxies are presented, with interesting scientific findings gleaned from the processed photographic data. Discovery and enhancement of very faint and low-contrast nebulous features, improved resolution of near-limit detail in nebulous and stellar images, and relative colors of a group of nebulosities in the field are attained by the methods. Digital algorithms, nonlinear pattern-recognition filters, linear convolution filters, plate averaging and contrast enhancement techniques, and an atmospheric deconvolution technique are described. New detail is revealed in images of NGC 7331, Stephan's Quintet, Seyfert's Sextet, and the jet in M87, via processes of addition of plates, star removal, contrast enhancement, standard deviation filtering, and computer ratioing to bring out qualitative color differences.

  6. CMOS imagers from phototransduction to image processing

    CERN Document Server

    Etienne-Cummings, Ralph


    The idea of writing a book on CMOS imaging has been brewing for several years. It was placed on a fast track after we agreed to organize a tutorial on CMOS sensors for the 2004 IEEE International Symposium on Circuits and Systems (ISCAS 2004). This tutorial defined the structure of the book, but as first time authors/editors, we had a lot to learn about the logistics of putting together information from multiple sources. Needless to say, it was a long road between the tutorial and the book, and it took more than a few months to complete. We hope that you will find our journey worthwhile and the collated information useful. The laboratories of the authors are located at many universities distributed around the world. Their unifying theme, however, is the advancement of knowledge for the development of systems for CMOS imaging and image processing. We hope that this book will highlight the ideas that have been pioneered by the authors, while providing a roadmap for new practitioners in this field to exploit exc...

  7. La sal en el queso: diversas interacciones.


    Juan Sabastián Ramírez-Navas; Jessica Aguirre-Londoño; Víctor Alexander Aristizabal-Ferreira; Sandra Castro-Narváez


    The aim of this work was to analyze the effect of salt on some physical properties of cheese, its interaction with the components of cheese, and the effect of sodium content on consumer health. Salt is an important ingredient, since it largely determines product quality and consumer acceptance. The salting of cheese influences quality through its effects on composition, microbial growth and enzymatic activity. It exer...

  8. Open Geospatial Analytics with PySAL

    Directory of Open Access Journals (Sweden)

    Sergio J. Rey


    Full Text Available This article reviews the range of delivery platforms that have been developed for the PySAL open source Python library for spatial analysis. This includes traditional desktop software (with a graphical user interface, a command line, or embedded in a computational notebook), open spatial analytics middleware, and web, cloud and distributed open geospatial analytics for decision support. A common thread throughout the discussion is the emphasis on openness, interoperability, and provenance management in a scientific workflow. The code base of the PySAL library provides the common computing framework underlying all delivery mechanisms.
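    As an illustration of the kind of spatial statistic PySAL computes (e.g. Moran's I, provided by the `esda` package in the PySAL ecosystem), the statistic itself can be written directly in NumPy; this sketch is independent of the PySAL API and the chain-graph example is invented.

    ```python
    import numpy as np

    def morans_i(x, W):
        """Moran's I spatial autocorrelation of values x under a
        spatial weights matrix W:
        I = (n / S0) * (z' W z) / (z' z), with z the deviations
        from the mean and S0 the sum of all weights."""
        x = np.asarray(x, dtype=float)
        z = x - x.mean()
        return len(x) / W.sum() * (z @ W @ z) / (z @ z)

    # 4 regions in a chain: 0-1-2-3, binary contiguity weights
    W = np.array([[0, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    print(morans_i([1, 1, -1, -1], W))  # positive: like values adjoin
    ```

    Positive I indicates clustering of similar values among neighbours; values near zero indicate spatial randomness.
    
    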

  9. Multimedia image and video processing

    CERN Document Server

    Guan, Ling


    As multimedia applications have become part of contemporary daily life, numerous paradigm-shifting technologies in multimedia processing have emerged over the last decade. Substantially updated with 21 new chapters, Multimedia Image and Video Processing, Second Edition explores the most recent advances in multimedia research and applications. This edition presents a comprehensive treatment of multimedia information mining, security, systems, coding, search, hardware, and communications as well as multimodal information fusion and interaction. Clearly divided into seven parts, the book begins w

  10. Linear Algebra and Image Processing (United States)

    Allali, Mohamed


    We use the computing technology digital image processing (DIP) to enhance the teaching of linear algebra so as to make the course more visual and interesting. Certainly, this visual approach by using technology to link linear algebra to DIP is interesting and unexpected to both students as well as many faculty. (Contains 2 tables and 11 figures.)

  11. La sal en el queso: diversas interacciones.

    Directory of Open Access Journals (Sweden)

    Juan Sabastián Ramírez-Navas


    Full Text Available The aim of this work was to analyze the effect of salt on some physical properties of cheese, its interaction with the components of cheese, and the effect of sodium content on consumer health. Salt is an important ingredient, since it largely determines product quality and consumer acceptance. The salting of cheese influences quality through its effects on composition, microbial growth and enzymatic activity. It exerts a significant influence on rheology and texture, as well as on ripening, mainly through its effects on water activity. Salt levels in cheese range from approximately 0.6% w/w to approximately 7% w/w. Because cheese consumption is increasing worldwide, importance should be given to reducing salt without affecting consumption. Among the strategies proposed to that end is the partial substitution of salt by other compounds. But the drawback of substituting NaCl is its effect on the sensory properties, chemical composition, proteolysis and texture of the cheese. Another interesting alternative for replacing NaCl is the use of membrane technology to obtain a salt-rich permeate from whey; the addition of these salts in cheesemaking produces low-sodium cheeses with good texture.

  12. Biomedical signal and image processing. (United States)

    Cerutti, Sergio; Baselli, Giuseppe; Bianchi, Anna; Caiani, Enrico; Contini, Davide; Cubeddu, Rinaldo; Dercole, Fabio; Rienzo, Luca; Liberati, Diego; Mainardi, Luca; Ravazzani, Paolo; Rinaldi, Sergio; Signorini, Maria; Torricelli, Alessandro


    Generally, physiological modeling and biomedical signal processing constitute two important paradigms of biomedical engineering (BME): their fundamental concepts are taught starting in undergraduate studies and are dealt with more completely in the last years of graduate curricula, as well as in Ph.D. courses. Traditionally, these two cultural aspects were separated, with the first more oriented to physiological issues and how to model them, and the second more dedicated to the development of processing tools or algorithms to enhance useful information from clinical data. A practical consequence was that those who did models did not do signal processing and vice versa. However, in recent years, the need for closer integration between signal processing and modeling of the relevant biological systems has emerged very clearly [1], [2]. This is not only true for training purposes (i.e., to properly prepare the new professional members of BME) but also for the development of newly conceived research projects in which the integration between biomedical signal and image processing (BSIP) and modeling plays a crucial role. To give simple examples, topics such as brain-computer machines or interfaces, neuroengineering, nonlinear dynamical analysis of the cardiovascular (CV) system, integration of sensory-motor characteristics aimed at building advanced prostheses and rehabilitation tools, and wearable devices for vital sign monitoring all require an intelligent fusion of modeling and signal processing competences that are certainly peculiar to our discipline of BME.

  13. Fast processing of foreign fiber images by image blocking

    Directory of Open Access Journals (Sweden)

    Yutao Wu


    Full Text Available In the textile industry, cotton products often contain many types of foreign fibers, which affect the overall quality of the products. As the foundation of automated foreign fiber inspection, image processing exerts a critical impact on the process of foreign fiber identification. This paper presents a new approach for the fast processing of foreign fiber images. The approach includes five main steps: image blocking, image pre-decision, image background extraction, image enhancement and segmentation, and image connection. First, the captured color images are transformed into gray-scale images, followed by inversion of the gray scale of the transformed images; then the whole image is divided into several blocks. Thereafter, image pre-decision judges which image blocks contain the target foreign fiber. The blocks that possibly contain target images are then segmented via Otsu's method, after background removal and image enhancement. Finally, the relevant segmented image blocks are connected to obtain an intact and clear foreign fiber target image. The experimental results show that this segmentation method has the advantage of accuracy and speed over other segmentation methods, and that it connects target images containing fractures, thereby producing an intact and clear foreign fiber target image.
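    The pre-decision plus per-block Otsu segmentation steps can be sketched as follows; the pre-decision criterion used here (a simple intensity-range test) and the helper names are assumptions for illustration, not the paper's exact rules.

    ```python
    import numpy as np

    def otsu_threshold(gray):
        """Otsu's method on an 8-bit grayscale array: choose the level
        that maximizes between-class variance of the histogram."""
        hist = np.bincount(gray.ravel(), minlength=256).astype(float)
        p = hist / hist.sum()
        omega = np.cumsum(p)                    # class-0 probability
        mu = np.cumsum(p * np.arange(256))      # cumulative mean
        mu_t = mu[-1]
        with np.errstate(divide='ignore', invalid='ignore'):
            sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
        return int(np.argmax(np.nan_to_num(sigma_b)))

    def process_blocks(gray, bs=4, min_range=30):
        """Pre-decision + per-block Otsu: blocks whose intensity range is
        below min_range are assumed background-only and skipped."""
        out = np.zeros(gray.shape, dtype=bool)
        h, w = gray.shape
        for i in range(0, h, bs):
            for j in range(0, w, bs):
                blk = gray[i:i + bs, j:j + bs]
                if int(blk.max()) - int(blk.min()) >= min_range:
                    out[i:i + bs, j:j + bs] = blk > otsu_threshold(blk)
        return out
    ```

    Skipping low-contrast blocks is what makes the scheme fast: Otsu's histogram scan runs only where a fiber might actually be.
    
    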

  14. Digital image processing mathematical and computational methods

    CERN Document Server

    Blackledge, J M


    This authoritative text (the second part of a complete MSc course) provides mathematical methods required to describe images, image formation and different imaging systems, coupled with the principle techniques used for processing digital images. It is based on a course for postgraduates reading physics, electronic engineering, telecommunications engineering, information technology and computer science. This book relates the methods of processing and interpreting digital images to the 'physics' of imaging systems. Case studies reinforce the methods discussed, with examples of current research

  15. Statistical image processing and multidimensional modeling

    CERN Document Server

    Fieguth, Paul


    Images are all around us! The proliferation of low-cost, high-quality imaging devices has led to an explosion in acquired images. When these images are acquired from a microscope, telescope, satellite, or medical imaging device, there is a statistical image processing task: the inference of something - an artery, a road, a DNA marker, an oil spill - from imagery, possibly noisy, blurry, or incomplete. A great many textbooks have been written on image processing. However this book does not so much focus on images, per se, but rather on spatial data sets, with one or more measurements taken over

  16. Modeling and Analysis of Asynchronous Systems Using SAL and Hybrid SAL (United States)

    Tiwari, Ashish; Dutertre, Bruno


    We present formal models and results of formal analysis of two different asynchronous systems. We first examine a mid-value select module that merges the signals coming from three different sensors that are each asynchronously sampling the same input signal. We then consider the phase locking protocol proposed by Daly, Hopkins, and McKenna. This protocol is designed to keep a set of non-faulty (asynchronous) clocks phase locked even in the presence of Byzantine-faulty clocks on the network. All models and verifications have been developed using the SAL model checking tools and the Hybrid SAL abstractor.

  17. Eliminating "Hotspots" in Digital Image Processing (United States)

    Salomon, P. M.


    Signals from defective picture elements rejected. Image processing program for use with charge-coupled device (CCD) or other mosaic imager augmented with algorithm that compensates for common type of electronic defect. Algorithm prevents false interpretation of "hotspots". Used for robotics, image enhancement, image analysis and digital television.
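    A common hotspot-compensation scheme replaces pixels flagged in a bad-pixel map with a statistic of their valid neighbours. The sketch below (neighbour median; the helper name is invented) illustrates the idea, not the specific algorithm of this tech brief.

    ```python
    import numpy as np

    def remove_hotspots(img, bad):
        """Replace each pixel flagged in the boolean bad-pixel mask with
        the median of its valid (unflagged) 8-neighbours."""
        out = img.astype(float).copy()
        h, w = img.shape
        for i, j in zip(*np.nonzero(bad)):
            neigh = [img[y, x]
                     for y in range(max(i - 1, 0), min(i + 2, h))
                     for x in range(max(j - 1, 0), min(j + 2, w))
                     if (y, x) != (i, j) and not bad[y, x]]
            out[i, j] = np.median(neigh)
        return out
    ```

    Because only flagged pixels are rewritten, the rest of the image passes through untouched, so the correction cannot introduce artifacts elsewhere.
    
    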

  18. Tensors in image processing and computer vision

    CERN Document Server

    De Luis García, Rodrigo; Tao, Dacheng; Li, Xuelong


    Tensor signal processing is an emerging field with important applications to computer vision and image processing. This book presents the developments in this branch of signal processing, offering research and discussions by experts in the area. It is suitable for advanced students working in the area of computer vision and image processing.

  19. Introduction to image processing and analysis

    CERN Document Server

    Russ, John C


    ADJUSTING PIXEL VALUES: Optimizing Contrast; Color Correction; Correcting Nonuniform Illumination; Geometric Transformations; Image Arithmetic. NEIGHBORHOOD OPERATIONS: Convolution; Other Neighborhood Operations; Statistical Operations. IMAGE PROCESSING IN THE FOURIER DOMAIN: The Fourier Transform; Removing Periodic Noise; Convolution and Correlation; Deconvolution; Other Transform Domains; Compression. BINARY IMAGES: Thresholding; Morphological Processing; Other Morphological Operations; Boolean Operations. MEASUREMENTS: Global Measurements; Feature Measurements; Classification. APPENDIX: SOFTWARE. REFERENCES AND LITERATURE. INDEX.

  20. Applications Of Image Processing In Criminalistics (United States)

    Krile, Thomas F.; Walkup, John F.; Barsallo, Adonis; Olimb, Hal; Tarng, Jaw-Horng


    A review of some basic image processing techniques for enhancement and restoration of images is given. Both digital and optical approaches are discussed. Fingerprint images are used as examples to illustrate the various processing techniques and their potential applications in criminalistics.

  1. Fuzzy image processing and applications with Matlab

    CERN Document Server

    Chaira, Tamalika


    In contrast to classical image analysis methods that employ "crisp" mathematics, fuzzy set techniques provide an elegant foundation and a set of rich methodologies for diverse image-processing tasks. However, a solid understanding of fuzzy processing requires a firm grasp of essential principles and background knowledge. Fuzzy Image Processing and Applications with MATLAB® presents the integral science and essential mathematics behind this exciting and dynamic branch of image processing, which is becoming increasingly important to applications in areas such as remote sensing, medical imaging,

  2. Optoelectronic imaging of speckle using image processing method (United States)

    Wang, Jinjiang; Wang, Pengfei


    A detailed image-processing treatment of laser speckle interferometry is proposed as an example for a postgraduate course. Several image processing methods were used together in the optoelectronic imaging system: partial differential equations (PDEs) are used to reduce the effect of noise; thresholding segmentation is also based on the heat equation with PDEs; the central line is extracted based on the image skeleton, with branches removed automatically; the phase level is calculated by a spline interpolation method; and the fringe phase can be unwrapped. Finally, the image processing method was used to automatically measure a bubble in rubber under negative pressure, which could be used in tire detection.
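    The PDE-based noise reduction step can be illustrated with explicit heat-equation smoothing, the simplest such scheme; this is a generic sketch under periodic boundary conditions, not the authors' exact formulation.

    ```python
    import numpy as np

    def heat_smooth(img, steps=10, dt=0.2):
        """Explicit time-stepping of the heat equation u_t = laplacian(u)
        on a unit grid with periodic boundaries; a basic PDE denoising
        step (dt <= 0.25 is required for stability)."""
        u = img.astype(float).copy()
        for _ in range(steps):
            lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                   + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
            u += dt * lap
        return u
    ```

    Each step averages pixels with their neighbours, so noise variance falls while the mean intensity is conserved; edge-preserving variants (e.g. Perona-Malik diffusion) modulate the same update with a local gradient term.
    
    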

  3. Combining image-processing and image compression schemes (United States)

    Greenspan, H.; Lee, M.-C.


    An investigation into combining image-processing schemes, specifically an image enhancement scheme, with existing compression schemes is discussed. Results are presented for the pyramid coding scheme, the subband coding scheme, and progressive transmission. Encouraging results are demonstrated for the combination of image enhancement and pyramid image coding schemes, especially at low bit rates. Adding the enhancement scheme to progressive image transmission allows enhanced visual perception at low resolutions. In addition, further processing of the transmitted images, such as edge detection schemes, can gain from the added image resolution via the enhancement.
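    The pyramid coding scheme referred to above can be illustrated with a 1-D Laplacian pyramid, which stores the detail lost at each halving and reconstructs the signal exactly; this is a minimal sketch (the 2-D image version follows the same pattern with blur-and-subsample in both axes).

    ```python
    import numpy as np

    def down(x):
        """Halve resolution by averaging adjacent pairs."""
        return (x[0::2] + x[1::2]) / 2.0

    def up(x, n):
        """Double resolution by nearest-neighbour repetition to length n."""
        return np.repeat(x, 2)[:n]

    def laplacian_pyramid(x, levels):
        """Each pyramid level stores the detail lost when the signal is
        halved; the final entry is the coarsest remaining signal."""
        pyr, cur = [], np.asarray(x, dtype=float)
        for _ in range(levels):
            coarse = down(cur)
            pyr.append(cur - up(coarse, len(cur)))  # detail band
            cur = coarse
        pyr.append(cur)
        return pyr

    def reconstruct(pyr):
        """Invert the pyramid: upsample the coarse level and add details."""
        cur = pyr[-1]
        for detail in reversed(pyr[:-1]):
            cur = up(cur, len(detail)) + detail
        return cur
    ```

    The detail bands are sparse and near-zero for smooth signals, which is what makes them cheap to code; enhancement can then be applied per band before quantization.
    
    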

  4. Mesh Processing in Medical Image Analysis

    DEFF Research Database (Denmark)

    The following topics are dealt with: mesh processing; medical image analysis; interactive freeform modeling; statistical shape analysis; clinical CT images; statistical surface recovery; automated segmentation; cerebral aneurysms; and real-time particle-based representation....

  5. Digital image processing techniques in archaeology

    Digital Repository Service at National Institute of Oceanography (India)

    Santanam, K.; Vaithiyanathan, R.; Tripati, S.

    Digital image processing involves the manipulation and interpretation of digital images with the aid of a computer. This form of remote sensing actually began in the 1960's with a limited number of researchers analysing multispectral scanner data...

  6. Programmable remapper for image processing (United States)

    Juday, Richard D. (Inventor); Sampsell, Jeffrey B. (Inventor)


    A video-rate coordinate remapper includes a memory for storing a plurality of transformations on look-up tables for remapping input images from one coordinate system to another. Such transformations are operator selectable. The remapper includes a collective processor by which certain input pixels of an input image are transformed to a portion of the output image in a many-to-one relationship. The remapper includes an interpolative processor by which the remaining input pixels of the input image are transformed to another portion of the output image in a one-to-many relationship. The invention includes certain specific transforms for creating output images useful for certain defects of visually impaired people. The invention also includes means for shifting input pixels and means for scrolling the output matrix.
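    The stored look-up-table remapping idea can be sketched with NumPy fancy indexing: a precomputed pair of coordinate tables gathers each output pixel from an input location. This nearest-neighbour sketch omits the patent's separate collective and interpolative processors.

    ```python
    import numpy as np

    def remap(img, map_y, map_x):
        """Remap via precomputed look-up tables: output[i, j] comes from
        input[map_y[i, j], map_x[i, j]] (nearest-neighbour gather)."""
        return img[map_y, map_x]

    # a 180-degree rotation expressed as a stored transformation
    img = np.arange(9).reshape(3, 3)
    yy, xx = np.indices((3, 3))
    rot = remap(img, 2 - yy, 2 - xx)
    print(rot[0, 0], rot[2, 2])  # 8 0
    ```

    Because the transformation lives entirely in the tables, switching to a different operator-selected warp just means loading a different (map_y, map_x) pair, which is the core of the video-rate design.
    
    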

  7. Amplitude image processing by diffractive optics. (United States)

    Cagigal, Manuel P; Valle, Pedro J; Canales, V F


    In contrast to standard digital image processing, which operates on the detected image intensity, we propose to perform amplitude image processing. Amplitude processing, such as low-pass or high-pass filtering, is carried out using diffractive optical elements (DOEs), since they allow operating on the complex field amplitude before it has been detected. We show the procedure for designing the DOE that corresponds to each operation. Furthermore, we present an analysis of amplitude image processing performance. In particular, a DOE Laplacian filter is applied to simulated astronomical images to detect two stars one Airy ring apart. We also check by numerical simulations that the use of a Laplacian amplitude filter produces less noisy images than standard digital image processing.

  8. Image processing in diabetic related causes

    CERN Document Server

    Kumar, Amit


    This book is a collection of the experimental results and analyses carried out on medical images of diabetes-related conditions. The experimental investigations range from very basic image processing techniques, such as image enhancement, to sophisticated image segmentation methods. The book is intended to create awareness of diabetes and its related causes, and of the image processing methods used to detect and forecast them, in a very simple way. It is useful to researchers, engineers, medical doctors and bioinformatics researchers.

  9. Digital signal processing techniques and applications in radar image processing

    CERN Document Server

    Wang, Bu-Chin


    A self-contained approach to DSP techniques and applications in radar imaging. The processing of radar images, in general, consists of three major fields: digital signal processing (DSP); antenna and radar operation; and the algorithms used to process the radar images. This book brings together material from these different areas to allow readers to gain a thorough understanding of how radar images are processed. The book is divided into three main parts and covers: DSP principles and signal characteristics in both analog and digital domains, advanced signal sampling, and

  10. Semi-automated Image Processing for Preclinical Bioluminescent Imaging. (United States)

    Slavine, Nikolai V; McColl, Roderick W

    Bioluminescent imaging is a valuable noninvasive technique for investigating tumor dynamics and specific biological molecular events in living animals, to better understand the effects of human disease in animal models. The purpose of this study was to develop and test a strategy behind automated methods for bioluminescence image processing, from data acquisition to obtaining 3D images. In order to optimize this procedure, a semi-automated image processing approach with a multi-modality image handling environment was developed. To identify a bioluminescent source's location and strength, we used the light flux detected on the surface of the imaged object by CCD cameras. For phantom calibration tests and object surface reconstruction we used the MLEM algorithm. For internal bioluminescent sources we used the diffusion approximation, balancing the internal and external intensities on the boundary of the media; having determined an initial-order approximation for the photon fluence, we subsequently applied a novel iterative deconvolution method to obtain the final reconstruction result. We find that the reconstruction techniques successfully used the depth-dependent light transport approach and semi-automated image processing to provide a realistic 3D model of the lung tumor. Our image processing software can optimize and decrease the time needed for volumetric imaging and quantitative assessment. The data obtained from light phantom and lung mouse tumor images demonstrate the utility of the image reconstruction algorithms and of the semi-automated approach to the bioluminescent image processing procedure. We suggest that the developed image processing approach can be applied to preclinical imaging studies: characterizing tumor growth, identifying metastases, and potentially determining the effectiveness of cancer treatment.
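    The iterative deconvolution step mentioned in this record can be illustrated with a one-dimensional Richardson-Lucy (MLEM-style) update; this is a generic sketch, not the authors' algorithm, and the 3-tap PSF below is invented for the example.

    ```python
    import numpy as np

    def richardson_lucy(observed, psf, iters=50):
        """1-D Richardson-Lucy (MLEM-style) deconvolution: each iteration
        multiplies the estimate by the back-projected ratio of observed
        to re-blurred data, preserving non-negativity."""
        est = np.full(len(observed), observed.mean(), dtype=float)
        psf_flip = psf[::-1]  # adjoint of convolution with the PSF
        for _ in range(iters):
            blurred = np.convolve(est, psf, mode='same')
            ratio = observed / np.maximum(blurred, 1e-12)
            est *= np.convolve(ratio, psf_flip, mode='same')
        return est
    ```

    On noiseless data the iterates sharpen toward the true source; in practice the iteration count acts as a regularizer and is stopped early.
    
    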

  11. Image Processing and Features Extraction of Fingerprint Images ...

    African Journals Online (AJOL)

    Several fingerprint matching algorithms have been developed for minutiae or template matching of fingerprint templates. The efficiency of these fingerprint matching algorithms depends on the success of the image processing and features extraction steps employed. Fingerprint image processing and analysis is hence an ...

  12. Curs CEAC de balls de saló (CEAC ballroom dancing course)


    Morillo Peres, Xavier


    Advertisement for a distance-learning ballroom dancing course. The work was produced using stop motion. Bachelor thesis for the Multimedia program.

  13. An overview of medical image processing methods

    African Journals Online (AJOL)



    Jun 14, 2010 ... images through computer simulations has already increased the interest of many researchers. 3D image rendering usually refers to the analysis of the ..... Digital Image Processing. Reading, MA: Addison-Wesley Publishing Company. Gose E, Johnsonbaugh R, Jost S (1996). Pattern Recognition and.

  14. Applied medical image processing a basic course

    CERN Document Server

    Birkfellner, Wolfgang


    A widely used, classroom-tested text, Applied Medical Image Processing: A Basic Course delivers an ideal introduction to image processing in medicine, emphasizing the clinical relevance and special requirements of the field. Avoiding excessive mathematical formalisms, the book presents key principles by implementing algorithms from scratch and using simple MATLAB®/Octave scripts with image data and illustrations on an accompanying CD-ROM or companion website. Organized as a complete textbook, it provides an overview of the physics of medical image processing and discusses image formats and data storage, intensity transforms, filtering of images and applications of the Fourier transform, three-dimensional spatial transforms, volume rendering, image registration, and tomographic reconstruction.

  15. Image processing techniques for remote sensing data

    Digital Repository Service at National Institute of Oceanography (India)

    RameshKumar, M.R.

    interpretation and for processing of scene data for autonomous machine perception. The techniques of digital image processing are used for automatic character/pattern recognition, industrial robots for product assembly and inspection, military reconnaissance...

  16. Non-linear Post Processing Image Enhancement (United States)

    Hunt, Shawn; Lopez, Alex; Torres, Angel


    A non-linear filter for image post-processing, based on a feedforward neural network topology, is presented. This study was undertaken to investigate the usefulness of "smart" filters in image post-processing. The filter has been shown to be useful in recovering high frequencies, such as those lost during the JPEG compression-decompression process. The filtered images have a higher signal-to-noise ratio and a higher perceived image quality. Simulation studies comparing the proposed filter with the optimum mean-square non-linear filter, examples of the high-frequency recovery, and the statistical properties of the filter are given.
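
    As a hedged sketch of the mechanism (not the authors' trained filter), a feedforward network can be applied as a sliding non-linear filter: each 3x3 patch feeds a small hidden layer that produces one output pixel. The weights below are untrained placeholders; in the study they would be fitted to restore frequencies lost to JPEG compression:

```python
import numpy as np

def nn_filter(img, w1, b1, w2, b2):
    """Apply a tiny feedforward net patch-wise: 9 inputs -> tanh hidden
    layer -> 1 output pixel. Output shrinks by the patch border."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3].ravel()
            hidden = np.tanh(w1 @ patch + b1)
            out[i, j] = hidden @ w2 + b2
    return out

# Placeholder (untrained) weights, purely for illustration.
rng = np.random.default_rng(0)
w1 = 0.1 * rng.standard_normal((8, 9)); b1 = np.zeros(8)
w2 = 0.1 * rng.standard_normal(8); b2 = 0.0
img = rng.standard_normal((10, 12))
out = nn_filter(img, w1, b1, w2, b2)
```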

  17. Quantitative image processing in fluid mechanics (United States)

    Hesselink, Lambertus; Helman, James; Ning, Paul


    The current status of digital image processing in fluid flow research is reviewed. In particular, attention is given to a comprehensive approach to the extraction of quantitative data from multivariate databases and examples of recent developments. The discussion covers numerical simulations and experiments, data processing, generation and dissemination of knowledge, traditional image processing, hybrid processing, fluid flow vector field topology, and isosurface analysis using Marching Cubes.

  18. Water surface capturing by image processing (United States)

    An alternative means of measuring the water surface interface during laboratory experiments is processing a series of sequentially captured images. Image processing can provide a continuous, non-intrusive record of the water surface profile whose accuracy is not dependent on water depth. More trad...

  19. Automatic processing, analysis, and recognition of images (United States)

    Abrukov, Victor S.; Smirnov, Evgeniy V.; Ivanov, Dmitriy G.


    New approaches and computer codes (A&CC) for the automatic processing, analysis and recognition of images are offered. The A&CC are based on representing an object image as a collection of pixels of various colours and on the consecutive automatic painting of the distinguishable parts of the image. The A&CC have technical objectives centred on directions such as: 1) image processing, 2) image feature extraction, 3) image analysis, and others, in any order and combination. The A&CC allow various geometrical and statistical parameters of an object image and its parts to be obtained. Further possibilities involve the use of artificial neural network technologies. We believe that the A&CC can be used in building systems for testing and control in various fields of industry and in military applications (airborne imaging systems, tracking of moving objects), in medical diagnostics, in new software for CCDs, in industrial vision, and in decision-making systems. The capabilities of the A&CC have been tested on image analysis of model fires, plumes of sprayed fluid, and ensembles of particles; on decoding of interferometric images; on digitization of paper diagrams of electrical signals; on text recognition; on image de-noising and filtering; on analysis of astronomical images and aerial photography; and on object detection.
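
    One plausible reading of the "consecutive automatic painting" of image parts is connected-component labelling; the sketch below (NumPy plus the standard library; all names are invented for illustration) paints each 4-connected foreground region with its own integer colour:

```python
import numpy as np
from collections import deque

def paint_components(binary):
    """Label each 4-connected region of foreground pixels with its own
    integer 'colour' via breadth-first flood fill."""
    labels = np.zeros(binary.shape, int)
    next_colour = 0
    for seed in zip(*np.nonzero(binary)):
        if labels[seed]:
            continue                      # already painted
        next_colour += 1
        labels[seed] = next_colour
        queue = deque([seed])
        while queue:
            y, x = queue.popleft()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < binary.shape[0] and 0 <= nx < binary.shape[1]
                        and binary[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = next_colour
                    queue.append((ny, nx))
    return labels

# Demo: two separate blobs receive two different colours.
img = np.zeros((5, 5), int)
img[0:2, 0:2] = 1
img[3:5, 3:5] = 1
labels = paint_components(img)
```

Once regions are painted, per-region geometric and statistical parameters (area, centroid, intensity statistics) fall out directly from the label map.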

  20. Image processing and communications challenges 5

    CERN Document Server


    This textbook collects a series of research papers in the area of Image Processing and Communications which not only summarize current technology but also give an outlook on potential future problems in this area. The key objective of the book is to provide a collection of comprehensive references on recent theoretical developments as well as novel applications in image processing and communications. The book is divided into two parts. Part I deals with image processing; a comprehensive survey of different methods of image processing and computer vision is also presented. Part II deals with telecommunications networks and computer networks, and applications in these areas are considered. In conclusion, the edited book comprises papers on diverse aspects of image processing and communications systems, covering theoretical aspects as well as applications.

  1. Digital radiography image quality: image processing and display. (United States)

    Krupinski, Elizabeth A; Williams, Mark B; Andriole, Katherine; Strauss, Keith J; Applegate, Kimberly; Wyatt, Margaret; Bjork, Sandra; Seibert, J Anthony


    This article on digital radiography image processing and display is the second of two articles written as part of an intersociety effort to establish image quality standards for digital and computed radiography. The topic of the other paper is digital radiography image acquisition. The articles were developed collaboratively by the ACR, the American Association of Physicists in Medicine, and the Society for Imaging Informatics in Medicine. Increasingly, medical imaging and patient information are being managed using digital data during acquisition, transmission, storage, display, interpretation, and consultation. The management of data during each of these operations may have an impact on the quality of patient care. These articles describe what is known to improve image quality for digital and computed radiography and to make recommendations on optimal acquisition, processing, and display. The practice of digital radiography is a rapidly evolving technology that will require timely revision of any guidelines and standards.

  2. Image processing for cameras with fiber bundle image relay. (United States)

    Olivas, Stephen J; Arianpour, Ashkan; Stamenov, Igor; Morrison, Rick; Stack, Ron A; Johnson, Adam R; Agurok, Ilya P; Ford, Joseph E


    Some high-performance imaging systems generate a curved focal surface and so are incompatible with focal plane arrays fabricated by conventional silicon processing. One example is a monocentric lens, which forms a wide field-of-view high-resolution spherical image with a radius equal to the focal length. Optical fiber bundles have been used to couple between this focal surface and planar image sensors. However, such fiber-coupled imaging systems suffer from artifacts due to image sampling and incoherent light transfer by the fiber bundle as well as resampling by the focal plane, resulting in a fixed obscuration pattern. Here, we describe digital image processing techniques to improve image quality in a compact 126° field-of-view, 30 megapixel panoramic imager, where a 12 mm focal length F/1.35 lens made of concentric glass surfaces forms a spherical image surface, which is fiber-coupled to six discrete CMOS focal planes. We characterize the locally space-variant system impulse response at various stages: monocentric lens image formation onto the 2.5 μm pitch fiber bundle, image transfer by the fiber bundle, and sensing by a 1.75 μm pitch backside illuminated color focal plane. We demonstrate methods to mitigate moiré artifacts and local obscuration, correct for sphere to plane mapping distortion and vignetting, and stitch together the image data from discrete sensors into a single panorama. We compare processed images from the prototype to those taken with a 10× larger commercial camera with comparable field-of-view and light collection.

  3. Cellular automata in image processing and geometry

    CERN Document Server

    Adamatzky, Andrew; Sun, Xianfang


    The book presents findings, views and ideas on what exact problems of image processing, pattern recognition and generation can be efficiently solved by cellular automata architectures. This volume provides a convenient collection in this area, in which publications are otherwise widely scattered throughout the literature. The topics covered include image compression and resizing; skeletonization, erosion and dilation; convex hull computation, edge detection and segmentation; forgery detection and content based retrieval; and pattern generation. The book advances the theory of image processing, pattern recognition and generation as well as the design of efficient algorithms and hardware for parallel image processing and analysis. It is aimed at computer scientists, software programmers, electronic engineers, mathematicians and physicists, and at everyone who studies or develops cellular automaton algorithms and tools for image processing and analysis, or develops novel architectures and implementations of mass...

  4. On some applications of diffusion processes for image processing

    Energy Technology Data Exchange (ETDEWEB)

    Morfu, S., E-mail: smorfu@u-bourgogne.f [Laboratoire d' Electronique, Informatique et Image (LE2i), UMR Cnrs 5158, Aile des Sciences de l' Ingenieur, BP 47870, 21078 Dijon Cedex (France)


    We propose a new algorithm inspired by the properties of diffusion processes for image filtering. We show that a purely nonlinear diffusion process governed by the Fisher equation allows contrast enhancement and noise filtering, but produces a blurry image. By contrast, anisotropic diffusion, described by the Perona-Malik algorithm, allows noise filtering while preserving edges. We show that combining the properties of anisotropic diffusion with those of nonlinear diffusion provides a better processing tool, enabling noise filtering, contrast enhancement and edge preservation.
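
    A minimal NumPy sketch of the Perona-Malik scheme referenced above (one standard diffusivity choice; parameters and demo data are illustrative, not the paper's):

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.1, dt=0.2):
    """Anisotropic diffusion: diffusivity g = exp(-(|grad|/kappa)^2)
    shrinks near strong gradients, so flat-area noise is smoothed
    while edges survive. Explicit scheme, stable for dt <= 0.25."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)
    for _ in range(n_iter):
        # one-sided differences to the four neighbours (replicated border)
        dn = np.roll(u, 1, 0) - u;  dn[0, :] = 0
        ds = np.roll(u, -1, 0) - u; ds[-1, :] = 0
        de = np.roll(u, -1, 1) - u; de[:, -1] = 0
        dw = np.roll(u, 1, 1) - u;  dw[:, 0] = 0
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

# Demo: a noisy step edge is denoised without blurring the step.
rng = np.random.default_rng(0)
step = np.zeros((32, 32)); step[:, 16:] = 1.0
noisy = step + 0.05 * rng.standard_normal(step.shape)
smoothed = perona_malik(noisy)
```

With this diffusivity, the unit-height edge has g near zero (no flux across it), while the small noise gradients keep g close to one and diffuse away, which is exactly the edge-preserving behaviour the abstract contrasts with isotropic Fisher-type diffusion.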

  5. ARTIP: Automated Radio Telescope Image Processing Pipeline (United States)

    Sharma, Ravi; Gyanchandani, Dolly; Kulkarni, Sarang; Gupta, Neeraj; Pathak, Vineet; Pande, Arti; Joshi, Unmesh


    The Automated Radio Telescope Image Processing Pipeline (ARTIP) automates the entire process of flagging, calibrating, and imaging for radio-interferometric data. ARTIP starts with raw data, i.e. a measurement set and goes through multiple stages, such as flux calibration, bandpass calibration, phase calibration, and imaging to generate continuum and spectral line images. Each stage can also be run independently. The pipeline provides continuous feedback to the user through various messages, charts and logs. It is written using standard python libraries and the CASA package. The pipeline can deal with datasets with multiple spectral windows and also multiple target sources which may have arbitrary combinations of flux/bandpass/phase calibrators.
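
    ARTIP's real stages wrap CASA tasks; the stage names and runner below are purely illustrative, sketching only the design idea of independently runnable pipeline stages with continuous feedback:

```python
def run_stages(data, stages, selected=None, log=print):
    """Run a list of (name, function) stages in order.

    Passing `selected` runs any subset independently, mirroring the
    stage-wise design described in the abstract; `log` stands in for
    the pipeline's messages/charts/logs feedback channel.
    """
    for name, fn in stages:
        if selected and name not in selected:
            continue
        log(f"running stage: {name}")
        data = fn(data)
    return data

# Hypothetical stages that just record their own execution.
stages = [
    ("flux_cal", lambda d: d + ["flux_cal"]),
    ("bandpass_cal", lambda d: d + ["bandpass_cal"]),
    ("imaging", lambda d: d + ["imaging"]),
]
full = run_stages([], stages, log=lambda m: None)
only_imaging = run_stages([], stages, selected={"imaging"}, log=lambda m: None)
```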

  6. Applications of Digital Image Processing 11 (United States)

    Cho, Y. -C.


    A new technique, digital image velocimetry, is proposed for the measurement of instantaneous velocity fields of time dependent flows. A time sequence of single-exposure images of seed particles are captured with a high-speed camera, and a finite number of the single-exposure images are sampled within a prescribed period in time. The sampled images are then digitized on an image processor, enhanced, and superimposed to construct an image which is equivalent to a multiple exposure image used in both laser speckle velocimetry and particle image velocimetry. The superimposed image and a single-exposure Image are digitally Fourier transformed for extraction of information on the velocity field. A great enhancement of the dynamic range of the velocity measurement is accomplished through the new technique by manipulating the Fourier transform of both the single-exposure image and the superimposed image. Also the direction of the velocity vector is unequivocally determined. With the use of a high-speed video camera, the whole process from image acquisition to velocity determination can be carried out electronically; thus this technique can be developed into a real-time capability.
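
    The Fourier-transform step at the heart of such velocimetry is a cross-correlation: the correlation peak between two frames sits at the particle displacement. A minimal NumPy sketch (integer-pixel shifts only; names and demo data are invented):

```python
import numpy as np

def displacement_fft(frame_a, frame_b):
    """Estimate the integer-pixel shift from frame_a to frame_b using
    the correlation theorem: corr = IFFT(conj(FFT(a)) * FFT(b))."""
    fa = np.fft.fft2(frame_a)
    fb = np.fft.fft2(frame_b)
    corr = np.fft.ifft2(np.conj(fa) * fb).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # indices above Nyquist wrap around to negative shifts
    return tuple(int(p - s) if p > s // 2 else int(p)
                 for p, s in zip(peak, corr.shape))

# Demo: a random particle field shifted by (3, -2) pixels.
frame_a = np.random.default_rng(1).standard_normal((32, 32))
shift = displacement_fft(frame_a, np.roll(frame_a, (3, -2), axis=(0, 1)))
```

Real PIV adds interrogation windows and sub-pixel peak fitting on top of this core operation, but the correlation-peak principle is the same.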

  7. Imaging process and VIP engagement

    Directory of Open Access Journals (Sweden)

    Starčević Slađana


    Full Text Available It is often noted that celebrity endorsement advertising has become "a ubiquitous feature of modern marketing". Research has shown that this kind of engagement produces significantly more favorable consumer reactions than engaging non-celebrity endorsers: a higher level of attention to advertising messages, better recall of the message and brand name, and more favorable evaluation of, and purchase intentions toward, the brand. A positive influence on a firm's profitability and stock prices has also been shown. Marketers, led by the belief that celebrities are effective ambassadors for building a positive brand or company image and improving competitive position, therefore invest enormous amounts of money in signing contracts with them. However, this strategy does not guarantee success in every case, because many factors must be taken into account. This paper summarizes the results of previous research in this field, along with recommendations for more effective use of this kind of advertising.

  8. Crack Length Detection by Digital Image Processing

    DEFF Research Database (Denmark)

    Lyngbye, Janus; Brincker, Rune


    It is described how digital image processing is used for measuring the length of fatigue cracks. The system is installed in a Personal Computer equipped with image processing hardware and performs automated measuring on plane metal specimens used in fatigue testing. Normally one cannot achieve...... a resolution better than that of the image processing equipment. To overcome this problem an extrapolation technique is used, resulting in a better resolution. The system was tested on a specimen loaded with different loads. The error σa was less than 0.031 mm, which is of the same size as human measuring...


  10. Algorithms for image processing and computer vision

    CERN Document Server

    Parker, J R


    A cookbook of algorithms for common image processing applications Thanks to advances in computer hardware and software, algorithms have been developed that support sophisticated image processing without requiring an extensive background in mathematics. This bestselling book has been fully updated with the newest of these, including 2D vision methods in content-based searches and the use of graphics cards as image processing computational aids. It's an ideal reference for software engineers and developers, advanced programmers, graphics programmers, scientists, and other specialists wh

  11. Lung Cancer Detection Using Image Processing Techniques

    Directory of Open Access Journals (Sweden)

    Mokhled S. AL-TARAWNEH


    Full Text Available Recently, image processing techniques have been widely used in several medical areas for image improvement in earlier detection and treatment stages, where the time factor is very important for discovering abnormalities in target images, especially for various cancer tumours such as lung cancer and breast cancer. Image quality and accuracy are the core factors of this research; image quality assessment and improvement depend on the enhancement stage, where a low-level pre-processing technique based on a Gabor filter within Gaussian rules is used. Following segmentation principles, an enhanced region of the object of interest is obtained and used as the basic foundation of feature extraction. Relying on general features, a normality comparison is made. In this research, the main features detected for accurate image comparison are pixel percentage and mask labelling.
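
    As a hedged illustration of the enhancement stage, a real Gabor kernel is a Gaussian envelope multiplied by an oriented sinusoid; the parameterization below is a generic textbook form, not the paper's specific filter:

```python
import numpy as np

def gabor_kernel(size=15, sigma=3.0, theta=0.0, wavelength=6.0):
    """Real-valued Gabor kernel: Gaussian envelope times an oriented
    cosine wave, convolved with an image to enhance oriented texture."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # coordinate along theta
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)

k = gabor_kernel()
```

A bank of such kernels at several orientations (varying `theta`) is the usual way to enhance lesion boundaries before segmentation.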

  12. Signal and image processing in medical applications

    CERN Document Server

    Kumar, Amit; Rahim, B Abdul; Kumar, D Sravan


    This book highlights recent findings on and analyses conducted on signals and images in the area of medicine. The experimental investigations involve a variety of signals and images and their methodologies range from very basic to sophisticated methods. The book explains how signal and image processing methods can be used to detect and forecast abnormalities in an easy-to-follow manner, offering a valuable resource for researchers, engineers, physicians and bioinformatics researchers alike.

  13. The NPS Virtual Thermal Image processing model


    Kenter, Yucel.


    A new virtual thermal image-processing model that has been developed at the Naval Postgraduate School is introduced in this thesis. This visualization program is based on an earlier work, the Visibility MRTD model, which is focused on predicting the minimum resolvable temperature difference (MRTD). The MRTD is a standard performance measure for forward-looking infrared (FLIR) imaging systems. It takes into account thermal imaging system modeling concerns, such as modulation transfer functions...

  14. Growth of Azospirillum irakense KBC1 on the Aryl β-Glucoside Salicin Requires either salA or salB (United States)

    Faure, Denis; Desair, Jos; Keijers, Veerle; Bekri, My Ali; Proost, Paul; Henrissat, Bernard; Vanderleyden, Jos


    The rhizosphere nitrogen-fixing bacterium Azospirillum irakense KBC1 is able to grow on pectin and β-glucosides such as cellobiose, arbutin, and salicin. Two adjacent genes, salA and salB, conferring β-glucosidase activity to Escherichia coli, have been identified in a cosmid library of A. irakense DNA. The SalA and SalB enzymes preferentially hydrolyzed aryl β-glucosides. A Δ(salA-salB) A. irakense mutant was not able to grow on salicin but could still utilize arbutin, cellobiose, and glucose for growth. This mutant could be complemented by either salA or salB, suggesting functional redundancy of these genes in salicin utilization. In contrast to this functional homology, the SalA and SalB proteins, members of family 3 of the glycosyl hydrolases, show a low degree of amino acid similarity. Unlike SalA, the SalB protein exhibits an atypical truncated C-terminal region. We propose that SalA and SalB are representatives of the AB and AB′ subfamilies, respectively, in glycosyl hydrolase family 3. This is the first genetic implication of this β-glucosidase family in the utilization of β-glucosides for microbial growth. PMID:10321999

  15. Digital Image Processing in Private Industry. (United States)

    Moore, Connie


    Examines various types of private industry optical disk installations in terms of business requirements for digital image systems in five areas: records management; transaction processing; engineering/manufacturing; information distribution; and office automation. Approaches for implementing image systems are addressed as well as key success…

  16. Mapping spatial patterns with morphological image processing (United States)

    Peter Vogt; Kurt H. Riitters; Christine Estreguil; Jacek Kozak; Timothy G. Wade; James D. Wickham


    We use morphological image processing for classifying spatial patterns at the pixel level on binary land-cover maps. Land-cover pattern is classified as 'perforated,' 'edge,' 'patch,' and 'core' with higher spatial precision and thematic accuracy compared to a previous approach based on image convolution, while retaining the...
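
    A toy sketch of the pixel-level idea (the published method distinguishes more classes such as 'perforated' and 'patch'; here only 'core' and 'edge' are shown, with a hand-rolled NumPy erosion):

```python
import numpy as np

def erode(binary):
    """3x3 binary erosion with zero padding: a pixel survives only if
    its full 3x3 neighbourhood is foreground."""
    p = np.pad(binary, 1)
    h, w = binary.shape
    out = np.ones((h, w), bool)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w].astype(bool)
    return out

def classify(binary):
    """Simplified morphological pattern map: 'core' = interior
    foreground (survives erosion), 'edge' = foreground touching
    background, everything else 'background'."""
    core = erode(binary)
    return np.where(core, "core",
                    np.where(binary.astype(bool), "edge", "background"))

# Demo: a 5x5 forest block has a 3x3 core surrounded by an edge ring.
land = np.zeros((7, 7), int)
land[1:6, 1:6] = 1
labels = classify(land)
```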

  17. Selections from 2017: Image Processing with AstroImageJ (United States)

    Kohler, Susanna


    Editor's note: In these last two weeks of 2017, we'll be looking at a few selections that we haven't yet discussed on AAS Nova from among the most-downloaded papers published in AAS journals this year. The usual posting schedule will resume in January. AstroImageJ: Image Processing and Photometric Extraction for Ultra-Precise Astronomical Light Curves. Published January 2017. The AIJ image display: a wide range of astronomy-specific image display options and image analysis tools are available from the menus, quick-access icons, and interactive histogram. [Collins et al. 2017] Main takeaway: AstroImageJ is a new integrated software package presented in a publication led by Karen Collins (Vanderbilt University, Fisk University, and University of Louisville). It enables new users, even at the level of undergraduate student, high school student, or amateur astronomer, to quickly start processing, modeling, and plotting astronomical image data. Why it's interesting: Science doesn't just happen the moment a telescope captures a picture of a distant object. Instead, astronomical images must first be carefully processed to clean up the data, and this data must then be systematically analyzed to learn about the objects within it. AstroImageJ, as a GUI-driven, easily installed, public-domain tool, is uniquely accessible for this processing and analysis, allowing even non-specialist users to explore and visualize astronomical data. Some features of AstroImageJ (as reported by Astrobites): image calibration (generate master flat, dark, and bias frames); image arithmetic (combine images via subtraction, addition, division, multiplication, etc.); stack editing (easily perform operations on a series of images); image stabilization and image alignment features; precise coordinate converters (calculate Heliocentric and Barycentric Julian Dates); WCS coordinates (determine precisely where a telescope was pointed for an image by plate-solving using Astrometry.net); macro and plugin support (write your own macros); multi-aperture photometry

  18. Checking Fits With Digital Image Processing (United States)

    Davis, R. M.; Geaslen, W. D.


    Computer-aided video inspection of mechanical and electrical connectors feasible. Report discusses work done on digital image processing for computer-aided interface verification (CAIV). Two kinds of components examined: mechanical mating flange and electrical plug.

  19. Imaging partons in exclusive scattering processes

    Energy Technology Data Exchange (ETDEWEB)

    Diehl, Markus


    The spatial distribution of partons in the proton can be probed in suitable exclusive scattering processes. I report on recent performance estimates for parton imaging at a proposed Electron-Ion Collider.

  20. Recent developments in digital image processing at the Image Processing Laboratory of JPL. (United States)

    O'Handley, D. A.


    Review of some of the computer-aided digital image processing techniques recently developed. Special attention is given to mapping and mosaicking techniques and to preliminary developments in range determination from stereo image pairs. The discussed image processing utilization areas include space, biomedical, and robotic applications.

  1. Study on Processing Method of Image Shadow

    Directory of Open Access Journals (Sweden)

    Wang Bo


    Full Text Available In order to effectively remove the disturbance of shadows and enhance the robustness of computer-vision image processing, this paper studies the detection and removal of image shadows. It examines continuous shadow-removal algorithms based on integration, on the illumination surface, and on texture; introduces their working principles and implementation methods; and shows through tests that shadows can be processed effectively.

  2. Image quality dependence on image processing software in ...

    African Journals Online (AJOL)

    Background. Image post-processing gives computed radiography (CR) a considerable advantage over film-screen systems. After digitisation of information from CR plates, data are routinely processed using manufacturer-specific software. Agfa CR readers use MUSICA software, and an upgrade with significantly different ...

  3. Contaminants survey of La Sal Vieja, Willacy County, Texas, 1989 (United States)

    US Fish and Wildlife Service, Department of the Interior — Organochlorine, trace element, and petroleum hydrocarbon contaminants were examined in sediments from two hypersaline lakes comprising the La Sal Vieja complex in...

  4. Early Skin Tumor Detection from Microscopic Images through Image Processing

    Directory of Open Access Journals (Sweden)



    Full Text Available This research was done to provide an appropriate detection technique for skin tumors. The work was done using the image processing toolbox of MATLAB. Skin tumors are unwanted skin growths with different causes and varying extents of malignant cells; in this syndrome, skin cells lose the ability to divide and grow normally. Early detection of a tumor is the most important factor affecting a patient's survival. Studying the pattern of skin cells is a fundamental problem in medical image analysis, and the study of skin tumors has been of great interest to researchers. DIP (Digital Image Processing) allows the use of much more complex algorithms for image processing, and hence can offer both more sophisticated performance at simple tasks and the implementation of methods that would be impossible by analog means. It allows a much wider range of algorithms to be applied to the input data and can avoid problems such as the build-up of noise and signal distortion during processing. The literature shows that little work has been done on the cellular scale for images of skin. After testing and observing various algorithms, this research establishes several checks for the early detection of skin tumors using microscopic images. Analytical evaluation shows that the proposed checks are time-efficient and appropriate for tumor detection; the algorithm applied provides promising results with accuracy in less time. The GUI (Graphical User Interface) generated for the algorithm makes the system user-friendly.

  5. Challenges in 3DTV image processing (United States)

    Redert, André; Berretty, Robert-Paul; Varekamp, Chris; van Geest, Bart; Bruijns, Jan; Braspenning, Ralph; Wei, Qingqing


    Philips provides autostereoscopic three-dimensional display systems that will bring the next leap in visual experience, adding true depth to video systems. We identified three challenges specifically for 3D image processing: 1) bandwidth and complexity of 3D images, 2) conversion of 2D to 3D content, and 3) object-based image/depth processing. We discuss these challenges and our solutions via several examples. In conclusion, the solutions have enabled the market introduction of several professional 3D products, and progress is made rapidly towards consumer 3DTV.

  6. Rotation Covariant Image Processing for Biomedical Applications

    Directory of Open Access Journals (Sweden)

    Henrik Skibbe


    Full Text Available With the advent of novel biomedical 3D image acquisition techniques, the efficient and reliable analysis of volumetric images has become more and more important. The amount of data is enormous and demands automated processing. The applications are manifold, ranging from image enhancement, image reconstruction, and image description to object/feature detection and high-level contextual feature extraction. In most scenarios, it is expected that geometric transformations alter the output in a mathematically well-defined manner. In this paper we focus on 3D translations and rotations. Many algorithms rely on intensity or low-order tensorial-like descriptions to fulfill this demand. This paper proposes a general mathematical framework based on mathematical concepts and theories transferred from mathematical physics and harmonic analysis into the domain of image analysis and pattern recognition. Based on two basic operations, spherical tensor differentiation and spherical tensor multiplication, we show how to design a variety of 3D image processing methods in an efficient way. The framework has already been applied to several biomedical applications ranging from feature and object detection tasks to image enhancement and image restoration techniques. In this paper, the proposed methods are applied to a variety of different 3D data modalities stemming from the medical and biological sciences.

  7. The Dark Energy Survey Image Processing Pipeline

    Energy Technology Data Exchange (ETDEWEB)

    Morganson, E.; et al.


    The Dark Energy Survey (DES) is a five-year optical imaging campaign with the goal of understanding the origin of cosmic acceleration. DES performs a 5000 square degree survey of the southern sky in five optical bands (g,r,i,z,Y) to a depth of ~24th magnitude. Contemporaneously, DES performs a deep, time-domain survey in four optical bands (g,r,i,z) over 27 square degrees. DES exposures are processed nightly with an evolving data reduction pipeline and evaluated for image quality to determine if they need to be retaken. Difference imaging and transient source detection are also performed in the time domain component nightly. On a bi-annual basis, DES exposures are reprocessed with a refined pipeline and coadded to maximize imaging depth. Here we describe the DES image processing pipeline in support of DES science, as a reference for users of archival DES data, and as a guide for future astronomical surveys.

  8. Corner-point criterion for assessing nonlinear image processing imagers (United States)

    Landeau, Stéphane; Pigois, Laurent; Foing, Jean-Paul; Deshors, Gilles; Swiathy, Greggory


    Range performance modeling of optronics imagers attempts to characterize the ability to resolve details in the image. Today, digital image processing is systematically used in conjunction with the optoelectronic system to correct its defects or to exploit tiny detection signals to increase performance. In order to characterize this processing, which has adaptive and non-linear properties, it becomes necessary to stimulate the imagers with test patterns whose properties are similar to those of actual scene images, in terms of dynamic range, contours, texture and singular points. This paper presents an approach based on a Corner-Point (CP) resolution criterion, derived from the Probability of Correct Resolution (PCR) of binary fractal patterns. The fundamental principle lies in the correct perception of the CP direction of a one-pixel minority value among the majority value of a 2×2 pixel block. The evaluation procedure considers the actual image as its multi-resolution CP transformation, taking the role of Ground Truth (GT). After a spatial registration between the degraded image and the original one, the degradation is statistically measured by comparing the GT with the CP transformation of the degraded image, in terms of localized PCR at the region of interest. The paper defines this CP criterion and presents the developed evaluation techniques, such as the measurement of the number of CPs resolved on the target, and the CP transformation and its inverse, which make it possible to reconstruct an image of the perceived CPs. Then, this criterion is compared with the standard Johnson criterion, in the case of a linear blur and noise degradation. The evaluation of an imaging system integrating an image display and visual perception is considered, by proposing an analysis scheme combining two methods: a CP measurement for the highly non-linear part (imaging) with a real-signature test target, and conventional methods for the more linear part (displaying). The application to
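The perception step at the heart of the CP criterion — identifying the corner direction of a one-pixel minority value within a 2×2 block — can be sketched as follows. The paper does not give its exact decision logic, so the function name and the handling of non-CP blocks are assumptions made only to illustrate the idea:

```python
def corner_point_direction(block):
    """Return the corner direction of the minority pixel in a binary
    2x2 block, or None if the block has no single-pixel minority
    (uniform blocks and 2-2 splits carry no corner point)."""
    flat = [block[0][0], block[0][1], block[1][0], block[1][1]]
    ones = sum(flat)
    if ones == 1:          # single foreground pixel is the minority
        idx = flat.index(1)
    elif ones == 3:        # single background pixel is the minority
        idx = flat.index(0)
    else:
        return None
    return ["NW", "NE", "SW", "SE"][idx]
```

A multi-resolution CP transformation would apply this rule to every 2×2 block at each scale of an image pyramid.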

  9. Brain's tumor image processing using shearlet transform (United States)

    Cadena, Luis; Espinosa, Nikolai; Cadena, Franklin; Korneeva, Anna; Kruglyakov, Alexey; Legalov, Alexander; Romanenko, Alexey; Zotin, Alexander


    Brain tumor detection is a well-known research area for medical and computer scientists. In recent decades much research has been done on tumor detection, segmentation, and classification. Medical imaging plays a central role in the diagnosis of brain tumors and nowadays relies on non-invasive, high-resolution techniques, especially magnetic resonance imaging and computed tomography scans. Edge detection is a fundamental tool in image processing, particularly in the areas of feature detection and feature extraction, which aim at identifying points in a digital image at which the image has discontinuities. Shearlets are among the most successful frameworks for the efficient representation of multidimensional data, capturing edges and other anisotropic features which frequently dominate multidimensional phenomena. The paper proposes an improved brain tumor detection method that automatically detects the tumor location in MR images; features are extracted using the new shearlet transform.
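The shearlet transform itself is too involved for a short sketch, but the classical edge-detection step it improves upon can be illustrated with a baseline Sobel gradient-magnitude map (pure Python on nested lists, for illustration only):

```python
def sobel_magnitude(img):
    """Gradient-magnitude edge map using the 3x3 Sobel operators.
    Border pixels are left at 0 for simplicity."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Horizontal derivative (responds to vertical edges).
            gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
            # Vertical derivative (responds to horizontal edges).
            gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out
```

Shearlets extend this idea by responding not just to local gradients but to anisotropic, directional features across scales.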

  10. Fundamental Concepts of Digital Image Processing (United States)

    Twogood, R. E.


    The field of digital image processing has experienced dramatic growth and increasingly widespread applicability in recent years. Fortunately, advances in computer technology have kept pace with the rapid growth in the volume of image data in these and other applications. Digital image processing has become economical in many fields of research and in industrial and military applications. While each application has requirements unique from the others, all are concerned with faster, cheaper, more accurate, and more extensive computation. The trend is toward real-time and interactive operations, where the user of the system obtains preliminary results within a short enough time that the next decision can be made by the human processor without loss of concentration on the task at hand. An example of this is the obtaining of two-dimensional (2-D) computer-aided tomography (CAT) images. A medical decision might be made while the patient is still under observation rather than days later.

  11. Fundamental concepts of digital image processing

    Energy Technology Data Exchange (ETDEWEB)

    Twogood, R.E.


    The field of digital image processing has experienced dramatic growth and increasingly widespread applicability in recent years. Fortunately, advances in computer technology have kept pace with the rapid growth in the volume of image data in these and other applications. Digital image processing has become economical in many fields of research and in industrial and military applications. While each application has requirements unique from the others, all are concerned with faster, cheaper, more accurate, and more extensive computation. The trend is toward real-time and interactive operations, where the user of the system obtains preliminary results within a short enough time that the next decision can be made by the human processor without loss of concentration on the task at hand. An example of this is the obtaining of two-dimensional (2-D) computer-aided tomography (CAT) images. A medical decision might be made while the patient is still under observation rather than days later.

  12. Traffic analysis and control using image processing (United States)

    Senthilkumar, K.; Ellappan, Vijayan; Arun, A. R.


    This paper reviews work on traffic analysis and control to date. It presents an approach to regulating traffic through the use of image processing and MATLAB. The concept compares computed images with reference images of the street in order to determine the traffic level percentage and to set the traffic signal timing accordingly, reducing stoppage at traffic lights. The concept proposes to solve real-life street scenarios by enriching traffic lights with image receivers such as HD cameras and image processors. The input is then imported into MATLAB and used as a method for calculating the traffic on the roads. The results are computed in order to adjust the traffic light timings on a particular street, also with respect to other similar proposals, but with the added value of solving a real, large-scale instance.
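The comparison step can be sketched as follows; pure Python stands in for the MATLAB implementation, and both the difference threshold and the timing rule are hypothetical, chosen only to illustrate mapping a traffic-level percentage to a green-light duration:

```python
def traffic_level(reference, current, diff_threshold=30):
    """Estimate traffic density as the percentage of pixels that differ
    from an empty-road reference image (grayscale, nested lists)."""
    total = changed = 0
    for ref_row, cur_row in zip(reference, current):
        for r, c in zip(ref_row, cur_row):
            total += 1
            if abs(r - c) > diff_threshold:
                changed += 1
    return 100.0 * changed / total

def green_duration(level, base=10, per_percent=0.5):
    """Map the traffic level to a green-light time in seconds
    (hypothetical linear rule, not the paper's)."""
    return base + per_percent * level
```

In a deployed system the reference image would have to be refreshed regularly to track lighting and weather changes.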

  13. Digital-image processing and image analysis of glacier ice (United States)

    Fitzpatrick, Joan J.


    This document provides a methodology for extracting grain statistics from 8-bit color and grayscale images of thin sections of glacier ice—a subset of physical properties measurements typically performed on ice cores. This type of analysis is most commonly used to characterize the evolution of ice-crystal size, shape, and intercrystalline spatial relations within a large body of ice sampled by deep ice-coring projects from which paleoclimate records will be developed. However, such information is equally useful for investigating the stress state and physical responses of ice to stresses within a glacier. The methods of analysis presented here go hand-in-hand with the analysis of ice fabrics (aggregate crystal orientations) and, when combined with fabric analysis, provide a powerful method for investigating the dynamic recrystallization and deformation behaviors of bodies of ice in motion. The procedures described in this document compose a step-by-step handbook for a specific image acquisition and data reduction system built in support of U.S. Geological Survey ice analysis projects, but the general methodology can be used with any combination of image processing and analysis software. The specific approaches in this document use the FoveaPro 4 plug-in toolset to Adobe Photoshop CS5 Extended but it can be carried out equally well, though somewhat less conveniently, with software such as the image processing toolbox in MATLAB, Image-Pro Plus, or ImageJ.

  14. Employing image processing techniques for cancer detection using microarray images. (United States)

    Dehghan Khalilabad, Nastaran; Hassanpour, Hamid


    Microarray technology is a powerful genomic tool for simultaneously studying and analyzing the behavior of thousands of genes. The analysis of images obtained from this technology plays a critical role in the detection and treatment of diseases. The aim of the current study is to develop an automated system for analyzing data from microarray images in order to detect cancerous cases. The proposed system consists of three main phases, namely image processing, data mining, and detection of the disease. The image processing phase performs operations such as refining image rotation, gridding (locating genes), and extracting raw data from the images; the data mining phase includes normalizing the extracted data and selecting the more effective genes. Finally, cancerous cells are recognized from the extracted data. To evaluate the performance of the proposed system, a microarray database is employed which includes breast cancer, myeloid leukemia, and lymphoma cases from the Stanford Microarray Database. The results indicate that the proposed system is able to identify the type of cancer from the data set with an accuracy of 95.45%, 94.11%, and 100%, respectively. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. A brief review of digital image processing (United States)

    Billingsley, F. C.


    The review is presented with particular reference to Skylab S-192 and Landsat MSS imagery. Attention is given to rectification (calibration) processing with emphasis on geometric correction of image distortions. Image enhancement techniques (e.g., the use of high pass digital filters to eliminate gross shading to allow emphasis of the fine detail) are described along with data analysis and system considerations (software philosophy).
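The high-pass enhancement described above is classically realized as an unsharp mask: add back the residual between the image and a local mean so that fine detail is emphasized. A minimal sketch (the 3×3 mean blur and the gain are illustrative choices):

```python
def mean3x3(img):
    """3x3 mean blur with edge clamping (nested-list grayscale image)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[ny][nx]
                    for ny in range(max(0, y - 1), min(h, y + 2))
                    for nx in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(vals) / len(vals)
    return out

def unsharp_mask(img, amount=1.0):
    """Emphasize fine detail by adding back the high-pass residual
    (original minus local mean), scaled by `amount`."""
    blurred = mean3x3(img)
    return [[img[y][x] + amount * (img[y][x] - blurred[y][x])
             for x in range(len(img[0]))] for y in range(len(img))]
```

Subtracting the blur removes gross shading; adding the residual back with gain is what "emphasizes the fine detail" in the review's terms.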

  16. PCB Fault Detection Using Image Processing (United States)

    Nayak, Jithendra P. R.; Anitha, K.; Parameshachari, B. D., Dr.; Banu, Reshma, Dr.; Rashmi, P.


    The importance of the printed circuit board (PCB) inspection process has been magnified by the requirements of the modern manufacturing environment, where delivery of 100% defect-free PCBs is the expectation. To meet such expectations, identifying the various defects and their types becomes the first step. In this PCB inspection system, the inspection algorithm mainly focuses on defect detection using natural images. Many practical issues, such as tilt of the images, bad lighting conditions, and the height at which images are taken, must be considered to ensure image quality good enough for defect detection. PCB fabrication is a multidisciplinary process, and etching is the most critical part of the PCB manufacturing process. The main objective of the etching process is to remove the exposed unwanted copper other than the required circuit pattern. In order to minimize scrap caused by wrongly etched PCB panels, inspection has to be done at an early stage. However, all of the inspections are done after the etching process, where any defective PCB found is no longer useful and is simply thrown away. Since the etching process accounts for a substantial share of the cost of the entire PCB fabrication, it is uneconomical to simply discard the defective PCBs; the defects should therefore be identified before the etching process so that the PCB can be reprocessed. In this paper, a method to identify defects in natural PCB images, and the associated practical issues, are addressed using software tools. Some of the major types of single-layer PCB defects are pattern cut, pin hole, pattern short, and nick. The present approach is expected to improve the efficiency of the system in detecting defects even in low-quality images.
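A common golden-template strategy for PCB defect detection — not necessarily the exact method of this paper — XORs a binarized test image against a defect-free reference, so that any surviving pixels mark missing or extra copper:

```python
def pcb_defect_map(reference, test):
    """XOR a binarized test PCB image against a defect-free golden
    reference; nonzero pixels mark missing or extra copper
    (pin holes, pattern cuts, shorts, nicks, ...)."""
    return [[r ^ t for r, t in zip(rrow, trow)]
            for rrow, trow in zip(reference, test)]

def has_defect(reference, test, min_pixels=1):
    """Flag a board when the defect map contains at least
    min_pixels differing pixels (threshold is an assumption)."""
    diff = pcb_defect_map(reference, test)
    return sum(sum(row) for row in diff) >= min_pixels
```

In practice the test image must first be registered (de-tilted and aligned) to the reference, which is exactly the set of practical issues the abstract raises.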

  17. Sal-Site: Integrating new and existing ambystomatid salamander research and informational resources

    Directory of Open Access Journals (Sweden)

    Weisrock David W


    Full Text Available Abstract Salamanders of the genus Ambystoma are a unique model organism system because they enable natural history and biomedical research in the laboratory or field. We developed Sal-Site to integrate new and existing ambystomatid salamander research resources in support of this model system. Sal-Site hosts six important resources: (1) Salamander Genome Project: an information-based web-site describing progress in genome resource development; (2) Ambystoma EST Database: a database of manually edited and analyzed contigs assembled from ESTs that were collected from A. tigrinum tigrinum and A. mexicanum; (3) Ambystoma Gene Collection: a database containing full-length protein-coding sequences; (4) Ambystoma Map and Marker Collection: an image and database resource that shows the location of mapped markers on linkage groups, provides information about markers, and provides integrating links to the Ambystoma EST Database and Ambystoma Gene Collection databases; (5) Ambystoma Genetic Stock Center: a website and collection of databases that describe an NSF-funded salamander rearing facility that generates and distributes biological materials to researchers and educators throughout the world; and (6) Ambystoma Research Coordination Network: a web-site detailing current research projects and activities involving an international group of researchers. Sal-Site is accessible at

  18. Mathematical foundations of image processing and analysis

    CERN Document Server

    Pinoli, Jean-Charles


    Mathematical Imaging is currently a rapidly growing field in applied mathematics, with an increasing need for theoretical mathematics. This book, the second of two volumes, emphasizes the role of mathematics as a rigorous basis for imaging sciences. It provides a comprehensive and convenient overview of the key mathematical concepts, notions, tools and frameworks involved in the various fields of gray-tone and binary image processing and analysis, by proposing a large, but coherent, set of symbols and notations, a complete list of subjects and a detailed bibliography. It establishes a bridg

  19. Iterative elimination algorithm for thermal image processing

    Directory of Open Access Journals (Sweden)

    A. H. Alkali


    Full Text Available Segmentation is employed in everyday image processing in order to remove unwanted objects present in the image. There are scenarios where segmentation alone does not do the intended job automatically. In such cases, subjective means are required to eliminate the remnants, which is time consuming, especially when multiple images are involved, and is not feasible for real-time applications. The problem is compounded in thermal imaging, where foreground and background objects can have similar thermal distributions, making it impossible for straight segmentation to distinguish between the two. In this study, a real-time Iterative Elimination Algorithm (IEA) was developed, and it was shown that false foreground was removed in thermal images where segmentation failed to do so. The algorithm was tested on thermal images that were segmented using inter-class variance thresholding. The thermal images contained human subjects as foreground, with some background objects having a thermal distribution similar to the subject's. Informed consent was obtained from the subject who voluntarily took part in the study. The IEA was only tested on thermal images and failed when a false background object was connected to the foreground after segmentation.
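The inter-variance thresholding used for the initial segmentation (commonly realized as Otsu's method) picks the threshold that maximizes the between-class variance of an 8-bit histogram; a minimal sketch:

```python
def otsu_threshold(pixels):
    """Choose the 8-bit threshold t maximizing between-class variance:
    w0 * w1 * (mu0 - mu1)^2, where class 0 holds values <= t."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * hist[i] for i in range(256))
    best_t, best_var = 0, -1.0
    w0 = sum0 = 0
    for t in range(256):
        w0 += hist[t]                 # pixels in class 0
        if w0 == 0:
            continue
        w1 = total - w0               # pixels in class 1
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0 = sum0 / w0
        mu1 = (total_sum - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

The abstract's point is precisely that this global criterion fails when background objects share the foreground's thermal distribution, which is what the IEA post-processing addresses.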

  20. Support Routines for In Situ Image Processing (United States)

    Deen, Robert G.; Pariser, Oleg; Yeates, Matthew C.; Lee, Hyun H.; Lorre, Jean


    This software consists of a set of application programs that support ground-based image processing for in situ missions. These programs represent a collection of utility routines that perform miscellaneous functions in the context of the ground data system. Each one fulfills some specific need as determined via operational experience. The most unique aspect of these programs is that they are integrated into the large, in situ image processing system via the PIG (Planetary Image Geometry) library. They work directly with in situ spacecraft data, understanding the appropriate image meta-data fields and updating them properly. The programs themselves are completely multimission; all mission dependencies are handled by PIG. This suite of programs consists of: (1) marscahv: Generates a linearized, epipolar-aligned image given a stereo pair of images. These images are optimized for 1-D stereo correlations. (2) marscheckcm: Compares the camera model in an image label with one derived via kinematics modeling on the ground. (3) marschkovl: Checks the overlaps between a list of images in order to determine which might be stereo pairs. This is useful for non-traditional stereo images like long-baseline or those from an articulating arm camera. (4) marscoordtrans: Translates mosaic coordinates from one form into another. (5) marsdispcompare: Checks a Left-to-Right stereo disparity image against a Right-to-Left disparity image to ensure they are consistent with each other. (6) marsdispwarp: Takes one image of a stereo pair and warps it through a disparity map to create a synthetic opposite-eye image. For example, a right-eye image could be transformed to look like it was taken from the left eye via this program. (7) marsfidfinder: Finds fiducial markers in an image by projecting their approximate location and then using correlation to locate the markers to subpixel accuracy. These fiducial markers are small targets attached to the spacecraft surface. This helps verify, or improve, the

  1. A representation for mammographic image processing. (United States)

    Highnam, R; Brady, M; Shepstone, B


    Mammographic image analysis is typically performed using standard, general-purpose algorithms. We note the dangers of this approach and show that an alternative physics-model-based approach can be developed to calibrate the mammographic imaging process. This enables us to obtain, at each pixel, a quantitative measure of the breast tissue. The measure we use is h(int) and this represents the thickness of 'interesting' (non-fat) tissue between the pixel and the X-ray source. The thicknesses over the image constitute what we term the h(int) representation, and it can most usefully be regarded as a surface that conveys information about the anatomy of the breast. The representation allows image enhancement through removing the effects of degrading factors, and also effective image normalization since all changes in the image due to variations in the imaging conditions have been removed. Furthermore, the h(int) representation gives us a basis upon which to build object models and to reason about breast anatomy. We use this ability to choose features that are robust to breast compression and variations in breast composition. In this paper we describe the h(int) representation, show how it can be computed, and then illustrate how it can be applied to a variety of mammographic image processing tasks. The breast thickness turns out to be a key parameter in the computation of h(int), but it is not normally recorded. We show how the breast thickness can be estimated from an image, and examine the sensitivity of h(int) to this estimate. We then show how we can simulate any projective X-ray examination and can simulate the appearance of anatomical structures within the breast. We follow this with a comparison between the h(int) representation and conventional representations with respect to invariance to imaging conditions and the surrounding tissue. 
Initial results indicate that image analysis is far more robust when specific consideration is taken of the imaging process and

  2. Dictionary of computer vision and image processing

    CERN Document Server

    Fisher, Robert B; Dawson-Howe, Kenneth; Fitzgibbon, Andrew; Robertson, Craig; Trucco, Emanuele; Williams, Christopher K I


    Written by leading researchers, the 2nd Edition of the Dictionary of Computer Vision & Image Processing is a comprehensive and reliable resource which now provides explanations of over 3500 of the most commonly used terms across image processing, computer vision and related fields including machine vision. It offers clear and concise definitions with short examples or mathematical precision where necessary for clarity that ultimately makes it a very usable reference for new entrants to these fields at senior undergraduate and graduate level, through to early career researchers to help build u

  3. Practical image and video processing using MATLAB

    CERN Document Server

    Marques, Oge


    "The book provides a practical introduction to the most important topics in image and video processing using MATLAB (and its Image Processing Toolbox) as a tool to demonstrate the most important techniques and algorithms. The contents are presented in a clear, technically accurate, objective way, with just enough mathematical detail. Most of the chapters are supported by figures, examples, illustrative problems, MATLAB scripts, suggestions for further reading, bibliographical references, useful Web sites, and exercises and computer projects to extend the understanding of their contents"--

  4. Processing Images of Craters for Spacecraft Navigation (United States)

    Cheng, Yang; Johnson, Andrew E.; Matthies, Larry H.


    A crater-detection algorithm has been conceived to enable automation of what, heretofore, have been manual processes for utilizing images of craters on a celestial body as landmarks for navigating a spacecraft flying near or landing on that body. The images are acquired by an electronic camera aboard the spacecraft, then digitized, then processed by the algorithm, which consists mainly of the following steps: 1. Edges in an image are detected and placed in a database. 2. Crater rim edges are selected from the edge database. 3. Edges that belong to the same crater are grouped together. 4. An ellipse is fitted to each group of crater edges. 5. Ellipses are refined directly in the image domain to reduce errors introduced in the detection of edges and fitting of ellipses. 6. The quality of each detected crater is evaluated. It is planned to utilize this algorithm as the basis of a computer program for automated, real-time, onboard processing of crater-image data. Experimental studies have led to the conclusion that this algorithm is capable of a detection rate >93 percent, a false-alarm rate <5 percent, a geometric error <0.5 pixel, and a position error <0.3 pixel.
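Step 4 fits an ellipse to each group of rim edges. A general ellipse fit needs an eigenvalue solver; as a simpler illustration of the same least-squares idea, the sketch below fits a circle (the special case of an ellipse) to rim points via the linear Kasa formulation — an assumption made for brevity, not the algorithm's actual fit:

```python
def solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    n = 3
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_circle(points):
    """Kasa least-squares circle fit: model x^2 + y^2 = a*x + b*y + c,
    then center = (a/2, b/2) and radius = sqrt(c + a^2/4 + b^2/4)."""
    n = float(len(points))
    Sx = sum(x for x, y in points); Sy = sum(y for x, y in points)
    Sxx = sum(x * x for x, y in points); Syy = sum(y * y for x, y in points)
    Sxy = sum(x * y for x, y in points)
    z = [x * x + y * y for x, y in points]
    Sxz = sum(x * zi for (x, y), zi in zip(points, z))
    Syz = sum(y * zi for (x, y), zi in zip(points, z))
    a, b, c = solve3([[Sxx, Sxy, Sx], [Sxy, Syy, Sy], [Sx, Sy, n]],
                     [Sxz, Syz, sum(z)])
    cx, cy = a / 2.0, b / 2.0
    return cx, cy, (c + cx * cx + cy * cy) ** 0.5
```

The step-5 refinement would then adjust the fitted parameters directly against image intensities to undo edge-localization bias.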

  5. Hardware implementation of machine vision systems: image and video processing (United States)

    Botella, Guillermo; García, Carlos; Meyer-Bäse, Uwe


    This contribution focuses on different topics covered by the special issue titled `Hardware Implementation of Machine vision Systems' including FPGAs, GPUS, embedded systems, multicore implementations for image analysis such as edge detection, segmentation, pattern recognition and object recognition/interpretation, image enhancement/restoration, image/video compression, image similarity and retrieval, satellite image processing, medical image processing, motion estimation, neuromorphic and bioinspired vision systems, video processing, image formation and physics based vision, 3D processing/coding, scene understanding, and multimedia.

  6. Onboard Image Processing System for Hyperspectral Sensor. (United States)

    Hihara, Hiroki; Moritani, Kotaro; Inoue, Masao; Hoshi, Yoshihiro; Iwasaki, Akira; Takada, Jun; Inada, Hitomi; Suzuki, Makoto; Seki, Taeko; Ichikawa, Satoshi; Tanii, Jun


    Onboard image processing systems for a hyperspectral sensor have been developed in order to maximize image data transmission efficiency for large volume and high speed data downlink capacity. Since more than 100 channels are required for hyperspectral sensors on Earth observation satellites, fast and small-footprint lossless image compression capability is essential for reducing the size and weight of a sensor system. A fast lossless image compression algorithm has been developed, and is implemented in the onboard correction circuitry of sensitivity and linearity of Complementary Metal Oxide Semiconductor (CMOS) sensors in order to maximize the compression ratio. The employed image compression method is based on Fast, Efficient, Lossless Image compression System (FELICS), which is a hierarchical predictive coding method with resolution scaling. To improve FELICS's performance of image decorrelation and entropy coding, we apply a two-dimensional interpolation prediction and adaptive Golomb-Rice coding. It supports progressive decompression using resolution scaling while still maintaining superior performance measured as speed and complexity. Coding efficiency and compression speed enlarge the effective capacity of signal transmission channels, which lead to reducing onboard hardware by multiplexing sensor signals into a reduced number of compression circuits. The circuitry is embedded into the data formatter of the sensor system without adding size, weight, power consumption, and fabrication cost.
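FELICS-style coders entropy-code prediction residuals with Golomb-Rice codes. A minimal sketch of that coding stage (a fixed Rice parameter k, rather than the adaptive choice used on board, and a zigzag map from signed residuals to unsigned integers):

```python
def zigzag(e):
    """Map a signed prediction residual to an unsigned integer:
    0, -1, 1, -2, 2, ... -> 0, 1, 2, 3, 4, ..."""
    return 2 * e if e >= 0 else -2 * e - 1

def rice_encode(value, k):
    """Golomb-Rice code: unary quotient, '0' terminator, k-bit remainder."""
    q = value >> k
    r = value & ((1 << k) - 1)
    bits = "1" * q + "0"
    if k:
        bits += format(r, "0{}b".format(k))
    return bits

def rice_decode(bits, k):
    """Decode one codeword; return (value, bits consumed)."""
    q = 0
    i = 0
    while bits[i] == "1":
        q += 1
        i += 1
    i += 1  # skip the terminating '0'
    r = int(bits[i:i + k], 2) if k else 0
    return (q << k) | r, i + k
```

Small residuals (the common case after good prediction) get short codes, which is what makes the scheme fast and effective for hyperspectral data.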




    In this paper, we describe some central mathematical problems in medical imaging. The subject has been undergoing rapid changes driven by better hardware and software. Much of the software is based on novel methods utilizing geometric partial differential equations in conjunction with standard signal/image processing techniques as well as computer graphics facilitating man/machine interactions. As part of this enterprise, researchers have been trying to base biomedical engineering principles on rigorous mathematical foundations for the development of software methods to be integrated into complete therapy delivery systems. These systems support the more effective delivery of many image-guided procedures such as radiation therapy, biopsy, and minimally invasive surgery. We will show how mathematics may impact some of the main problems in this area, including image enhancement, registration, and segmentation. PMID:23645963

  8. Simplified labeling process for medical image segmentation. (United States)

    Gao, Mingchen; Huang, Junzhou; Huang, Xiaolei; Zhang, Shaoting; Metaxas, Dimitris N


    Image segmentation plays a crucial role in many medical imaging applications by automatically locating the regions of interest. Typically, supervised learning based segmentation methods require a large set of accurately labeled training data. However, the labeling process is tedious, time consuming and sometimes not necessary. We propose a robust logistic regression algorithm to handle label outliers such that doctors do not need to waste time on precisely labeling images for the training set. To validate its effectiveness and efficiency, we conduct carefully designed experiments on cervigram image segmentation in the presence of label outliers. Experimental results show that the proposed robust logistic regression algorithms achieve superior performance compared to previous methods, which validates the benefits of the proposed algorithms.

  9. Conceptualization, Cognitive Process between Image and Word

    Directory of Open Access Journals (Sweden)

    Aurel Ion Clinciu


    Full Text Available The study explores the process of constituting and organizing the system of concepts. After a comparative analysis of image and concept, conceptualization is reconsidered by discussing the relations of the concept with the image in general, and with the self-image mirrored in the body schema in particular. Taking into consideration the notion of mental space, an articulated perspective on conceptualization is developed, with the images of mental space at one pole and the categories of language and the operations of thinking at the other. The explicative possibilities of Tversky's notion of diagrammatic space are explored as an element necessary for understanding the genesis of graphic behaviour and for defining a new construct, graphic intelligence.

  10. Digital image processing of vascular angiograms (United States)

    Selzer, R. H.; Beckenbach, E. S.; Blankenhorn, D. H.; Crawford, D. W.; Brooks, S. H.


    The paper discusses the estimation of the degree of atherosclerosis in the human femoral artery through the use of a digital image processing system for vascular angiograms. The film digitizer uses an electronic image dissector camera to scan the angiogram and convert the recorded optical density information into a numerical format. Another processing step involves locating the vessel edges from the digital image. The computer has been programmed to estimate vessel abnormality through a series of measurements, some derived primarily from the vessel edge information and others from optical density variations within the lumen shadow. These measurements are combined into an atherosclerosis index, which is found in a post-mortem study to correlate well with both visual and chemical estimates of atherosclerotic disease.

  11. Speckle pattern processing by digital image correlation

    Directory of Open Access Journals (Sweden)

    Gubarev Fedor


    Full Text Available This work tests a method of speckle pattern processing based on digital image correlation. Three of the most widely used formulas for the correlation coefficient are tested. To determine the accuracy of the speckle pattern processing, test speckle patterns with known displacement are used. The optimal size of the speckle pattern template used to determine the correlation, and hence the speckle pattern displacement, is also considered.
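One of the widely used correlation-coefficient formulas is the zero-normalized cross-correlation (ZNCC); a sketch of ZNCC plus an integer-pixel displacement search (subpixel refinement, which real DIC adds, is omitted):

```python
def zncc(a, b):
    """Zero-normalized cross-correlation between two equal-size patches."""
    n = len(a) * len(a[0])
    fa = [v for row in a for v in row]
    fb = [v for row in b for v in row]
    ma, mb = sum(fa) / n, sum(fb) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(fa, fb))
    da = sum((x - ma) ** 2 for x in fa) ** 0.5
    db = sum((y - mb) ** 2 for y in fb) ** 0.5
    return num / (da * db) if da and db else 0.0

def find_displacement(ref, cur, ty, tx, size, search=3):
    """Locate the size x size template taken from `ref` at (ty, tx)
    inside `cur` by maximizing ZNCC over shifts within +/-search pixels."""
    def patch(img, y, x):
        return [row[x:x + size] for row in img[y:y + size]]
    tpl = patch(ref, ty, tx)
    best, best_c = (0, 0), -2.0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = ty + dy, tx + dx
            if y < 0 or x < 0 or y + size > len(cur) or x + size > len(cur[0]):
                continue
            c = zncc(tpl, patch(cur, y, x))
            if c > best_c:
                best_c, best = c, (dy, dx)
    return best
```

The template-size question the abstract raises is visible here: too small a `size` makes the ZNCC surface ambiguous, too large a one blurs local deformation.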

  12. Optimisation in signal and image processing

    CERN Document Server

    Siarry, Patrick


    This book describes the optimization methods most commonly encountered in signal and image processing: artificial evolution and Parisian approach; wavelets and fractals; information criteria; training and quadratic programming; Bayesian formalism; probabilistic modeling; Markovian approach; hidden Markov models; and metaheuristics (genetic algorithms, ant colony algorithms, cross-entropy, particle swarm optimization, estimation of distribution algorithms, and artificial immune systems).

  13. Image Processing in Amateur Astro-Photography

    Indian Academy of Sciences (India)

    Image Processing in Amateur Astro-Photography. Anurag Garg. Classroom, Resonance – Journal of Science Education, Volume 15, Issue 2, February 2010, pp 170-175.

  14. Stochastic processes, estimation theory and image enhancement (United States)

    Assefi, T.


    An introductory account of stochastic processes, estimation theory, and image enhancement is presented. The book is primarily intended for first-year graduate students and practicing engineers and scientists whose work requires an acquaintance with the theory. Fundamental concepts of probability were reviewed that are required to support the main topics. The appendices discuss the remaining mathematical background.

  15. Limiting liability via high resolution image processing

    Energy Technology Data Exchange (ETDEWEB)

    Greenwade, L.E.; Overlin, T.K.


    The utilization of high resolution image processing allows forensic analysts and visualization scientists to assist detectives by enhancing field photographs, and by providing the tools and training to increase the quality and usability of field photos. Through the use of digitized photographs and computerized enhancement software, field evidence can be obtained and processed as 'evidence ready', even in poor lighting and shadowed conditions or darkened rooms. These images, which are most often unusable when taken with standard camera equipment, can be shot in the worst of photographic conditions and be processed as usable evidence. Visualization scientists have taken digital photographic image processing and moved the processing of crime scene photos into the technology age. The use of high resolution technology will assist law enforcement in making better use of crime scene photography and positive identification of prints. Valuable courtroom and investigation time can be saved and better served by this accurate, performance-based process. Inconclusive evidence does not lead to convictions. Enhancement of the photographic capability helps solve a major problem with crime scene photos: images that, if taken with standard equipment and without the benefit of enhancement software, would be inconclusive, allowing guilty parties to go free due to lack of evidence.

  16. Subband/transform functions for image processing (United States)

    Glover, Daniel


    Functions for image data processing written for use with the MATLAB(TM) software package are presented. These functions provide the capability to transform image data with block transformations (such as the Walsh-Hadamard) and to produce spatial-frequency subbands of the transformed data. Block transforms are equivalent to simple subband systems: the transform coefficients are reordered using a simple permutation to give subbands. The low-frequency subband is a low-resolution version of the original image, while the higher-frequency subbands contain edge information. The transform functions can be cascaded to provide further decomposition into more subbands. If the cascade is applied to all four of the first-stage subbands (in the case of a four-band decomposition), a uniform structure of sixteen bands is obtained. If the cascade is applied only to the low-frequency subband, an octave structure of seven bands results. Functions for the inverse transforms are also given. These functions can be used in image data compression systems. The transforms do not in themselves produce data compression, but prepare the data for quantization and compression. Sample quantization functions for subbands are also given. A typical compression approach is to subband the image data, quantize it, then use statistical coding (e.g., run-length coding followed by Huffman coding) for compression. Contour plots of image data and subbanded data are shown.
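The block-transform/subband equivalence described in this record can be sketched in a few lines. The following is a minimal NumPy illustration (not the original MATLAB functions; names are hypothetical) of a one-level 2x2 Walsh-Hadamard block transform whose reordered coefficients form four subbands, with the low/low band being a half-resolution version of the image:

```python
import numpy as np

def hadamard_subbands(img):
    """One-level 2x2 Walsh-Hadamard block transform of a grayscale image,
    with coefficients reordered into four subbands (LL, LH, HL, HH).
    LL is a half-resolution version of img."""
    a = img[0::2, 0::2].astype(float)
    b = img[0::2, 1::2].astype(float)
    c = img[1::2, 0::2].astype(float)
    d = img[1::2, 1::2].astype(float)
    # 2x2 Hadamard applied to each block, scaled by 1/2 so the
    # transform is orthonormal and hence exactly invertible.
    ll = (a + b + c + d) / 2.0   # low/low: half-resolution average
    lh = (a - b + c - d) / 2.0   # horizontal detail
    hl = (a + b - c - d) / 2.0   # vertical detail
    hh = (a - b - c + d) / 2.0   # diagonal detail
    return ll, lh, hl, hh

def inverse_hadamard_subbands(ll, lh, hl, hh):
    """Inverse of hadamard_subbands: reassemble the original image."""
    a = (ll + lh + hl + hh) / 2.0
    b = (ll - lh + hl - hh) / 2.0
    c = (ll + lh - hl - hh) / 2.0
    d = (ll - lh - hl + hh) / 2.0
    h, w = ll.shape
    out = np.empty((2 * h, 2 * w))
    out[0::2, 0::2] = a
    out[0::2, 1::2] = b
    out[1::2, 0::2] = c
    out[1::2, 1::2] = d
    return out
```

Cascading `hadamard_subbands` on the returned `ll` band gives the octave structure mentioned above; cascading it on all four bands gives the uniform sixteen-band structure.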

  17. [Digital thoracic radiology: devices, image processing, limits]. (United States)

    Frija, J; de Géry, S; Lallouet, F; Guermazi, A; Zagdanski, A M; De Kerviler, E


    In the first part, the different techniques of digital thoracic radiography are described. Computed radiography with phosphor plates, being the most widely commercialized, receives the most emphasis, but the other detectors are also described: the selenium-coated drum, direct digital radiography with selenium detectors, indirect flat-panel detectors, and a system with four high-resolution CCD cameras. In the second part, the most important image-processing methods are discussed: gradation curves, unsharp-mask processing, the MUSICA system, dynamic range compression or reduction, and dual-energy subtraction. In the last part, the advantages and drawbacks of computed thoracic radiography are summarized; the most important are the consistently good quality of the images and the possibilities of image processing.

  18. Driver drowsiness detection using ANN image processing (United States)

    Vesselenyi, T.; Moca, S.; Rus, A.; Mitran, T.; Tătaru, B.


    The paper presents a study regarding the possibility of developing a drowsiness detection system for car drivers based on three types of methods: EEG and EOG signal processing and driver image analysis. In previous works the authors have described their research on the first two methods. In this paper the authors study the possibility of detecting the drowsy or alert state of the driver from images taken during driving, by analyzing the state of the driver's eyes: open, half-open and closed. For this purpose two kinds of artificial neural networks were employed: a one-hidden-layer network and an autoencoder network.

  19. Illuminating magma shearing processes via synchrotron imaging (United States)

    Lavallée, Yan; Cai, Biao; Coats, Rebecca; Kendrick, Jackie E.; von Aulock, Felix W.; Wallace, Paul A.; Le Gall, Nolwenn; Godinho, Jose; Dobson, Katherine; Atwood, Robert; Holness, Marian; Lee, Peter D.


    Our understanding of geomaterial behaviour and processes has long been limited by our inability to look inside a material as "something" happens. In volcanology, research strategies have increasingly sought to illuminate the subsurface of materials at all scales, from the use of muon tomography to image the inside of volcanoes, to the use of seismic tomography to image magmatic bodies in the crust, and most recently, synchrotron-based X-ray tomography to image the inside of material as we test it under controlled conditions. Here, we will explore some of the novel findings made on the evolution of magma during shearing. These will include observations and discussions of magma flow and failure as well as petrological reaction kinetics.

  20. Advances in iterative multigrid PIV image processing (United States)

    Scarano, F.; Riethmuller, M. L.


    An image-processing technique is proposed which performs iterative interrogation of particle image velocimetry (PIV) recordings. The method is based on cross-correlation, enhancing the matching performance by means of a relative transformation between the interrogation areas. On the basis of an iterative prediction of the tracers' motion, a window offset and deformation are applied, accounting for the local deformation of the fluid continuum. In addition, progressive grid refinement is applied in order to maximise the spatial resolution. The performance of the method is analysed and compared with conventional cross-correlation with and without a discrete window offset. The assessment of performance through synthetic PIV images shows that a remarkable improvement can be obtained in terms of precision and dynamic range. Moreover, peak-locking effects do not affect the method in practice. The velocity gradient range accessed with the application of a relative window deformation (linear approximation) is significantly enlarged, as confirmed by the experimental results.
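The predictor-corrector idea in this record (offset the second interrogation window by the current displacement estimate, then re-correlate) can be sketched as follows. This is a simplified NumPy illustration with integer-only window offsets; the actual method also deforms the windows and refines the grid, and all names here are hypothetical:

```python
import numpy as np

def window_displacement(win_a, win_b):
    """Integer displacement (dy, dx) that best aligns win_a with win_b,
    found as the peak of the FFT-based circular cross-correlation."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.real(np.fft.ifft2(np.conj(np.fft.fft2(a)) * np.fft.fft2(b)))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map FFT bin indices to signed shifts.
    shifts = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return tuple(shifts)

def iterative_displacement(frame_a, frame_b, y, x, size, n_iter=3):
    """Iterative interrogation with a discrete window offset: the window
    in frame_b is shifted by the current displacement estimate, and the
    residual correction is accumulated until it vanishes."""
    dy, dx = 0, 0
    for _ in range(n_iter):
        win_a = frame_a[y:y + size, x:x + size]
        win_b = frame_b[y + dy:y + dy + size, x + dx:x + dx + size]
        cy, cx = window_displacement(win_a, win_b)
        dy, dx = dy + cy, dx + cx
        if (cy, cx) == (0, 0):   # residual correction is zero: converged
            break
    return dy, dx
```

With the offset applied, the residual displacement measured on the second pass shrinks toward zero, which is what suppresses peak-locking in the full (sub-pixel, window-deforming) method.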

  1. Automatic image analysis of multicellular apoptosis process. (United States)

    Ziraldo, Riccardo; Link, Nichole; Abrams, John; Ma, Lan


    Apoptotic programmed cell death (PCD) is a common and fundamental aspect of developmental maturation. Image processing techniques have been developed to detect apoptosis at the single-cell level in a single still image, while an efficient algorithm to automatically analyze the temporal progression of apoptosis in a large population of cells is unavailable. In this work, we have developed an ImageJ-based program that can quantitatively analyze time-lapse microscopy movies of live tissues undergoing apoptosis with a fluorescent cellular marker, and subsequently extract the temporospatial pattern of multicellular response. The protocol is applied to characterize apoptosis of Drosophila wing epithelium cells at eclosion. Using natural anatomic structures as reference, we identify dynamic patterns in the progression of apoptosis within the wing tissue, which not only confirms the previously observed collective cell behavior from a quantitative perspective for the first time, but also reveals a plausible role played by the anatomic structures in Drosophila apoptosis.

  2. Sorting Olive Batches for the Milling Process Using Image Processing. (United States)

    Aguilera Puerto, Daniel; Martínez Gila, Diego Manuel; Gámez García, Javier; Gómez Ortega, Juan


    The quality of virgin olive oil obtained in the milling process is directly bound to the characteristics of the olives. Hence, the correct classification of the different incoming olive batches is crucial to reach the maximum quality of the oil. The aim of this work is to provide an automatic inspection system, based on computer vision, to classify the different batches of olives entering the milling process. The classification is based on the differentiation between ground and tree olives. For this purpose, three different varieties have been studied (Picudo, Picual and Hojiblanco). The samples have been obtained by picking the olives directly from the tree or from the ground. The feature vector of the samples has been obtained on the basis of the olive image histograms. Moreover, different image preprocessing steps have been employed, and two classification techniques have been used: discriminant analysis and neural networks. The proposed methodology has been validated successfully, obtaining good classification results.
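The pipeline in this record (histogram-based feature vector, then a classifier separating ground from tree olives) can be sketched minimally. The code below is an illustration only: it uses a nearest-centroid rule as a stand-in for the discriminant analysis and neural networks actually used, and all names are hypothetical:

```python
import numpy as np

def histogram_feature(img, bins=16):
    """Normalized gray-level histogram used as the feature vector."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    return hist / hist.sum()

class NearestCentroid:
    """Minimal stand-in for the paper's classifiers (discriminant
    analysis / neural networks): one centroid per class in feature space."""
    def fit(self, features, labels):
        self.labels_ = sorted(set(labels))
        self.centroids_ = {
            lab: np.mean([f for f, l in zip(features, labels) if l == lab],
                         axis=0)
            for lab in self.labels_
        }
        return self

    def predict(self, feature):
        # Assign the class whose centroid is closest in Euclidean distance.
        return min(self.labels_,
                   key=lambda lab: np.linalg.norm(feature - self.centroids_[lab]))
```

In practice ground olives tend to be darker and dirtier than tree-picked ones, so their intensity histograms concentrate in different bins, which is what makes a histogram feature separable at all.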

  3. Experimental formation of Pb, Sn, Ge and Sb sulfides, selenides and chlorides in the presence of sal ammoniac: A contribution to the understanding of the mineral formation processes in coal wastes self-ignition

    Czech Academy of Sciences Publication Activity Database

    Laufek, F.; Veselovský, F.; Drábek, M.; Kříbek, B.; Klementová, Mariana

    176-177, May (2017), s. 1-7 ISSN 0166-5162 Institutional support: RVO:68378271 Keywords : coal wastes * metalloids * mineral formation * self-burning processes Subject RIV: DB - Geology ; Mineralogy Impact factor: 4.783, year: 2016

  4. Processing images with programming language Halide




    The thesis presents the recently created programming language Halide and compares it to OpenCV, an already established image-processing library. We compare the execution times of implementations with identical functionality, as well as their length in lines of code. The implementations consist of morphological operations and template matching, each implemented in four versions. The first version is made in C++ and only uses OpenCV's objects. The second ...

  5. Digital image processing for information extraction. (United States)

    Billingsley, F. C.


    The modern digital computer has made practical image processing techniques for handling nonlinear operations in both the geometrical and the intensity domains, various types of nonuniform noise cleanup, and the numerical analysis of pictures. An initial requirement is that a number of anomalies caused by the camera (e.g., geometric distortion, MTF roll-off, vignetting, and nonuniform intensity response) must be taken into account or removed to avoid their interference with the information extraction process. Examples illustrating these operations are discussed along with computer techniques used to emphasize details, perform analyses, classify materials by multivariate analysis, detect temporal differences, and aid in human interpretation of photos.

  6. Phase Superposition Processing for Ultrasonic Imaging (United States)

    Tao, L.; Ma, X. R.; Tian, H.; Guo, Z. X.


    In order to improve the resolution of defect reconstruction for non-destructive evaluation, a new phase superposition processing (PSP) method has been developed on the basis of a synthetic aperture focusing technique (SAFT). The proposed method synthesizes the magnitudes of phase-superposed delayed signal groups. A satisfactory image can be obtained by a simple algorithm processing time domain radio frequency signals directly. In this paper, the theory of PSP is introduced and some simulation and experimental results illustrating the advantage of PSP are given.

  7. Enhancement of Quality Learning: Capitalizing on the SAL Framework (United States)

    Phan, Huy


    Quality learning in higher education is an impetus and major objective for educators and researchers. The student approaches to learning (SAL) framework, arising from the seminal work of Marton and Säljö (1976), has been researched extensively and used to predict and explain students' positive (e.g., critical reflection) and maladaptive…

  8. Medical device SALs and surgical site infections: a mathematical model. (United States)

    Srun, Sopheak W; Nissen, Brian J; Bryans, Trabue D; Bonjean, Maxime


    It is commonly accepted that terminally sterilized healthcare products are rarely the source of a hospital-acquired infection (HAI). The vast majority of HAIs arise from human-borne contamination from the workforce, the clinical environment, less-than-aseptic handling techniques, and the patients themselves. Nonetheless, the requirement for a maximal sterility assurance level (SAL) of a terminally sterilized product has remained at 10⁻⁶, i.e. a probability of one in one million that a single viable microorganism will be on a product after sterilization. This paper presents a probabilistic model predicting that choosing an SAL greater than 10⁻⁶ (e.g. 10⁻⁵ or 10⁻⁴, and in some examples even 10⁻³ or 10⁻²) does not have a statistically significant impact on the incidence of surgical site infections (SSIs). The use of a greater SAL might allow new, potentially life-saving products that cannot withstand sterilization to a 10⁻⁶ SAL to be terminally sterilized instead of being aseptically manufactured.
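The scale of what an SAL means in aggregate can be illustrated with a one-line binomial calculation. This sketch is not the paper's full model (which also folds in the probability that a contaminated device actually causes an SSI); the function name is hypothetical:

```python
# Probability that at least one of n terminally sterilized devices
# carries a viable organism, for a given sterility assurance level.
# Each device is treated as an independent Bernoulli trial.
def p_any_contaminated(sal, n):
    return 1.0 - (1.0 - sal) ** n

for sal in (1e-6, 1e-4, 1e-3):
    # e.g. one million devices used per year
    print(f"SAL {sal:g}: P(>=1 contaminated in 1e6 uses) = "
          f"{p_any_contaminated(sal, 10**6):.4f}")
```

At SAL 10⁻⁶ and a million uses this probability is already about 0.63; the paper's point is that the downstream SSI rate is dominated by other contamination routes, not by this term.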

  9. Isolation and characterization of sal4 mutants in the yeast Saccharomyces cerevisiae

    Directory of Open Access Journals (Sweden)

    Ni Nyoman Tri Puspaningsih


    Full Text Available Genetic manipulation of the yeast Saccharomyces cerevisiae has recently become widespread. Because yeast can be used as an alternative host cell for foreign protein expression, its translational fidelity should be studied. Preliminary work suggested that the sal4 gene plays a role in translational fidelity control and/or acts as a termination factor. To study the gene's function, yeast strain BSC483/1a was mutagenized with ethylmethane sulphonate. The desired mutants are mutated at the sal4 locus and show both allosuppressor and omnipotent-suppressor characteristics. The allosuppressor phenotype was indicated by consistent white colony colour on YPD and Y8 media, temperature sensitivity, paromomycin sensitivity, and growth rate. Effectiveness as an omnipotent suppressor was quantified using a gene fusion between PGK and β-galactosidase. The results showed that strain BSC483/1a could be mutagenized with 1% ethylmethane sulphonate, yielding eight allosuppressor mutants. Two of them (numbers 8 and 10) are temperature sensitive, and two others (numbers 1 and 13) are mutated at the sal4 locus. The sal4 mutants (1 and 13) did not show temperature sensitivity and grow relatively more slowly than the wild type. Mutant number 13 could suppress a nonsense mutation (readthrough at the UAG termination codon) with a β-galactosidase activity of 2.70 units/ml.

  10. Image quality and dose differences caused by vendor-specific image processing of neonatal radiographs

    Energy Technology Data Exchange (ETDEWEB)

    Sensakovic, William F.; O'Dell, M.C.; Letter, Haley; Kohler, Nathan; Rop, Baiywo; Cook, Jane; Logsdon, Gregory; Varich, Laura [Florida Hospital, Imaging Administration, Orlando, FL (United States)


    Image processing plays an important role in optimizing image quality and radiation dose in projection radiography. Unfortunately, commercial algorithms are black boxes that are often left at or near vendor default settings rather than being optimized. We hypothesize that different commercial image-processing systems, when left at or near default settings, create significant differences in image quality. We further hypothesize that image-quality differences can be exploited to produce images of equivalent quality but lower radiation dose. We used a portable radiography system to acquire images on a neonatal chest phantom and recorded the entrance surface air kerma (ESAK). We applied two image-processing systems (Optima XR220amx, by GE Healthcare, Waukesha, WI; and MUSICA², by Agfa HealthCare, Mortsel, Belgium) to the images. Seven observers (attending pediatric radiologists and radiology residents) independently assessed image quality using two methods: rating and matching. Image-quality ratings were independently assessed by each observer on a 10-point scale. Matching consisted of each observer matching GE-processed images and Agfa-processed images with equivalent image quality. A total of 210 rating tasks and 42 matching tasks were performed and effective dose was estimated. Median Agfa-processed image-quality ratings were higher than GE-processed ratings. Non-diagnostic ratings were seen over a wider range of doses for GE-processed images than for Agfa-processed images. During matching tasks, observers matched image quality between GE-processed images and Agfa-processed images acquired at a lower effective dose (11 ± 9 μSv; P < 0.0001). Image-processing methods significantly impact perceived image quality. These image-quality differences can be exploited to alter protocols and produce images of equivalent image quality but lower doses. Those purchasing projection radiography systems or third-party image-processing software should be aware that image

  11. Image quality and dose differences caused by vendor-specific image processing of neonatal radiographs. (United States)

    Sensakovic, William F; O'Dell, M Cody; Letter, Haley; Kohler, Nathan; Rop, Baiywo; Cook, Jane; Logsdon, Gregory; Varich, Laura


    Image processing plays an important role in optimizing image quality and radiation dose in projection radiography. Unfortunately, commercial algorithms are black boxes that are often left at or near vendor default settings rather than being optimized. We hypothesize that different commercial image-processing systems, when left at or near default settings, create significant differences in image quality. We further hypothesize that image-quality differences can be exploited to produce images of equivalent quality but lower radiation dose. We used a portable radiography system to acquire images on a neonatal chest phantom and recorded the entrance surface air kerma (ESAK). We applied two image-processing systems (Optima XR220amx, by GE Healthcare, Waukesha, WI; and MUSICA², by Agfa HealthCare, Mortsel, Belgium) to the images. Seven observers (attending pediatric radiologists and radiology residents) independently assessed image quality using two methods: rating and matching. Image-quality ratings were independently assessed by each observer on a 10-point scale. Matching consisted of each observer matching GE-processed images and Agfa-processed images with equivalent image quality. A total of 210 rating tasks and 42 matching tasks were performed and effective dose was estimated. Median Agfa-processed image-quality ratings were higher than GE-processed ratings. Non-diagnostic ratings were seen over a wider range of doses for GE-processed images than for Agfa-processed images. During matching tasks, observers matched image quality between GE-processed images and Agfa-processed images acquired at a lower effective dose (11 ± 9 μSv; P < 0.0001). Image-processing methods significantly impact perceived image quality. These image-quality differences can be exploited to alter protocols and produce images of equivalent image quality but lower doses. Those purchasing projection radiography systems or third-party image-processing software should be aware that image processing can

  12. MATLAB-Based Applications for Image Processing and Image Quality Assessment – Part I: Software Description

    Directory of Open Access Journals (Sweden)

    L. Krasula


    Full Text Available This paper describes several MATLAB-based applications useful for image processing and image quality assessment. The Image Processing Application helps the user to easily modify images, while the Image Quality Adjustment Application enables the user to create series of pictures with different quality levels. The Image Quality Assessment Application contains objective full-reference quality metrics that can be used for image quality assessment. The Image Quality Evaluation Applications represent an easy way to subjectively compare the quality of distorted images with a reference image. Results of these subjective tests can be processed using the Results Processing Application. All applications provide a Graphical User Interface (GUI) for intuitive usage.

  13. Facial Edema Evaluation Using Digital Image Processing

    Directory of Open Access Journals (Sweden)

    A. E. Villafuerte-Nuñez


    Full Text Available The main objective of facial edema evaluation is to provide the information needed to determine the effectiveness of anti-inflammatory drugs in development. This paper presents a system that measures the four main variables present in facial edemas: trismus, blush (coloration), temperature, and inflammation. Measurements are obtained by using image processing and a combination of different devices such as a projector, a PC, a digital camera, a thermographic camera, and a cephalostat. Data analysis and processing are performed using MATLAB. Facial inflammation is measured by comparing three-dimensional reconstructions of inflammatory variations using the fringe projection technique. Trismus is measured by converting pixels to centimeters in a digitally obtained image of an open mouth. Blushing changes are measured by obtaining and comparing the RGB histograms from facial edema images at different times. Finally, temperature changes are measured using a thermographic camera. Some tests using controlled measurements of every variable are presented in this paper. The results allow the measurement system to be evaluated before its use in a real test, using the pain model approved by the US Food and Drug Administration (FDA), which consists of extracting the third molar to generate the facial edema.

  14. Portable EDITOR (PEDITOR): A portable image processing system. [satellite images (United States)

    Angelici, G.; Slye, R.; Ozga, M.; Ritter, P.


    The PEDITOR image processing system was created to be readily transferable from one type of computer system to another. While nearly identical in function and operation to its predecessor, EDITOR, PEDITOR employs additional techniques which greatly enhance its portability. These cover system structure and processing. In order to confirm the portability of the software system, two different types of computer systems running greatly differing operating systems were used as target machines. A DEC-20 computer running the TOPS-20 operating system and using a Pascal Compiler was utilized for initial code development. The remaining programmers used a Motorola Corporation 68000-based Forward Technology FT-3000 supermicrocomputer running the UNIX-based XENIX operating system and using the Silicon Valley Software Pascal compiler and the XENIX C compiler for their initial code development.

  15. Imprecise Arithmetic for Low Power Image Processing

    DEFF Research Database (Denmark)

    Albicocco, Pietro; Cardarilli, Gian Carlo; Nannarelli, Alberto


    Sometimes reducing the precision of a numerical processor, by introducing errors, can lead to significant performance (delay, area and power dissipation) improvements without compromising the overall quality of the processing. In this work, we show how to perform the two basic operations, addition and multiplication, in an imprecise manner by simplifying the hardware implementation. With the proposed "sloppy" operations, we obtain a reduction in delay, area and power dissipation, and the error introduced is still acceptable for applications such as image processing.
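One well-known style of imprecise hardware addition can be modeled in software to see the error it introduces. The sketch below is an illustration of a "lower-part OR adder" (the low bits are OR-ed instead of carry-added), not necessarily the exact design proposed in this record; the function name is hypothetical:

```python
def lower_or_adder(a, b, k):
    """Approximate adder: the k least-significant bits are combined with
    bitwise OR (no carry chain), and only the upper bits use a real
    addition. The result underestimates a + b by at most 2**k - 1,
    since the dropped quantity is exactly (a & b) & mask."""
    mask = (1 << k) - 1
    upper = (a & ~mask) + (b & ~mask)   # exact addition on upper bits
    lower = (a | b) & mask              # cheap carry-free OR on low bits
    return upper + lower
```

For 8-bit pixel sums with small `k`, the error stays in the least-significant bits, which is why this kind of sloppiness is visually tolerable in image processing while shortening the carry chain in hardware.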

  16. Development of the SOFIA Image Processing Tool (United States)

    Adams, Alexander N.


    The Stratospheric Observatory for Infrared Astronomy (SOFIA) is a Boeing 747SP carrying a 2.5-meter infrared telescope capable of operating at altitudes between twelve and fourteen kilometers, which is above more than 99 percent of the water vapor in the atmosphere. The ability to make observations above most water vapor, coupled with the ability to make observations from anywhere at any time, makes SOFIA one of the world's premier infrared observatories. SOFIA uses three visible-light CCD imagers to assist in pointing the telescope. The data from these imagers are stored in archive files, as is housekeeping data containing information such as boresight and area-of-interest locations. A tool that can both extract and process data from the archive files was developed.

  17. HYMOSS signal processing for pushbroom spectral imaging (United States)

    Ludwig, David E.


    The objective of the Pushbroom Spectral Imaging Program was to develop on-focal-plane electronics that compensate for detector array non-uniformities. The approach taken was to implement a simple two-point calibration algorithm on the focal plane, allowing offset and linear gain correction. The key on-focal-plane features that made this technique feasible were the use of a high-quality transimpedance amplifier (TIA) and an analog-to-digital converter for each detector channel. Gain compensation is accomplished by varying the feedback capacitance of the integrate-and-dump TIA. Offset correction is performed by storing offsets in a special on-focal-plane offset register and digitally subtracting the offsets from the readout data during the multiplexing operation. A custom integrated circuit was designed, fabricated, and tested on this program, proving that non-uniformity-compensated, analog-to-digital converting circuits may be used to read out infrared detectors. Irvine Sensors Corporation (ISC) successfully demonstrated these innovative on-focal-plane functions that allow for correction of detector non-uniformities. Most of the circuit functions demonstrated on this program are finding their way onto future ICs because of their impact on reduced downstream processing, increased focal plane performance, simplified focal plane control, and a reduced number of dewar connections, as well as the noise immunity of a digital interface dewar. The potential commercial applications for this integrated circuit are primarily in imaging systems, which may be used for security monitoring, manufacturing process monitoring, robotics, and spectral imaging in analytical instrumentation.
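The two-point (offset and linear gain) calibration named in this record has a simple numerical form: measure each detector's response to a dark scene and to a uniform reference, then invert the per-channel linear model. A minimal NumPy sketch (function and variable names are illustrative, not from the program):

```python
import numpy as np

def two_point_calibrate(raw, dark, flat, target=1.0):
    """Classic two-point non-uniformity correction: subtract the
    per-detector offset (dark frame) and rescale by the per-detector
    gain measured from a uniform 'flat' reference whose true value
    is `target`."""
    gain = target / (flat - dark)   # per-channel linear gain
    return (raw - dark) * gain      # corrected, uniform response
```

After this correction, every channel that obeys `raw = g * scene + o` reports the same value for the same scene, which is exactly what the on-focal-plane offset register and variable TIA feedback capacitance implement in hardware.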

  18. Advanced Color Image Processing and Analysis

    CERN Document Server


    This volume does much more than survey modern advanced color processing. Starting with a historical perspective on ways we have classified color, it sets out the latest numerical techniques for analyzing and processing colors, the leading edge in our search to accurately record and print what we see. The human eye perceives only a fraction of available light wavelengths, yet we live in a multicolor world of myriad shining hues. Colors rich in metaphorical associations make us “purple with rage” or “green with envy” and cause us to “see red.” Defining colors has been the work of centuries, culminating in today’s complex mathematical coding that nonetheless remains a work in progress: only recently have we possessed the computing capacity to process the algebraic matrices that reproduce color more accurately. With chapters on dihedral color and image spectrometers, this book provides technicians and researchers with the knowledge they need to grasp the intricacies of today’s color imaging.

  19. Dynamic deformation image de-blurring and image processing for digital imaging correlation measurement (United States)

    Guo, X.; Li, Y.; Suo, T.; Liu, H.; Zhang, C.


    This paper proposes a method for de-blurring images captured during the dynamic deformation of materials. De-blurring is achieved with a dynamics-based approach, which is used to estimate the Point Spread Function (PSF) during the camera exposure window. The deconvolution process, involving iterative matrix calculations over pixels, is then performed on the GPU to reduce the time cost. Compared to the Gauss method and the Lucy-Richardson method, it gives the best image restoration. The proposed method has been evaluated using a Hopkinson bar loading system. In comparison to the blurry image, the proposed method successfully restores the image. It is also demonstrated in image-processing applications that the de-blurring method can improve the accuracy and stability of digital image correlation measurement.
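The Lucy-Richardson method that this record uses as a baseline is itself a short iterative deconvolution. Below is a minimal 1-D NumPy sketch of Richardson-Lucy (an illustration of the baseline, not the paper's GPU pipeline; names are hypothetical), using a box PSF as a stand-in for motion blur:

```python
import numpy as np

def richardson_lucy(blurred, psf, n_iter=50):
    """Minimal 1-D Richardson-Lucy deconvolution. Each iteration
    re-blurs the current estimate, compares it to the observed data,
    and multiplies the estimate by the back-projected ratio."""
    estimate = np.full_like(blurred, blurred.mean())  # flat positive start
    psf_mirror = psf[::-1]
    for _ in range(n_iter):
        reblurred = np.convolve(estimate, psf, mode="same")
        ratio = blurred / np.maximum(reblurred, 1e-12)  # guard division
        estimate *= np.convolve(ratio, psf_mirror, mode="same")
    return estimate
```

The multiplicative update keeps the estimate non-negative, which is one reason Richardson-Lucy is a common reference point for restoring motion-blurred intensity images.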

  20. Feature extraction & image processing for computer vision

    CERN Document Server

    Nixon, Mark


    This book is an essential guide to the implementation of image processing and computer vision techniques, with tutorial introductions and sample code in Matlab. Algorithms are presented and fully explained to enable complete understanding of the methods and techniques demonstrated. As one reviewer noted, ""The main strength of the proposed book is the exemplar code of the algorithms."" Fully updated with the latest developments in feature extraction, including expanded tutorials and new techniques, this new edition contains extensive new material on Haar wavelets, Viola-Jones, bilateral filt

  1. Digital signal and image processing using Matlab

    CERN Document Server

    Blanchet, Gérard


    This volume presents the most important theoretical aspects of Image and Signal Processing (ISP) for both deterministic and random signals, the theory being supported by exercises and computer simulations relating to real applications. More than 200 programs and functions are provided in the MATLAB® language, with useful comments and guidance, to enable numerical experiments to be carried out, thus allowing readers to develop a deeper understanding of both the theoretical and practical aspects of this subject. Following on from the first volume, this second installation takes a more practical stance, provi

  2. Digital signal and image processing using MATLAB

    CERN Document Server

    Blanchet, Gérard


    This fully revised and updated second edition presents the most important theoretical aspects of Image and Signal Processing (ISP) for both deterministic and random signals. The theory is supported by exercises and computer simulations relating to real applications. More than 200 programs and functions are provided in the MATLAB® language, with useful comments and guidance, to enable numerical experiments to be carried out, thus allowing readers to develop a deeper understanding of both the theoretical and practical aspects of this subject. This fully revised new edition updates : - the

  3. Using Image Processing to Determine Emphysema Severity (United States)

    McKenzie, Alexander; Sadun, Alberto


    Currently, X-rays and computerized tomography (CT) scans are used to detect emphysema, but other tests are required to accurately quantify the amount of lung that has been affected by the disease. These images clearly show whether a patient has emphysema but are unable, by visual scan alone, to quantify the degree of the disease, as it presents as subtle, dark spots on the lung. Our goal is to use these CT scans to accurately diagnose and determine emphysema severity levels in patients. This will be accomplished by performing several different analyses of CT scan images of several patients representing a wide range of disease severity. In addition to analyzing the original CT data, this process will convert the data to one- and two-bit images and will then examine the deviation from a normal distribution curve to determine skewness. Our preliminary results show that this method of assessment appears to be more accurate and robust than the currently utilized methods, which involve looking at percentages of radiodensities in the air passages of the lung.
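The "deviation from a normal distribution curve" that this record measures is the third standardized moment of the intensity distribution. A minimal NumPy sketch (the function name is illustrative; applying it to a lung region of interest is an assumption about the workflow, not a detail given in the record):

```python
import numpy as np

def skewness(values):
    """Third standardized moment of a set of intensity values:
    zero for a symmetric (normal-like) distribution, non-zero when
    the histogram has a heavier tail on one side."""
    v = np.asarray(values, dtype=float).ravel()
    centered = v - v.mean()
    std = centered.std()
    return np.mean(centered ** 3) / std ** 3
```

Intuitively, emphysematous tissue adds abnormally dark voxels to the lung intensity histogram, pulling it away from symmetry, which is why a single skewness number can track severity.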

  4. Image processing to optimize wave energy converters (United States)

    Bailey, Kyle Marc-Anthony

    The world is turning to renewable energies as a means of ensuring the planet's future and well-being. There have been a few attempts in the past to utilize wave power as a means of generating electricity through the use of Wave Energy Converters (WEC), but only recently have they become a focal point in the renewable energy field. Over the past few years there has been a global drive to advance the efficiency of WEC. Wave power is produced by placing a mechanical device, either onshore or offshore, that captures the energy within ocean surface waves. This paper seeks to provide a novel and innovative way to estimate ocean wave frequency through the use of image processing. This will be achieved by applying a complex modulated lapped orthogonal transform filter bank to satellite images of ocean waves. The complex modulated lapped orthogonal transform filter bank provides an equal subband decomposition of the Nyquist-bounded discrete-time Fourier transform spectrum. The maximum energy of the 2D complex modulated lapped transform subband is used to determine the horizontal and vertical frequency, which subsequently can be used to determine the wave frequency in the direction of the WEC by a simple trigonometric scaling. The robustness of the proposed method is demonstrated by applications to simulated and real satellite images where the frequency is known.
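The core step in this record (locate the maximum-energy subband, read off horizontal and vertical frequency, and combine them) can be illustrated with a plain 2-D FFT standing in for the complex modulated lapped orthogonal transform filter bank. This is a simplified sketch under that substitution; the function name is hypothetical:

```python
import numpy as np

def dominant_wave_frequency(img, dx=1.0):
    """Dominant spatial frequency (cycles per unit length) of a wave
    image, from the peak of the 2-D FFT magnitude spectrum. A plain
    FFT stands in here for the paper's lapped-transform subband search."""
    spec = np.abs(np.fft.rfft2(img - img.mean()))
    spec[0, 0] = 0.0                      # suppress any residual DC
    ky, kx = np.unravel_index(np.argmax(spec), spec.shape)
    ny, nx = img.shape
    fy = np.fft.fftfreq(ny, d=dx)[ky]     # vertical frequency (signed)
    fx = np.fft.rfftfreq(nx, d=dx)[kx]    # horizontal frequency (>= 0)
    return np.hypot(fy, fx)               # frequency along the wave direction
```

The final `hypot` is the "simple trigonometric scaling" in the record: the horizontal and vertical frequency components are combined to give the frequency along the wave's propagation direction.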

  5. Platform for distributed image processing and image retrieval (United States)

    Gueld, Mark O.; Thies, Christian J.; Fischer, Benedikt; Keysers, Daniel; Wein, Berthold B.; Lehmann, Thomas M.


    We describe a platform for the implementation of a system for content-based image retrieval in medical applications (IRMA). To cope with the constantly evolving medical knowledge, the platform offers a flexible feature model to store and uniformly access all feature types required within a multi-step retrieval approach. A structured generation history for each feature allows the automatic identification and re-use of already computed features. The platform uses directed acyclic graphs composed of processing steps and control elements to model arbitrary retrieval algorithms. This visually intuitive, data-flow oriented representation vastly improves the interdisciplinary communication between computer scientists and physicians during the development of new retrieval algorithms. The execution of the graphs is fully automated within the platform. Each processing step is modeled as a feature transformation. Due to a high degree of system transparency, both the implementation and the evaluation of retrieval algorithms are accelerated significantly. The platform uses a client-server architecture consisting of a central database, a central job scheduler, instances of a daemon service, and clients which embed user-implemented feature transformations. Automatically distributed batch processing and distributed feature storage enable the cost-efficient use of an existing workstation cluster.

  6. Deformable Mirror Light Modulators For Image Processing (United States)

    Boysel, R. Mark; Florence, James M.; Wu, Wen-Rong


    The operational characteristics of deformable mirror device (DMD) spatial light modulators for image processing applications are presented. The two DMD pixel structures of primary interest are the torsion hinged pixel for amplitude modulation and the flexure hinged or piston element pixel for phase modulation. The optical response characteristics of these structures are described. Experimental results detailing the performance of the pixel structures and addressing architectures are presented and are compared with the analytical results. Special emphasis is placed on the specification, from the experimental data, of the basic device performance parameters of the different modulator types. These parameters include modulation range (contrast ratio and phase modulation depth), individual pixel response time, and full array address time. The performance characteristics are listed for comparison with those of other light modulators (LCLV, LCTV, and MOSLM) for applications in the input plane and Fourier plane of a conventional coherent optical image processing system. The strengths and weaknesses of the existing DMD modulators are assessed and the potential for performance improvements is outlined.

  7. Digital image processing an algorithmic approach with Matlab

    CERN Document Server

    Qidwai, Uvais


    Introduction to Image Processing and the MATLAB Environment: Introduction; Digital Image Definitions: Theoretical Account; Image Properties; MATLAB; Algorithmic Account; MATLAB Code. Image Acquisition, Types, and File I/O: Image Acquisition; Image Types and File I/O; Basics of Color Images; Other Color Spaces; Algorithmic Account; MATLAB Code. Image Arithmetic: Introduction; Operator Basics; Theoretical Treatment; Algorithmic Treatment; Coding Examples. Affine and Logical Operations, Distortions, and Noise in Images: Introduction; Affine Operations; Logical Operators; Noise in Images; Distortions in Images; Algorithmic Account

  8. Memory, Art and Mourning: the Case of the 'Salón del Nunca Más' of Granada (Antioquia, Colombia

    Directory of Open Access Journals (Sweden)

    Elkin Rubiano Pinilla


    Full Text Available This article examines the work produced in the 'Salón del Nunca Más', located in Granada (Antioquia), on the subject of collective memory. In this rural town, the 'Salón' has articulated different practices that, along with the construction of memory, have allowed survivors of violence and family members of killed and disappeared individuals to symbolize loss by means of public rituals. The article also explores the visual settings that frame these events: not only the exhibition of violent events but also the practices of the local community, namely what happens in the 'Salón', the journalistic coverage (written press), the documentary photography (Jesús Abad Colorado), and the artistic work (Erika Diettes). For this purpose, archival material, an interdisciplinary historical approach, psychoanalysis, image and communication theories, and interviews are drawn upon throughout the article.

  9. A concise introduction to image processing using C++

    CERN Document Server

    Wang, Meiqing


    Image recognition has become an increasingly dynamic field with new and emerging civil and military applications in security, exploration, and robotics. Written by experts in fractal-based image and video compression, A Concise Introduction to Image Processing using C++ strengthens your knowledge of fundamental principles in image acquisition, conservation, processing, and manipulation, allowing you to easily apply these techniques in real-world problems. The book presents state-of-the-art image processing methodology, including current industrial practices for image compression and image de-noising

  10. A Document Imaging Technique for Implementing Electronic Loan Approval Process

    National Research Council Canada - National Science Library

    J. Manikandan; C.S. Celin; V.M. Gayathri


    ...), research fields, crime investigation fields and military fields. In this paper, we proposed a document image processing technique, for establishing electronic loan approval process (E-LAP) [2...

  11. Effects of image processing on the detective quantum efficiency (United States)

    Park, Hye-Suk; Kim, Hee-Joung; Cho, Hyo-Min; Lee, Chang-Lae; Lee, Seung-Wan; Choi, Yu-Na


    Digital radiography has gained popularity in many areas of clinical practice. This transition brings interest in advancing the methodologies for image quality characterization. However, as these methodologies have not been standardized, the results of such studies cannot be directly compared. The primary objective of this study was to standardize methodologies for image quality characterization. The secondary objective was to evaluate how the modulation transfer function (MTF), noise power spectrum (NPS), and detective quantum efficiency (DQE) are affected by the image processing algorithm. Image performance parameters such as the MTF, NPS, and DQE were evaluated using the international electro-technical commission (IEC 62220-1)-defined RQA5 radiographic techniques. Computed radiography (CR) images of a hand posterior-anterior (PA) view for measuring the signal-to-noise ratio (SNR), slit images for measuring the MTF, and white images for measuring the NPS were obtained, and various Multi-Scale Image Contrast Amplification (MUSICA) parameters were applied to each of the acquired images. All of the modified images considerably influenced the evaluated SNR, MTF, NPS, and DQE. Images modified by post-processing had a higher DQE than the MUSICA=0 image. This suggests that MUSICA values, applied as post-processing, affect the image quality evaluation. In conclusion, the control parameters of image processing should be accounted for when characterizing image quality in a consistent way. The results of this study could serve as a baseline for evaluating imaging systems and their imaging characteristics by measuring the MTF, NPS, and DQE.
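    For readers unfamiliar with how the three quantities fit together, they are commonly related (e.g. in the IEC 62220-1 framework) as below, with $q$ the incident photon fluence and NNPS the noise power spectrum normalized by the squared large-area signal $\bar{S}$; this is a standard textbook relation, not a formula quoted from the study:

```latex
\mathrm{DQE}(f) \;=\; \frac{\mathrm{MTF}^{2}(f)}{q \cdot \mathrm{NNPS}(f)},
\qquad
\mathrm{NNPS}(f) \;=\; \frac{\mathrm{NPS}(f)}{\bar{S}^{\,2}}
```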

  12. Intelligent elevator management system using image processing (United States)

    Narayanan, H. Sai; Karunamurthy, Vignesh; Kumar, R. Barath


    In the modern era, the increase in the number of shopping malls and industrial buildings has led to an exponential increase in the usage of elevator systems. Thus there is an increased need for an effective control system to manage elevators. This paper introduces an effective method to control the movement of elevators by considering various cases wherein the location of a person is found and the elevators are controlled based on conditions such as load, proximity, etc. This method continuously monitors the weight limit of each elevator while also making use of image processing to determine the number of persons waiting for an elevator on each floor. The Canny edge detection technique is used to find the number of persons waiting for an elevator. Hence the algorithm takes many cases into account and locates the correct elevator to serve the persons waiting on different floors.

  13. Simulink Component Recognition Using Image Processing

    Directory of Open Access Journals (Sweden)

    Ramya R


    Full Text Available ABSTRACT In the early stages of engineering design, pen-and-paper sketches are often used to quickly convey concepts and ideas. Free-form drawing is often preferable to using computer interfaces due to its ease of use, fluidity, and lack of constraints. The objective of this project is to create a trainable recognizer for sketched Simulink components that classifies the individual components from an input block diagram. The recognized components are placed on a new Simulink model window, after which operations can be performed on them. Noise in the input image is removed by a median filter, segmentation is performed by the K-means clustering algorithm, and recognition of individual Simulink components from the input block diagram is done by Euclidean distance. The project aims to devise an efficient way to segment a control system block diagram into individual components for recognition.
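    The final recognition stage described above, assigning each segmented component to its nearest trained template by Euclidean distance, can be sketched as follows; the component names and feature vectors are invented placeholders, not the project's trained data:

```python
import numpy as np

# Hypothetical feature vectors for trained component templates (e.g. flattened,
# size-normalised glyph descriptors); purely illustrative values.
templates = {
    "Gain":       np.array([1.0, 0.2, 0.7, 0.1]),
    "Sum":        np.array([0.3, 0.9, 0.4, 0.8]),
    "Integrator": np.array([0.6, 0.5, 0.9, 0.3]),
}

def classify(feature_vec):
    """Assign a segmented component to the nearest template by Euclidean distance."""
    return min(templates, key=lambda name: np.linalg.norm(templates[name] - feature_vec))

unknown = np.array([0.95, 0.25, 0.65, 0.15])
print(classify(unknown))   # closest template wins
```

    In practice the feature vectors would come from the median-filtered, K-means-segmented blocks; the nearest-neighbour rule itself is unchanged.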

  14. Knowledge-based approach to medical image processing monitoring (United States)

    Chameroy, Virginie; Aubry, Florent; Di Paola, Robert


    The clinical use of image processing requires both medical knowledge and expertise in image processing techniques. We have designed a knowledge-based interactive quantification support system (IQSS) to help the medical user in the use and evaluation of medical image processing and in the development of specific protocols. As the user proceeds according to a heuristic and intuitive approach, our system is meant to work in a similar way. At the basis of the reasoning of our monitoring system are the semantic features of an image and of image processing. These semantic features describe their intrinsic properties and are not a symbolic description of the image content. Obtaining them requires modeling of medical images and of image processing procedures. A semantic interpretation function gives rules to obtain the values of the semantic features extracted from these models. Commonsense compatibility rules then yield compatibility criteria based on a partial order (a subsumption relationship) on images and image processing, enabling a comparison between the data available to be processed and appropriate image processing procedures. This knowledge-based approach makes IQSS modular, flexible, and consequently well adapted to aiding in the development and use of image processing methods for multidimensional and multimodality medical image quantification.

  15. Image processing of 2D resistivity data for imaging faults (United States)

    Nguyen, F.; Garambois, S.; Jongmans, D.; Pirard, E.; Loke, M. H.


    A methodology is proposed to automatically locate limits or boundaries between different geological bodies in 2D electrical tomography, using a crest-line extraction process in gradient images. The method is applied to several synthetic models and to field data acquired at three experimental sites during the European project PALEOSIS, where trenches were dug. The results presented in this work are valid for electrical tomography data collected with a Wenner-alpha array and computed with an l1-norm (blocky inversion) as the optimization method. For the synthetic cases, three geometric contexts are modelled: a vertical fault and a dipping fault juxtaposing two different geological formations, and a step-like structure. A superficial layer can cover each geological structure. In these three situations, the method locates the synthetic faults and layer boundaries and determines fault displacement, but with several limitations. The estimated fault positions correlate exactly with the synthetic ones if a conductive (or no) superficial layer overlies the studied structure. When a resistive layer with a thickness of 6 m covers the model, faults are positioned with a maximum error of 1 m. Moreover, when a resistive and/or thick top layer is present, the resolution decreases significantly for the fault displacement estimation (error up to 150%). The tests with the synthetic models for surveys using the Wenner-alpha array indicate that the proposed methodology is best suited to vertical and horizontal contacts. Application of the methodology to real data sets shows that a lateral resistivity contrast of 1:5-1:10 leads to exact fault location. A fault contact with a resistivity contrast of 1:0.75, overlaid by a resistive layer with a thickness of 1 m, gives a location error ranging from 1 to 3 m. Moreover, no result is obtained for a contact with very low contrast (~1:0.85) overlaid by a resistive soil. The method shows poor results when vertical gradients are greater than
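    The crest-line idea behind the method can be sketched on a toy resistivity section; the gradient operator, the one-dimensional local-maximum rule, and the synthetic two-block section below are simplifying assumptions, not the published implementation:

```python
import numpy as np

def crest_lines(section):
    """Mark crest pixels of the horizontal-gradient magnitude: local maxima
    along x. A sketch of the idea only; the published method operates on
    inverted 2-D resistivity sections."""
    g = np.abs(np.gradient(section, axis=1))   # horizontal gradient magnitude
    crest = np.zeros_like(g, dtype=bool)
    crest[:, 1:-1] = (
        (g[:, 1:-1] > g[:, :-2]) & (g[:, 1:-1] >= g[:, 2:]) & (g[:, 1:-1] > 0)
    )
    return crest

# Synthetic section: two blocks of different resistivity, vertical contact.
section = np.full((20, 40), 100.0)
section[:, 20:] = 500.0                        # contact at column 20
cols = np.where(crest_lines(section).any(axis=0))[0]
print(cols)                                    # crest column(s) near the contact
```

    The crest of the gradient image lines up with the resistivity contact, which is exactly the boundary the methodology extracts from inverted tomography sections.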

  16. Hyperspectral image representation and processing with binary partition trees


    Valero Valbuena, Silvia


    Extraordinary doctoral prize, academic year 2011-2012, field of ICT Engineering. The optimal exploitation of the information provided by hyperspectral images requires the development of advanced image processing tools. Therefore, under the title Hyperspectral Image Representation and Processing with Binary Partition Trees, this PhD thesis proposes the construction and processing of a new region-based hierarchical hyperspectral image representation: the Binary Partition Tree (BPT). This hierarc...

  17. Spot restoration for GPR image post-processing

    Energy Technology Data Exchange (ETDEWEB)

    Paglieroni, David W; Beer, N. Reginald


    A method and system for detecting the presence of subsurface objects within a medium is provided. In some embodiments, the imaging and detection system operates in a multistatic mode to collect radar return signals generated by an array of transceiver antenna pairs that is positioned across the surface and travels along it. The imaging and detection system pre-processes the return signal to suppress certain undesirable effects. The imaging and detection system then generates synthetic aperture radar images from real aperture radar images generated from the pre-processed return signal. The imaging and detection system then post-processes the synthetic aperture radar images to improve detection of subsurface objects. The imaging and detection system identifies peaks in the energy levels of the post-processed image frame, which indicate the presence of a subsurface object.
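    The final peak-identification step can be sketched as follows; the 8-neighbour test, the threshold, and the synthetic energy frame are illustrative assumptions rather than the patented algorithm:

```python
import numpy as np

def detect_peaks(energy, threshold):
    """Flag pixels that exceed `threshold` and dominate all 8 neighbours.
    A minimal sketch of post-detection peak picking on an energy frame."""
    e = np.pad(energy, 1, mode="constant", constant_values=-np.inf)
    centre = e[1:-1, 1:-1]
    is_peak = centre > threshold
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue
            is_peak &= centre >= e[1 + dr:e.shape[0] - 1 + dr,
                                   1 + dc:e.shape[1] - 1 + dc]
    return np.argwhere(is_peak)

# Synthetic post-processed energy frame with one buried-object response.
frame = np.zeros((8, 8))
frame[3, 5] = 9.0
peaks = detect_peaks(frame, threshold=1.0)
print(peaks)   # -> [[3 5]]
```

    Each returned (row, column) pair marks a local energy maximum, i.e. a candidate subsurface object in the post-processed frame.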

  18. Quaternion Fourier transforms for signal and image processing

    CERN Document Server

    Ell, Todd A; Sangwine, Stephen J


    Based on updates to signal and image processing technology made in the last two decades, this text examines the most recent research results pertaining to quaternion Fourier transforms. The QFT is a central component of processing color images and complex-valued signals. The book's attention to mathematical concepts, imaging applications, and Matlab compatibility renders it an irreplaceable resource for students, scientists, researchers, and engineers.
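    As context for readers unfamiliar with the transform, one common discrete form of the (left-sided) quaternion Fourier transform replaces the complex exponential with a unit pure quaternion axis $\mu$; this is a standard definition from the QFT literature, not an excerpt from the book:

```latex
F(u,v) \;=\; \frac{1}{\sqrt{MN}} \sum_{m=0}^{M-1} \sum_{n=0}^{N-1}
e^{-\mu\, 2\pi \left( \frac{m u}{M} + \frac{n v}{N} \right)}\, f(m,n),
\qquad \mu^{2} = -1
```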

  19. Sub-image data processing in Astro-WISE

    NARCIS (Netherlands)

    Mwebaze, Johnson; Boxhoorn, Danny; McFarland, John; Valentijn, Edwin A.

    Most often, astronomers are interested in a source (e.g., moving, variable, or extreme in some colour index) that lies on a few pixels of an image. However, the classical approach in astronomical data processing is the processing of the entire image or set of images even when the sole source of

  20. Image analysis for ophthalmological diagnosis image processing of Corvis ST images using Matlab

    CERN Document Server

    Koprowski, Robert


    This monograph focuses on the use of analysis and processing methods for images from the Corvis® ST tonometer. The presented analysis is associated with the quantitative, repeatable and fully automatic evaluation of the response of the eye, eyeball and cornea to an air-puff. All the described algorithms were practically implemented in MATLAB®. The monograph also describes and provides the full source code designed to perform the discussed calculations. As a result, this monograph is intended for scientists, graduate students and students of computer science and bioengineering as well as doctors wishing to expand their knowledge of modern diagnostic methods assisted by various image analysis and processing methods.

  1. A Document Imaging Technique for Implementing Electronic Loan Approval Process

    Directory of Open Access Journals (Sweden)

    J. Manikandan


    Full Text Available Image processing is one of the leading technologies of computer applications. Image processing is a type of signal processing: the input to an image processor is an image or video frame, and the output is an image or a subset of the image [1]. Computer graphics and computer vision processes use image processing techniques. Image processing systems are used in various environments such as medical fields, computer-aided design (CAD), research fields, crime investigation, and military fields. In this paper, we propose a document image processing technique for establishing an electronic loan approval process (E-LAP) [2]. The loan approval process has been tedious, and the E-LAP system attempts to reduce its complexity. Customers log in to fill in the loan application form online with all details and submit the form. The loan department then processes the submitted form and sends an acknowledgement mail via the E-LAP to the requesting customer with a list of documents required for loan approval [3]. The customer can then upload scanned copies of all required documents. All interaction between customer and bank takes place through the E-LAP system.

  2. Interactive image processing for mobile devices (United States)

    Shaw, Rodney


    As the number of consumer digital images escalates by tens of billions each year, an increasing proportion of these images are being acquired using the latest generations of sophisticated mobile devices. The characteristics of the cameras embedded in these devices now yield image-quality outcomes that approach those of the parallel generations of conventional digital cameras, and all aspects of the management and optimization of these vast new image populations become of utmost importance in providing ultimate consumer satisfaction. However, this satisfaction is still limited by the fact that a substantial proportion of all images are perceived to have inadequate image quality, and a lesser proportion of these to be completely unacceptable (for sharing, archiving, printing, etc.). In past years at this same conference, the author has described various aspects of a consumer digital-image interface based entirely on an intuitive image-choice-only operation. Demonstrations have been given of this facility in operation, essentially allowing critical-path navigation through approximately a million possible image-quality states within a matter of seconds. This was made possible by the definition of a set of orthogonal image vectors, and by defining all excursions in terms of a fixed linear visual-pixel model, independent of the image attribute. During recent months this methodology has been extended to yield specific user-interactive image-quality solutions in the form of custom software, which at less than 100 kb is readily embedded in the latest generations of unlocked portable devices. This has also necessitated the design of new user interfaces and controls, as well as streamlined and more intuitive versions of the user quality-choice hierarchy. The technical challenges and details are described for these modified versions of the enhancement methodology, along with initial practical experience with typical images.

  3. Multiscale image processing and antiscatter grids in digital radiography. (United States)

    Lo, Winnie Y; Hornof, William J; Zwingenberger, Allison L; Robertson, Ian D


    Scatter radiation is a source of noise and results in a decreased signal-to-noise ratio and thus decreased image quality in digital radiography. We determined subjectively whether a digitally processed image made without a grid would be of similar quality to an image made with a grid but without image processing. Additionally, the effects of exposure dose and of using a grid with digital radiography on overall image quality were studied. Thoracic and abdominal radiographs of five dogs of various sizes were made. Four acquisition techniques were included: (1) with a grid, standard exposure dose, digital image processing; (2) without a grid, standard exposure dose, digital image processing; (3) without a grid, half the exposure dose, digital image processing; and (4) with a grid, standard exposure dose, no digital image processing (to mimic a film-screen radiograph). Full-size radiographs as well as magnified images of specific anatomic regions were generated. Nine reviewers rated the overall image quality subjectively using a five-point scale. All digitally processed radiographs had higher overall scores than nondigitally processed radiographs regardless of patient size, exposure dose, or use of a grid. The images made at half the exposure dose had a slightly lower quality than those made at full dose, but this was only statistically significant in magnified images. Using a grid with digital image processing led to a slight but statistically significant increase in overall quality when compared with digitally processed images made without a grid, but whether this increase in quality is clinically significant is unknown.

  4. Image processing and enhancement provided by commercial dental software programs

    National Research Council Canada - National Science Library

    Lehmann, T M; Troeltsch, E; Spitzer, K


    To identify and analyse methods/algorithms for image processing provided by various commercial software programs used in direct digital dental imaging and to map them onto a standardized nomenclature...

  5. Video image processing to create a speed sensor (United States)


    Image processing has been applied to traffic analysis in recent years, with different goals. In this report, a new approach is presented for extracting vehicular speed information, given a sequence of real-time traffic images. We extract moving edges ...

  6. Developments in medical image processing and computational vision

    CERN Document Server

    Jorge, Renato


    This book presents novel and advanced topics in Medical Image Processing and Computational Vision in order to solidify knowledge in the related fields and define their key stakeholders. It contains extended versions of selected papers presented in VipIMAGE 2013 – IV International ECCOMAS Thematic Conference on Computational Vision and Medical Image, which took place in Funchal, Madeira, Portugal, 14-16 October 2013.  The twenty-two chapters were written by invited experts of international recognition and address important issues in medical image processing and computational vision, including: 3D vision, 3D visualization, colour quantisation, continuum mechanics, data fusion, data mining, face recognition, GPU parallelisation, image acquisition and reconstruction, image and video analysis, image clustering, image registration, image restoring, image segmentation, machine learning, modelling and simulation, object detection, object recognition, object tracking, optical flow, pattern recognition, pose estimat...

  7. Viewpoints on Medical Image Processing: From Science to Application (United States)

    Deserno (né Lehmann), Thomas M.; Handels, Heinz; Maier-Hein (né Fritzsche), Klaus H.; Mersmann, Sven; Palm, Christoph; Tolxdorff, Thomas; Wagenknecht, Gudrun; Wittenberg, Thomas


    Medical image processing provides core innovation for medical imaging. This paper is focused on recent developments from science to applications, analyzing the past fifteen years of the proceedings of the German annual meeting on medical image processing (BVM). Furthermore, some members of the program committee present their personal points of view: (i) multi-modality for imaging and diagnosis, (ii) analysis of diffusion-weighted imaging, (iii) model-based image analysis, (iv) registration of section images, (v) from images to information in digital endoscopy, and (vi) virtual reality and robotics. Medical imaging and medical image computing are seen as a field of rapid development with clear trends toward integrated applications in diagnostics, treatment planning, and treatment. PMID:24078804

  8. Tsunami vulnerability and damage assessment in the coastal area of Rabat and Salé, Morocco

    Directory of Open Access Journals (Sweden)

    A. Atillah


    Full Text Available This study, a companion paper to Renou et al. (2011), focuses on the application of a GIS-based method to assess building vulnerability and damage in the event of a tsunami affecting the coastal area of Rabat and Salé, Morocco. This approach, designed within the framework of the European SCHEMA project, is based on the combination of hazard results from numerical modelling of the worst-case tsunami scenario (inundation depth based on the historical Lisbon earthquake of 1755 and the Portugal earthquake of 1969), together with building vulnerability types derived from Earth Observation data, field surveys, and GIS data. The risk is then evaluated for this highly concentrated population area, characterized by the implementation of a vast project of residential and touristic buildings within the flat area of the Bouregreg Valley separating the cities of Rabat and Salé. A GIS tool is used to derive building damage maps by crossing layers of inundation levels and building vulnerability. The inferred damage maps serve as a basis for elaborating evacuation plans with appropriate rescue and relief processes, and for preparing appropriate measures to mitigate the induced tsunami risk.

  9. Method development for verifying the completeness of ancient statues by image processing


    Natthariya Laopracha; Umaporn Saisangjan; Rapeeporn Chamchong


    Ancient statues are cultural heritage that should be preserved and maintained. Nevertheless, such invaluable statues may be targets of vandalism or burglary. In order to guard these statues using image processing, this research aims to develop a technique for detecting images of ancient statues with missing parts using digital image processing. This paper proposes an effective feature extraction method for detecting images of damaged statues or statues with missing parts based on the Hi...

  10. Histopathological Image Analysis Using Image Processing Techniques: An Overview


    A. D. Belsare; M.M. Mushrif


    This paper reviews computer-assisted histopathology image analysis for cancer detection and classification. Histopathology refers to the examination of an invasive or less invasive biopsy sample by a pathologist under a microscope for locating, analyzing, and classifying most diseases, such as cancer. The analysis of histopathological images is done manually by the pathologist to detect disease, which leads to subjective diagnosis of the sample and varies with the level of expertise of the examine...

  11. Effects of image processing on the detective quantum efficiency

    Energy Technology Data Exchange (ETDEWEB)

    Park, Hye-Suk; Kim, Hee-Joung; Cho, Hyo-Min; Lee, Chang-Lae; Lee, Seung-Wan; Choi, Yu-Na [Yonsei University, Wonju (Korea, Republic of)]


    The evaluation of image quality is an important part of digital radiography. The modulation transfer function (MTF), the noise power spectrum (NPS), and the detective quantum efficiency (DQE) are widely accepted measurements of digital radiographic system performance. However, as the methodologies for such characterization have not been standardized, it is difficult to directly compare reported MTF, NPS, and DQE results. In this study, we evaluated the effect of an image processing algorithm on estimates of the MTF, NPS, and DQE. The image performance parameters were evaluated using the international electro-technical commission (IEC 62220-1)-defined RQA5 radiographic techniques. Computed radiography (CR) posterior-anterior (PA) images of a hand for measuring the signal-to-noise ratio (SNR), slit images for measuring the MTF, and white images for measuring the NPS were obtained, and various multi-scale image contrast amplification (MUSICA) factors were applied to each of the acquired images. All of the image modifications produced by the processing had a considerable influence on the evaluated image quality. In conclusion, the control parameters of image processing should be accounted for when characterizing image quality in a consistent way. The results of this study should serve as a baseline for evaluating imaging systems and their imaging characteristics by MTF, NPS, and DQE measurements.

  12. Viking image processing. [digital stereo imagery and computer mosaicking (United States)

    Green, W. B.


    The paper discusses the camera systems capable of recording black and white and color imagery developed for the Viking Lander imaging experiment. Each Viking Lander image consisted of a matrix of numbers with 512 rows and an arbitrary number of columns up to a maximum of about 9,000. Various techniques were used in the processing of the Viking Lander images, including: (1) digital geometric transformation, (2) the processing of stereo imagery to produce three-dimensional terrain maps, and (3) computer mosaicking of distinct processed images. A series of Viking Lander images is included.

  13. Image processing and analysis with graphs theory and practice

    CERN Document Server

    Lézoray, Olivier


    Covering the theoretical aspects of image processing and analysis through the use of graphs in the representation and analysis of objects, Image Processing and Analysis with Graphs: Theory and Practice also demonstrates how these concepts are indispensible for the design of cutting-edge solutions for real-world applications. Explores new applications in computational photography, image and video processing, computer graphics, recognition, medical and biomedical imaging With the explosive growth in image production, in everything from digital photographs to medical scans, there has been a drast

  14. FunImageJ: a Lisp framework for scientific image processing. (United States)

    Harrington, Kyle I S; Rueden, Curtis T; Eliceiri, Kevin W


    FunImageJ is a Lisp framework for scientific image processing built upon the ImageJ software ecosystem. The framework provides a natural functional style of programming, while accounting for the performance requirements of the big data processing commonly encountered in biological image analysis. FunImageJ is freely available as a plugin for Fiji; installation and use instructions are available online. Supplementary data are available at Bioinformatics online.

  15. Survey on Neural Networks Used for Medical Image Processing. (United States)

    Shi, Zhenghao; He, Lifeng; Suzuki, Kenji; Nakamura, Tsuyoshi; Itoh, Hidenori


    This paper aims to present a review of neural networks used in medical image processing. We classify neural networks by their processing goals and the nature of the medical images. The main contributions, advantages, and drawbacks of the methods are mentioned in the paper. Problematic issues of neural network application to medical image processing and an outlook for future research are also discussed. With this survey, we try to answer the following two important questions: (1) What are the major applications of neural networks in medical image processing now and in the near future? (2) What are the major strengths and weaknesses of applying neural networks to medical image processing tasks? We believe this would be very helpful to researchers who are involved in medical image processing with neural network techniques.

  16. Medical image processing on the GPU - past, present and future. (United States)

    Eklund, Anders; Dufort, Paul; Forsberg, Daniel; LaConte, Stephen M


    Graphics processing units (GPUs) are used today in a wide range of applications, mainly because they can dramatically accelerate parallel computing, are affordable and energy efficient. In the field of medical imaging, GPUs are in some cases crucial for enabling practical use of computationally demanding algorithms. This review presents the past and present work on GPU accelerated medical image processing, and is meant to serve as an overview and introduction to existing GPU implementations. The review covers GPU acceleration of basic image processing operations (filtering, interpolation, histogram estimation and distance transforms), the most commonly used algorithms in medical imaging (image registration, image segmentation and image denoising) and algorithms that are specific to individual modalities (CT, PET, SPECT, MRI, fMRI, DTI, ultrasound, optical imaging and microscopy). The review ends by highlighting some future possibilities and challenges. Copyright © 2013 Elsevier B.V. All rights reserved.

  18. Application of image processing technology in yarn hairiness detection


    Zhang, Guohong; Binjie XIN


    Digital image processing technology is one of the new methods for yarn detection; it can realize the digital characterization and objective evaluation of yarn appearance. This paper reviews the current status of development and application of digital image processing technology used for yarn hairiness evaluation, and analyzes and compares the traditional detection methods with this newly developed method. Compared with the traditional methods, the image processing technology based method is...

  19. Optimizing signal and image processing applications using Intel libraries (United States)

    Landré, Jérôme; Truchetet, Frédéric


    This paper presents optimized signal and image processing libraries from Intel Corporation. Intel Performance Primitives (IPP) is a low-level signal and image processing library developed by Intel Corporation to optimize code on Intel processors. Open Computer Vision library (OpenCV) is a high-level library dedicated to computer vision tasks. This article describes the use of both libraries to build flexible and efficient signal and image processing applications.

  20. GStreamer as a framework for image processing applications in image fusion (United States)

    Burks, Stephen D.; Doe, Joshua M.


    Multiple source band image fusion can sometimes be a multi-step process that consists of several intermediate image processing steps. Typically, each of these steps is required to be in a particular arrangement in order to produce a unique output image. GStreamer is an open source, cross platform multimedia framework, and using this framework, engineers at NVESD have produced a software package that allows for real time manipulation of processing steps for rapid prototyping in image fusion.
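    The rearrangeable-pipeline idea described above can be sketched in plain Python (an illustration of the concept only, not NVESD's software or the GStreamer API; the element names and data are invented):

```python
def blur(img):
    # toy smoothing element: mean of each pixel and its immediate neighbours
    out, n = [], len(img)
    for i in range(n):
        window = img[max(0, i - 1):min(n - 1, i + 1) + 1]
        out.append(sum(window) // len(window))
    return out

def threshold(img):
    # toy binarization element
    return [255 if p >= 128 else 0 for p in img]

def run_pipeline(img, steps):
    # each element consumes the previous element's output, as in a GStreamer chain
    for step in steps:
        img = step(img)
    return img

scan = [0, 0, 255, 0, 0]
out_a = run_pipeline(scan, [blur, threshold])   # [0, 0, 0, 0, 0]
out_b = run_pipeline(scan, [threshold, blur])   # [0, 85, 85, 85, 0]
```

Reordering the same elements produces a different output image, which is why rapid rearrangement of processing steps matters for prototyping in image fusion.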

  1. Sliding mean edge estimation. [in digital image processing (United States)

    Ford, G. E.


    A method for determining the locations of the major edges of objects in digital images is presented. The method is based on an algorithm utilizing maximum likelihood concepts. An image line-scan interval is processed to determine if an edge exists within the interval and its location. The proposed algorithm has demonstrated good results even in noisy images.
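    A minimal sketch of the idea behind such a line-scan edge estimate, assuming the interval is modeled as two constant levels plus Gaussian noise (under that assumption, minimizing the squared error is the maximum-likelihood fit; the function name and data are illustrative):

```python
def edge_location(scan):
    # try every split point k; model the scan as mean(left) | mean(right)
    # and keep the split with the smallest total squared error
    best_k, best_sse = None, float("inf")
    for k in range(1, len(scan)):
        left, right = scan[:k], scan[k:]
        m_l = sum(left) / len(left)
        m_r = sum(right) / len(right)
        sse = (sum((p - m_l) ** 2 for p in left)
               + sum((p - m_r) ** 2 for p in right))
        if sse < best_sse:
            best_k, best_sse = k, sse
    return best_k

scan = [10, 12, 11, 9, 50, 52, 49, 51]   # noisy step edge
edge = edge_location(scan)               # edge between indices 3 and 4 -> k = 4
```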

  2. Experiences with digital processing of images at INPE (United States)

    Mascarenhas, N. D. A. (Principal Investigator)


    Four different research experiments with digital image processing at INPE will be described: (1) edge detection by hypothesis testing; (2) image interpolation by finite impulse response filters; (3) spatial feature extraction methods in multispectral classification; and (4) translational image registration by sequential tests of hypotheses.

  3. A color image processing pipeline for digital microscope (United States)

    Liu, Yan; Liu, Peng; Zhuang, Zhefeng; Chen, Enguo; Yu, Feihong


    Digital microscopes have found wide application in fields such as biology and medicine. A digital microscope differs from a traditional optical microscope in that there is no need to observe the sample through an eyepiece directly, because the optical image is projected directly onto the CCD/CMOS camera. However, because of the imaging differences between the human eye and the sensor, a color image processing pipeline is needed for the digital microscope electronic eyepiece to obtain a fine image. The color image pipeline for a digital microscope, comprising the procedures that convert the RAW image data captured by the sensor into a real color image, is of great importance to the quality of the microscopic image. The color pipeline for a digital microscope differs from that of digital still and video cameras because of the specific requirements of microscopic images, which should have high dynamic range, keep the same color as the objects observed, and support a variety of image post-processing operations. In this paper, a new color image processing pipeline is proposed to satisfy the requirements of digital microscope images. The algorithm of each step in the pipeline is designed and optimized with the purpose of obtaining high quality images and accommodating diverse user preferences. With the proposed pipeline implemented on the digital microscope platform, the output color images meet the various image analysis requirements of the medicine and biology fields very well. The major steps of the proposed color imaging pipeline are: black level adjustment, defective pixel removal, noise reduction, linearization, white balance, RGB color correction, tone scale correction and gamma correction.
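    Three representative steps of such a pipeline (black level adjustment, white balance, gamma correction) can be sketched for a single RGB pixel; the black level, gains and gamma below are illustrative values, not a microscope calibration:

```python
def process_pixel(rgb, black=16, wb=(1.2, 1.0, 1.5), gamma=2.2):
    out = []
    for value, gain in zip(rgb, wb):
        v = max(0, value - black)                 # black level adjustment
        v = min(255.0, v * gain)                  # white balance gain, clipped
        v = 255.0 * (v / 255.0) ** (1.0 / gamma)  # gamma correction
        out.append(round(v))
    return out

dark = process_pixel((16, 16, 16))   # sensor black level maps to black: [0, 0, 0]
```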

  4. APPLEPIPS /Apple Personal Image Processing System/ - An interactive digital image processing system for the Apple II microcomputer (United States)

    Masuoka, E.; Rose, J.; Quattromani, M.


    Recent developments related to microprocessor-based personal computers have made low-cost digital image processing systems a reality. Image analysis systems built around these microcomputers provide color image displays for images as large as 256 by 240 pixels in sixteen colors. Descriptive statistics can be computed for portions of an image, and supervised image classification can be obtained. The systems support Basic, Fortran, Pascal, and assembler language. A description is provided of a system which is representative of the new microprocessor-based image processing systems currently on the market. While small systems may never be truly independent of larger mainframes, because they lack 9-track tape drives, the independent processing power of the microcomputers will help alleviate some of the turn-around time problems associated with image analysis and display on the larger multiuser systems.

  5. Breast image pre-processing for mammographic tissue segmentation. (United States)

    He, Wenda; Hogg, Peter; Juette, Arne; Denton, Erika R E; Zwiggelaar, Reyer


    During mammographic image acquisition, a compression paddle is used to even the breast thickness in order to obtain optimal image quality. Clinical observation has indicated that some mammograms may exhibit abrupt intensity change and low visibility of tissue structures in the breast peripheral areas. Such appearance discrepancies can affect image interpretation and may not be desirable for computer aided mammography, leading to incorrect diagnosis and/or detection which can have a negative impact on sensitivity and specificity of screening mammography. This paper describes a novel mammographic image pre-processing method to improve image quality for analysis. An image selection process is incorporated to better target problematic images. The processed images show improved mammographic appearances not only in the breast periphery but also across the mammograms. Mammographic segmentation and risk/density classification were performed to facilitate a quantitative and qualitative evaluation. When using the processed images, the results indicated more anatomically correct segmentation in tissue specific areas, and subsequently better classification accuracies were achieved. Visual assessments were conducted in a clinical environment to determine the quality of the processed images and the resultant segmentation. The developed method has shown promising results. It is expected to be useful in early breast cancer detection, risk-stratified screening, and aiding radiologists in the process of decision making prior to surgery and/or treatment. Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. Using quantum filters to process images of diffuse axonal injury (United States)

    Pineda Osorio, Mateo


    Images corresponding to diffuse axonal injury (DAI) are processed using several quantum filters, such as Hermite, Weibull and Morse. Diffuse axonal injury is a particular, common and severe case of traumatic brain injury (TBI). DAI involves global damage to brain tissue on a microscopic scale and causes serious neurologic abnormalities. New imaging techniques provide excellent images showing the cellular damage related to DAI. These images can be processed with quantum filters, which achieve high resolution of dendritic and axonal structures in both normal and pathological states. Using the Laplacian operators from the new quantum filters, excellent edge detectors for neurofiber resolution are obtained. Quantum processing of DAI images is carried out using computer algebra, specifically Maple. The construction of quantum filter plugins is proposed as a future research line; these could be incorporated into the ImageJ software package, making their use simpler for medical personnel.
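    The classical 4-neighbour discrete Laplacian, the building block that such Laplacian-based edge detectors refine, can be sketched as follows (this is the standard operator, not the quantum-filter Laplacians derived in the paper):

```python
def laplacian(img):
    # out[y][x] = sum of the 4 neighbours - 4 * centre; zero on the border
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = (img[y - 1][x] + img[y + 1][x]
                         + img[y][x - 1] + img[y][x + 1]
                         - 4 * img[y][x])
    return out

img = [[0, 0, 0, 0],
       [0, 0, 0, 0],
       [0, 0, 9, 0],
       [0, 0, 0, 0]]
lap = laplacian(img)   # strong response at the isolated bright pixel
```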

  7. Advances in low-level color image processing

    CERN Document Server

    Smolka, Bogdan


    Color perception plays an important role in object recognition and scene understanding both for humans and intelligent vision systems. Recent advances in digital color imaging and computer hardware technology have led to an explosion in the use of color images in a variety of applications including medical imaging, content-based image retrieval, biometrics, watermarking, digital inpainting, remote sensing, visual quality inspection, among many others. As a result, automated processing and analysis of color images has become an active area of research, to which the large number of publications of the past two decades bears witness. The multivariate nature of color image data presents new challenges for researchers and practitioners as the numerous methods developed for single channel images are often not directly applicable to multichannel  ones. The goal of this volume is to summarize the state-of-the-art in the early stages of the color image processing pipeline.

  8. Topics in medical image processing and computational vision

    CERN Document Server

    Jorge, Renato


      The sixteen chapters included in this book were written by invited experts of international recognition and address important issues in Medical Image Processing and Computational Vision, including: Object Recognition, Object Detection, Object Tracking, Pose Estimation, Facial Expression Recognition, Image Retrieval, Data Mining, Automatic Video Understanding and Management, Edges Detection, Image Segmentation, Modelling and Simulation, Medical thermography, Database Systems, Synthetic Aperture Radar and Satellite Imagery.   Different applications are addressed and described throughout the book, comprising: Object Recognition and Tracking, Facial Expression Recognition, Image Database, Plant Disease Classification, Video Understanding and Management, Image Processing, Image Segmentation, Bio-structure Modelling and Simulation, Medical Imaging, Image Classification, Medical Diagnosis, Urban Areas Classification, Land Map Generation.   The book brings together the current state-of-the-art in the various mul...

  9. Image Processing on Morphological Traits of Grape Germplasm


    Shiraishi, Mikio; Shiraishi, Shinichi; Kurushima, Takashi


    A method of image processing of grape plants was developed to make the description of morphological traits more accurate and effective. A plant image was taken with a still video camera and displayed through digital-to-analog conversion. A high-quality image was obtained at a horizontal resolution of 500 TV lines, sufficient in particular to assess the density of prostrate hairs between mature leaf veins (lower surface). The analog image was stored on an optical disk to preserve it semipermanentl...

  10. Advances and applications of optimised algorithms in image processing

    CERN Document Server

    Oliva, Diego


    This book presents a study of the use of optimization algorithms in complex image processing problems. The problems selected explore areas ranging from the theory of image segmentation to the detection of complex objects in medical images. Furthermore, the concepts of machine learning and optimization are analyzed to provide an overview of the application of these tools in image processing. The material has been compiled from a teaching perspective. Accordingly, the book is primarily intended for undergraduate and postgraduate students of Science, Engineering, and Computational Mathematics, and can be used for courses on Artificial Intelligence, Advanced Image Processing, Computational Intelligence, etc. Likewise, the material can be useful for research from the evolutionary computation, artificial intelligence and image processing co.

  11. [A novel image processing and analysis system for medical images based on IDL language]. (United States)

    Tang, Min


    Medical image processing and analysis systems, which are of great value in medical research and clinical diagnosis, have been a focal field in recent years. Interactive Data Language (IDL) has a vast library of built-in math, statistics, image analysis and information processing routines; it has therefore become ideal software for interactive analysis and visualization of two-dimensional and three-dimensional scientific datasets. A methodology is proposed to design a novel image processing and analysis system for medical images based on IDL. There are five functional modules in this system: Image Preprocessing, Image Segmentation, Image Reconstruction, Image Measurement and Image Management. Experimental results demonstrate that this system is effective and efficient, and that it has the advantages of extensive applicability, friendly interaction, convenient extension and favorable transplantation.

  12. Pyramidal Image-Processing Code For Hexagonal Grid (United States)

    Watson, Andrew B.; Ahumada, Albert J., Jr.


    Algorithm based on processing of information on intensities of picture elements arranged in regular hexagonal grid. Called "image pyramid" because image information at each processing level arranged in hexagonal grid having one-seventh number of picture elements of next lower processing level, each picture element derived from hexagonal set of seven nearest-neighbor picture elements in next lower level. At lowest level, fine-resolution of elements of original image. Designed to have some properties of image-coding scheme of primate visual cortex.

  13. The operation technology of realtime image processing system (Datacube)

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Jai Wan; Lee, Yong Bum; Lee, Nam Ho; Choi, Young Soo; Park, Soon Yong; Park, Jin Seok


    In this project, a Sparc VME-based MaxSparc system, running the Solaris operating environment, was selected as the dedicated image processing hardware for robot vision applications. In this report, the operation of the Datacube MaxSparc system, a high performance realtime image processing hardware platform, is systematized. Image flow example programs for running the MaxSparc system are studied and analyzed, and the state of the art of Datacube system utilization is reviewed. For the next phase, an advanced realtime image processing platform for robot vision applications is going to be developed. (author). 19 refs., 71 figs., 11 tabs.

  14. Vacuum Switches Arc Images Pre–processing Based on MATLAB

    Directory of Open Access Journals (Sweden)

    Huajun Dong


    Full Text Available In order to filter out noise, enhance characteristic details and improve the visual effects of Vacuum Switch Arc (VSA) images, the VSA images were pre-processed in MATLAB, including noise removal, edge detection, pseudo-color and false-color processing, and morphological processing. Furthermore, morphological characteristics of the VSA images were extracted, including isopleths of the gray value, arc area and perimeter.
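    The area and perimeter measurements mentioned can be sketched as follows, assuming a simple fixed-threshold segmentation (the threshold and 4-connectivity are illustrative choices, not the paper's MATLAB settings):

```python
def area_perimeter(img, thresh=128):
    h, w = len(img), len(img[0])
    fg = [[1 if img[y][x] >= thresh else 0 for x in range(w)] for y in range(h)]
    area = sum(sum(row) for row in fg)
    perim = 0
    for y in range(h):
        for x in range(w):
            if fg[y][x]:
                # a foreground pixel is on the boundary if any 4-connected
                # neighbour (or the image border) is background
                nbrs = [fg[y - 1][x] if y > 0 else 0,
                        fg[y + 1][x] if y < h - 1 else 0,
                        fg[y][x - 1] if x > 0 else 0,
                        fg[y][x + 1] if x < w - 1 else 0]
                if min(nbrs) == 0:
                    perim += 1
    return area, perim

area, perim = area_perimeter([[255] * 3 for _ in range(3)])  # 3x3 blob: (9, 8)
```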

  15. IPL Processing of the Viking Orbiter Images of Mars (United States)

    Ruiz, R. M.; Elliott, D. A.; Yagi, G. M.; Pomphrey, R. B.; Power, M. A.; Farrell, W., Jr.; Lorre, J. J.; Benton, W. D.; Dewar, R. E.; Cullen, L. E.


    The Viking orbiter cameras returned over 9000 images of Mars during the 6-month nominal mission. Digital image processing was required to produce products suitable for quantitative and qualitative scientific interpretation. Processing included the production of surface elevation data using computer stereophotogrammetric techniques, crater classification based on geomorphological characteristics, and the generation of color products using multiple black-and-white images recorded through spectral filters. The Image Processing Laboratory of the Jet Propulsion Laboratory was responsible for the design, development, and application of the software required to produce these 'second-order' products.

  16. Monitoring Car Drivers' Condition Using Image Processing (United States)

    Adachi, Kazumasa; Yamamto, Nozomi; Yamamoto, Osami; Nakano, Tomoaki; Yamamoto, Shin

    We have developed a car driver monitoring system for measuring drivers' consciousness, with which we aim to reduce car accidents caused by drowsiness of drivers. The system consists of the following three subsystems: an image capturing system with a pulsed infrared CCD camera, a system for detecting blinking waveform by the images using a neural network with which we can extract images of face and eye areas, and a system for measuring drivers' consciousness analyzing the waveform with a fuzzy inference technique and others. The third subsystem extracts three factors from the waveform first, and analyzed them with a statistical method, while our previous system used only one factor. Our experiments showed that the three-factor method we used this time was more effective to measure drivers' consciousness than the one-factor method we described in the previous paper. Moreover, the method is more suitable for fitting parameters of the system to each individual driver.

  17. Interactive Digital Image Processing Investigation. Phase II. (United States)


    Surviving fragments of this report describe the ITRES program: its control flow, and the subroutine ACUSTS, which is called for every field to accumulate field statistics and then total-image statistics.

  18. Digital image sequence processing, compression, and analysis

    CERN Document Server

    Reed, Todd R



  19. The vision guidance and image processing of AGV (United States)

    Feng, Tongqing; Jiao, Bin


    Firstly, the principle of AGV vision guidance is introduced, and the deviation and deflection angle are measured in the image coordinate system. The visual guidance image processing platform is then described. Because the AGV guidance image contains considerable noise, the image is smoothed with a statistical sorting filter. Because the sampled guidance images have different optimal threshold segmentation points, two-dimensional maximum entropy image segmentation is used to address this problem. We extract the foreground image in the target band with a contour area calculation method and obtain the centre line with a least squares fitting algorithm. With the help of the image and physical coordinates, we can obtain the guidance information.
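    The final least-squares centre-line fit can be sketched directly from the normal equations, fitting x = a*y + b through per-row centre points (the sample points are illustrative):

```python
def fit_line(points):
    # closed-form least squares for x = a*y + b over (y, x) samples
    n = len(points)
    sy = sum(y for y, x in points)
    sx = sum(x for y, x in points)
    syy = sum(y * y for y, x in points)
    syx = sum(y * x for y, x in points)
    a = (n * syx - sy * sx) / (n * syy - sy * sy)
    b = (sx - a * sy) / n
    return a, b

centres = [(0, 10), (1, 12), (2, 14), (3, 16)]  # centre drifts 2 px per row
a, b = fit_line(centres)                        # a = 2.0, b = 10.0
```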

  20. Detection of optimum maturity of maize using image processing and ...

    African Journals Online (AJOL)

    ... green colorations of the maize leaves at maturity was used. Different color features were extracted from the image processing system (MATLAB) and used as inputs to the artificial neural network that classify different levels of maturity. Keywords: Maize, Maturity, CCD Camera, Image Processing, Artificial Neural Network ...

  1. Image Processing In Laser-Beam-Steering Subsystem (United States)

    Lesh, James R.; Ansari, Homayoon; Chen, Chien-Chung; Russell, Donald W.


    Conceptual design of image-processing circuitry developed for proposed tracking apparatus described in "Beam-Steering Subsystem For Laser Communication" (NPO-19069). In proposed system, desired frame rate achieved by "windowed" readout scheme in which only pixels containing and surrounding two spots read out and others skipped without being read. Image data processed rapidly and efficiently to achieve high frequency response.

  2. [Filing and processing systems of ultrasonic images in personal computers]. (United States)

    Filatov, I A; Bakhtin, D A; Orlov, A V


    The paper covers the software pattern for the ultrasonic image filing and processing system. The system records images on a computer display in real time or still, processes them by local filtration techniques, makes different measurements and stores the findings in the graphic database. It is stressed that the database should be implemented as a network version.

  3. Digital image processing for two-phase flow

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jae Young; Lim, Jae Yun [Cheju National University, Cheju (Korea, Republic of); No, Hee Cheon [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)


    A photographic method to measure the key parameters of two-phase flow is realized by using a digital image processing technique. An 8 bit gray level and 256 x 256 pixels are used to generate the image data, which is processed to obtain the parameters of two-phase flow. It is observed that the key parameters can be identified by treating data obtained with the digital image processing technique.
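    A sketch of the kind of measurement such a technique enables: with an 8-bit grayscale frame, classify each pixel by a fixed threshold and report the area void fraction (the threshold and the bright-pixels-are-gas convention are assumptions for illustration):

```python
def void_fraction(img, thresh=128):
    gas = total = 0
    for row in img:
        for p in row:
            total += 1
            if p >= thresh:   # bright pixels taken to be the gas phase
                gas += 1
    return gas / total

frame = [[200, 50],
         [220, 40]]
alpha = void_fraction(frame)   # 2 gas pixels out of 4 -> 0.5
```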

  4. Surface Distresses Detection of Pavement Based on Digital Image Processing


    Ouyang, Aiguo; Luo, Chagen; Zhou, Chao


    International audience; Pavement cracking is the main form of early pavement distress. The use of digital photography to record pavement images and subsequent crack detection and classification has undergone continuous improvement over the past decade. Digital image processing has been applied to detect pavement cracks for its advantages of large information content and automatic detection. The applications of digital image processing in pavement crack detection, distresses classificati...



  5. Sugarcane leaf area measurement using image processing method


    Sanjay B Patil; Dr Shrikant K Bodhe


    In order to increase the average sugarcane yield per acre with minimum cost, farmers are adopting precision farming techniques. This paper covers the area measurement of sugarcane leaves based on an image processing method, which is useful for monitoring plant growth, analyzing fertilizer deficiency and environmental stress, and measuring disease severity. In the image processing method, leaf area is calculated through a pixel number statistic. Unit pixels in the same digital images represent the same size...
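    The pixel-counting idea can be sketched as follows, assuming a crude green-dominance rule to segment leaf pixels and a known physical area per pixel from calibration (both are illustrative, not the paper's method details):

```python
def leaf_area_cm2(img, pixel_area_cm2):
    leaf_pixels = 0
    for row in img:
        for r, g, b in row:
            if g > r and g > b:   # crude "green dominates" leaf test
                leaf_pixels += 1
    return leaf_pixels * pixel_area_cm2

img = [[(10, 200, 10), (200, 10, 10)],
       [(20, 180, 30), (240, 240, 240)]]
area = leaf_area_cm2(img, pixel_area_cm2=0.01)   # 2 leaf pixels -> 0.02 cm^2
```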

  6. Future trends in image processing software and hardware (United States)

    Green, W. B.


    JPL image processing applications are examined, considering future trends in fields such as planetary exploration, electronics, astronomy, computers, and Landsat. Attention is given to adaptive search and interrogation of large image data bases, the display of multispectral imagery recorded in many spectral channels, merging data acquired by a variety of sensors, and developing custom large scale integrated chips for high speed intelligent image processing user stations and future pipeline production processors.

  7. Large scale parallel document image processing

    NARCIS (Netherlands)

    van der Zant, Tijn; Schomaker, Lambert; Valentijn, Edwin; Yanikoglu, BA; Berkner, K


    Building a system which allows searching a very large database of document images requires professionalization of hardware and software, e-science and web access. In astrophysics there is ample experience dealing with large data sets due to an increasing number of measurement instruments. The

  8. 8th International Image Processing and Communications Conference

    CERN Document Server


    This book collects a series of research papers in the area of Image Processing and Communications which not only introduce a summary of current technology but also give an outlook on potential future problems in this area. The key objective of the book is to provide a collection of comprehensive references on some recent theoretical developments as well as novel applications in image processing and communications. The book is divided into two parts and presents the proceedings of the 8th International Image Processing and Communications Conference (IP&C 2016) held in Bydgoszcz, Poland, 7-9 September 2016. Part I deals with image processing; a comprehensive survey of different methods of image processing and computer vision is also presented. Part II deals with telecommunications networks and computer networks, and applications in these areas are considered.

  9. Implementing full backtracking facilities for Prolog-based image processing (United States)

    Jones, Andrew C.; Batchelor, Bruce G.


    PIP (Prolog image processing) is a system currently under development at UWCC, designed to support interactive image processing using the Prolog programming language. In this paper we discuss Prolog-based image processing paradigms and present a meta-interpreter developed by the first author, designed to support an approach to image processing in PIP that is more in the spirit of Prolog than was previously possible. This meta-interpreter allows backtracking over image processing operations in a manner transparent to the programmer. Currently, for space efficiency, the programmer needs to indicate in a program over which operations the system may backtrack; however, a number of extensions to the present work are mentioned at the end of the paper, including a more intelligent approach intended to obviate this need, which the present meta-interpreter will provide a basis for investigating in the future.

  10. 6th International Image Processing and Communications Conference

    CERN Document Server


    This book collects a series of research papers in the area of Image Processing and Communications which not only introduce a summary of current technology but also give an outlook on potential future problems in this area. The key objective of the book is to provide a collection of comprehensive references on some recent theoretical developments as well as novel applications in image processing and communications. The book is divided into two parts and presents the proceedings of the 6th International Image Processing and Communications Conference (IP&C 2014) held in Bydgoszcz, 10-12 September 2014. Part I deals with image processing; a comprehensive survey of different methods of image processing and computer vision is also presented. Part II deals with telecommunications networks and computer networks, and applications in these areas are considered.

  11. AstroImageJ: Image Processing and Photometric Extraction for Ultra-precise Astronomical Light Curves (United States)

    Collins, Karen A.; Kielkopf, John F.; Stassun, Keivan G.; Hessman, Frederic V.


    ImageJ is a graphical user interface (GUI) driven, public domain, Java-based, software package for general image processing traditionally used mainly in life sciences fields. The image processing capabilities of ImageJ are useful and extendable to other scientific fields. Here we present AstroImageJ (AIJ), which provides an astronomy specific image display environment and tools for astronomy specific image calibration and data reduction. Although AIJ maintains the general purpose image processing capabilities of ImageJ, AIJ is streamlined for time-series differential photometry, light curve detrending and fitting, and light curve plotting, especially for applications requiring ultra-precise light curves (e.g., exoplanet transits). AIJ reads and writes standard Flexible Image Transport System (FITS) files, as well as other common image formats, provides FITS header viewing and editing, and is World Coordinate System aware, including an automated interface to the web portal for plate solving images. AIJ provides research grade image calibration and analysis tools with a GUI driven approach, and easily installed cross-platform compatibility. It enables new users, even at the level of undergraduate student, high school student, or amateur astronomer, to quickly start processing, modeling, and plotting astronomical image data with one tightly integrated software package.
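    Simple aperture photometry of the kind AIJ performs for differential light curves can be sketched as follows (toy image and radii; AIJ's actual implementation adds centroiding, partial-pixel weighting and other refinements):

```python
def aperture_flux(img, cx, cy, r_ap, r_in, r_out):
    ap_sum, ap_n, bg_vals = 0.0, 0, []
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            d2 = (x - cx) ** 2 + (y - cy) ** 2
            if d2 <= r_ap ** 2:                  # source aperture
                ap_sum += v
                ap_n += 1
            elif r_in ** 2 <= d2 <= r_out ** 2:  # background annulus
                bg_vals.append(v)
    bg = sum(bg_vals) / len(bg_vals)             # per-pixel sky estimate
    return ap_sum - ap_n * bg                    # background-subtracted flux

img = [[10] * 7 for _ in range(7)]   # flat sky of 10 counts
img[3][3] = 110                      # a star at (3, 3)
flux = aperture_flux(img, 3, 3, r_ap=1, r_in=2, r_out=3)   # 100.0
```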

  12. Full Parallax Integral 3D Display and Image Processing Techniques

    Directory of Open Access Journals (Sweden)

    Byung-Gook Lee


    Full Text Available Purpose – Full parallax integral 3D display is one of the promising future displays that provide different perspectives according to viewing direction. In this paper, the authors review recent integral 3D display and image processing techniques for improving performance, such as viewing resolution, viewing angle, etc. Design/methodology/approach – Firstly, to improve the viewing resolution of 3D images in the integral imaging display with a lenslet array, the authors present a 3D integral imaging display with focused mode using time-multiplexed display. Compared with the original integral imaging with focused mode, the authors use electrical masks and the corresponding elemental image set. In this system, the authors can generate resolution-improved 3D images with n×n pixels from each lenslet by using n×n time-multiplexed display. Secondly, a new image processing technique related to elemental image generation for 3D scenes is presented. With the information provided by the Kinect device, the array of elemental images for an integral imaging display is generated. Findings – In their first work, the authors improved the resolution of 3D images by using the time-multiplexing technique, demonstrated on a 24 inch integral imaging system. The authors' method can be applied in practical applications. Next, the proposed method with the Kinect device can gain a competitive advantage over other methods for the capture of integral images of big 3D scenes. The main advantages of fusing the Kinect and the integral imaging concepts are the acquisition speed and the small amount of handled data. Originality/Value – In this paper, the authors review their recent methods related to integral 3D display and image processing techniques. Research type – general review.

  13. SlideJ: An ImageJ plugin for automated processing of whole slide images.

    Directory of Open Access Journals (Sweden)

    Vincenzo Della Mea

    Full Text Available The digital slide, or Whole Slide Image, is a digital image, acquired with specific scanners, that represents a complete tissue sample or cytological specimen at the microscopic level. While whole slide image analysis is recognized as one of the most promising opportunities, the typical size of such images, up to gigapixels, can be very demanding in terms of memory requirements. Thus, while algorithms and tools for processing and analysis of single microscopic field images are available, the size of whole slide images makes the direct use of such tools prohibitive or impossible. In this work a plugin for ImageJ, named SlideJ, is proposed with the objective of seamlessly extending the application of image analysis algorithms implemented in ImageJ for single microscopic field images to whole digital slide analysis. The plugin is complemented by example macros in the ImageJ scripting language to demonstrate its use in concrete situations.
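    The tiling step at the heart of such a plugin can be sketched in a few lines of Python (illustrative only; `tile_grid` is a hypothetical helper, not SlideJ's actual API):

```python
def tile_grid(width, height, tile, overlap=0):
    """Yield (x, y, w, h) tiles covering a width x height image.

    Edge tiles are clipped so the image is covered exactly, and an
    optional overlap margin lets per-tile algorithms see context
    across tile borders.
    """
    if tile <= overlap:
        raise ValueError("tile size must exceed overlap")
    step = tile - overlap
    for y in range(0, height, step):
        for x in range(0, width, step):
            yield (x, y, min(tile, width - x), min(tile, height - y))

# A large slide processed tile by tile: each tile is small enough for a
# single-field algorithm (e.g. an ImageJ plugin) to handle in memory.
tiles = list(tile_grid(1000, 600, tile=512))
```

    With zero overlap the tile areas sum exactly to the image area, so every pixel is processed once; a gigapixel slide becomes a stream of manageable sub-images.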

  14. Deep architecture neural network-based real-time image processing for image-guided radiotherapy. (United States)

    Mori, Shinichiro


    To develop real-time image processing for image-guided radiotherapy, we evaluated several neural network models for use with different imaging modalities, including X-ray fluoroscopic image denoising. Setup images of prostate cancer patients were acquired with two oblique X-ray fluoroscopic units. Two types of residual network were designed: a convolutional autoencoder (rCAE) and a convolutional neural network (rCNN). We changed the convolutional kernel size and number of convolutional layers for both networks, and the number of pooling and upsampling layers for rCAE. Ground-truth images were generated by applying the contrast-limited adaptive histogram equalization (CLAHE) method. Network models were trained to produce, from an unprocessed input image, an output image close in quality to the ground-truth image. For the image denoising evaluation, noisy input images were used for training. More than 6 convolutional layers with convolutional kernels >5×5 improved image quality, but did not allow real-time imaging. After applying a pair of pooling and upsampling layers to both networks, rCAEs with >3 convolutions each and rCNNs with >12 convolutions with a pair of pooling and upsampling layers achieved real-time processing at 30 frames per second (fps) with acceptable image quality. The suggested networks achieved real-time image processing for contrast enhancement and image denoising on a conventional modern personal computer. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
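    CLAHE is a tiled, clip-limited refinement of ordinary histogram equalization. The global, unclipped variant it builds on can be sketched as follows (an illustrative sketch, not the paper's implementation):

```python
def equalize(pixels, levels=256):
    """Global histogram equalization for a flat list of integer pixels.

    CLAHE extends this idea by equalizing per tile and clipping the
    histogram to limit contrast amplification; this sketch is the
    unclipped, whole-image variant.
    """
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    n = len(pixels)
    cdf_min = next(c for c in cdf if c > 0)
    # classic mapping: rescale the cumulative histogram to the full range
    lut = [round((c - cdf_min) / max(n - cdf_min, 1) * (levels - 1))
           for c in cdf]
    return [lut[p] for p in pixels]
```

    Pixel values used by many pixels get stretched apart, which is the contrast-enhancement effect the ground-truth images in the study rely on.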

  15. Optical Processing of Speckle Images with Bacteriorhodopsin for Pattern Recognition (United States)

    Downie, John D.; Tucker, Deanne (Technical Monitor)


    Logarithmic processing of images with multiplicative noise characteristics can be utilized to transform the image into one with an additive noise distribution. This simplifies subsequent image processing steps for applications such as image restoration or correlation for pattern recognition. One particularly common form of multiplicative noise is speckle, for which the logarithmic operation not only produces additive noise, but also makes it of constant variance (signal-independent). We examine the optical transmission properties of some bacteriorhodopsin films here and find them well suited to implement such a pointwise logarithmic transformation optically in a parallel fashion. We present experimental results of the optical conversion of speckle images into transformed images with additive, signal-independent noise statistics using the real-time photochromic properties of bacteriorhodopsin. We provide an example of improved correlation performance in terms of correlation peak signal-to-noise for such a transformed speckle image.
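    The variance-stabilizing effect of the logarithm on multiplicative noise can be demonstrated numerically. The sketch below (pure Python, with lognormal speckle as an assumed noise model) shows that the noise spread scales with the signal before the log and becomes signal-independent after it:

```python
import math
import random

def add_speckle(signal, n=20000, sigma=0.3, seed=0):
    """Simulate n observations of a constant signal under multiplicative
    (lognormal) speckle: observed = signal * noise. The same seed is used
    for every call so comparisons between signal levels are paired."""
    rng = random.Random(seed)
    return [signal * rng.lognormvariate(0.0, sigma) for _ in range(n)]

def std(xs):
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

low, high = add_speckle(10.0), add_speckle(100.0)
# Before the log: the noise standard deviation scales with the signal.
# After the log: the noise is additive with a signal-independent spread.
log_low = [math.log(x) for x in low]
log_high = [math.log(x) for x in high]
```

    With these draws, `std(high)/std(low)` equals the ratio of the signal levels, while `std(log_high)` matches `std(log_low)`: the log turns signal-proportional speckle into constant-variance additive noise, which is what simplifies the subsequent restoration or correlation step.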

  16. Energy-Driven Image Interpolation Using Gaussian Process Regression

    Directory of Open Access Journals (Sweden)

    Lingling Zi


    Full Text Available Image interpolation, as a method of obtaining a high-resolution image from the corresponding low-resolution image, is a classical problem in image processing. In this paper, we propose a novel energy-driven interpolation algorithm employing Gaussian process regression. In our algorithm, each interpolated pixel is predicted by a combination of two information sources: first is a statistical model adopted to mine underlying information, and second is an energy computation technique used to acquire information on pixel properties. We further demonstrate that our algorithm can not only achieve image interpolation, but also reduce noise in the original image. Our experiments show that the proposed algorithm can achieve encouraging performance in terms of image visualization and quantitative measures.

  17. Acquisition and Post-Processing of Immunohistochemical Images. (United States)

    Sedgewick, Jerry


    Augmentation of digital images is almost always a necessity in order to obtain a reproduction that matches the appearance of the original. However, that augmentation can mislead if it is done incorrectly and not within reasonable limits. When procedures are in place for ensuring that originals are archived, and image manipulation steps are reported, scientists not only follow good laboratory practices, but also avoid ethical issues associated with post-processing and protect their labs from any future allegations of scientific misconduct. Also, when procedures are in place for correct acquisition of images, the extent of post-processing is minimized or eliminated. These procedures include white balancing (for brightfield images), keeping tonal values within the dynamic range of the detector, frame averaging to eliminate noise (typically in fluorescence imaging), use of the highest bit depth when a choice is available, flatfield correction, and archiving of the image in a non-lossy format (not JPEG). When post-processing is necessary, the commonly used applications for correction include Photoshop and ImageJ, but a free program (GIMP) can also be used. Corrections to images include scaling the bit depth to higher and lower ranges, removing color casts from brightfield images, setting brightness and contrast, reducing color noise, reducing "grainy" noise, conversion of pure colors to grayscale, conversion of grayscale to colors typically used in fluorescence imaging, correction of uneven illumination (flatfield correction), merging color images (fluorescence), and extending the depth of focus. These corrections are explained in step-by-step procedures in the chapter that follows.
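    Flatfield correction, mentioned above, divides out the fixed illumination and sensitivity pattern recorded in a flat image. A minimal sketch (hypothetical `flatfield_correct` helper, assuming nested-list images and an optional scalar dark level):

```python
def flatfield_correct(raw, flat, dark=0.0):
    """Flat-field correction for a 2-D image given as nested lists.

    corrected = (raw - dark) / (flat - dark) * mean(flat - dark),
    which removes fixed illumination/sensitivity patterns while
    preserving the overall intensity scale.
    """
    rows, cols = len(raw), len(raw[0])
    gain = [[flat[i][j] - dark for j in range(cols)] for i in range(rows)]
    mean_gain = sum(sum(r) for r in gain) / (rows * cols)
    return [[(raw[i][j] - dark) / gain[i][j] * mean_gain
             for j in range(cols)] for i in range(rows)]
```

    A uniformly lit scene imaged through an uneven optical path comes out flat again after correction, which is exactly the "correction of uneven illumination" step listed in the abstract.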

  18. Entropy-Based Block Processing for Satellite Image Registration

    Directory of Open Access Journals (Sweden)

    Ikhyun Lee


    Full Text Available Image registration is an important task in many computer vision applications such as fusion systems, 3D shape recovery and earth observation. Registering satellite images is particularly challenging and time-consuming due to limited resources and large image sizes. In such scenarios, state-of-the-art image registration methods such as the scale-invariant feature transform (SIFT) may not be suitable due to their high processing time. In this paper, we propose an algorithm based on block processing via entropy to register satellite images. The performance of the proposed method is evaluated using different real images. The comparative analysis shows that it not only reduces the processing time but also enhances the accuracy.
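    The idea of selecting information-rich blocks by entropy can be sketched as follows (an illustrative reading of the approach, with hypothetical helper names, not the authors' code):

```python
import math

def entropy(values, levels=256):
    """Shannon entropy (bits) of a list of integer grey levels."""
    hist = [0] * levels
    for v in values:
        hist[v] += 1
    n = len(values)
    return -sum((h / n) * math.log2(h / n) for h in hist if h)

def high_entropy_blocks(img, bs, threshold):
    """Return top-left corners of bs x bs blocks whose entropy exceeds
    the threshold; only these information-rich blocks would be passed
    to the (expensive) feature-matching stage of registration."""
    out = []
    for i in range(0, len(img) - bs + 1, bs):
        for j in range(0, len(img[0]) - bs + 1, bs):
            block = [img[i + di][j + dj]
                     for di in range(bs) for dj in range(bs)]
            if entropy(block) > threshold:
                out.append((i, j))
    return out
```

    Flat regions (sea, desert) have near-zero entropy and are skipped, so the costly matching step only runs on textured blocks, which is where the processing-time saving comes from.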

  19. Gaussian process interpolation for uncertainty estimation in image registration. (United States)

    Wachinger, Christian; Golland, Polina; Reuter, Martin; Wells, William


    Intensity-based image registration requires resampling images on a common grid to evaluate the similarity function. The uncertainty of interpolation varies across the image, depending on the location of resampled points relative to the base grid. We propose to perform Bayesian inference with Gaussian processes, where the covariance matrix of the Gaussian process posterior distribution estimates the uncertainty in interpolation. The Gaussian process replaces a single image with a distribution over images that we integrate into a generative model for registration. Marginalization over resampled images leads to a new similarity measure that includes the uncertainty of the interpolation. We demonstrate that our approach increases the registration accuracy and propose an efficient approximation scheme that enables seamless integration with existing registration methods.
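    The posterior mean and variance that drive this uncertainty estimate come from standard Gaussian process regression. A self-contained 1-D sketch (pure Python, RBF kernel; the function names are hypothetical):

```python
import math

def rbf(a, b, ls=1.0):
    """Squared-exponential (RBF) covariance between two scalar inputs."""
    return math.exp(-(a - b) ** 2 / (2 * ls ** 2))

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k]
                              for k in range(r + 1, n))) / M[r][r]
    return x

def gp_predict(xs, ys, xq, noise=1e-6):
    """GP posterior mean and variance at query point xq given samples
    at grid points xs with values ys."""
    K = [[rbf(a, b) + (noise if i == j else 0.0)
          for j, b in enumerate(xs)] for i, a in enumerate(xs)]
    alpha = solve(K, ys)
    kq = [rbf(xq, a) for a in xs]
    mean = sum(k * w for k, w in zip(kq, alpha))
    v = solve(K, kq)
    var = rbf(xq, xq) - sum(k * w for k, w in zip(kq, v))
    return mean, var
```

    At a sample point the posterior variance collapses toward zero; far from the data it reverts to the prior variance. This location-dependent variance is exactly the interpolation uncertainty that the registration model marginalizes over.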

  20. Digital processing of stereoscopic image pairs. (United States)

    Levine, M. D.


    The problem under consideration is concerned with scene analysis during robot navigation on the surface of Mars. In this mode, the world model of the robot must be continuously updated to include sightings of new obstacles and scientific samples. In order to describe the content of a particular scene, it is first necessary to segment it into known objects. One technique for accomplishing this segmentation is by analyzing the pair of images produced by the stereoscopic cameras mounted on the robot. A heuristic method is presented for determining the range for each point in the two-dimensional scene under consideration. The method is conceptually based on a comparison of corresponding points in the left and right images of the stereo pair. However, various heuristics which are adaptive in nature are used to make the algorithm both efficient and accurate. Examples are given of the use of this so-called range picture for the purpose of scene segmentation.
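    The comparison of corresponding points in the two images can be illustrated with a basic sum-of-absolute-differences (SAD) scanline match plus the stereo range equation Z = f·B/d (a simplified sketch, not the paper's adaptive heuristics):

```python
def best_disparity(left, right, x, win=2, max_d=8):
    """Find the disparity (in pixels) at column x of a scanline by
    minimising the sum of absolute differences (SAD) between a window
    in the left row and shifted windows in the right row."""
    def sad(d):
        return sum(abs(left[x + k] - right[x - d + k])
                   for k in range(-win, win + 1))
    candidates = [d for d in range(max_d + 1)
                  if x - d - win >= 0 and x + win < len(left)]
    return min(candidates, key=sad)

def depth(focal_px, baseline, disparity_px):
    """Triangulated range: Z = f * B / d (same units as the baseline)."""
    return focal_px * baseline / disparity_px

# synthetic scanline: the left view sees the right view shifted by 3 px
right_row = [i * i % 97 for i in range(30)]
left_row = [0, 0, 0] + right_row[:-3]
```

    The heuristics in the paper serve to make exactly this correspondence search efficient and robust; once the disparity is known, the range follows directly from the camera geometry.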

  1. Effect of ectomycorrhizae on growth and establishment of sal (Shorea robusta) seedlings in central India

    Directory of Open Access Journals (Sweden)



    Full Text Available Pyasi A, Soni KK, Verma RK. 2013. Effect of ectomycorrhizae on growth and establishment of sal (Shorea robusta) seedlings in central India. Nusantara Bioscience 5: 44-49. The aim of the present study was to develop ectomycorrhizae on sal saplings outside the sal-growing areas. For this purpose sal seedlings were raised at Jabalpur, which is around 80 km away from natural sal forest (Motinala, MP). Seed sowing was done with inoculation of ectomycorrhizal inocula prepared by isolating the fungi from surface-sterilised young basidiocarps of Lycoperdon compactum and Russula michiganensis. The inocula of the ectomycorrhizal fungi were prepared on wheat grains treated with gypsum. The synthesis of ectomycorrhizae was observed in the saplings planted in the experimental field at Jabalpur, with production of basidiocarps of Lycoperdon compactum near the saplings. The mycorrhized saplings also showed higher growth indices.

  2. An application of image processing techniques in computed tomography image analysis

    DEFF Research Database (Denmark)

    McEvoy, Fintan


    An estimate of the thickness of subcutaneous adipose tissue at differing positions around the body was required in a study examining body composition. To eliminate human error associated with the manual placement of markers for measurements and to facilitate the collection of data from a large...... number of animals and image slices, automation of the process was desirable. The open-source and free image analysis program ImageJ was used. A macro procedure was created that provided the required functionality. The macro performs a number of basic image processing procedures. These include an initial...... process designed to remove the scanning table from the image and to center the animal in the image. This is followed by placement of a vertical line segment from the mid point of the upper border of the image to the image center. Measurements are made between automatically detected outer and inner...

  3. Image processing based detection of lung cancer on CT scan images (United States)

    Abdillah, Bariqi; Bustamam, Alhadi; Sarwinda, Devvi


    In this paper, we implement and analyze an image processing method for the detection of lung cancer. Image processing techniques are widely used in several medical problems for picture enhancement in the detection phase to support early medical treatment. In this research we propose a detection method for lung cancer based on image segmentation. Image segmentation is an intermediate-level operation in image processing. Marker-controlled watershed and region-growing approaches are used to segment the CT scan images. The detection phases consist of image enhancement using a Gabor filter, image segmentation, and feature extraction. The experimental results show the effectiveness of our approach: the best approach for detecting the main features is the watershed-with-masking method, which has high accuracy and is robust.
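    Region growing, one of the two segmentation approaches mentioned, can be sketched as a breadth-first flood fill from a seed pixel (illustrative; the tolerance criterion and 4-connectivity are assumptions, not the paper's exact settings):

```python
from collections import deque

def region_grow(img, seed, tol):
    """Grow a 4-connected region from a seed pixel, accepting neighbours
    whose intensity differs from the seed intensity by at most tol."""
    rows, cols = len(img), len(img[0])
    base = img[seed[0]][seed[1]]
    region, frontier = {seed}, deque([seed])
    while frontier:
        i, j = frontier.popleft()
        for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            if (0 <= ni < rows and 0 <= nj < cols
                    and (ni, nj) not in region
                    and abs(img[ni][nj] - base) <= tol):
                region.add((ni, nj))
                frontier.append((ni, nj))
    return region
```

    Seeded inside a candidate nodule, the region stops growing at the intensity boundary, giving the segmented blob whose shape and size features feed the later detection stage.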

  4. Color error in the digital camera image capture process. (United States)

    Penczek, John; Boynton, Paul A; Splett, Jolene D


    The color error in images taken by digital cameras is evaluated with respect to its sensitivity to the image capture conditions. A parametric study was conducted to investigate the dependence of image color error on camera technology, illumination spectra, and lighting uniformity. The measurement conditions were selected to simulate the variation that might be expected in typical telemedicine situations. Substantial color errors were observed, depending on the measurement conditions. Several image post-processing methods were also investigated for their effectiveness in reducing the color errors. The results of this study quantify the level of color error that may occur in the digital camera image capture process, and provide guidance for improving the color accuracy through appropriate changes in that process and in post-processing.

  5. Imaging Heat and Mass Transfer Processes Visualization and Analysis

    CERN Document Server

    Panigrahi, Pradipta Kumar


    Imaging Heat and Mass Transfer Processes: Visualization and Analysis applies Schlieren and shadowgraph techniques to complex heat and mass transfer processes. Several applications are considered where thermal and concentration fields play a central role. These include vortex shedding and suppression from stationary and oscillating bluff bodies such as cylinders, convection around crystals growing from solution, and buoyant jets. Many of these processes are unsteady and three dimensional. The interpretation and analysis of images recorded are discussed in the text.

  6. Optical image processing by using a photorefractive spatial soliton waveguide

    Energy Technology Data Exchange (ETDEWEB)

    Liang, Bao-Lai [College of Physics Science & Technology, Hebei University, Baoding 071002 (China); Wang, Ying; Zhang, Su-Heng; Guo, Qing-Lin; Wang, Shu-Fang; Fu, Guang-Sheng [College of Physics Science & Technology, Hebei University, Baoding 071002 (China); Simmonds, Paul J. [Department of Physics and Micron School of Materials Science & Engineering, Boise State University, Boise, ID 83725 (United States); Wang, Zhao-Qi [Institute of Modern Optics, Nankai University, Tianjin 300071 (China)


    By combining the photorefractive spatial soliton waveguide of a Ce:SBN crystal with a coherent 4-f system we are able to manipulate the spatial frequencies of an input optical image to perform edge-enhancement and direct component enhancement operations. Theoretical analysis of this optical image processor is presented to interpret the experimental observations. This work provides an approach for optical image processing by using photorefractive spatial solitons. - Highlights: • A coherent 4-f system with the spatial soliton waveguide as spatial frequency filter. • Manipulate the spatial frequencies of an input optical image. • Achieve edge-enhancement and direct component enhancement operations of an optical image.

  7. Study of Wide Swath Synthetic Aperture Ladar Imaging Technology

    Directory of Open Access Journals (Sweden)

    Zhang Keshu


    Full Text Available Combining synthetic-aperture imaging and coherent-light detection technology, the weak-signal detection capacity of Synthetic Aperture Ladar (SAL) reaches the photon level, and the image resolution exceeds the diffraction limit of the telescope, yielding high-resolution images irrespective of range. This paper introduces SAL, including its development path, technical characteristics, and the restrictions on imaging swath. On this basis, we propose integrating a scanning operation into SAL technology to extend its swath. By analyzing the scanning-operation mode and the signal model, the paper argues that the scanning mode will be the developmental trend of SAL technology. The paper also introduces flight demonstrations of SAL and imaging results for remote targets, showing the potential of SAL in long-range, high-resolution, scanning-imaging applications. The technology and theory of the scanning mode of SAL compensate for the defects in swath and operational efficiency of current SAL. This provides a scientific foundation for SAL systems applied to wide-swath, high-resolution earth observation, and for ISAL systems applied to imaging space targets.

  8. High performance image processing of SPRINT

    Energy Technology Data Exchange (ETDEWEB)

    DeGroot, T. [Lawrence Livermore National Lab., CA (United States)


    This talk will describe computed tomography (CT) reconstruction using filtered back-projection on SPRINT parallel computers. CT is a computationally intensive task, typically requiring several minutes to reconstruct a 512x512 image. SPRINT and other parallel computers can be applied to CT reconstruction to reduce computation time from minutes to seconds. SPRINT is a family of massively parallel computers developed at LLNL. SPRINT-2.5 is a 128-node multiprocessor whose performance can exceed twice that of a Cray-Y/MP. SPRINT-3 will be 10 times faster. Described will be the parallel algorithms for filtered back-projection and their execution on SPRINT parallel computers.
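    Back-projection, the core of the reconstruction, smears each 1-D projection back across the image grid, and each projection's contribution is independent of the others, which is what makes the task easy to distribute over parallel nodes. A toy unfiltered sketch using only the 0° and 90° projections (a real filtered back-projection would ramp-filter each projection and use many angles):

```python
def project(img):
    """Row sums (90 deg) and column sums (0 deg) of a 2-D image."""
    rows = [sum(r) for r in img]
    cols = [sum(r[j] for r in img) for j in range(len(img[0]))]
    return rows, cols

def backproject(rows, cols):
    """Smear each 1-D projection back across the grid and sum.
    Each projection's smear is independent, so in a parallel machine
    like SPRINT the per-angle work can be farmed out to separate
    processors and the partial images summed at the end."""
    return [[rows[i] + cols[j] for j in range(len(cols))]
            for i in range(len(rows))]

img = [[0.0] * 5 for _ in range(5)]
img[2][3] = 9.0                      # single bright pixel
bp = backproject(*project(img))
peak = max((v, i, j) for i, r in enumerate(bp) for j, v in enumerate(r))
```

    Even with only two angles, the back-projected image peaks at the original bright pixel; adding filtering and many more angles sharpens this peak into a faithful reconstruction.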

  9. Detecting jaundice by using digital image processing (United States)

    Castro-Ramos, J.; Toxqui-Quitl, C.; Villa Manriquez, F.; Orozco-Guillen, E.; Padilla-Vivanco, A.; Sánchez-Escobar, JJ.


    When strong jaundice is present, babies or adults are usually subjected to a clinical exam such as a "serum bilirubin" test, which can be traumatic for patients. Jaundice often accompanies liver disease such as hepatitis or liver cancer. In order to avoid additional trauma, we propose to detect jaundice (icterus) in newborns or adults using a pain-free method. By acquiring digital color images of the palms, soles and forehead, we analyze RGB attributes and diffuse reflectance spectra as the parameters to characterize patients with or without jaundice, and we correlate those parameters with the level of bilirubin. By applying a support vector machine we distinguish between healthy and sick patients.
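    A drastically simplified version of the colour analysis can be sketched as a scalar "yellowness" feature plus a threshold (purely illustrative: the feature, the threshold value, and the helper names are assumptions, and the study itself correlates RGB and reflectance features with bilirubin and classifies with a support vector machine):

```python
def yellowness(r, g, b):
    """A simple yellowness score for an 8-bit RGB skin sample: yellow
    skin shows high red and green with comparatively little blue."""
    return ((r + g) / 2.0 - b) / 255.0

def flag_jaundice(samples, threshold=0.25):
    """Flag a patient when the mean yellowness of the sampled skin
    patches (palm, sole, forehead) exceeds a hypothetical threshold."""
    score = sum(yellowness(*s) for s in samples) / len(samples)
    return score > threshold
```

    In practice a learned classifier replaces the fixed threshold, but the underlying signal is the same colour shift this score captures.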

  10. Poisson point processes imaging, tracking, and sensing

    CERN Document Server

    Streit, Roy L


    This overview of non-homogeneous and multidimensional Poisson point processes and their applications features mathematical tools and applications from emission- and transmission-computed tomography to multiple target tracking and distributed sensor detection.
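    A standard way to simulate the non-homogeneous Poisson processes the book covers is thinning (Lewis-Shedler): sample a homogeneous process at the peak rate, then reject points proportionally. A minimal sketch (hypothetical function name):

```python
import random

def inhomogeneous_poisson(rate, rate_max, t_end, seed=0):
    """Sample a non-homogeneous Poisson process on [0, t_end] by
    thinning: draw candidate events from a homogeneous process with
    intensity rate_max, then keep each candidate at time t with
    probability rate(t) / rate_max. Assumes rate(t) <= rate_max."""
    rng = random.Random(seed)
    events, t = [], 0.0
    while True:
        t += rng.expovariate(rate_max)  # homogeneous inter-arrival times
        if t > t_end:
            break
        if rng.random() < rate(t) / rate_max:
            events.append(t)
    return events
```

    The same construction underlies simulation of detector hits in tomography and target-originated measurements in tracking, where the intensity varies over the field of view.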

  11. Automatic construction of image inspection algorithm by using image processing network programming (United States)

    Yoshimura, Yuichiro; Aoki, Kimiya


    In this paper, we discuss a method for automatic programming of inspection image processing. In the industrial field, automatic program generators or expert systems are expected to shorten the period required for developing a new appearance inspection system. So-called "image processing expert systems" have been studied for nearly 30 years. We are convinced of the need to adopt a new idea. Recently, a novel type of evolutionary algorithm, called genetic network programming (GNP), has been proposed. In this study, we use GNP as a method to create inspection image processing logic. GNP develops many directed graph structures and shows an excellent ability to formulate complex problems. We have converted this network program model to Image Processing Network Programming (IPNP). IPNP selects an appropriate image processing command based on characteristics of the input image data and the processing log, and generates visual inspection software as a series of image processing commands. Experiments verify that the proposed method is able to create inspection image processing programs. In a basic experiment with 200 test images, the success rate of detection of the target region was 93.5%.

  12. Imaging and Controlling Ultrafast Ionization Processes (United States)

    Schafer, Kenneth


    We describe how the combination of an attosecond pulse train (APT) and a synchronized infrared (IR) laser field can be used to image and control ionization dynamics in atomic systems. In two recent experiments, attosecond pulses were used to create a sequence of electron wave packets (EWPs) near the ionization threshold in helium. In the first experiment [1], the EWPs were created just below the ionization threshold, and the ionization probability was found to vary strongly with the IR/APT delay. Calculations that reproduce the experimental results demonstrate that this ionization control results from interference between transiently bound EWPs created by different pulses in the train. In the second experiment [2], the APT was tailored to produce a sequence of identical EWPs just above the ionization threshold exactly once per laser cycle, allowing us to study a single ionization event stroboscopically. This technique has enabled us to image the coherent electron scattering that takes place when the IR field is sufficiently strong to reverse the initial direction of the electron motion, causing it to re-scatter from its parent ion. [1] P. Johnsson, et al., PRL 99, 233001 (2007). [2] J. Mauritsson, et al., PRL, to appear (2008). In collaboration with A. L'Huillier, J. Mauritsson, P. Johnsson, T. Remetter, E. Mantsen, M. Swoboda, and T. Ruchon.

  13. Processing, analysis, recognition, and automatic understanding of medical images (United States)

    Tadeusiewicz, Ryszard; Ogiela, Marek R.


    This paper presents new ideas on automatic understanding of the semantic content of medical images. The idea under consideration is the next step along a path that starts with capturing images in digital form as two-dimensional data structures, continues through image processing as a tool for enhancing image visibility and readability, then image analysis algorithms for extracting selected features of images (or parts of images, e.g. objects), and ends with algorithms devoted to image classification and recognition. In the paper we try to explain why all the procedures mentioned above cannot give full satisfaction in many important medical problems, where we need to understand the semantic sense of an image, not only describe it in terms of selected features and/or classes. The general idea of automatic image understanding is presented, as well as some remarks about successful applications of such ideas for increasing the potential and performance of computer vision systems dedicated to advanced medical image analysis. This is achieved by means of applying a linguistic description of the picture's merit content. We then use new AI methods to undertake the task of automatic understanding of image semantics in intelligent medical information systems. Successfully obtaining the crucial semantic content of a medical image may contribute considerably to the creation of new intelligent multimedia cognitive medical systems. Thanks to the new idea of cognitive resonance between the stream of data extracted from the image using linguistic methods and expectations taken from the representation of medical knowledge, it is possible to understand the merit content of an image even if the form of the image is very different from any known pattern.

  14. Image Harvest: an open-source platform for high-throughput plant image processing and analysis (United States)

    Knecht, Avi C.; Campbell, Malachy T.; Caprez, Adam; Swanson, David R.; Walia, Harkamal


    High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable for processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is an open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. PMID:27141917

  15. Image Harvest: an open-source platform for high-throughput plant image processing and analysis. (United States)

    Knecht, Avi C; Campbell, Malachy T; Caprez, Adam; Swanson, David R; Walia, Harkamal


    High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable for processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is an open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. © The Author 2016. Published by Oxford University Press on behalf of the Society for Experimental Biology.

  16. Evaluation of clinical image processing algorithms used in digital mammography. (United States)

    Zanca, Federica; Jacobs, Jurgen; Van Ongeval, Chantal; Claus, Filip; Celis, Valerie; Geniets, Catherine; Provost, Veerle; Pauwels, Herman; Marchal, Guy; Bosmans, Hilde


    Screening is the only proven approach to reduce the mortality of breast cancer, but significant numbers of breast cancers remain undetected even when all quality assurance guidelines are implemented. With the increasing adoption of digital mammography systems, image processing may be a key factor in the imaging chain. Although to our knowledge statistically significant effects of manufacturer-recommended image processings have not been previously demonstrated, the subjective experience of our radiologists, that the apparent image quality can vary considerably between different algorithms, motivated this study. This article addresses the impact of five such algorithms on the detection of clusters of microcalcifications. A database of unprocessed (raw) images of 200 normal digital mammograms, acquired with the Siemens Novation DR, was collected retrospectively. Realistic simulated microcalcification clusters were inserted in half of the unprocessed images. All unprocessed images were subsequently processed with five manufacturer-recommended image processing algorithms (Agfa Musica 1, IMS Raffaello Mammo 1.2, Sectra Mamea AB Sigmoid, Siemens OPVIEW v2, and Siemens OPVIEW v1). Four breast imaging radiologists were asked to locate and score the clusters in each image on a five point rating scale. The free-response data were analyzed by the jackknife free-response receiver operating characteristic (JAFROC) method and, for comparison, also with the receiver operating characteristic (ROC) method. JAFROC analysis revealed highly significant differences between the image processings (F = 8.51): image processing strongly impacts the detectability of clusters. Siemens OPVIEW2 and Siemens OPVIEW1 yielded the highest and lowest performances, respectively. ROC analysis of the data also revealed significant differences between the processing but at lower significance (F = 3.47, p = 0.0305) than JAFROC. Both statistical analysis methods revealed that the same six pairs of

  17. Application of image processing technology in yarn hairiness detection

    Directory of Open Access Journals (Sweden)

    Guohong ZHANG


    Full Text Available Digital image processing technology is one of the new methods for yarn detection, which can realize the digital characterization and objective evaluation of yarn appearance. This paper overviews the current status of development and application of digital image processing technology for yarn hairiness evaluation, and analyzes and compares the traditional detection methods with this newly developed method. Compared with the traditional methods, the image processing based method is more objective, fast and accurate, and represents an important development trend in yarn appearance evaluation.

  18. Remote sensing models and methods for image processing

    CERN Document Server

    Schowengerdt, Robert A


    This book is a completely updated, greatly expanded version of the previously successful volume by the author. The Second Edition includes new results and data, and discusses a unified framework and rationale for designing and evaluating image processing algorithms.Written from the viewpoint that image processing supports remote sensing science, this book describes physical models for remote sensing phenomenology and sensors and how they contribute to models for remote-sensing data. The text then presents image processing techniques and interprets them in terms of these models. Spectral, s

  19. 1st International Conference on Computer Vision and Image Processing

    CERN Document Server

    Kumar, Sanjeev; Roy, Partha; Sen, Debashis


    This edited volume contains technical contributions in the field of computer vision and image processing presented at the First International Conference on Computer Vision and Image Processing (CVIP 2016). The contributions are thematically divided based on their relation to operations at the lower, middle and higher levels of vision systems, and their applications. The technical contributions in the areas of sensors, acquisition, visualization and enhancement are classified as related to low-level operations. They discuss various modern topics – reconfigurable image system architecture, Scheimpflug camera calibration, real-time autofocusing, climate visualization, tone mapping, super-resolution and image resizing. The technical contributions in the areas of segmentation and retrieval are classified as related to mid-level operations. They discuss some state-of-the-art techniques – non-rigid image registration, iterative image partitioning, egocentric object detection and video shot boundary detection. Th...

  20. An image-processing analysis of skin textures. (United States)

    Sparavigna, A; Marazzato, R


    This paper discusses an image-processing method applied to skin texture analysis. Considering that the characterisation of human skin texture is a task approached only recently by image processing, our goal is to lay out the benefits of this technique for quantitative evaluation of skin features and localisation of defects. We propose a method based on a statistical approach to image pattern recognition. The results of our statistical calculations on the grey-tone distributions of the images are presented in specific diagrams, the coherence length diagrams. Using the coherence length diagrams, we were able to determine the grain size and anisotropy of skin textures. Maps showing the localisation of defects are also proposed. Depending on the chosen statistical parameters of the grey-tone distribution, several procedures for defect detection can be proposed. Here, we compare the local coherence lengths with their average values. More sophisticated procedures, suggested by clinical experience, can be used to improve the image processing.

  1. Image and Sensor Data Processing for Target Acquisition and Recognition. (United States)


    …a representative set of training images for which the ground truth is known. For each of the targets in these images, the computer will compute the n parameters… …the object, with displacement limited to its width. From the results obtained so far, we have not observed any significant displacement… ADVISORY GROUP FOR AEROSPACE RESEARCH AND DEVELOPMENT (NORTH ATLANTIC TREATY ORGANISATION), AGARD Conference Proceedings No. 290: IMAGE AND SENSOR DATA PROCESSING FOR TARGET ACQUISITION AND RECOGNITION

  2. Processing of hyperspectral medical images applications in dermatology using Matlab

    CERN Document Server

    Koprowski, Robert


    This book presents new methods of analyzing and processing hyperspectral medical images, which can be used in diagnostics, for example for dermatological images. The algorithms proposed are fully automatic and the results obtained are fully reproducible. Their operation was tested on a set of several thousands of hyperspectral images and they were implemented in Matlab. The presented source code can be used without licensing restrictions. This is a valuable resource for computer scientists, bioengineers, doctoral students, and dermatologists interested in contemporary analysis methods.

  3. Fast Transforms in Image Processing: Compression, Restoration, and Resampling

    Directory of Open Access Journals (Sweden)

    Leonid P. Yaroslavsky


    Transform image processing methods are methods that work in the domains of image transforms, such as the Discrete Fourier, Discrete Cosine, Wavelet, and similar transforms. They have proved to be very efficient in image compression, image restoration, image resampling, and geometrical transformations, and can be traced back to the early 1970s. The paper reviews these methods, with emphasis on their comparison and relationships, from the very first steps of transform image compression methods, to adaptive and local adaptive filters for image restoration, and up to “compressive sensing” methods that have gained popularity in the last few years. References are made both to the first publications of the corresponding results and to more recent and more easily available ones. The review has a tutorial character and purpose.
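The core idea of transform-domain compression reviewed here (keep only the significant transform coefficients, discard the rest) can be sketched with a toy DFT example. The synthetic image, the 5% retention rate, and the use of the DFT rather than the DCT are illustrative choices for the sketch, not details taken from the review:

```python
import numpy as np

def compress_dft(img, keep_fraction=0.1):
    """Toy transform-domain compression: keep only the largest-magnitude
    DFT coefficients and reconstruct the image from them."""
    F = np.fft.fft2(img)
    mags = np.abs(F).ravel()
    k = max(1, int(keep_fraction * mags.size))
    thresh = np.sort(mags)[-k]           # magnitude of the k-th largest coefficient
    F_kept = np.where(np.abs(F) >= thresh, F, 0)
    return np.fft.ifft2(F_kept).real

# Synthetic image: one strong cosine plus weak noise; 5% of the
# coefficients carry almost all of the signal energy.
rng = np.random.default_rng(0)
img = np.cos(2 * np.pi * np.arange(64) / 64)[None, :] * 50 + rng.normal(0, 1, (64, 64))
rec = compress_dft(img, keep_fraction=0.05)
rmse = np.sqrt(np.mean((rec - img) ** 2))
```

Because the discarded coefficients hold only noise energy, the reconstruction error stays near the noise floor while 95% of the coefficients are dropped.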

  4. Hyperspectral imaging in medicine: image pre-processing problems and solutions in Matlab. (United States)

    Koprowski, Robert


    The paper presents problems and solutions related to hyperspectral image pre-processing. New methods of preliminary image analysis are proposed. The paper shows problems occurring in Matlab when trying to analyse this type of image. Moreover, new methods are discussed which provide source code in Matlab that can be used in practice without any licensing restrictions. The proposed application and a sample result of hyperspectral image analysis are also presented. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Digital image processing and analysis for activated sludge wastewater treatment. (United States)

    Khan, Muhammad Burhan; Lee, Xue Yong; Nisar, Humaira; Ng, Choon Aun; Yeap, Kim Ho; Malik, Aamir Saeed


    The activated sludge system is generally used in wastewater treatment plants for processing domestic influent. Conventionally, activated sludge wastewater treatment is monitored by measuring physico-chemical parameters such as total suspended solids (TSSol), sludge volume index (SVI) and chemical oxygen demand (COD). For these measurements, tests are conducted in the laboratory, which can take many hours to yield a final value. Digital image processing and analysis offers a better alternative, not only to monitor and characterize the current state of activated sludge but also to predict its future state. The characterization by image processing and analysis is done by correlating the time evolution of parameters extracted by image analysis of flocs and filaments with the physico-chemical parameters. This chapter briefly reviews activated sludge wastewater treatment and the procedures of image acquisition, preprocessing, segmentation and analysis in the specific context of activated sludge wastewater treatment. In the latter part, additional procedures such as z-stacking and image stitching are introduced for wastewater image preprocessing, which have not previously been used in the context of activated sludge. Different preprocessing and segmentation techniques are proposed, along with a survey of imaging procedures reported in the literature. Finally, the image-analysis-based morphological parameters and the correlation of these parameters with regard to monitoring and prediction of activated sludge are discussed. It is observed that image analysis can play a very useful role in the monitoring of activated sludge wastewater treatment plants.
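As a minimal illustration of the segmentation step mentioned above, here is Otsu's global thresholding, a standard baseline that the chapter's techniques would refine; the synthetic "floc" image and all parameter values are invented for the demo, not taken from the chapter:

```python
import numpy as np

def otsu_threshold(img):
    """Otsu's method: pick the grey level that maximises the between-class
    variance of the histogram (a classic segmentation baseline)."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    omega = np.cumsum(p)                   # class-0 probability
    mu = np.cumsum(p * np.arange(256))     # class-0 cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b = np.nan_to_num(sigma_b)       # ignore empty-class endpoints
    return int(np.argmax(sigma_b))

# Synthetic image: dark background with one brighter "floc" blob
rng = np.random.default_rng(0)
img = rng.normal(50, 5, (128, 128))
img[32:64, 32:64] = rng.normal(200, 5, (32, 32))
img = np.clip(img, 0, 255)

t = otsu_threshold(img)
mask = img > t          # binary floc mask for later morphological analysis
```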

  6. A new method of SC image processing for confluence estimation. (United States)

    Soleimani, Sajjad; Mirzaei, Mohsen; Toncu, Dana-Cristina


    Stem cell images are a strong instrument in the estimation of confluency during culturing for therapeutic processes. Various laboratory conditions, such as lighting, cell container support and image acquisition equipment, affect the image quality and, subsequently, the estimation efficiency. This paper describes an efficient image processing method for cell pattern recognition and morphological analysis of images affected by an uneven background. The proposed algorithm for enhancing the image is based on coupling a novel image denoising method through a BM3D filter with an adaptive thresholding technique for correcting the uneven background. This algorithm provides a faster, easier, and more reliable method than manual measurement for the confluency assessment of stem cell cultures. The present scheme proves valid for the prediction of the confluency and growth of stem cells at early stages for tissue engineering in reparatory clinical surgery. The method is capable of processing images of cells that already contain various defects due to either personnel mishandling or microscope limitations, and therefore provides proper information even from the worst original images available. Copyright © 2017 Elsevier Ltd. All rights reserved.
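The paper's pipeline couples BM3D denoising with adaptive thresholding. A rough sketch of the adaptive-thresholding half (local-mean thresholding computed via an integral image, standing in for the authors' exact method) might look like the following; the window radius, offset, and the ramp-plus-square test image are all illustrative:

```python
import numpy as np

def local_mean(img, radius):
    """Mean over a (2*radius+1)^2 window, computed with an integral image."""
    pad = np.pad(img, radius, mode="edge").astype(float)
    ii = np.cumsum(np.cumsum(pad, axis=0), axis=1)
    ii = np.pad(ii, ((1, 0), (1, 0)))          # zero row/col for clean window sums
    w = 2 * radius + 1
    h, wd = img.shape
    s = (ii[w:w + h, w:w + wd] - ii[:h, w:w + wd]
         - ii[w:w + h, :wd] + ii[:h, :wd])
    return s / (w * w)

def adaptive_threshold(img, radius=15, offset=5):
    """Foreground where a pixel exceeds its local mean by `offset`;
    robust to a slowly varying (uneven) background."""
    return img > local_mean(img, radius) + offset

# Uneven background: a bright horizontal ramp, plus a genuinely brighter square
ramp = np.linspace(0, 100, 128)[None, :] * np.ones((128, 1))
img = ramp.copy()
img[40:60, 40:60] += 40
mask = adaptive_threshold(img, radius=15, offset=10)
```

A single global threshold cannot separate the square from the ramp here, while the local-mean rule recovers it cleanly.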

  7. VICAR-DIGITAL image processing system (United States)

    Billingsley, F.; Bressler, S.; Friden, H.; Morecroft, J.; Nathan, R.; Rindfleisch, T.; Selzer, R.


    A computer program corrects various photometric, geometric and frequency-response distortions in pictures. The program converts pictures to a number of elements, with each element's optical density quantized to a numerical value. The translated picture is recorded on magnetic tape in digital form for subsequent processing and enhancement by computer.

  8. Natural image statistics and visual processing

    NARCIS (Netherlands)

    van der Schaaf, Arjen


    The visual system of a human or animal that functions in its natural environment receives huge amounts of visual information. This information is vital for the survival of the organism. In this thesis I follow the hypothesis that evolution has optimised the biological visual system to process the

  9. A study of correlation technique on pyramid processed images

    Indian Academy of Sciences (India)

    The pyramid algorithm is potentially a powerful tool for advanced television image processing and for pattern recognition. An attempt is made to design and develop both hardware and software for a system which performs decomposition and reconstruction of digitized images by implementing the Burt pyramid algorithm.
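A minimal Laplacian-pyramid decomposition and reconstruction in the spirit of the Burt algorithm can be sketched as follows. A 2x2 block average stands in for Burt's 5-tap generating kernel, so this is an illustrative simplification, not the hardware/software system described:

```python
import numpy as np

def down(img):
    """2x downsample by 2x2 block averaging (stand-in for Burt's 5-tap filter)."""
    return img.reshape(img.shape[0] // 2, 2, img.shape[1] // 2, 2).mean(axis=(1, 3))

def up(img):
    """2x upsample by pixel replication."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def build_pyramid(img, levels):
    """Laplacian pyramid: each level stores the detail lost by downsampling."""
    lap, cur = [], img
    for _ in range(levels):
        small = down(cur)
        lap.append(cur - up(small))    # band-pass (detail) level
        cur = small
    lap.append(cur)                    # coarsest Gaussian level
    return lap

def reconstruct(lap):
    cur = lap[-1]
    for detail in reversed(lap[:-1]):
        cur = up(cur) + detail
    return cur

rng = np.random.default_rng(1)
img = rng.random((64, 64))
pyr = build_pyramid(img, 3)
rec = reconstruct(pyr)
```

Reconstruction is exact because each level stores precisely the detail removed by its downsampling step.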

  10. Image processing for drift compensation in fluorescence microscopy

    DEFF Research Database (Denmark)

    Petersen, Steffen; Thiagarajan, Viruthachalam; Coutinho, Isabel


    Fluorescence microscopy is characterized by low background noise, thus a fluorescent object appears as an area of high signal/noise. Thermal gradients may result in apparent motion of the object, leading to a blurred image. Here, we have developed an image processing methodology that may remove/r...

  11. A novel data processing technique for image reconstruction of penumbral imaging (United States)

    Xie, Hongwei; Li, Hongyun; Xu, Zeping; Song, Guzhou; Zhang, Faqiang; Zhou, Lin


    A CT image reconstruction technique was applied to the data processing of penumbral imaging. Compared with traditional processing techniques for penumbral coded-pinhole images, such as Wiener, Lucy-Richardson and blind deconvolution, this approach is entirely new. In this method, the coded-aperture processing method was used for the first time independently of the point spread function of the image diagnostic system. In this way, the technical obstacle in traditional coded-pinhole image processing caused by the uncertainty of the point spread function of the image diagnostic system was overcome. Based on the theoretical study, simulations of penumbral imaging and image reconstruction were carried out and provided fairly good results. In the visible-light experiment, a point source of light was used to irradiate a 5 mm × 5 mm object after diffuse scattering and volume scattering, and penumbral imaging was performed with an aperture size of ~20 mm. Finally, the CT image reconstruction technique was used for image reconstruction and provided a fairly good reconstruction result.

  12. Processed images in human perception: A case study in ultrasound breast imaging

    Energy Technology Data Exchange (ETDEWEB)

    Yap, Moi Hoon [Department of Computer Science, Loughborough University, FH09, Ergonomics and Safety Research Institute, Holywell Park (United Kingdom)]; Edirisinghe, Eran [Department of Computer Science, Loughborough University, FJ.05, Garendon Wing, Holywell Park, Loughborough LE11 3TU (United Kingdom)]; Bez, Helmut [Department of Computer Science, Loughborough University, Room N.2.26, Haslegrave Building, Loughborough University, Loughborough LE11 3TU (United Kingdom)]


    Two main research efforts in early detection of breast cancer include the development of software tools to assist radiologists in identifying abnormalities and the development of training tools to enhance their skills. Medical image analysis systems, widely known as Computer-Aided Diagnosis (CADx) systems, play an important role in this respect. Often it is important to determine whether there is a benefit in including computer-processed images in the development of such software tools. In this paper, we investigate the effects of computer-processed images in improving human performance in ultrasound breast cancer detection (a perceptual task) and classification (a cognitive task). A survey was conducted on a group of expert radiologists and a group of non-radiologists. In our experiments, random test images from a large database of ultrasound images were presented to subjects. In order to gather appropriate formal feedback, questionnaires were prepared to comment on random selections of original images only, and on image pairs consisting of original images displayed alongside computer-processed images. We critically compare and contrast the performance of the two groups according to perceptual and cognitive tasks. From a Receiver Operating Characteristic (ROC) analysis, we conclude that the provision of computer-processed images alongside the original ultrasound images significantly improves the perceptual performance of non-radiologists, but yields only marginal improvements in the perceptual and cognitive tasks of the expert radiologists.

  13. Characterization of Periodically Poled Nonlinear Materials Using Digital Image Processing

    National Research Council Canada - National Science Library

    Alverson, James R


    .... A new approach based on image processing across an entire z+ or z- surface of a poled crystal allows for better quantification of the underlying domain structure and directly relates to device performance...

  14. Application of digital image processing techniques to astronomical imagery 1977 (United States)

    Lorre, J. J.; Lynn, D. J.


    Nine specific techniques, or combinations of techniques, developed for applying digital image processing technology to existing astronomical imagery are described. Photoproducts are included to illustrate the results of each of these investigations.

  15. Mathematical methods in time series analysis and digital image processing

    CERN Document Server

    Kurths, J; Maass, P; Timmer, J


    The aim of this volume is to bring together research directions in theoretical signal and image processing developed rather independently in electrical engineering, theoretical physics, mathematics and the computer sciences. In particular, mathematically justified algorithms and methods, the mathematical analysis of these algorithms and methods, as well as the investigation of connections between methods from time series analysis and image processing are reviewed. An interdisciplinary comparison of these methods, drawing upon common sets of test problems from medicine and the geophysical/environmental sciences, is also addressed. This volume coherently summarizes work carried out in the field of theoretical signal and image processing. It focuses on non-linear and non-parametric models for time series as well as on adaptive methods in image processing.

  16. Parallelism and Scalability in an Image Processing Application

    DEFF Research Database (Denmark)

    Rasmussen, Morten Sleth; Stuart, Matthias Bo; Karlsson, Sven


    parallel programs. This paper investigates parallelism and scalability of an embedded image processing application. The major challenges faced when parallelizing the application were to extract enough parallelism from the application and to reduce load imbalance. The application has limited immediately...

  17. Applications of evolutionary computation in image processing and pattern recognition

    CERN Document Server

    Cuevas, Erik; Perez-Cisneros, Marco


    This book presents the use of efficient Evolutionary Computation (EC) algorithms for solving diverse real-world image processing and pattern recognition problems. It provides an overview of the different aspects of evolutionary methods in order to enable the reader to reach a global understanding of the field and to conduct studies on specific evolutionary techniques related to applications in image processing and pattern recognition. It explains the basic ideas of the proposed applications in a way that can also be understood by readers outside the field. Image processing and pattern recognition practitioners who are not evolutionary computation researchers will appreciate the discussed techniques beyond simple theoretical tools, since they have been adapted to solve significant problems that commonly arise in such areas. On the other hand, members of the evolutionary computation community can learn the way in which image processing and pattern recognition problems can be translated into an...

  18. The genetic audiogenic seizure hamster from Salamanca: The GASH:Sal. (United States)

    Muñoz, Luis J; Carballosa-Gautam, Melissa M; Yanowsky, Kira; García-Atarés, Natividad; López, Dolores E


    The hamster has been previously described as a paroxysmal dystonia model, but our strain is currently recognized as a model of audiogenic seizures (AGS). The first epileptic hamster appeared spontaneously at the University of Valladolid, where it was known as the GPG:Vall line, and was transferred to the University of Salamanca, where a new strain was developed, named GASH:Sal. In auditory brainstem response testing, the GASH:Sal exhibits elevated auditory thresholds that indicate a hearing impairment. Moreover, amplified fragment length polymorphism analysis revealed genetic differences between the susceptible GASH:Sal hamster strain and control Syrian hamsters. The GASH:Sal constitutes an experimental model of reflex epilepsy of audiogenic origin derived from an autosomal recessive disorder. Thus, the GASH:Sal exhibits generalized tonic-clonic seizures, characterized by a short latency period after auditory stimulation, followed by wild running, a convulsive phase, and finally stupor, with origin in the brainstem. The seizure profile of the GASH:Sal is similar to those exhibited by other models of inherited AGS susceptibility; susceptibility decreases after six months of age, but the proneness across generations is maintained. The GASH:Sal can be considered a reliable model of audiogenic seizures, suitable for investigating current antiepileptic pharmaceutical treatments as well as novel therapeutic drugs. This article is part of a Special Issue entitled "Genetic and Reflex Epilepsies, Audiogenic Seizures and Strains: From Experimental Models to the Clinic". Copyright © 2016 Elsevier Inc. All rights reserved.

  19. The Digital Microscope and Its Image Processing Utility

    Directory of Open Access Journals (Sweden)

    Tri Wahyu Supardi


    Many institutions, including high schools, own a large number of analog or ordinary microscopes. These microscopes are used to observe small objects. Unfortunately, object observation on an ordinary microscope requires precision and visual acuity from the user. This paper discusses the development of a high-resolution digital microscope from an analog microscope, including an image processing utility that allows digital microscope users to capture, store and process digital images of the object being observed. The proposed microscope is constructed from hardware components that can be easily found in Indonesia. The image processing software is capable of performing brightness adjustment, contrast enhancement, histogram equalization, scaling and cropping. The proposed digital microscope has a maximum magnification of 1600x, and the image resolution can be varied from 320x240 pixels up to 2592x1944 pixels. The microscope was tested with various objects at a variety of magnifications, and image processing was carried out on the images of the objects. The results showed that the digital microscope and its image processing system were capable of enhancing the observed object and performing other operations in accordance with the user's needs. The digital microscope has eliminated the need for direct observation by the human eye as with the traditional microscope.
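Of the operations listed, histogram equalization is the least obvious. A compact sketch of the standard algorithm for 8-bit greyscale images (not the authors' code; the low-contrast test image is synthetic):

```python
import numpy as np

def equalize(img):
    """Histogram equalisation: remap grey levels through the normalised CDF
    so the output uses the full 0..255 range."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    cdf = hist.cumsum()
    lut = np.round(255 * cdf / cdf[-1]).astype(np.uint8)   # level -> new level
    return lut[img]

# Low-contrast image: values squeezed into [100, 150]
rng = np.random.default_rng(2)
img = rng.integers(100, 151, size=(64, 64)).astype(np.uint8)
out = equalize(img)
```

The mapping is monotone, so image structure is preserved while contrast is stretched.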

  20. Arabidopsis Growth Simulation Using Image Processing Technology

    Directory of Open Access Journals (Sweden)

    Junmei Zhang


    This paper aims to provide a method to represent the virtual Arabidopsis plant at each growth stage. It includes simulating the shape and providing growth parameters. The shape is described with elliptic Fourier descriptors. First, the plant is segmented from the background using chromatic coordinates. From the segmentation result, the outer boundary series is obtained by using a boundary tracking algorithm. Elliptic Fourier analysis is then carried out to extract the coefficients of the contour. The coefficients require less storage than the original contour points and can be used to simulate the shape of the plant. The growth parameters include the total area and the number of leaves of the plant. The total area is obtained from the number of plant pixels and the image calibration result. The number of leaves is derived by detecting the apex of each leaf, which is achieved by using the wavelet transform to identify the local maxima of the distance signal between the contour points and the region centroid. Experimental results show that this method can record the growth stages of the Arabidopsis plant with less data and provides a visual platform for plant growth research.
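The leaf-counting idea (apices as local maxima of the contour-to-centroid distance signal) can be sketched with plain peak-picking in place of the wavelet transform the authors use; the five-lobed test contour is synthetic:

```python
import numpy as np

def count_apices(contour):
    """Count leaf tips as strict local maxima of the distance from each
    contour point to the region centroid (simple peak picking standing in
    for the paper's wavelet-based detection)."""
    c = contour.mean(axis=0)
    d = np.hypot(contour[:, 0] - c[0], contour[:, 1] - c[1])
    prev, nxt = np.roll(d, 1), np.roll(d, -1)   # circular neighbours
    return int(np.sum((d > prev) & (d > nxt)))

# Synthetic five-lobed "rosette" outline: r(theta) = 1 + 0.3*cos(5*theta)
theta = np.arange(0, 2 * np.pi, 2 * np.pi / 360)
r = 1.0 + 0.3 * np.cos(5 * theta)
contour = np.column_stack([r * np.cos(theta), r * np.sin(theta)])
n_leaves = count_apices(contour)
```

On noisy real contours, smoothing (or the paper's wavelet analysis) would be needed before peak picking.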

  1. Parallel Computers for Region-Level Image Processing. (United States)


    It is well known that parallel computers can be used very effectively for image processing at the pixel level, by assigning a processor to each pixel or block of pixels and passing information as necessary between processors whose blocks are adjacent. This paper discusses the use of parallel computers for processing images at the region level, assigning a processor to each region and passing information between processors whose regions are

  2. Digital image processing for the earth resources technology satellite data. (United States)

    Will, P. M.; Bakis, R.; Wesley, M. A.


    This paper discusses the problems of digital processing of the large volumes of multispectral image data that are expected to be received from the ERTS program. Correction of geometric and radiometric distortions is discussed, and a byte-oriented implementation is proposed. CPU timing estimates are given for a System/360 Model 67 and show that a processing throughput of 1000 image sets per week is feasible.

  3. The Digital Microscope and Its Image Processing Utility


    Tri Wahyu Supardi; Agus Harjoko; Sri Hartati


    Many institutions, including high schools, own a large number of analog or ordinary microscopes. These microscopes are used to observe small objects. Unfortunately, object observations on the ordinary microscope require precision and visual acuity of the user. This paper discusses the development of a high-resolution digital microscope from an analog microscope, including the image processing utility, which allows the digital microscope users to capture, store and process the digital images o...

  4. Techniques and software architectures for medical visualisation and image processing


    Botha, C.P.


    This thesis presents a flexible software platform for medical visualisation and image processing, a technique for the segmentation of the shoulder skeleton from CT data and three techniques that make contributions to the field of direct volume rendering. Our primary goal was to investigate the use of visualisation techniques to assist the shoulder replacement process. This motivated the need for a flexible environment within which to test and develop new visualisation and also image processin...

  5. Automated measurement of pressure injury through image processing. (United States)

    Li, Dan; Mathews, Carol


    To develop an image processing algorithm to automatically measure pressure injuries using electronic pressure injury images stored in nursing documentation. Photographing pressure injuries and storing the images in the electronic health record is standard practice in many hospitals. However, manual measurement of pressure injuries is time-consuming, challenging and subject to intra/inter-reader variability given the complexities of the pressure injury and the clinical environment. A cross-sectional algorithm development study. A set of 32 pressure injury images was obtained from a western Pennsylvania hospital. First, we transformed the images from the RGB (i.e. red, green and blue) colour space to the YCbCr colour space to eliminate interference from varying light conditions and skin colours. Second, a probability map, generated by a skin-colour Gaussian model, guided the pressure injury segmentation process using a Support Vector Machine classifier. Third, after segmentation, the reference ruler included in each of the images enabled perspective transformation and determination of pressure injury size. Finally, two nurses independently measured the 32 pressure injury images, and the intraclass correlation coefficient was calculated. An image processing algorithm was developed to automatically measure the size of pressure injuries. Both inter- and intra-rater analyses achieved a good level of reliability. Validation of the size measurement of pressure injuries (1) demonstrates that our image processing algorithm is a reliable approach to monitoring pressure injury progress through clinical pressure injury images and (2) offers new insight into pressure injury evaluation and documentation. Once our algorithm is further developed, clinicians can be provided with an objective, reliable and efficient computational tool for segmentation and measurement of pressure injuries, enabling them to more effectively monitor the healing process of pressure injuries.
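The first step of the pipeline, an RGB-to-YCbCr conversion, can be sketched with the standard ITU-R BT.601 full-range coefficients; the paper does not state which YCbCr variant it uses, so that choice is an assumption of this sketch:

```python
import numpy as np

def rgb_to_ycbcr(img):
    """ITU-R BT.601 full-range RGB -> YCbCr: separates luminance (Y) from
    chrominance (Cb, Cr), which decouples lighting from skin colour."""
    img = img.astype(float)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

# A neutral grey pixel carries no chrominance: Cb = Cr = 128
px = np.array([[[200, 200, 200]]], dtype=np.uint8)
ycc = rgb_to_ycbcr(px)
```

A skin-colour Gaussian model would then be fitted on the (Cb, Cr) channels only, ignoring Y.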

  6. Survey: interpolation methods for whole slide image processing. (United States)

    Roszkowiak, L; Korzynska, A; Zak, J; Pijanowska, D; Swiderska-Chadaj, Z; Markiewicz, T


    Evaluating whole slide images of histological and cytological samples is used in pathology for diagnostics, grading and prognosis. It is often necessary to rescale whole slide images of very large size. Image resizing is one of the most common applications of interpolation. We collect the advantages and drawbacks of nine interpolation methods, and as a result of our analysis, we try to select one interpolation method as the preferred solution. To compare the performance of the interpolation methods, test images were scaled and then rescaled to the original size using the same algorithm. The modified image was compared to the original image in various aspects. The time needed for calculations and the results of quantification performance on modified images were also compared. For evaluation purposes, we used four general test images and 12 specialized biological immunohistochemically stained tissue sample images. The purpose of this survey is to determine which method of interpolation is best for resizing whole slide images so that they can be further processed using quantification methods. As a result, the interpolation method has to be selected depending on the task involving whole slide images. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
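The survey's round-trip protocol (scale down, rescale to the original size, compare with the original) can be sketched with the simplest interpolation method, nearest-neighbour, and PSNR as the comparison metric; the test image and the metric choice are illustrative, not the survey's exact setup:

```python
import numpy as np

def resize_nearest(img, shape):
    """Nearest-neighbour resize: each output pixel copies its nearest source pixel."""
    rows = (np.arange(shape[0]) * img.shape[0] / shape[0]).astype(int)
    cols = (np.arange(shape[1]) * img.shape[1] / shape[1]).astype(int)
    return img[np.ix_(rows, cols)]

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

# Round-trip protocol: shrink 2x, re-enlarge, compare to the original
x = np.linspace(0, 2 * np.pi, 128)
img = (127.5 * (1 + np.sin(x)[None, :] * np.sin(x)[:, None])).astype(np.uint8)
small = resize_nearest(img, (64, 64))
back = resize_nearest(small, (128, 128))
score = psnr(img, back)
```

Swapping in bilinear, bicubic, or spline kernels and re-running the same round trip reproduces the kind of comparison the survey performs.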

  7. Study of gray image pseudo-color processing algorithms (United States)

    Hu, Jinlong; Peng, Xianrong; Xu, Zhiyong

    Gray images contain abundant information, but if the differences between adjacent pixels' intensities are small, the required information cannot be extracted by humans, since humans are more sensitive to color images than to gray images. If gray images are transformed into pseudo-color images, the details of the images become more explicit and targets are recognized more easily. There are two classes of methods (in the frequency domain and in the spatial domain) to realize pseudo-color enhancement of gray images. The first is mainly filtering in the frequency domain; the second comprises the equal-density pseudo-color coding methods, which mainly include density segmentation coding, function transformation and complementary pseudo-color coding. Moreover, there are many other methods to realize pseudo-color enhancement, such as a pixel's self-transformation based on the RGB tri-primaries, pseudo-color coding of phase-modulated images based on the RGB color model, pseudo-color coding of high gray-resolution images, etc. However, the above methods are tailored to particular situations, and the transformations are based on the RGB color space. In order to improve the visual effect, the method based on the RGB color space and pixels' self-transformation is improved in this paper using the HSI color space. Compared with other methods, gray images in ordinary formats can be processed, and many gray images can be transformed into pseudo-color images with 24 bits. The experiment shows that the processed images have abundant levels, consistent with human perception.
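Density segmentation coding, the simplest of the equal-density methods mentioned, can be sketched as follows; the four-colour palette and test ramp are arbitrary choices for the demo:

```python
import numpy as np

def density_slice(gray, palette):
    """Density-segmentation pseudo-colour coding: split the 0..255 grey range
    into equal slices and map each slice to one palette colour."""
    n = len(palette)
    idx = np.minimum((gray.astype(int) * n) // 256, n - 1)   # slice index per pixel
    return np.asarray(palette, dtype=np.uint8)[idx]

palette = [(0, 0, 128), (0, 128, 0), (255, 255, 0), (255, 0, 0)]  # 4 density slices
gray = np.arange(256, dtype=np.uint8).reshape(16, 16)             # test ramp
rgb = density_slice(gray, palette)
```

Function-transformation coding would instead map each grey level through three continuous curves, one per RGB channel.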

  8. Tumor image signatures and habitats: a processing pipeline of multimodality metabolic and physiological images. (United States)

    You, Daekeun; Kim, Michelle M; Aryal, Madhava P; Parmar, Hemant; Piert, Morand; Lawrence, Theodore S; Cao, Yue


    To create tumor "habitats" from the "signatures" discovered from multimodality metabolic and physiological images, we developed a framework of a processing pipeline. The processing pipeline consists of six major steps: (1) creating superpixels as a spatial unit in a tumor volume; (2) forming a data matrix [Formula: see text] containing all multimodality image parameters at superpixels; (3) forming and clustering a covariance or correlation matrix [Formula: see text] of the image parameters to discover major image "signatures;" (4) clustering the superpixels and organizing the parameter order of the [Formula: see text] matrix according to the one found in step 3; (5) creating "habitats" in the image space from the superpixels associated with the "signatures;" and (6) pooling and clustering a matrix consisting of correlation coefficients of each pair of image parameters from all patients to discover subgroup patterns of the tumors. The pipeline was applied to a dataset of multimodality images in glioblastoma (GBM) first, which consisted of 10 image parameters. Three major image "signatures" were identified. The three major "habitats" plus their overlaps were created. To test generalizability of the processing pipeline, a second image dataset from GBM, acquired on the scanners different from the first one, was processed. Also, to demonstrate the clinical association of image-defined "signatures" and "habitats," the patterns of recurrence of the patients were analyzed together with image parameters acquired prechemoradiation therapy. An association of the recurrence patterns with image-defined "signatures" and "habitats" was revealed. These image-defined "signatures" and "habitats" can be used to guide stereotactic tissue biopsy for genetic and mutation status analysis and to analyze for prediction of treatment outcomes, e.g., patterns of failure.
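Steps 2 and 3 of the pipeline (the data matrix of image parameters at superpixels, and the parameter correlation matrix from which "signatures" are clustered) can be sketched on synthetic data; the parameter names are hypothetical, and the clustering step itself is omitted:

```python
import numpy as np

# Step 2: data matrix V (superpixels x parameters) from synthetic values;
# "perfusion", "permeability", "metabolism" are invented parameter names.
rng = np.random.default_rng(4)
n_superpixels = 500
perfusion = rng.random(n_superpixels)
permeability = perfusion + 0.05 * rng.random(n_superpixels)   # correlated pair
metabolism = rng.random(n_superpixels)                        # independent parameter
V = np.column_stack([perfusion, permeability, metabolism])

# Step 3: correlation matrix C of the parameters (columns of V);
# clustering C's rows/columns would group correlated parameters into "signatures".
C = np.corrcoef(V, rowvar=False)
```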

  9. Digital image processing of bone - Problems and potentials (United States)

    Morey, E. R.; Wronski, T. J.


    The development of a digital image processing system for bone histomorphometry and fluorescent marker monitoring is discussed. The system in question is capable of making measurements of UV or light microscope features on a video screen with either video or computer-generated images, and comprises a microscope, low-light-level video camera, video digitizer and display terminal, color monitor, and PDP 11/34 computer. Capabilities demonstrated in the analysis of an undecalcified rat tibia include the measurement of perimeter and total bone area, and the generation of microscope images, false color images, digitized images and contoured images for further analysis. Software development will be based on an existing software library, specifically the mini-VICAR system developed at JPL. It is noted that the potentials of the system in terms of speed and reliability far exceed any problems associated with hardware and software development.

  10. Image-Processing Techniques for the Creation of Presentation-Quality Astronomical Images (United States)

    Rector, Travis A.; Levay, Zoltan G.; Frattare, Lisa M.; English, Jayanne; Pu'uohau-Pummill, Kirk


    The quality of modern astronomical data and the agility of current image-processing software enable the visualization of data in a way that exceeds the traditional definition of an astronomical image. Two developments in particular have led to a fundamental change in how astronomical images can be assembled. First, the availability of high-quality multiwavelength and narrowband data allows for images that do not correspond to the wavelength sensitivity of the human eye, thereby introducing ambiguity in the usage and interpretation of color. Second, many image-processing software packages now use a layering metaphor that allows any number of astronomical data sets to be combined into a color image. With this technique, images with as many as eight data sets have been produced. Each data set is intensity-scaled and colorized independently, creating an immense parameter space that can be used to assemble the image. Since such images are intended for data visualization, scaling and color schemes must be chosen that best illustrate the science. A practical guide is presented on how to use the layering metaphor to generate publication-ready astronomical images from as many data sets as desired. A methodology is also given on how to use intensity scaling, color, and composition to create contrasts in an image that highlight the scientific detail. Examples of image creation are discussed.
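
    The layering metaphor described above, in which each data set is intensity-scaled and colorized independently before being combined, can be sketched in NumPy. The percentile stretch, the hue choices, and the additive blending below are illustrative assumptions, not the authors' exact recipe:

```python
import numpy as np

# Three hypothetical monochrome datasets (e.g. narrowband filters),
# already registered to the same 64x64 pixel grid.
rng = np.random.default_rng(1)
datasets = [rng.random((64, 64)) for _ in range(3)]

def stretch(img, lo_pct=1, hi_pct=99):
    """Independent intensity scaling of one layer (percentile stretch)."""
    lo, hi = np.percentile(img, [lo_pct, hi_pct])
    return np.clip((img - lo) / (hi - lo), 0, 1)

# Each layer gets its own RGB hue; layers are then combined additively
# (one common blending choice in the layering metaphor).
hues = np.array([[1.0, 0.2, 0.1],   # red-ish
                 [0.1, 1.0, 0.2],   # green-ish
                 [0.2, 0.3, 1.0]])  # blue-ish

rgb = np.zeros((64, 64, 3))
for img, hue in zip(datasets, hues):
    rgb += stretch(img)[..., None] * hue
rgb = np.clip(rgb, 0, 1)   # final composite color image
```

    Because every layer is scaled and colorized on its own, the hue and stretch of each data set can be tuned independently while exploring the composite.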

  11. Image processing for improved eye-tracking accuracy (United States)

    Mulligan, J. B.; Watson, A. B. (Principal Investigator)


    Video cameras provide a simple, noninvasive method for monitoring a subject's eye movements. An important concept is that of the resolution of the system, which is the smallest eye movement that can be reliably detected. While hardware systems are available that estimate direction of gaze in real-time from a video image of the pupil, such systems must limit image processing to attain real-time performance and are limited to a resolution of about 10 arc minutes. Two ways to improve resolution are discussed. The first is to improve the image processing algorithms that are used to derive an estimate. Off-line analysis of the data can improve resolution by at least one order of magnitude for images of the pupil. A second avenue by which to improve resolution is to increase the optical gain of the imaging setup (i.e., the amount of image motion produced by a given eye rotation). Ophthalmoscopic imaging of retinal blood vessels provides increased optical gain and improved immunity to small head movements but requires a highly sensitive camera. The large number of images involved in a typical experiment imposes great demands on the storage, handling, and processing of data. A major bottleneck had been the real-time digitization and storage of large amounts of video imagery, but recent developments in video compression hardware have made this problem tractable at a reasonable cost. Images of both the retina and the pupil can be analyzed successfully using a basic toolbox of image-processing routines (filtering, correlation, thresholding, etc.), which are, for the most part, well suited to implementation on vectorizing supercomputers.
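
    The off-line pupil analysis mentioned above gains resolution because averaging over the many pixels of the pupil yields a subpixel position estimate. A minimal sketch, with a synthetic frame and a hypothetical intensity threshold:

```python
import numpy as np

# Synthetic 100x100 frame: a dark pupil on a brighter iris, centered at a
# non-integer position to illustrate subpixel estimation. The intensity
# values and the threshold of 100 are hypothetical.
yy, xx = np.mgrid[0:100, 0:100]
true_cy, true_cx = 48.3, 52.7
frame = np.where((yy - true_cy) ** 2 + (xx - true_cx) ** 2 < 15 ** 2,
                 20.0, 200.0)

# Offline analysis: threshold the dark pupil, then take the centroid of the
# resulting mask. Averaging over many pixels gives a position estimate far
# finer than one pixel.
mask = frame < 100
cy = (yy * mask).sum() / mask.sum()
cx = (xx * mask).sum() / mask.sum()
```

    The recovered centroid lands within a fraction of a pixel of the true pupil center, even though every individual pixel is quantized to an integer position.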

  12. Anomalous diffusion process applied to magnetic resonance image enhancement (United States)

    Senra Filho, A. C. da S.; Garrido Salmon, C. E.; Murta Junior, L. O.


    The diffusion process is widely applied to digital image enhancement, both directly, by introducing the diffusion equation as in the anisotropic diffusion (AD) filter, and indirectly, by convolution as in the Gaussian filter. The anomalous diffusion process (ADP), given by a nonlinear relationship in the diffusion equation and characterized by an anomalous parameter q, is supposed to be consistent with inhomogeneous media. Although the classic diffusion process is widely studied and effective in various image settings, the effectiveness of ADP for image enhancement is still unknown. In this paper we propose anomalous diffusion filters in both isotropic (IAD) and anisotropic (AAD) forms for magnetic resonance imaging (MRI) enhancement. Filters based on a discrete implementation of anomalous diffusion were applied to noisy T2w MRI images (brain, chest, and abdominal) in order to quantify SNR gains, estimating the performance of the proposed anomalous filters when realistic noise is added to those images. Results show that for images containing complex structures, e.g., brain structures, anomalous diffusion presents the highest enhancements when compared to the classical diffusion approach. Furthermore, ADP presented a more effective enhancement for images containing Rayleigh and Gaussian noise. The anomalous filters showed an ability to preserve anatomic edges and an SNR improvement of 26% for brain images, compared to the classical filter. In addition, the AAD and IAD filters showed optimum results for noise distributions that appear in extreme situations in MRI, i.e., in low-SNR images with an approximately Rayleigh noise distribution and in high-SNR images with Gaussian or noncentral χ noise distributions. The AAD and IAD filters showed the best results in the parametric range 1.2 < q < 1.6, suggesting that the anomalous diffusion regime is more suitable for MRI. This study indicates the proposed anomalous filters as promising approaches for qualitative and quantitative MRI enhancement.

  13. Anomalous diffusion process applied to magnetic resonance image enhancement. (United States)

    Senra Filho, A C da S; Salmon, C E Garrido; Murta Junior, L O


    The diffusion process is widely applied to digital image enhancement, both directly, by introducing the diffusion equation as in the anisotropic diffusion (AD) filter, and indirectly, by convolution as in the Gaussian filter. The anomalous diffusion process (ADP), given by a nonlinear relationship in the diffusion equation and characterized by an anomalous parameter q, is supposed to be consistent with inhomogeneous media. Although the classic diffusion process is widely studied and effective in various image settings, the effectiveness of ADP for image enhancement is still unknown. In this paper we propose anomalous diffusion filters in both isotropic (IAD) and anisotropic (AAD) forms for magnetic resonance imaging (MRI) enhancement. Filters based on a discrete implementation of anomalous diffusion were applied to noisy T2w MRI images (brain, chest, and abdominal) in order to quantify SNR gains, estimating the performance of the proposed anomalous filters when realistic noise is added to those images. Results show that for images containing complex structures, e.g., brain structures, anomalous diffusion presents the highest enhancements when compared to the classical diffusion approach. Furthermore, ADP presented a more effective enhancement for images containing Rayleigh and Gaussian noise. The anomalous filters showed an ability to preserve anatomic edges and an SNR improvement of 26% for brain images, compared to the classical filter. In addition, the AAD and IAD filters showed optimum results for noise distributions that appear in extreme situations in MRI, i.e., in low-SNR images with an approximately Rayleigh noise distribution and in high-SNR images with Gaussian or noncentral χ noise distributions. The AAD and IAD filters showed the best results in the parametric range 1.2 < q < 1.6, suggesting that the anomalous diffusion regime is more suitable for MRI. This study indicates the proposed anomalous filters as promising approaches for qualitative and quantitative MRI enhancement.
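
    An isotropic anomalous diffusion filter of the kind discussed above can be sketched as a porous-medium-type iteration in which q = 1 recovers classical linear diffusion. The discretization, time step, iteration count, and q value below are illustrative choices, not necessarily those of the paper:

```python
import numpy as np

def laplacian(u):
    """5-point Laplacian with replicated (Neumann) borders."""
    up = np.pad(u, 1, mode='edge')
    return (up[:-2, 1:-1] + up[2:, 1:-1] +
            up[1:-1, :-2] + up[1:-1, 2:] - 4 * u)

def isotropic_anomalous_diffusion(img, q=1.4, dt=0.1, n_iter=20):
    """Sketch of an isotropic anomalous diffusion filter:
    du/dt = laplacian(u**(2 - q)); q = 1 gives classical diffusion.
    Explicit time stepping; stable here for small dt and u in (0, 1]."""
    u = img.astype(float)
    for _ in range(n_iter):
        u = u + dt * laplacian(u ** (2.0 - q))
    return u

# Hypothetical noisy test image with intensities kept strictly positive.
rng = np.random.default_rng(2)
noisy = np.clip(0.5 + 0.1 * rng.normal(size=(32, 32)), 0.01, 1.0)
smoothed = isotropic_anomalous_diffusion(noisy, q=1.4)
```

    The iteration reduces the noise variance while the nonlinearity in u**(2-q) makes the effective diffusivity intensity-dependent, which is the qualitative difference from a Gaussian filter.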

  14. Integrating digital topology in image-processing libraries. (United States)

    Lamy, Julien


    This paper describes a method to integrate digital topology information in image-processing libraries. This additional information allows a library user to write algorithms respecting topological constraints, for example, a seed fill or a skeletonization algorithm. As digital topology is absent from most image-processing libraries, such constraints cannot otherwise be fulfilled. We describe and give code samples for all the structures necessary for this integration, and show a use case in the form of a homotopic thinning filter inside ITK. The obtained filter can be up to a hundred times as fast as ITK's thinning filter and works for any image dimension. This paper mainly deals with integration within ITK, but the method can be adapted with only minor modifications to other image-processing libraries.

  15. Effects of processing conditions on mammographic image quality. (United States)

    Braeuning, M P; Cooper, H W; O'Brien, S; Burns, C B; Washburn, D B; Schell, M J; Pisano, E D


    Any given mammographic film will exhibit changes in sensitometric response and image resolution as processing variables are altered. Developer type, immersion time, and temperature have been shown to affect the contrast of the mammographic image and thus lesion visibility. The authors evaluated the effect of altering processing variables, including film type, developer type, and immersion time, on the visibility of masses, fibrils, and specks in a standard mammographic phantom. Images of a phantom obtained with two screen types (Kodak Min-R and Fuji) and five film types (Kodak Min-R M, Min-R E, and Min-R H; Fuji UM-MA HC; and DuPont Microvision-C) were processed with five different developer chemicals (Autex SE, DuPont HSD, Kodak RP, Picker 3-7-90, and White Mountain) at four different immersion times (24, 30, 36, and 46 seconds). Processor chemical activity was monitored with sensitometric strips, and developer temperatures were continuously measured. The film images were reviewed by two board-certified radiologists and two physicists with expertise in mammography quality control and were scored based on the visibility of calcifications, masses, and fibrils. Although the differences in the absolute scores were not large, the Kodak Min-R M and Fuji films exhibited the highest scores, and images developed in White Mountain and Autex chemicals exhibited the highest scores. For any film, several processing chemicals may be used to produce images of similar quality. Extended processing may no longer be necessary.

  16. Digital Signal Processing for Medical Imaging Using Matlab

    CERN Document Server

    Gopi, E S


    This book describes medical imaging systems, such as X-ray, computed tomography, MRI, etc., from the point of view of digital signal processing. Readers will see techniques applied to medical imaging such as the Radon transformation, image reconstruction, image rendering, image enhancement and restoration, and more. The book also outlines the physics behind medical imaging required to understand the techniques being described. The presentation is designed to be accessible to beginners who are doing research in DSP for medical imaging. Matlab programs and illustrations are used wherever possible to reinforce the concepts being discussed. The book acts as a "starter kit" for beginners doing research in DSP for medical imaging; uses Matlab programs and illustrations throughout to make the content accessible, particularly with techniques such as the Radon transformation and image rendering; and includes discussion of the basic principles behind the various medical imaging tec...

  17. A comparison of the Latvian translations of "Robinson Crusoe"


    Livdāne, Ieva


    In literary translation the translator is given a certain freedom in order to adapt the text successfully to the target audience, but it is important to preserve the author's style and message. Daniel Defoe's novel 'Robinson Crusoe' is one of the most significant works in world literature, and it has been translated into many languages, including Latvian. The aim of this work is to compare two Latvian translations of Daniel Defoe's novel 'Robinson Crusoe'. The research methods used are a review of the literature and a comparison of the two Latvian translations of the novel...

  18. Salt in young people's diet: assessment and perception of the salt in meals


    Viegas, Cláudia Alexandra Colaço Lourenço


    ABSTRACT: Considering that high blood pressure is one of the greatest risk factors for cardiovascular disease, its association with high salt intake, and the fact that schools are prime environments for acquiring good eating habits and promoting health, the aim of this study was to assess the salt content of school meals and consumers' perception of salty taste. Salt was quantified with a...

  19. An image processing pipeline to detect and segment nuclei in muscle fiber microscopic images. (United States)

    Guo, Yanen; Xu, Xiaoyin; Wang, Yuanyuan; Wang, Yaming; Xia, Shunren; Yang, Zhong


    Muscle fiber images play an important role in the medical diagnosis and treatment of many muscular diseases. The number of nuclei in skeletal muscle fiber images is a key biomarker in the diagnosis of muscular dystrophy. One primary challenge in nuclei segmentation is to correctly separate clustered nuclei. In this article, we developed an image processing pipeline to automatically detect, segment, and analyze nuclei in microscopic images of muscle fibers. The pipeline consists of image pre-processing, identification of isolated nuclei, identification and segmentation of clustered nuclei, and quantitative analysis. Nuclei are initially extracted from the background by using a local Otsu's threshold. Based on analysis of morphological features of the isolated nuclei, including their areas, compactness, and major axis lengths, a Bayesian network is trained and applied to distinguish isolated nuclei from clustered nuclei and artifacts in all the images. Then a two-step refined watershed algorithm is applied to segment the clustered nuclei. After segmentation, the nuclei can be quantified for statistical analysis. Comparing the segmented results with those of manual analysis and an existing technique, we find that our proposed image processing pipeline achieves good performance with high accuracy and precision. The presented image processing pipeline can therefore help biologists increase their throughput and objectivity in analyzing large numbers of nuclei in muscle fiber images. © 2014 Wiley Periodicals, Inc.

  20. Digital image processing for photo-reconnaissance applications (United States)

    Billingsley, F. C.


    Digital image-processing techniques developed for processing pictures from NASA space vehicles are analyzed in terms of enhancement, quantitative restoration, and information extraction. Digital filtering, and the action of a high frequency filter in the real and Fourier domain are discussed along with color and brightness.
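
    The action of a high-frequency filter in the Fourier domain, mentioned above, can be sketched with NumPy's FFT. The cutoff convention (a fraction of the Nyquist frequency) and the test image are illustrative assumptions:

```python
import numpy as np

def fourier_highpass(img, cutoff=0.1):
    """High-pass filter in the Fourier domain: zero out spatial frequencies
    below `cutoff` (expressed as a fraction of the Nyquist frequency), then
    invert. Equivalent to convolving with a high-pass kernel in the real
    domain."""
    F = np.fft.fft2(img)
    fy = np.fft.fftfreq(img.shape[0])[:, None]   # cycles per pixel
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    mask = np.hypot(fy, fx) >= cutoff * 0.5      # 0.5 cycles/pixel = Nyquist
    return np.real(np.fft.ifft2(F * mask))

# A flat background (pure low frequency) plus fine checkerboard detail
# (pure high frequency): the filter should remove the former and keep the latter.
yy, xx = np.indices((32, 32))
flat = np.full((32, 32), 0.7)
detail = 0.1 * ((-1.0) ** (yy + xx))
filtered = fourier_highpass(flat + detail, cutoff=0.2)
```

    After filtering, the constant background is gone while the checkerboard detail passes through essentially unchanged, which is exactly the enhancement behavior a high-pass filter is used for.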

  1. Image processing system performance prediction and product quality evaluation (United States)

    Stein, E. K.; Hammill, H. B. (Principal Investigator)


    The author has identified the following significant results. A new technique for image processing system performance prediction and product quality evaluation was developed. It was entirely objective, quantitative, and general, and should prove useful in system design and quality control. The technique and its application to determination of quality control procedures for the Earth Resources Technology Satellite NASA Data Processing Facility are described.

  2. Digital image processing using parallel computing based on CUDA technology (United States)

    Skirnevskiy, I. P.; Pustovit, A. V.; Abdrashitova, M. O.


    This article describes the expediency of using a graphics processing unit (GPU) for big data processing in the context of digital image processing. It provides a short description of parallel computing technology and its usage in different areas, a definition of image noise, and a brief overview of some noise removal algorithms. It also describes some basic requirements that a noise removal algorithm should meet when applied to computed tomography. It provides a comparison of the performance with and without the GPU, as well as with different percentages of the load shared between CPU and GPU.
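
    The data-parallel character that makes such noise-removal algorithms GPU-friendly can be illustrated in NumPy: each output pixel of a mean filter depends only on its own neighborhood, so on a GPU each pixel would map naturally to one CUDA thread. The kernel size and the test image below are hypothetical:

```python
import numpy as np

def box_mean_filter(img, k=3):
    """k x k mean filter for noise removal. Every output pixel depends only
    on its own neighborhood, so the computation is embarrassingly parallel;
    this NumPy version expresses the same per-pixel operation vectorially."""
    pad = k // 2
    up = np.pad(img.astype(float), pad, mode='edge')
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += up[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

rng = np.random.default_rng(5)
clean = np.full((32, 32), 0.5)
noisy = clean + 0.1 * rng.normal(size=clean.shape)
denoised = box_mean_filter(noisy)
```

    The filter reduces the noise level substantially; in a CUDA implementation the doubly nested loop body would simply become the work of one thread per output pixel.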

  3. Computer image processing - The Viking experience. [digital enhancement techniques (United States)

    Green, W. B.


    Computer processing of digital imagery from the Viking mission to Mars is discussed, with attention given to subjective enhancement and quantitative processing. Contrast stretching and high-pass filtering techniques of subjective enhancement are described; algorithms developed to determine optimal stretch and filtering parameters are also mentioned. In addition, geometric transformations to rectify the distortion of shapes in the field of view and to alter the apparent viewpoint of the image are considered. Perhaps the most difficult problem in quantitative processing of Viking imagery was the production of accurate color representations of Orbiter and Lander camera images.
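
    A contrast stretch of the kind described can be sketched as a linear percentile mapping. The percentile choices here are illustrative, not the mission's actual optimal-stretch algorithm:

```python
import numpy as np

def contrast_stretch(img, lo_pct=2, hi_pct=98, out_max=255):
    """Linear contrast stretch: map the chosen percentile range of the input
    onto the full output range, clipping the tails."""
    lo, hi = np.percentile(img, [lo_pct, hi_pct])
    out = (img.astype(float) - lo) / (hi - lo) * out_max
    return np.clip(out, 0, out_max).astype(np.uint8)

# A flat, low-contrast synthetic frame occupying only a narrow band of levels.
rng = np.random.default_rng(4)
raw = rng.integers(100, 140, size=(32, 32)).astype(np.uint8)
stretched = contrast_stretch(raw)
```

    The narrow input band (levels 100-139) is expanded to span the full 0-255 range, which is the subjective-enhancement effect the abstract describes.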

  4. IDP: Image and data processing (software) in C++

    Energy Technology Data Exchange (ETDEWEB)

    Lehman, S. [Lawrence Livermore National Lab., CA (United States)]


    IDP++ (Image and Data Processing in C++) is a compiled, multidimensional, multi-data-type signal processing environment written in C++. It is being developed within the Radar Ocean Imaging group and is intended as a partial replacement for View. IDP++ takes advantage of the latest object-oriented compiler technology to provide "information hiding." Users need only know C, not C++. Signals are treated like any other variable, with a defined set of operators and functions, in an intuitive manner. IDP++ is being designed for real-time environments where interpreted signal processing packages are less efficient.

  5. Performance Measure as Feedback Variable in Image Processing

    Directory of Open Access Journals (Sweden)

    Ristić Danijela


    Full Text Available This paper extends the view of the image processing performance measure by presenting the use of this measure as an actual value in a feedback structure. The idea behind this is that the control loop built in that way drives the actual feedback value to a given set point. Since the performance measure depends explicitly on the application, the inclusion of feedback structures and the choice of appropriate feedback variables are presented on the example of optical character recognition in an industrial application. Metrics for the quantification of performance at different image processing levels are discussed. The issues that those metrics should address from both the image processing and the control points of view are considered. The performance measures of the individual processing algorithms that form a character recognition system are determined with respect to the overall system performance.

  6. Een onwrikbaar geloof in zijn gelijk. Sal Tas (1905–1976) Journalist van de wereld

    NARCIS (Netherlands)

    Van der Hoeven, Pien


    Book review of Tity de Vries, Een onwrikbaar geloof in zijn gelijk. Sal Tas (1905–1976) Journalist van de wereld. Aspekt, 2015, 465 pp. ISBN 9789461536792. DOI: 10.18146/2213-7653.2017.291

  7. Application of the cycloSal-prodrug approach for improving the biological potential of phosphorylated biomolecules. (United States)

    Meier, C; Balzarini, J


    Pronucleotides represent a promising tool to improve the biological activity of nucleoside analogs in antiviral and cancer chemotherapy. The cycloSal-approach is one of several conceptually different pronucleotide systems, and it can be applied to various nucleoside analogs. A salicyl alcohol is used as a cyclic bifunctional masking unit and is shown to afford a chemically driven release of the particular nucleotide from the lipophilic phosphate triester precursor molecule. A conceptual extension of the cycloSal-approach results in the design of "lock-in"-cycloSal-derivatives. The cycloSal-approach is not restricted to the delivery of bioactive nucleotides but is also useful for the intracellular delivery of hexose-1-phosphates.

  8. Enhancement of structure images of interstellar diamond microcrystals by image processing (United States)

    O'Keefe, Michael A.; Hetherington, Crispin; Turner, John; Blake, David; Freund, Friedemann


    Image-processed high-resolution TEM images of diamond crystals found in oxidized acid residues of carbonaceous chondrites are presented. Two models of the origin of the diamonds are discussed. The model proposed by Lewis et al. (1987) supposes that the diamonds formed under low-pressure conditions, whereas that of Blake et al. (1988) suggests that the diamonds formed in particle-particle collisions behind supernova shock waves. The TEM images of the diamonds presented support the high-pressure model.

  9. Plastic fats from sal, mango and palm oil by lipase catalyzed interesterification


    Shankar Shetty, Umesha; Sunki Reddy, Yella Reddy; Khatoon, Sakina


    Speciality plastic fats with no trans fatty acids, suitable for use in bakery products and as a vanaspati substitute, were prepared by interesterification of blends of palm stearin (PSt) with sal and mango fats using Lipozyme TL IM lipase as the catalyst. The blends containing PSt/sal or PSt/mango showed a short melting range and hence are not suitable as bakery shortenings. Lipase-catalyzed interesterification extended the plasticity or melting range of all the blends. The blends containing a higher proportion of...

  10. Concrete Crack Identification Using a UAV Incorporating Hybrid Image Processing. (United States)

    Kim, Hyunjun; Lee, Junhwa; Ahn, Eunjong; Cho, Soojin; Shin, Myoungsu; Sim, Sung-Han


    Crack assessment is an essential process in the maintenance of concrete structures. In general, concrete cracks are inspected by manual visual observation of the surface, which is intrinsically subjective as it depends on the experience of inspectors. Further, it is time-consuming, expensive, and often unsafe when inaccessible structural members are to be assessed. Unmanned aerial vehicle (UAV) technologies combined with digital image processing have recently been applied to crack assessment to overcome the drawbacks of manual visual inspection. However, identification of crack information in terms of width and length has not been fully explored in the UAV-based applications, because of the absence of distance measurement and tailored image processing. This paper presents a crack identification strategy that combines hybrid image processing with UAV technology. Equipped with a camera, an ultrasonic displacement sensor, and a WiFi module, the system provides images of cracks and the associated working distance from a target structure on demand. The obtained information is subsequently processed by hybrid image binarization to estimate the crack width accurately while minimizing the loss of the crack length information. The proposed system has been shown to successfully measure cracks thicker than 0.1 mm with a maximum length estimation error of 7.3%.
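
    The step that the working-distance measurement enables, converting pixel counts in the binarized image to a physical crack width, can be sketched as follows. All numbers here (field of view per metre of distance, sensor resolution, the toy crack mask) are hypothetical, not the paper's calibration:

```python
import numpy as np

# Hypothetical calibration: a camera whose horizontal field of view grows
# linearly with the working distance reported by the ultrasonic sensor.
working_distance_m = 1.5        # from the ultrasonic displacement sensor
fov_width_m_per_m = 0.4         # horizontal FOV per metre of distance (assumed)
image_width_px = 1920

mm_per_px = working_distance_m * fov_width_m_per_m / image_width_px * 1000.0

# Binarized crack image (True = crack pixel). The width at each row is the
# number of crack pixels across the crack, converted to millimetres.
crack = np.zeros((10, 50), dtype=bool)
crack[:, 24:26] = True          # a 2-pixel-wide vertical crack

widths_px = crack.sum(axis=1)
crack_width_mm = widths_px.mean() * mm_per_px
```

    With these assumed numbers one pixel corresponds to about 0.31 mm, so the scale factor, and hence the width estimate, depends directly on the measured working distance.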

  11. Concrete Crack Identification Using a UAV Incorporating Hybrid Image Processing

    Directory of Open Access Journals (Sweden)

    Hyunjun Kim


    Full Text Available Crack assessment is an essential process in the maintenance of concrete structures. In general, concrete cracks are inspected by manual visual observation of the surface, which is intrinsically subjective as it depends on the experience of inspectors. Further, it is time-consuming, expensive, and often unsafe when inaccessible structural members are to be assessed. Unmanned aerial vehicle (UAV) technologies combined with digital image processing have recently been applied to crack assessment to overcome the drawbacks of manual visual inspection. However, identification of crack information in terms of width and length has not been fully explored in the UAV-based applications, because of the absence of distance measurement and tailored image processing. This paper presents a crack identification strategy that combines hybrid image processing with UAV technology. Equipped with a camera, an ultrasonic displacement sensor, and a WiFi module, the system provides images of cracks and the associated working distance from a target structure on demand. The obtained information is subsequently processed by hybrid image binarization to estimate the crack width accurately while minimizing the loss of the crack length information. The proposed system has been shown to successfully measure cracks thicker than 0.1 mm with a maximum length estimation error of 7.3%.

  12. Digital image processing on a small computer system (United States)

    Danielson, R.


    A minicomputer-based image processing facility provides a relatively low-cost entry point for education about image analysis applications in remote sensing. While a minicomputer has sufficient processing power to produce results quite rapidly for low volumes of small images, it does not have sufficient power to perform CPU- or I/O-bound tasks on large images. A system equipped with a display terminal is ideally suited for interactive tasks. Software procurement is a limiting factor for most end users, and software availability may well be the overriding consideration in selecting a particular hardware configuration. The hardware chosen should be selected to be compatible with the software and with concern for future expansion.

  13. Image processing techniques in 3-D foot shape measurement system (United States)

    Liu, Guozhong; Li, Ping; Wang, Boxiong; Shi, Hui; Luo, Xiuzhi


    A 3-D foot-shape measurement system based on the laser-line-scanning principle was designed, achieving 3-D foot-shape measurements without blind areas and automatic extraction of foot parameters. The paper focuses on the system structure and principle and on the image processing techniques. The key image processing techniques for the 3-D foot shape measurement system include laser stripe extraction, transformation of laser stripe coordinates from the CCD camera image coordinate system to the laser plane coordinate system, assembly of the laser stripes from the eight CCD cameras, and elimination of image noise and disturbance. 3-D foot shape measurement makes it possible to realize custom shoe-making and shows great promise in shoe design, foot orthopaedic treatment, shoe size standardization, and the establishment of a foot database for consumers.

  14. Automatic detection of NIL defects using microscopy and image processing

    KAUST Repository

    Pietroy, David


    Nanoimprint Lithography (NIL) is a promising technology for low-cost and large-scale nanostructure fabrication. The technique is based on a contact molding-demolding process that can produce a number of defects such as incomplete filling, negative patterns, and sticking. In this paper, microscopic imaging combined with a specific processing algorithm is used to numerically detect defects in printed patterns. Results obtained for 1D and 2D imprinted gratings with different microscopic image magnifications are presented. The results are independent of the device that captures the image (optical, confocal, or electron microscope). The use of numerical images makes it possible to automate the detection and to compute a statistical analysis of defects. This method provides a fast analysis of printed gratings and could be used to monitor the production of such structures. © 2013 Elsevier B.V. All rights reserved.

  15. Modular Scanning Confocal Microscope with Digital Image Processing. (United States)

    Ye, Xianjun; McCluskey, Matthew D


    In conventional confocal microscopy, a physical pinhole is placed at the image plane prior to the detector to limit the observation volume. In this work, we present a modular design of a scanning confocal microscope which uses a CCD camera to replace the physical pinhole for materials science applications. Experimental scans were performed on a microscope resolution target, a semiconductor chip carrier, and a piece of etched silicon wafer. The data collected by the CCD were processed to yield images of the specimen. By selecting effective pixels in the recorded CCD images, a virtual pinhole is created. By analyzing the image moments of the imaging data, a lateral resolution enhancement is achieved by using a 20 × / NA = 0.4 microscope objective at 532 nm laser wavelength.
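
    The virtual pinhole idea, selecting effective pixels in each recorded CCD frame, together with the image-moment spot localization, can be sketched as follows. The synthetic Gaussian spot and the pinhole radii are illustrative values:

```python
import numpy as np

def virtual_pinhole_signal(frame, center, radius):
    """Confocal signal at one scan position: sum the frame intensity within
    `radius` pixels of `center` (cy, cx) -- the "virtual pinhole"."""
    yy, xx = np.indices(frame.shape)
    mask = (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2
    return frame[mask].sum()

def centroid(frame):
    """First image moments give the spot position on the CCD."""
    yy, xx = np.indices(frame.shape)
    total = frame.sum()
    return (yy * frame).sum() / total, (xx * frame).sum() / total

# Synthetic 21x21 CCD frame with a Gaussian spot slightly off-center.
yy, xx = np.indices((21, 21))
frame = np.exp(-((yy - 10.4) ** 2 + (xx - 9.6) ** 2) / (2 * 2.0 ** 2))

cy, cx = centroid(frame)
signal_small = virtual_pinhole_signal(frame, (cy, cx), radius=2)
signal_large = virtual_pinhole_signal(frame, (cy, cx), radius=8)
```

    Shrinking the virtual pinhole radius trades collected signal for a smaller observation volume, just as a physical pinhole would, but the choice can be made after acquisition.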

  16. Digital Image Processing Techniques to Create Attractive Astronomical Images from Research Data (United States)

    Rector, T. A.; Levay, Z.; Frattare, L.; English, J.; Pu'uohau-Pummill, K.


    The quality of modern astronomical data, the power of modern computers and the agility of current image processing software enable the creation of high-quality images in a purely digital form that rival the quality of traditional photographic astronomical images. The combination of these technological advancements has created a new ability to make color astronomical images. And in many ways, it has led to a new philosophy towards how to create them. We present a practical guide to generate astronomical images from research data by using powerful image processing programs. These programs use a layering metaphor that allows an unlimited number of astronomical datasets to be combined in any desired color scheme, creating an immense parameter space to be explored using an iterative approach. Several examples of image creation are presented. We also present a philosophy on how to use color and composition to create images that simultaneously highlight the scientific detail within an image and are aesthetically appealing. We advocate an approach that uses visual grammar, defined as the elements which affect the interpretation of an image, to maximize the richness and detail in an image while maintaining scientific accuracy. By properly using visual grammar, one can imply qualities that a two-dimensional image intrinsically cannot show, such as depth, motion and energy. In addition, composition can be used to engage the viewer and keep him or her interested for a longer period of time. The effective use of these techniques can result in a striking image that will effectively convey the science within the image, to scientists and to the public.

  17. Establishing an international reference image database for research and development in medical image processing

    NARCIS (Netherlands)

    Horsch, A.D.; Prinz, M.; Schneider, S.; Sipilä, O; Spinnler, K.; Vallée, J-P; Verdonck-de Leeuw, I; Vogl, R.; Wittenberg, T.; Zahlmann, G.


    INTRODUCTION: The lack of comparability of evaluation results is one of the major obstacles to research and development in Medical Image Processing (MIP). The main reason for this is the usage of different image datasets with different quality, size, and gold standards. OBJECTIVES: Therefore, one of

  18. MATLAB-based Applications for Image Processing and Image Quality Assessment – Part II: Experimental Results

    Directory of Open Access Journals (Sweden)

    L. Krasula


    Full Text Available The paper provides an overview of some possible usages of the software described in Part I. It contains real examples of image quality improvement, distortion simulations, objective and subjective quality assessment, and other ways of image processing that can be obtained with the individual applications.

  19. Image processing tool for automatic feature recognition and quantification

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Xing; Stoddard, Ryan J.


    A system for defining structures within an image is described. The system includes reading of an input file, preprocessing the input file while preserving metadata such as scale information and then detecting features of the input file. In one version the detection first uses an edge detector followed by identification of features using a Hough transform. The output of the process is identified elements within the image.
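
    The detection chain named in the abstract, an edge detector followed by line identification with a Hough transform, can be illustrated with a minimal Hough accumulator. The edge mask here is synthetic (in a real system it would come from the edge-detection step), and the parameterization follows the standard rho-theta form:

```python
import numpy as np

def hough_lines(edge_mask, n_theta=180):
    """Minimal Hough transform for straight lines: every edge pixel votes for
    all (rho, theta) pairs of lines passing through it, with
    rho = x*cos(theta) + y*sin(theta)."""
    h, w = edge_mask.shape
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    diag = int(np.ceil(np.hypot(h, w)))          # max possible |rho|
    acc = np.zeros((2 * diag + 1, n_theta), dtype=int)
    ys, xs = np.nonzero(edge_mask)
    for theta_idx, th in enumerate(thetas):
        rhos = np.round(xs * np.cos(th) + ys * np.sin(th)).astype(int) + diag
        np.add.at(acc, (rhos, theta_idx), 1)     # accumulate votes
    return acc, thetas, diag

# Synthetic edge mask containing one horizontal line at y = 20.
edges = np.zeros((50, 50), dtype=bool)
edges[20, :] = True

acc, thetas, diag = hough_lines(edges)
rho_idx, theta_idx = np.unravel_index(np.argmax(acc), acc.shape)
detected_theta = thetas[theta_idx]   # ~pi/2 for a horizontal line
detected_rho = rho_idx - diag        # ~20
```

    The accumulator peak identifies the line's orientation and offset; in the system described, such peaks are the "identified elements within the image."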

  20. Assessment of banana fruit maturity by image processing technique


    Surya Prabha, D.; J. Satheesh Kumar


    Maturity stage of fresh banana fruit is an important factor that affects the fruit quality during ripening and marketability after ripening. The ability to identify the maturity of fresh banana fruit will be a great support for farmers to optimize the harvesting phase, which helps to avoid harvesting either under-matured or over-matured bananas. This study attempted to use an image processing technique to detect the maturity stage of fresh banana fruit from the color and size values of their images precisely...
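    A minimal sketch of the color-based idea, assuming maturity is judged from the mean green/red balance of peel pixels. The thresholds and stage labels below are hypothetical, not the study's calibrated values.

```python
# Hypothetical maturity classification from the mean green/red balance of
# banana-peel pixels (RGB tuples).  Thresholds are illustrative only.
def maturity_stage(pixels):
    r = sum(p[0] for p in pixels) / len(pixels)
    g = sum(p[1] for p in pixels) / len(pixels)
    ratio = g / (r + 1e-9)          # green dominates in unripe fruit
    if ratio > 1.2:
        return "under-mature"       # mostly green peel
    if ratio > 0.8:
        return "mature"             # yellow: red and green roughly equal
    return "over-mature"            # browning: red dominates

green_peel  = [(60, 160, 40)] * 100     # synthetic unripe sample
yellow_peel = [(220, 210, 40)] * 100    # synthetic ripe sample
print(maturity_stage(green_peel), maturity_stage(yellow_peel))
# → under-mature mature
```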

  1. Detection of pitting corrosion in steel using image processing


    Ghosh, Bidisha; Pakrashi, Vikram; Schoefs, Franck


    This paper presents an image-processing-based method for detecting pitting corrosion in steel structures. High Dynamic Range (HDR) imaging has been carried out in this regard to demonstrate the effectiveness of such relatively inexpensive techniques, which are of immense benefit to the Non-Destructive Testing (NDT) community. The pitting corrosion of a steel sample in a marine environment is successfully detected in this paper using the proposed methodology. It is observed that the prop...
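    Setting HDR acquisition aside, the detection step can be illustrated with simple thresholding plus connected-component counting. This is a generic sketch of pit segmentation, not the paper's method.

```python
from collections import deque

# A minimal sketch of pit detection: threshold dark pixels in a grayscale
# patch, then count connected components (candidate pits) with BFS.
# Real HDR-based inspection is far more involved; this shows the idea only.
def count_pits(img, thresh=80):
    h, w = len(img), len(img[0])
    dark = [[img[y][x] < thresh for x in range(w)] for y in range(h)]
    seen = [[False] * w for _ in range(h)]
    pits = 0
    for y in range(h):
        for x in range(w):
            if dark[y][x] and not seen[y][x]:
                pits += 1                     # new component found
                q = deque([(x, y)])
                seen[y][x] = True
                while q:                      # flood-fill the component
                    cx, cy = q.popleft()
                    for nx, ny in ((cx+1, cy), (cx-1, cy), (cx, cy+1), (cx, cy-1)):
                        if 0 <= nx < w and 0 <= ny < h and dark[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((nx, ny))
    return pits

# Bright steel surface (200) with two dark pits.
patch = [[200] * 6 for _ in range(6)]
patch[1][1] = patch[1][2] = 30        # pit 1 (two pixels)
patch[4][4] = 10                      # pit 2
print(count_pits(patch))  # → 2
```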

  2. Method development for verification of complete ancient statues by image processing

    Directory of Open Access Journals (Sweden)

    Natthariya Laopracha


    Full Text Available Ancient statues are cultural heritage that should be preserved and maintained. Nevertheless, such invaluable statues may be targeted by vandalism or burglary. In order to guard these statues using image processing, this research aims to develop a technique for detecting images of ancient statues with missing parts using digital image processing. This paper proposes an effective feature extraction method for detecting images of damaged statues or statues with missing parts, based on the Histogram of Oriented Gradients (HOG) technique, a popular method for object detection. Unlike the original HOG technique, the proposed method has an improved area scanning strategy that effectively extracts important features of statues. Results obtained from the proposed method were compared with those of the HOG method. The tested image dataset comprised 500 images of intact statues and 500 images of statues with missing parts. The experimental results show that the proposed method yields 99.88% accuracy, while the original HOG method gives an accuracy of only 84.86%.
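    For readers unfamiliar with HOG, the core computation is a histogram of gradient orientations per image cell. The stripped-down sketch below (plain Python, synthetic cell) shows that step only; the paper's contribution, an improved area-scanning strategy, is not reproduced here.

```python
import math

# A stripped-down HOG sketch: the gradient-orientation histogram of one
# cell.  A full descriptor tiles the image into cells, concatenates and
# block-normalises the histograms; the scanning strategy is what the
# paper varies.
def cell_histogram(cell, bins=9):
    h = [0.0] * bins
    n = len(cell)
    for y in range(1, n - 1):
        for x in range(1, n - 1):
            gx = cell[y][x + 1] - cell[y][x - 1]   # central differences
            gy = cell[y + 1][x] - cell[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180  # unsigned orientation
            h[int(ang / (180 / bins)) % bins] += mag      # magnitude-weighted vote
    return h

# Vertical edge: left half dark, right half bright -> purely horizontal
# gradient, so all the energy falls in the 0-20 degree bin.
cell = [[0, 0, 0, 0, 255, 255, 255, 255] for _ in range(8)]
hist = cell_histogram(cell)
print(hist.index(max(hist)))  # → 0
```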


  3. Sentinel-2 level 1 products and image processing performances

    Directory of Open Access Journals (Sweden)

    S. J. Baillarin


    Full Text Available In partnership with the European Commission and in the frame of the Global Monitoring for Environment and Security (GMES) program, the European Space Agency (ESA) is developing the Sentinel-2 optical imaging mission devoted to the operational monitoring of land and coastal areas. The Sentinel-2 mission is based on a constellation of satellites deployed in polar sun-synchronous orbit. While ensuring data continuity with the former SPOT and LANDSAT multi-spectral missions, Sentinel-2 will also offer wide improvements such as a unique combination of global coverage with a wide field of view (290 km), a high revisit (5 days with two satellites), high resolution (10 m, 20 m and 60 m) and multi-spectral imagery (13 spectral bands in the visible and shortwave infra-red domains). In this context, the Centre National d'Etudes Spatiales (CNES) supports ESA in defining the system image products and prototyping the relevant image processing techniques. This paper offers, first, an overview of the Sentinel-2 system and then introduces the image products delivered by the ground processing: Level-0 and Level-1A are system products corresponding respectively to raw compressed and uncompressed data (limited to internal calibration purposes); Level-1B is the first public product, comprising radiometric corrections (dark signal, pixel response non-uniformity, crosstalk, defective pixels, restoration, and binning for 60 m bands) and an enhanced physical geometric model appended to the product but not applied; Level-1C provides ortho-rectified top-of-atmosphere reflectance with sub-pixel multi-spectral and multi-date registration, and a cloud and land/water mask is associated with the product. Note that the cloud mask also provides an indication of cirrus. The ground sampling distance of the Level-1C product will be 10 m, 20 m or 60 m according to the band.
The final Level-1C product is tiled following a pre-defined grid of 100×100 km², based on the UTM/WGS84 reference frame
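    The Level-1B radiometric step described above can be illustrated with a toy correction: subtract the per-pixel dark signal, then normalize by the per-pixel relative response (PRNU). All numbers below are invented for illustration; the real processing chain also handles crosstalk, defective pixels, restoration and binning.

```python
# Illustrative Level-1B-style radiometric correction for one detector line:
# subtract the per-pixel dark signal, then divide by the per-pixel relative
# response (PRNU) so a uniform scene yields a uniform output.
raw  = [112.0, 121.0, 106.0, 118.0]   # raw counts for 4 pixels (made up)
dark = [ 10.0,  11.0,   9.0,  12.0]   # dark-signal calibration
prnu = [ 1.02,  1.10,  0.97,  1.06]   # relative pixel response

corrected = [(r - d) / p for r, d, p in zip(raw, dark, prnu)]
print([round(c, 1) for c in corrected])  # → [100.0, 100.0, 100.0, 100.0]
```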

  4. Lessons from the masters current concepts in astronomical image processing

    CERN Document Server


    There are currently thousands of amateur astronomers around the world engaged in astrophotography at increasingly sophisticated levels. Their ranks far outnumber professional astronomers doing the same, and their contributions, both technically and artistically, are the dominant drivers of progress in the field today. This book is a unique collaboration of individuals, all world-renowned in their particular area, and covers in detail each of the major sub-disciplines of astrophotography. This approach offers the reader the greatest opportunity to learn the most current information and the latest techniques directly from the foremost innovators in the field today. The book as a whole covers all types of astronomical image processing, including processing of eclipses and solar phenomena, extracting detail from deep-sky, planetary, and widefield images, and offers solutions to some of the most challenging and vexing problems in astronomical image processing. Recognized chapter authors include deep sky experts su...

  5. An image-processing methodology for extracting bloodstain pattern features. (United States)

    Arthur, Ravishka M; Humburg, Philomena J; Hoogenboom, Jerry; Baiker, Martin; Taylor, Michael C; de Bruin, Karla G


    There is a growing trend in forensic science to develop methods to make forensic pattern comparison tasks more objective. This has generally involved the application of suitable image-processing methods to provide numerical data for identification or comparison. This paper outlines a unique image-processing methodology that can be utilised by analysts to generate reliable pattern data that will assist them in forming objective conclusions about a pattern. A range of features were defined and extracted from a laboratory-generated impact spatter pattern. These features were based in part on bloodstain properties commonly used in the analysis of spatter bloodstain patterns. The values of these features were consistent with properties reported qualitatively for such patterns. The image-processing method developed shows considerable promise as a way to establish measurable discriminating pattern criteria that are lacking in current bloodstain pattern taxonomies. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Image data processing system requirements study. Volume 1: Analysis. [for Earth Resources Survey Program (United States)

    Honikman, T.; Mcmahon, E.; Miller, E.; Pietrzak, L.; Yorsz, W.


    Digital image processing, image recorders, high-density digital data recorders, and data system element processing for use in an Earth Resources Survey image data processing system are studied. Loading to various ERS systems is also estimated by simulation.

  7. FlexISP: a flexible camera image processing framework

    KAUST Repository

    Heide, Felix


    Conventional pipelines for capturing, displaying, and storing images are usually defined as a series of cascaded modules, each responsible for addressing a particular problem. While this divide-and-conquer approach offers many benefits, it also introduces a cumulative error, as each step in the pipeline only considers the output of the previous step, not the original sensor data. We propose an end-to-end system that is aware of the camera and image model, enforces natural-image priors, while jointly accounting for common image processing steps like demosaicking, denoising, deconvolution, and so forth, all directly in a given output representation (e.g., YUV, DCT). Our system is flexible and we demonstrate it on regular Bayer images as well as images from custom sensors. In all cases, we achieve large improvements in image quality and signal reconstruction compared to state-of-the-art techniques. Finally, we show that our approach is capable of very efficiently handling high-resolution images, making even mobile implementations feasible.
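    The core idea, replacing cascaded modules with one joint optimization over a data-fidelity term and natural-image priors, can be sketched on a 1-D signal. The code below uses plain gradient descent on a quadratic energy; the actual FlexISP solver uses proximal operators and far richer priors, so this is only an illustration of the "joint" formulation.

```python
# Minimal sketch of joint restoration: minimise one energy coupling a
# quadratic data term with a quadratic smoothness prior, instead of
# cascading independent modules.  Plain gradient descent, 1-D signal.
def restore(obs, lam=1.0, steps=1000, lr=0.05):
    x = obs[:]
    n = len(x)
    for _ in range(steps):
        g = [2 * (x[i] - obs[i]) for i in range(n)]   # data-fidelity gradient
        for i in range(n):                            # smoothness-prior gradient
            if i > 0:
                g[i] += 2 * lam * (x[i] - x[i - 1])
            if i < n - 1:
                g[i] += 2 * lam * (x[i] - x[i + 1])
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x

noisy = [1.0, 1.4, 0.7, 1.2, 0.8, 1.1]   # noisy samples of a flat signal
out = restore(noisy)
spread = max(out) - min(out)             # smaller than the noisy spread
print([round(v, 2) for v in out])
```

The prior's gradient telescopes to zero, so the signal mean is preserved while the spread shrinks toward the smooth solution.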

  8. Triple Bioluminescence Imaging for In Vivo Monitoring of Cellular Processes

    Directory of Open Access Journals (Sweden)

    Casey A Maguire


    Full Text Available Bioluminescence imaging (BLI) has been shown to be crucial for monitoring in vivo biological processes. So far, only dual bioluminescence imaging using firefly (Fluc) and Renilla or Gaussia (Gluc) luciferase has been achieved, due to the lack of other efficiently expressed luciferases using different substrates. Here, we characterized a codon-optimized luciferase from Vargula hilgendorfii (Vluc) as a reporter for mammalian gene expression. We showed that Vluc can be multiplexed with Gluc and Fluc for sequential imaging of three distinct cellular phenomena in the same biological system using vargulin, coelenterazine, and D-luciferin substrates, respectively. We applied this triple imaging system to monitor the effect of soluble tumor necrosis factor-related apoptosis-inducing ligand (sTRAIL), delivered using an adeno-associated viral vector (AAV), on brain tumors in mice. Vluc imaging showed efficient sTRAIL gene delivery to the brain, while Fluc imaging revealed a robust antiglioma therapy. Further, nuclear factor-κB (NF-κB) activation in response to sTRAIL binding to glioma cell death receptors was monitored by Gluc imaging. This work is the first demonstration of trimodal in vivo bioluminescence imaging and will have broad applicability in many different fields, including immunology, oncology, virology, and neuroscience.

  9. Natural language processing and visualization in the molecular imaging domain. (United States)

    Tulipano, P Karina; Tao, Ying; Millar, William S; Zanzonico, Pat; Kolbert, Katherine; Xu, Hua; Yu, Hong; Chen, Lifeng; Lussier, Yves A; Friedman, Carol


    Molecular imaging is at the crossroads of the genomic sciences and medical imaging. Information within the molecular imaging literature could be used to link to genomic and imaging information resources and to organize and index images in a way that is potentially useful to researchers. A number of natural language processing (NLP) systems are available to automatically extract information from the genomic literature. One existing NLP system, known as BioMedLEE, automatically extracts biological information consisting of biomolecular substances and phenotypic data. This paper focuses on the adaptation, evaluation, and application of BioMedLEE to the molecular imaging domain. In order to adapt BioMedLEE for this domain, we extend an existing molecular imaging terminology and incorporate it into BioMedLEE. BioMedLEE's performance is assessed with a formal evaluation study. The system's performance, measured as recall and precision, is 0.74 (95% CI: [.70-.76]) and 0.70 (95% CI: [.63-.76]), respectively. We adapt a Java viewer known as PGviewer for the simultaneous visualization of images with NLP-extracted information.

  10. Blind Image Denoising via Dependent Dirichlet Process Tree. (United States)

    Fengyuan Zhu; Guangyong Chen; Jianye Hao; Pheng-Ann Heng


    Most existing image denoising approaches assume the noise to be homogeneous white Gaussian with known intensity. However, in real noisy images, the noise model is usually unknown beforehand and can be much more complex. This paper addresses this problem and proposes a novel blind image denoising algorithm to recover the clean image from a noisy one with an unknown noise model. To model the empirical noise of an image, our method introduces a mixture of Gaussian distributions, which is flexible enough to approximate different continuous distributions. The problem of blind image denoising is reformulated as a learning problem. The procedure first builds a two-layer structural model for noisy patches, treating the clean ones as latent variables. To control the complexity of the noisy patch model, this work proposes a novel Bayesian nonparametric prior called the "Dependent Dirichlet Process Tree" to build the model. Then, this study derives a variational inference algorithm to estimate model parameters and recover clean patches. We apply our method to synthetic and real noisy images with different noise models. Compared with previous approaches, ours achieves better performance. The experimental results indicate the efficiency of the proposed algorithm in coping with practical image denoising tasks.
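    The paper's noise model is a Bayesian nonparametric mixture; as a much simpler illustration of how Gaussian mixtures can approximate unknown noise, here is a two-component 1-D EM fit in plain Python on synthetic data.

```python
import random, math

# Synthetic "noise" samples drawn from two Gaussians -- a stand-in for a
# non-Gaussian empirical noise distribution.
random.seed(0)
data = ([random.gauss(0.0, 1.0) for _ in range(300)] +
        [random.gauss(6.0, 1.0) for _ in range(300)])

def em_gmm(xs, iters=50):
    mu = [min(xs), max(xs)]      # crude initialisation
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each sample
        resp = []
        for x in xs:
            p = [w[k] / math.sqrt(2 * math.pi * var[k]) *
                 math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in (0, 1)]
            s = p[0] + p[1]
            resp.append((p[0] / s, p[1] / s))
        # M-step: re-estimate weights, means and variances
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2
                         for r, x in zip(resp, xs)) / nk
    return mu, var, w

mu, var, w = em_gmm(data)
print([round(m, 1) for m in sorted(mu)])  # means recovered near 0 and 6
```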

  11. Automated Processing of Zebrafish Imaging Data: A Survey (United States)

    Dickmeis, Thomas; Driever, Wolfgang; Geurts, Pierre; Hamprecht, Fred A.; Kausler, Bernhard X.; Ledesma-Carbayo, María J.; Marée, Raphaël; Mikula, Karol; Pantazis, Periklis; Ronneberger, Olaf; Santos, Andres; Stotzka, Rainer; Strähle, Uwe; Peyriéras, Nadine


    Abstract Due to the relative transparency of its embryos and larvae, the zebrafish is an ideal model organism for bioimaging approaches in vertebrates. Novel microscope technologies allow the imaging of developmental processes in unprecedented detail, and they enable the use of complex image-based read-outs for high-throughput/high-content screening. Such applications can easily generate Terabytes of image data, the handling and analysis of which becomes a major bottleneck in extracting the targeted information. Here, we describe the current state of the art in computational image analysis in the zebrafish system. We discuss the challenges encountered when handling high-content image data, especially with regard to data quality, annotation, and storage. We survey methods for preprocessing image data for further analysis, and describe selected examples of automated image analysis, including the tracking of cells during embryogenesis, heartbeat detection, identification of dead embryos, recognition of tissues and anatomical landmarks, and quantification of behavioral patterns of adult fish. We review recent examples for applications using such methods, such as the comprehensive analysis of cell lineages during early development, the generation of a three-dimensional brain atlas of zebrafish larvae, and high-throughput drug screens based on movement patterns. Finally, we identify future challenges for the zebrafish image analysis community, notably those concerning the compatibility of algorithms and data formats for the assembly of modular analysis pipelines. PMID:23758125

  12. Computer Vision and Image Processing: A Paper Review

    Directory of Open Access Journals (Sweden)

    victor - wiley


    Full Text Available Computer vision has been studied from many perspectives. It expands from raw data recording into techniques and ideas combining digital image processing, pattern recognition, machine learning and computer graphics. Its wide usage has attracted many scholars to integrate it with many disciplines and fields. This paper provides a survey of the recent technologies and theoretical concepts explaining the development of computer vision, especially related to image processing, across different areas of application. Computer vision helps scholars to analyze images and video to obtain necessary information, understand information on events or descriptions, and recognize scenic patterns. It uses methods spanning multiple application domains with massive data analysis. This paper contributes a review of recent developments related to computer vision, image processing, and their related studies. We categorize the computer vision mainstream into four groups, e.g., image processing, object recognition, and machine learning. We also provide a brief explanation of up-to-date information about the techniques and their performance.

  13. Software architecture for intelligent image processing using Prolog (United States)

    Jones, Andrew C.; Batchelor, Bruce G.


    We describe a prototype system for interactive image processing using Prolog, implemented by the first author on an Apple Macintosh computer. This system is inspired by Prolog+, but differs from it in two particularly important respects. The first is that whereas Prolog+ assumes the availability of dedicated image processing hardware, with which the Prolog system communicates, our present system implements image processing functions in software using the C programming language. The second difference is that although our present system supports Prolog+ commands, these are implemented in terms of lower-level Prolog predicates which provide a more flexible approach to image manipulation. We discuss the impact of the Apple Macintosh operating system upon the implementation of the image-processing functions, and the interface between these functions and the Prolog system. We also explain how the Prolog+ commands have been implemented. The system described in this paper is a fairly early prototype, and we outline how we intend to develop the system, a task which is expedited by the extensible architecture we have implemented.

  14. Muscle fiber diameter assessment in cleft lip using image processing. (United States)

    Khan, M F J; Little, J; Abelli, L; Mossey, P A; Autelitano, L; Nag, T C; Rubini, M


    To pilot an investigation of muscle fiber diameter (MFD) on the medial and lateral sides of the cleft in 18 infants with cleft lip with or without cleft palate (CL/P) using image processing. Formalin-fixed paraffin-embedded (FFPE) tissue samples from the medial and lateral sides of the cleft were analyzed for MFD using an image-processing program (ImageJ). For within-case comparisons, a paired Student's t test was performed. For comparisons between classes, an unpaired t test was used. Image processing enabled rapid measurement of MFD, with the majority of fibers showing diameters between 6 and 11 μm. There was no significant difference in mean MFD between the medial and lateral sides, or between CL and CLP. However, we found a significant difference on the medial side (p = .032) between males and females. Image processing on FFPE tissues resulted in easy quantification of MFD, with the finding of a smaller MFD on the medial side in males suggesting possible differences in the orbicularis oris (OO) muscle between the two sexes in CL that warrant replication with a larger number of cases. Moreover, this finding can aid subclinical phenotyping and, potentially, the restoration of the anatomy and function of the upper lip. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd. All rights reserved.
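    The within-case comparison above is a paired Student's t test; for reference, the statistic can be computed by hand. The medial/lateral values below are made up for illustration, not the study's data.

```python
import math

# Paired t statistic on hypothetical medial/lateral MFD values (µm):
# t = mean(differences) / (sd(differences) / sqrt(n)), df = n - 1.
medial  = [7.1, 8.4, 6.9, 9.0, 7.8, 8.2]
lateral = [7.6, 8.1, 7.4, 9.3, 8.0, 8.8]

diffs = [m - l for m, l in zip(medial, lateral)]
n = len(diffs)
mean = sum(diffs) / n
sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))  # sample sd
t = mean / (sd / math.sqrt(n))
print(round(t, 2), "df =", n - 1)  # → -2.24 df = 5
```

The t value is then compared against the t distribution with n - 1 degrees of freedom to obtain the p value.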


  15. Special software for planetary image processing and research

    Directory of Open Access Journals (Sweden)

    A. E. Zubarev


    Full Text Available Special modules for the photogrammetric processing of remote sensing data, which provide the opportunity to effectively organize and optimize planetary studies, were developed. The commercial software package PHOTOMOD™ is used as the base application. Special modules were created to perform various types of data processing: calculation of preliminary navigation parameters, calculation of the shape parameters of a celestial body, global-view image orthorectification, and estimation of Sun illumination and Earth visibility from the planetary surface. For photogrammetric processing, different types of data have been used, including images of the Moon, Mars, Mercury, Phobos, the Galilean satellites and Enceladus obtained by frame or push-broom cameras. We used modern planetary data and images taken over the years, shot from orbit flight paths with various illumination and resolution as well as obtained by planetary rovers from the surface. Planetary image processing is a complex task that can take from a few months to years. We present an efficient pipeline procedure that provides the possibility to obtain different data products and supports the long way from planetary images to celestial body maps. The obtained data - new three-dimensional control point networks, elevation models, orthomosaics - enabled accurate map production: a new Phobos atlas (Karachevtseva et al., 2015) and various thematic maps derived from studies of the planetary surface (Karachevtseva et al., 2016a).

  16. Quality Control in Automated Manufacturing Processes – Combined Features for Image Processing

    Directory of Open Access Journals (Sweden)

    B. Kuhlenkötter


    Full Text Available In production processes the use of image processing systems is widespread. Hardware solutions and cameras are available for nearly every application. One important challenge for image processing systems is the development and selection of appropriate algorithms and software solutions in order to realise ambitious quality control for production processes. This article characterises the development of innovative software combining features for automatic defect classification on product surfaces. The artificial intelligence method Support Vector Machine (SVM) is used to execute the classification task according to the combined features. This software is one crucial element for the automation of a manually operated production process.

  17. Image enhancement and denoising by complex diffusion processes. (United States)

    Gilboa, Guy; Sochen, Nir; Zeevi, Yehoshua Y


    The linear and nonlinear scale spaces, generated by the inherently real-valued diffusion equation, are generalized to complex diffusion processes, by incorporating the free Schrödinger equation. A fundamental solution for the linear case of the complex diffusion equation is developed. Analysis of its behavior shows that the generalized diffusion process combines properties of both forward and inverse diffusion. We prove that the imaginary part is a smoothed second derivative, scaled by time, when the complex diffusion coefficient approaches the real axis. Based on this observation, we develop two examples of nonlinear complex processes, useful in image processing: a regularized shock filter for image enhancement and a ramp preserving denoising process.
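    A numerical sketch of the linear complex diffusion described above: explicit Euler steps of I_t = c·I_xx with c = exp(iθ) on a 1-D step edge. As the abstract states, for small θ the imaginary part behaves like a time-scaled second derivative, changing sign across the edge; the grid, θ and step count below are our own choices.

```python
import cmath, math

# Explicit Euler integration of the linear complex diffusion I_t = c * I_xx
# with c = exp(i*theta), theta small.  The real part is a smoothed signal;
# the imaginary part approximates a time-scaled second derivative.
theta = math.pi / 30
c = cmath.exp(1j * theta)
dt = 0.1                                  # stable for this grid (dt*|c|*4 < 1)

signal = [0.0] * 10 + [1.0] * 10          # a step edge
u = [complex(s) for s in signal]
for _ in range(40):
    lap = [u[i - 1] - 2 * u[i] + u[i + 1] if 0 < i < len(u) - 1 else 0
           for i in range(len(u))]        # discrete Laplacian, fixed ends
    u = [ui + dt * c * li for ui, li in zip(u, lap)]

imag = [ui.imag for ui in u]
# the second derivative of a step changes sign across the edge
print(round(min(imag), 3), round(max(imag), 3))
```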

  18. Introduction to image processing using R learning by examples

    CERN Document Server

    Frery, Alejandro C


    This book introduces the statistical software R to the image processing community in an intuitive and practical manner. R brings interesting statistical and graphical tools which are important and necessary for image processing techniques. Furthermore, it has been proved in the literature that R is among the most reliable, accurate and portable statistical software available. Both the theory and practice of R code concepts and techniques are presented and explained, and the reader is encouraged to try their own implementation to develop faster, optimized programs. Those who are new to the fiel

  19. A new programming metaphor for image processing procedures (United States)

    Smirnov, O. M.; Piskunov, N. E.


    Most image processing systems, besides an Application Program Interface (API) which lets users write their own image processing programs, also feature a higher level of programmability. Traditionally, this is a command or macro language, which can be used to build large procedures (scripts) out of simple programs or commands. This approach, a legacy of the teletypewriter, has serious drawbacks. A command language is clumsy when (and if!) it attempts to utilize the capabilities of a multitasking or multiprocessor environment, it is barely adequate for real-time data acquisition and processing, it has a fairly steep learning curve, and its user interface is very inefficient, especially when compared to a graphical user interface (GUI) that systems running under X11 or Windows should otherwise be able to provide. All these difficulties stem from one basic problem: a command language is not a natural metaphor for an image processing procedure. A more natural metaphor, an image processing factory, is described in detail. A factory is a set of programs (applications) that execute separate operations on images, connected by pipes that carry data (images and parameters) between them. The programs function concurrently, processing images as they arrive along pipes, and querying the user for whatever other input they need. From the user's point of view, programming (constructing) factories is a lot like playing with LEGO blocks - much more intuitive than writing scripts. Focus is on some of the difficulties of implementing factory support, most notably the design of an appropriate API. It also shows that factories retain all the functionality of a command language (including loops and conditional branches), while suffering from none of the drawbacks outlined above.
Other benefits of factory programming include self-tuning factories and the process of encapsulation, which lets a factory take the shape of a standard application both from the system and the user's point of view, and
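    The factory metaphor, independent operators connected by pipes, maps naturally onto generator pipelines; the sketch below is our own illustration in Python, not the paper's implementation.

```python
# Each "factory" stage consumes a stream of images and yields processed
# ones, so stages can be snapped together like blocks.
def source(images):
    for im in images:
        yield im

def threshold(stream, t):
    for im in stream:
        yield [[1 if px > t else 0 for px in row] for row in im]

def count_foreground(stream):
    for im in stream:
        yield sum(sum(row) for row in im)

# Wire up the factory: source -> threshold -> counter.
images = [[[10, 200], [220, 30]], [[5, 5], [250, 5]]]
pipeline = count_foreground(threshold(source(images), 128))
results = list(pipeline)
print(results)  # → [2, 1]
```

Because generators are lazy, each image flows through the whole pipeline as it arrives, mirroring the concurrent pipe-connected programs of the factory model.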

  20. Interaction between salsolinol (SAL) and thyrotropin-releasing hormone (TRH) or dopamine (DA) on the secretion of prolactin in ruminants. (United States)

    Hashizume, T; Shida, R; Suzuki, S; Kasuya, E; Kuwayama, H; Suzuki, H; Oláh, M; Nagy, G M


    We have recently demonstrated that salsolinol (SAL), a dopamine (DA)-derived compound, is present in the posterior pituitary gland and is able to stimulate the release of prolactin (PRL) in ruminants. The aim of the present study was to clarify the effect that the interaction of SAL with thyrotropin-releasing hormone (TRH) or DA has on the secretion of PRL in ruminants. A single intravenous (i.v.) injection of SAL (5 mg/kg body weight (b.w.)), TRH (1 microg/kg b.w.), and SAL plus TRH significantly stimulated the release of PRL in goats (PTRH than either SAL or TRH alone, respectively (PTRH (1 microg/kg b.w.) significantly stimulated the release of PRL in goats (PTRH than either sulpiride alone or sulpiride plus SAL, respectively (PTRH (10(-8) M), and SAL plus TRH significantly increased the release of PRL (PTRH detected in vivo was not observed in vitro. In contrast, DA (10(-6) M) inhibited the TRH- as well as SAL-induced PRL release in vitro. Altogether, these results clearly show that SAL can stimulate the release of PRL in ruminants. Furthermore, they also demonstrate that the additive effect of SAL and TRH on the release of PRL detected in vivo may not be mediated at the level of the AP, but that DA can overcome their releasing activity both in vivo and in vitro, confirming the dominant role of DA in the inhibitory regulation of PRL secretion in ruminants.

  1. Characterisation of SalRAB, a salicylic acid-inducible, positively regulated efflux system of Rhizobium leguminosarum bv viciae 3841.

    Directory of Open Access Journals (Sweden)

    Adrian J Tett

    Full Text Available Salicylic acid is an important signalling molecule in plant-microbe defence and symbiosis. We analysed the transcriptional responses of the nitrogen fixing plant symbiont, Rhizobium leguminosarum bv viciae 3841 to salicylic acid. Two MFS-type multicomponent efflux systems were induced in response to salicylic acid, rmrAB and the hitherto undescribed system salRAB. Based on sequence similarity salA and salB encode a membrane fusion and inner membrane protein respectively. salAB are positively regulated by the LysR regulator SalR. Disruption of salA significantly increased the sensitivity of the mutant to salicylic acid, while disruption of rmrA did not. A salA/rmrA double mutation did not have increased sensitivity relative to the salA mutant. Pea plants nodulated by salA or rmrA strains did not have altered nodule number or nitrogen fixation rates, consistent with weak expression of salA in the rhizosphere and in nodule bacteria. However, BLAST analysis revealed seventeen putative efflux systems in Rlv3841 and several of these were highly differentially expressed during rhizosphere colonisation, host infection and bacteroid differentiation. This suggests they have an integral role in symbiosis with host plants.

  2. An Automatic Image Processing Workflow for Daily Magnetic Resonance Imaging Quality Assurance. (United States)

    Peltonen, Juha I; Mäkelä, Teemu; Sofiev, Alexey; Salli, Eero


    The performance of magnetic resonance imaging (MRI) equipment is typically monitored with a quality assurance (QA) program. The QA program includes various tests performed at regular intervals. Users may execute specific tests, e.g., daily, weekly, or monthly. The exact interval of these measurements varies according to department policies, machine setup and usage, the manufacturer's recommendations, and available resources. In our experience, a single image acquired before the first patient of the day offers a low-effort and effective system check. When this daily QA check is repeated with identical imaging parameters and phantom setup, the data can be used to derive various time series of scanner performance. However, daily QA with manual processing can quickly become laborious in a multi-scanner environment. Fully automated image analysis and results output can positively impact the QA process by decreasing reaction time, improving repeatability, and offering novel performance evaluation methods. In this study, we have developed a daily MRI QA workflow that can measure multiple scanner performance parameters with minimal manual labor. The daily QA system is built around a phantom image taken by the radiographers at the beginning of the day. The image is acquired with a consistent phantom setup and standardized imaging parameters. Recorded parameters are processed into graphs available to everyone involved in the MRI QA process via a web-based interface. The presented automatic MRI QA system provides an efficient tool for following the short- and long-term stability of MRI scanners.
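    One concrete check such a daily QA pipeline can automate is an SNR time series with a drift flag. The ROI values, SNR definition and tolerance below are illustrative choices, not the paper's.

```python
import statistics

# A simple phantom SNR: mean of a signal ROI divided by the standard
# deviation of a background ROI.
def snr(signal_roi, background_roi):
    return statistics.mean(signal_roi) / statistics.pstdev(background_roi)

# Flag the day when today's SNR drops below a fraction of the running
# baseline built from previous days.
def flag_drift(history, today, tolerance=0.8):
    baseline = statistics.mean(history)
    return today < tolerance * baseline

history = [105.0, 102.0, 99.0, 101.0]           # previous days' SNR values
today = snr([500, 510, 490], [4, 10, 7, 3, 6])  # today's phantom scan ROIs
print(round(today, 1), flag_drift(history, today))
```

In a multi-scanner setting, one such series per scanner feeds the web dashboard described in the abstract.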

  3. High Throughput Multispectral Image Processing with Applications in Food Science. (United States)

    Tsakanikas, Panagiotis; Pavlidis, Dimitris; Nychas, George-John


    Recently, machine vision has been gaining attention in food science as well as in the food industry concerning food quality assessment and monitoring. Within the framework of the implementation of Process Analytical Technology (PAT) in the food industry, image processing can be used not only for the estimation and even prediction of food quality but also for the detection of adulteration. Towards these applications in food science, we present here a novel methodology for the automated image analysis of several kinds of food products, e.g. meat, vanilla crème and table olives, so as to increase objectivity and data reproducibility, lower the cost of information extraction and speed up quality assessment, without human intervention. The outcome of the image processing is propagated to the downstream analysis. The developed multispectral image processing method is based on an unsupervised machine learning approach (Gaussian Mixture Models) and a novel unsupervised scheme of spectral band selection for segmentation process optimization. Through this evaluation we prove its efficiency and robustness against currently available semi-manual software, showing that the developed method is a high-throughput approach appropriate for massive data extraction from food samples.

  4. 360 degree realistic 3D image display and image processing from real objects (United States)

    Luo, Xin; Chen, Yue; Huang, Yong; Tan, Xiaodi; Horimai, Hideyoshi


    A 360-degree realistic 3D image display system based on a direct light scanning method, the so-called Holo-Table, is introduced in this paper. High-density directional continuous 3D motion images can be displayed easily with only one spatial light modulator. Using a holographic screen as the beam deflector, a 360-degree full horizontal viewing angle was achieved. As an accompanying part of the system, a CMOS camera based image acquisition platform was built to feed the display engine, which can capture full 360-degree continuous images of the sample at the center. Customized image processing techniques such as scaling, rotation and format transformation were also developed and embedded into the system control software platform. In the end, several samples were imaged to demonstrate the capability of our system.

  5. Digital image processing of mandibular trabeculae on radiographs

    Energy Technology Data Exchange (ETDEWEB)

    Ogino, Toshi


    The present study aimed to reveal the texture patterns of radiographs of the mandibular trabeculae by digital image processing. Intra-oral radiographs of the right premolar regions of 32 normal subjects and 13 patients with mandibular diseases (ameloblastoma, primordial cysts, squamous cell carcinoma and odontoma) were analyzed. The radiograms were digitized using a drum-scanner densitometry method. The input radiographic images were processed by a histogram equalization method. The results are as follows: First, the histogram equalization method enhances the image contrast of the textures. Second, the output images of the textures for normal mandibular-trabeculae radiograms show a network pattern. Third, the output images for the patients are characterized by a non-network pattern, replaced by patterns of fabric texture, intertwined plants (karakusa pattern), scattered small masses and amorphous texture. These results indicate that the present digital image system is expected to be useful for revealing the texture patterns of radiographs and, in the future, for texture analysis of clinical radiographs to obtain quantitative diagnostic findings.
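
    Histogram equalization, the enhancement step used in this study, can be implemented in a few lines. A minimal sketch on a synthetic low-contrast image (not the study's drum-scanner data):

    ```python
    import numpy as np

    def equalize(img):
        """Histogram equalization for an 8-bit grayscale image via the CDF."""
        hist = np.bincount(img.ravel(), minlength=256)
        cdf = hist.cumsum()
        cdf_min = cdf[cdf > 0].min()
        # Map each grey level through the normalized cumulative distribution.
        lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255), 0, 255)
        return lut.astype(np.uint8)[img]

    # A low-contrast "radiogram": grey levels squeezed into [100, 130].
    rng = np.random.default_rng(1)
    img = rng.integers(100, 131, (64, 64), dtype=np.uint8)
    eq = equalize(img)
    print(img.min(), img.max(), "->", eq.min(), eq.max())  # full 0..255 range after
    ```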

  6. Raw image processing for Structure-from-Motion surveying (United States)

    O'Connor, James; Smith, Mike; James, Mike R.


    Consumer-grade cameras are now commonly used within geoscientific topographic surveys and, combined with modern photogrammetric techniques such as Structure-from-Motion (SfM), provide accurate 3-D products for use in a range of research applications. However, the workflows deployed are often treated as "black box" techniques, and the image inputs (quality, exposure conditions and pre-processing thereof) can go under-reported. Differences in how raw sensor data are converted into an image format (that is then used in an SfM workflow) can have an effect on the quality of SfM products. Within this contribution we present results generated from sets of photographs, initially captured as RAW images, of two cliffs in Norfolk, UK, where complex topography provides challenging conditions for accurate 3-D reconstructions using SfM. These RAW image sets were pre-processed in several ways, including the generation of 8 bit-per-channel JPEG and 16 bit-per-channel TIFF files, prior to SfM processing. The resulting point cloud products were compared against a high-resolution Terrestrial Laser Scan (TLS) reference. Results show slight differences in benchmark tests for each image block against the TLS reference data, but metrics within the bundle adjustment suggest a higher internal precision (in terms of RMS reprojection error within the sparse cloud) and a more stable solution for the 16 bit-per-channel data.
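
    The effect the study measures — precision lost when raw sensor data are quantized to 8 bits per channel rather than 16 — can be illustrated numerically. A simple sketch with simulated radiance values (not the Norfolk imagery):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    radiance = rng.random(100_000)   # simulated linear sensor signal in [0, 1)

    def quantize(x, bits):
        """Round to the nearest representable level at the given bit depth."""
        levels = 2 ** bits - 1
        return np.round(x * levels) / levels

    err8 = np.abs(quantize(radiance, 8) - radiance).max()
    err16 = np.abs(quantize(radiance, 16) - radiance).max()
    print(err8 / err16)  # 8-bit rounding error is roughly 256x larger
    ```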

  7. Parallel-Processing Software for Creating Mosaic Images (United States)

    Klimeck, Gerhard; Deen, Robert; McCauley, Michael; DeJong, Eric


    A computer program implements parallel processing for nearly real-time creation of panoramic mosaics of images of terrain acquired by video cameras on an exploratory robotic vehicle (e.g., a Mars rover). Because the original images are typically acquired at various camera positions and orientations, it is necessary to warp the images into the reference frame of the mosaic before stitching them together to create the mosaic. [Also see "Parallel-Processing Software for Correlating Stereo Images," Software Supplement to NASA Tech Briefs, Vol. 31, No. 9 (September 2007) page 26.] The warping algorithm in this computer program reflects the considerations that (1) for every pixel in the desired final mosaic, a good corresponding point must be found in one or more of the original images and (2) for this purpose, one needs a good mathematical model of the cameras and a good correlation of individual pixels with respect to their positions in three dimensions. The desired mosaic is divided into slices, each of which is assigned to one of a number of central processing units (CPUs) operating simultaneously. The results from the CPUs are gathered and placed into the final mosaic. The time taken to create the mosaic depends upon the number of CPUs, the speed of each CPU, and whether a local or a remote data-staging mechanism is used.
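
    The slice-and-gather scheme described above can be sketched with Python's concurrent.futures; threads stand in for the CPUs of the original system, and a trivial flip stands in for the camera-model-based warp:

    ```python
    import numpy as np
    from concurrent.futures import ThreadPoolExecutor

    def warp_slice(args):
        """Stand-in for warping one mosaic slice (here: a horizontal flip)."""
        idx, block = args
        return idx, block[:, ::-1]

    # Divide the target mosaic into row slices, one work item per worker.
    mosaic_in = np.arange(16 * 8).reshape(16, 8)
    slices = [(i, mosaic_in[i * 4:(i + 1) * 4]) for i in range(4)]

    with ThreadPoolExecutor(max_workers=4) as pool:
        results = dict(pool.map(warp_slice, slices))

    # Gather the processed slices back into the final mosaic, in order.
    mosaic_out = np.vstack([results[i] for i in range(4)])
    print(np.array_equal(mosaic_out, mosaic_in[:, ::-1]))  # → True
    ```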

  8. An image-processing program for automated counting (United States)

    Cunningham, D.J.; Anderson, W.H.; Anthony, R.M.


    An image-processing program developed by the National Institutes of Health, IMAGE, was modified in a cooperative project between remote sensing specialists at the Ohio State University Center for Mapping and scientists at the Alaska Science Center to facilitate estimating numbers of black brant (Branta bernicla nigricans) in flocks at Izembek National Wildlife Refuge. The modified program, DUCK HUNT, runs on Apple computers. Modifications provide users with a pull-down menu that optimizes image quality; identifies objects of interest (e.g., brant) by spectral, morphometric, and spatial parameters defined interactively by users; counts and labels objects of interest; and produces summary tables. Images from digitized photography, videography, and high-resolution digital photography have been used with this program to count various species of waterfowl.
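
    The threshold-count-and-measure loop at the heart of such a tool can be sketched with scipy.ndimage; the synthetic frame and the size screening are illustrative, not DUCK HUNT's actual parameters:

    ```python
    import numpy as np
    from scipy import ndimage

    # Synthetic aerial frame: dark background with three bright "birds".
    img = np.zeros((40, 40))
    for r, c in [(5, 5), (20, 30), (33, 10)]:
        img[r:r + 3, c:c + 3] = 1.0

    # Threshold on brightness, then label and count connected components.
    mask = img > 0.5
    labels, count = ndimage.label(mask)

    # Per-object pixel counts support morphometric screening of candidates.
    sizes = ndimage.sum_labels(mask, labels, range(1, count + 1))
    print(count, sizes.tolist())  # → 3 [9.0, 9.0, 9.0]
    ```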

  9. Implied Movement in Static Images Reveals Biological Timing Processing

    Directory of Open Access Journals (Sweden)

    Francisco Carlos Nather


    Full Text Available Visual perception is adapted toward a better understanding of our own movements than those of non-conspecifics. The present study determined whether time perception is affected by pictures of different species along the evolutionary scale. Static (“S”) and implied movement (“M”) images of a dog, cheetah, chimpanzee, and man were presented to undergraduate students. S and M images of the same species were presented in random order or one after the other (S-M or M-S) for two groups of participants. Movement, Velocity, and Arousal semantic scales were used to characterize some properties of the images. Implied movement affected time perception, with M images being overestimated. The results are discussed in terms of visual motion perception related to biological timing processing, which could have been established early in the adaptation of humankind to the environment.

  10. Image-plane processing for improved computer vision (United States)

    Huck, F. O.; Fales, C. L.; Park, S. K.; Samms, R. W.


    The proper combination of optical design with image-plane processing, as in the mechanism of human vision, was examined as a way to improve the performance of sensor-array imaging systems for edge detection and location. Two-dimensional bandpass filtering during image formation optimizes edge enhancement and minimizes data transmission. It also permits control of the spatial imaging system response to trade off edge enhancement against sensitivity at low light levels. It is shown that most of the information, up to about 94%, is contained in the signal intensity transitions from which the locations of edges are determined for raw primal sketches. Shading the lens transmittance to increase depth of field and using a hexagonal instead of a square sensor-array lattice to decrease sensitivity to edge orientation improves edge information by about 10%.

  11. Computer-aided Image Processing of Angiogenic Histological Samples. (United States)

    Sprindzuk, Matvey; Dmitruk, Alexander; Kovalev, Vassili; Bogush, Armen; Tuzikov, Alexander; Liakhovski, Victor; Fridman, Mikhail


    This article reviews questions regarding the image evaluation of angiogenic histological samples, particularly ovarian epithelial cancer. The review focuses on the principles of image analysis in the fields of histology and pathology. The definition, classification, pathogenesis and regulation of angiogenesis in the ovaries are also briefly discussed. It is hoped that complex image analysis, together with the patient's clinical parameters, will allow a clear pathogenic picture of the disease to be acquired, extend the differential diagnosis, and become a useful tool for the evaluation of drug effects. The challenge in assessing angiogenesis activity is the heterogeneity of several objects: parameters derived from the patient's anamnesis as well as from pathology samples. Other unresolved problems are the subjectivity of region-of-interest selection and the performance of whole-slide scanning. Angiogenesis; Image processing; Microvessel density; Cancer; Pathology.

  12. Quantification of chromatin condensation level by image processing. (United States)

    Irianto, Jerome; Lee, David A; Knight, Martin M


    The level of chromatin condensation is related to the silencing/activation of chromosomal territories and therefore impacts on gene expression. Chromatin condensation changes during cell cycle progression and differentiation, and is influenced by various physicochemical and epigenetic factors. This study describes a validated experimental technique to quantify chromatin condensation. A novel image processing procedure is developed using Sobel edge detection to quantify the level of chromatin condensation from nuclei images taken by confocal microscopy. The algorithm was developed in MATLAB and used to quantify different levels of chromatin condensation in chondrocyte nuclei achieved through alteration in osmotic pressure. The resulting chromatin condensation parameter (CCP) is in good agreement with independent multi-observer qualitative visual assessment. This image processing technique thereby provides a validated unbiased parameter for rapid and highly reproducible quantification of the level of chromatin condensation. Copyright © 2013 IPEM. Published by Elsevier Ltd. All rights reserved.
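
    A minimal sketch of the Sobel-based idea, under the simplifying assumption (not the paper's exact CCP definition) that mean gradient magnitude rises with condensation; the synthetic nuclei are illustrative:

    ```python
    import numpy as np
    from scipy import ndimage

    def condensation_score(nucleus):
        """Mean Sobel gradient magnitude: more intensity boundaries between
        condensed and decondensed chromatin give a higher score."""
        gx = ndimage.sobel(nucleus, axis=0)
        gy = ndimage.sobel(nucleus, axis=1)
        return np.hypot(gx, gy).mean()

    rng = np.random.default_rng(3)
    diffuse = ndimage.gaussian_filter(rng.random((64, 64)), 8)  # smooth nucleus
    condensed = rng.random((64, 64))                            # speckled nucleus
    print(condensation_score(condensed) > condensation_score(diffuse))  # → True
    ```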

  13. Personal Computer (PC) based image processing applied to fluid mechanics (United States)

    Cho, Y.-C.; Mclachlan, B. G.


    A PC based image processing system was employed to determine the instantaneous velocity field of a two-dimensional unsteady flow. The flow was visualized using a suspension of seeding particles in water, and a laser sheet for illumination. With a finite time exposure, the particle motion was captured on a photograph as a pattern of streaks. The streak pattern was digitized and processed using various imaging operations, including contrast manipulation, noise cleaning, filtering, statistical differencing, and thresholding. Information concerning the velocity was extracted from the enhanced image by measuring the length and orientation of the individual streaks. The fluid velocities deduced from the randomly distributed particle streaks were interpolated to obtain velocities at uniform grid points. For the interpolation a simple convolution technique with an adaptive Gaussian window was used. The results are compared with a numerical prediction by a Navier-Stokes computation.
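
    The final interpolation step — a convolution with a Gaussian window mapping scattered streak velocities onto a uniform grid — can be sketched as follows; a fixed window width replaces the adaptive one for brevity, and the test field is synthetic:

    ```python
    import numpy as np

    def gaussian_interpolate(xy, values, grid_x, grid_y, sigma):
        """Each grid point takes a Gaussian-weighted average of nearby samples."""
        gx, gy = np.meshgrid(grid_x, grid_y)
        num = np.zeros_like(gx, dtype=float)
        den = np.zeros_like(gx, dtype=float)
        for (x, y), v in zip(xy, values):
            w = np.exp(-((gx - x) ** 2 + (gy - y) ** 2) / (2 * sigma ** 2))
            num += w * v
            den += w
        return num / den

    # Scattered "streak" velocities sampled from a known field u(x, y) = x.
    rng = np.random.default_rng(4)
    pts = rng.random((2000, 2))
    u = pts[:, 0]
    grid = np.linspace(0.2, 0.8, 5)
    ug = gaussian_interpolate(pts, u, grid, grid, sigma=0.1)
    print(float(np.abs(ug - grid[None, :]).max()))  # small reconstruction error
    ```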

  14. Crops Diagnosis Using Digital Image Processing and Precision Agriculture Technologies

    Directory of Open Access Journals (Sweden)

    Andrés Fernando Jiménez López


    Full Text Available This paper presents the results of the design and implementation of a system for capturing and processing images of agricultural crops. The design includes the development of software and hardware for image acquisition using a model helicopter equipped with video cameras with a resolution of 640x480 pixels. A software application was developed to perform differential correction of errors generated by the Global Positioning System (GPS) and to allow monitoring of the position of the helicopter in real time. A telemetry system consisting of an inertial measurement unit, a magnetometer, a pressure and altitude sensor, one GPS and two photo cameras was developed. Finally, image processing software was developed to determine several vegetation indices and to generate three-dimensional maps of crops.
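
    A common vegetation index of the kind mentioned above is NDVI. A minimal sketch on toy red/near-infrared bands (the 0.4 cutoff is an assumed illustrative threshold, not the paper's):

    ```python
    import numpy as np

    # Red and near-infrared reflectance for four pixels (toy values).
    red = np.array([[0.08, 0.30],
                    [0.07, 0.28]])
    nir = np.array([[0.55, 0.32],
                    [0.60, 0.30]])

    # NDVI = (NIR - red) / (NIR + red); healthy vegetation scores high.
    ndvi = (nir - red) / (nir + red)
    crop_mask = ndvi > 0.4        # assumed vigour threshold for illustration
    print(np.round(ndvi, 2))
    print(crop_mask)              # → [[ True False] [ True False]]
    ```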

  15. Asphalt Mixture Segregation Detection: Digital Image Processing Approach

    Directory of Open Access Journals (Sweden)

    Mohamadtaqi Baqersad


    Full Text Available Segregation determination in asphalt pavement is an issue causing many disputes between agencies and contractors. The visual inspection method has commonly been used to determine pavement texture, with the in-place core density test used for verification. Furthermore, laser-based devices, such as the Florida Texture Meter (FTM) and the Circular Track Meter (CTM), have recently been developed to evaluate asphalt mixture texture. In this study, an innovative digital image processing approach is used to determine pavement segregation. In this procedure, the standard deviation of the grayscale image frequency histogram is used to identify segregated regions. Linear Discriminant Analysis (LDA) is then applied to the standard deviations obtained from image processing to classify pavements into segregated and nonsegregated areas. The visual inspection method is used to verify this method. The results demonstrate that this new method is a robust tool for determining segregated areas in newly paved FC9.5 pavement types.
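
    The two-stage scheme — a texture-spread feature followed by LDA classification — can be sketched as below; for brevity, the pixel-intensity standard deviation stands in for the paper's histogram-based statistic, and the tiles are synthetic:

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(5)

    # Synthetic pavement tiles: segregated areas show wider intensity spread.
    uniform = [rng.normal(128, 5, (32, 32)) for _ in range(20)]
    segregated = [rng.normal(128, 30, (32, 32)) for _ in range(20)]

    # One feature per tile: the standard deviation of its grey levels.
    X = np.array([[tile.std()] for tile in uniform + segregated])
    y = np.array([0] * 20 + [1] * 20)

    # LDA finds the threshold separating the two classes.
    lda = LinearDiscriminantAnalysis().fit(X, y)
    print(lda.score(X, y))  # → 1.0 on these cleanly separable features
    ```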

  16. IPL processing of the Mariner 10 images of Mercury (United States)

    Soha, J. M.; Lynn, D. J.; Lorre, J. J.; Mosher, J. A.; Thayer, N. N.; Elliott, D. A.; Benton, W. D.; Dewar, R. E.


    This paper describes the digital processing performed on the images of Mercury returned to Earth by Mariner 10. Each image contains considerably more information than can be displayed in a single picture. Several specialized processing techniques and procedures are utilized to display the particular information desired for specific scientific analyses: radiometric decalibration for photometric investigations, high-pass filtering to characterize morphology, modulation transfer function restoration to provide the highest possible resolution, scene-dependent filtering of the terminator images to provide maximum feature discriminability in the regions of low illumination, and rectification to cartographic projections to provide known geometric relationships between features. A principal task was the construction of full-disk mosaics as an aid to understanding surface structure on a global scale.
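
    High-pass filtering to characterize morphology can be approximated by subtracting a Gaussian-blurred copy of the image, which removes slow illumination trends while keeping fine features. A sketch on synthetic data (not the Mariner 10 pipeline itself):

    ```python
    import numpy as np
    from scipy import ndimage

    # Smooth illumination ramp with three small bright features on top.
    x = np.linspace(0, 1, 128)
    img = np.outer(x, x) * 50
    for r, c in [(20, 100), (64, 30), (110, 70)]:
        img[r, c] += 20

    # High-pass filter: subtract a heavily blurred (low-frequency) copy.
    highpass = img - ndimage.gaussian_filter(img, sigma=10)

    # The ramp is suppressed; the fine features remain prominent.
    print(highpass[64, 30] > 15, abs(highpass[64, 60]) < 5)  # → True True
    ```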

  17. Collection and processing data for high quality CCD images.

    Energy Technology Data Exchange (ETDEWEB)

    Doerry, Armin Walter


    Coherent Change Detection (CCD) with Synthetic Aperture Radar (SAR) images is a technique whereby very subtle temporal changes can be discerned in a target scene. However, optimal performance requires carefully matching data collection geometries and adjusting the processing to compensate for imprecision in the collection geometries. Tolerances in the precision of the data collection are discussed, and anecdotal advice is presented for optimum CCD performance. Processing considerations are also discussed.

  18. Automatic DNA Diagnosis for 1D Gel Electrophoresis Images using Bio-image Processing Technique. (United States)

    Intarapanich, Apichart; Kaewkamnerd, Saowaluck; Shaw, Philip J; Ukosakit, Kittipat; Tragoonrung, Somvong; Tongsima, Sissades


    DNA gel electrophoresis is a molecular biology technique for separating different sizes of DNA fragments. Applications of DNA gel electrophoresis include DNA fingerprinting (genetic diagnosis), size estimation of DNA, and DNA separation for Southern blotting. Accurate interpretation of DNA banding patterns from electrophoretic images can be laborious and error prone when a large number of bands are interrogated manually. Although many bio-imaging techniques have been proposed, none of them can fully automate the typing of DNA owing to the complexities of migration patterns typically obtained. We developed an image-processing tool that automatically calls genotypes from DNA gel electrophoresis images. The image processing workflow comprises three main steps: 1) lane segmentation, 2) extraction of DNA bands and 3) band genotyping classification. The tool was originally intended to facilitate large-scale genotyping analysis of sugarcane cultivars. We tested the proposed tool on 10 gel images (433 cultivars) obtained from polyacrylamide gel electrophoresis (PAGE) of PCR amplicons for detecting intron length polymorphisms (ILP) on one locus of the sugarcanes. These gel images demonstrated many challenges in automated lane/band segmentation in image processing including lane distortion, band deformity, high degree of noise in the background, and bands that are very close together (doublets). Using the proposed bio-imaging workflow, lanes and DNA bands contained within are properly segmented, even for adjacent bands with aberrant migration that cannot be separated by conventional techniques. The software, called GELect, automatically performs genotype calling on each lane by comparing with an all-banding reference, which was created by clustering the existing bands into the non-redundant set of reference bands. The automated genotype calling results were verified by independent manual typing by molecular biologists. 
This work presents an automated genotyping tool for DNA gel electrophoresis images.
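
    The first workflow step, lane segmentation, is often bootstrapped from a column intensity profile. A simplified sketch on a synthetic gel (GELect's actual segmentation handles lane distortion and is considerably more elaborate):

    ```python
    import numpy as np

    # Synthetic gel: three bright vertical lanes over a noisy dark background.
    rng = np.random.default_rng(6)
    gel = rng.normal(10, 2, (100, 90))
    for start in (10, 40, 70):
        gel[:, start:start + 10] += 50

    # Column intensity profile, thresholded at its mean, marks lane columns.
    profile = gel.mean(axis=0)
    lane_cols = profile > profile.mean()

    # Rising edges of the boolean profile give the lane start columns.
    starts = np.flatnonzero(np.diff(lane_cols.astype(int)) == 1) + 1
    print(starts.tolist())  # → [10, 40, 70]
    ```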

  19. Edge detection - Image-plane versus digital processing (United States)

    Huck, Friedrich O.; Fales, Carl L.; Park, Stephen K.; Triplett, Judith A.


    To optimize edge detection with the familiar Laplacian-of-Gaussian operator, it has become common to implement this operator with a large digital convolution mask followed by some interpolation of the processed data to determine the zero crossings that locate edges. It is generally recognized that this large mask causes substantial blurring of fine detail. It is shown that the spatial detail can be improved by a factor of about four with either the Wiener-Laplacian-of-Gaussian filter or an image-plane processor. The Wiener-Laplacian-of-Gaussian filter minimizes the image-gathering degradations if the scene statistics are at least approximately known and also serves as an interpolator to determine the desired zero crossings directly. The image-plane processor forms the Laplacian-of-Gaussian response by properly combining the optical design of the image-gathering system with a minimal three-by-three lateral-inhibitory processing mask. This approach, which is suggested by Marr's model of early processing in human vision, also reduces data processing by about two orders of magnitude and data transmission by up to an order of magnitude.
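
    The zero-crossing idea behind the Laplacian-of-Gaussian operator can be sketched with scipy.ndimage: filter a step edge, then look for sign changes in the response (the small-magnitude guard is an implementation detail to skip numerically flat tails):

    ```python
    import numpy as np
    from scipy import ndimage

    # A vertical step edge between columns 15 and 16.
    img = np.zeros((32, 32))
    img[:, 16:] = 1.0

    log = ndimage.gaussian_laplace(img, sigma=2)

    # Zero crossings: adjacent response samples with opposite, non-negligible signs.
    row = log[16]
    sig = np.abs(row) > 1e-6
    crossings = np.flatnonzero((row[:-1] * row[1:] < 0) & sig[:-1] & sig[1:])
    print(crossings.tolist())  # → [15], i.e. the edge between columns 15 and 16
    ```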

  20. Detection of Optimum Maturity of Maize Using Image Processing

    African Journals Online (AJOL)

    Ayuba et al.


    Apr 13, 2017 ... Detection of Optimum Maturity of Maize Using Image Processing and Artificial. Neural Network. DETECTION OF ... (MATLAB) and used as inputs to the artificial neural network that classify different levels of maturity. ... differences in vision, human weariness factors and intuition differences concerning crops ...

  1. Application of digital image processing for pot plant grading

    NARCIS (Netherlands)

    Dijkstra, J.


    The application of digital image processing for the grading of pot plants has been studied. Different techniques, e.g. plant part identification based on knowledge-based segmentation, have been developed to measure features of plants at different growth stages. Growth experiments were performed

  2. Application of digital image processing techniques to astronomical imagery 1978 (United States)

    Lorre, J. J.


    Techniques for using image processing in astronomy are identified and developed for the following: (1) geometric and radiometric decalibration of vidicon-acquired spectra, (2) automatic identification and segregation of stars from galaxies; and (3) display of multiband radio maps in compact and meaningful formats. Examples are presented of these techniques applied to a variety of objects.

  3. Digital Image Processing application to spray and flammability studies (United States)

    Hernan, M. A.; Parikh, P.; Sarohia, V.


    Digital image processing has been integrated into a new technique for measuring fuel spray characteristics. The advantages of this technique are a wide dynamic range of droplet sizes and the ability to account for nonspherical droplet shapes, which is not possible with other spray assessment techniques. Finally, the technique has been applied to the study of turbojet engine fuel nozzle atomization performance with Jet A and antimisting fuel.

  4. Techniques and software architectures for medical visualisation and image processing

    NARCIS (Netherlands)

    Botha, C.P.


    This thesis presents a flexible software platform for medical visualisation and image processing, a technique for the segmentation of the shoulder skeleton from CT data and three techniques that make contributions to the field of direct volume rendering. Our primary goal was to investigate the use

  5. Transportation informatics : advanced image processing techniques automated pavement distress evaluation. (United States)


    The current project, funded by MIOH-UTC for the period 1/1/2009- 4/30/2010, is concerned : with the development of the framework for a transportation facility inspection system using : advanced image processing techniques. The focus of this study is ...

  6. Simulation of the perpendicular recording process including image charge effects

    NARCIS (Netherlands)

    Beusekamp, M.F.; Fluitman, J.H.J.


    This paper presents a complete model for the perpendicular recording process in single-pole-head keeper-layer configurations. It includes the influence of the image-charge distributions in the head and the keeper layer. Based on calculations of magnetization distributions in standstill situations,

  7. Multiresolution approach to processing images for different applications interaction of lower processing with higher vision

    CERN Document Server

    Vujović, Igor


    This book presents theoretical and practical aspects of the interaction between low and high level image processing. Multiresolution analysis owes its popularity mostly to wavelets and is widely used in a variety of applications. Low level image processing is important for the performance of many high level applications. The book includes examples from different research fields, i.e. video surveillance; biomedical applications (EMG and X-ray); improved communication, namely teleoperation, telemedicine, animation, augmented/virtual reality and robot vision; monitoring of the condition of ship systems and image quality control.

  8. Model control of image processing for telerobotics and biomedical instrumentation (United States)

    Nguyen, An Huu


    This thesis has model control of image processing (MCIP) as its major theme: a top-down approach in which the structure of the image to be processed is known in advance. This top-down image processing under model control is used both as visual feedback to control robots and as feedforward information for biomedical instrumentation. The software engineering of the biomedical instrumentation image processing is defined in terms of the task and the tools available. Early bottom-up image processing, such as thresholding, occurs only within the top-down control regions of interest (ROIs) or operating windows. Moment computation is an important bottom-up procedure, as is pyramiding to attain rapid computation, among other means of achieving programming efficiency. A distinction is made between initialization procedures and stripped-down run-time operations. Detailed engineering design considerations are given to the ellipsoidal modeling of objects, where the major-axis orientation is an important additional piece of information beyond the centroid moments. Careful analysis of various sources of error and considerable benchmarking characterized the software engineering of the image processing procedures. Image processing for robotic control involves a great deal of 3D calibration of the robot working environment (RWE). Of special interest is the idea of adapting the machine scanpath to the current task. Careful attention was paid to the hardware aspects of the control of the toy robots used to demonstrate the general methodology: open-loop gains were precalibrated for all motors so that, after initialization, the visual feedback, which depends on MCIP, could supply enough information quickly enough to the control algorithms to govern the robots under a variety of control configurations and task operations

  9. Performance assessment of a data processing chain for THz imaging (United States)

    Catapano, Ilaria; Ludeno, Giovanni; Soldovieri, Francesco


    Nowadays, TeraHertz (THz) imaging is attracting huge attention as a very high resolution diagnostic tool in many application fields, including security, cultural heritage, material characterization and civil engineering diagnostics. This widespread use of THz waves is due to their non-ionizing nature, their capability of penetrating non-metallic opaque materials, and the technological advances that have allowed the commercialization of compact, flexible and portable systems. However, the effectiveness of THz imaging depends strongly on the adopted data processing, which aims to improve the imaging performance of the hardware device. In particular, data processing is required to mitigate detrimental and unavoidable effects like noise and signal attenuation, as well as to correct for the sample surface topography. In this respect, we recently proposed a strategy involving three steps aimed at reducing noise, filtering out undesired signal introduced by the adopted THz system, and performing surface topography correction [1]. The first step concerns noise filtering and exploits a procedure based on the Singular Value Decomposition (SVD) [2] of the data matrix, which requires no knowledge of the noise level and does not involve the use of a reference signal. The second step removes the undesired signal that we have found to be introduced by the adopted Z-Omega Fiber-Coupled Terahertz Time Domain (FICO) system. Indeed, when the system works in high-speed mode, an undesired low-amplitude peak always occurs at the same time instant from the beginning of the observation time window and must be removed from the useful data matrix to avoid misinterpretation of the imaging results. The third step of the considered data processing chain is a topographic correction, which is needed to properly image the sample's surface and its inner structure. This procedure performs an automatic alignment of the
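
    The first step of the chain, SVD-based noise filtering, can be sketched as truncation of the singular value spectrum; the rank-1 matrix below stands in for a real THz data matrix, and the retained dimension k is an assumption:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Rank-1 "clean" signal plus noise, standing in for a THz data matrix.
    clean = np.outer(np.sin(np.linspace(0, 3, 64)), np.cos(np.linspace(0, 5, 80)))
    noisy = clean + rng.normal(0, 0.1, clean.shape)

    # SVD filtering: keep the dominant singular components, drop the rest.
    U, s, Vt = np.linalg.svd(noisy, full_matrices=False)
    k = 1                                  # assumed signal-subspace dimension
    denoised = (U[:, :k] * s[:k]) @ Vt[:k]

    err_noisy = np.linalg.norm(noisy - clean)
    err_denoised = np.linalg.norm(denoised - clean)
    print(err_denoised < err_noisy)  # → True: truncation removed most of the noise
    ```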

  10. [Body image as a process or object and body satisfaction]. (United States)

    Zarek, Aleksandra


    This work focused on categorization of elements of body image viewed as an object or process, as well as on the relationship between body satisfaction and manner of perceiving the body. The study was carried out in 177 subjects aged 19 to 53 years (148 females and 29 males). Body image was measured with the Body Image Questionnaire based on the Body Cathexis Scale of P.F. Secord and S.J. Jourard. Participation was anonymous. The procedure of attributing an element of the body to the function scale or body parts scale was based on the method described by S. Franzoi. Elements of body image recognized as body parts were characterized in the context of appearance (static object), while elements of body image recognized as body functions were considered in the context of operation (dynamic process). This relationship, however, was not symmetrical as elements of the body not characterized as body functions could also be evaluated in the context of operation. The level of body satisfaction was associated with perception of an element of the body in the aspect of appearance or operation, whereas its perception as body part or body function was of lesser importance.

  11. Interactive image segmentation using Dirichlet process multiple-view learning. (United States)

    Ding, Lei; Yilmaz, Alper; Yan, Rong


    Segmenting semantically meaningful whole objects from images is a challenging problem, and it becomes especially so without higher level common sense reasoning. In this paper, we present an interactive segmentation framework that integrates image appearance and boundary constraints in a principled way to address this problem. In particular, we assume that small sets of pixels, which are referred to as seed pixels, are labeled as the object and background. The seed pixels are used to estimate the labels of the unlabeled pixels using Dirichlet process multiple-view learning, which leverages 1) multiple-view learning that integrates appearance and boundary constraints and 2) Dirichlet process mixture-based nonlinear classification that simultaneously models image features and discriminates between the object and background classes. With the proposed learning and inference algorithms, our segmentation framework is experimentally shown to produce both quantitatively and qualitatively promising results on a standard dataset of images. In particular, our proposed framework is able to segment whole objects from images given insufficient seeds.

  12. Special Aspects of Sensual Images During Imago Therapy Process

    Directory of Open Access Journals (Sweden)

    Vachkov I.V.


    Full Text Available The article presents the results of a study performed on 27 adults who completed five imago therapy sessions. The subjects were split into two groups: a problem-solving group and a problem-analysis group. The peculiarities of the sensory images that arise during the sessions were studied using the Glezer grounded theory method. It turned out that the subject sensual tissue of the psychosemiological tetrahedron (the term of F. Vasiljuk) of the imago therapeutic image is represented mainly by sensory imaginary events and less frequently by sensual objects and recalled events. The pole and sensory tissue of meaning are least expressed in the imago therapeutic image. The pole and sensory tissue of personal meaning are significantly expressed in the imago therapeutic image. Not only elements relating to the imagination are woven into the imago therapeutic image, but also elements reflecting the process of imaginative psychotherapy as a whole, a separate session, and the process of imagination. These elements are closely related to the elements of imaginary and remembered events, their feelings and comprehension.

  13. Smokers exhibit biased neural processing of smoking and affective images. (United States)

    Oliver, Jason A; Jentink, Kade G; Drobes, David J; Evans, David E


    There has been growing interest in the role that implicit processing of drug cues can play in motivating drug use behavior. However, the extent to which drug cue processing biases relate to the processing biases exhibited to other types of evocative stimuli is largely unknown. The goal of the present study was to determine how the implicit cognitive processing of smoking cues relates to the processing of affective cues using a novel paradigm. Smokers (n = 50) and nonsmokers (n = 38) completed a picture-viewing task, in which participants were presented with a series of smoking, pleasant, unpleasant, and neutral images while engaging in a distractor task designed to direct controlled resources away from conscious processing of image content. Electroencephalogram recordings were obtained throughout the task for extraction of event-related potentials (ERPs). Smokers exhibited differential processing of smoking cues across 3 different ERP indices compared with nonsmokers. Comparable effects were found for pleasant cues on 2 of these indices. Late cognitive processing of smoking and pleasant cues was associated with nicotine dependence and cigarette use. Results suggest that cognitive biases may extend across classes of stimuli among smokers. This raises important questions about the fundamental meaning of cognitive biases, and suggests the need to consider generalized cognitive biases in theories of drug use behavior and interventions based on cognitive bias modification. (PsycINFO Database Record (c) 2016 APA, all rights reserved).


    Roth, D. J.


    IMAGEP is a FORTRAN computer algorithm containing various image processing, analysis, and enhancement functions. It is a keyboard-driven program organized into nine subroutines; within the subroutines are further routines, also selected via keyboard. Some of the functions performed by IMAGEP include digitization, storage and retrieval of images; image enhancement by contrast expansion, addition and subtraction, magnification, inversion, and bit shifting; display and movement of a cursor; display of the grey level histogram of an image; and display of the variation of grey level intensity as a function of image position. This algorithm has possible scientific, industrial, and biomedical applications in material flaw studies, steel and ore analysis, and pathology, respectively. IMAGEP is written in VAX FORTRAN for DEC VAX series computers running VMS. The program requires the use of a Grinnell 274 image processor, which can be obtained from Mark McCloud Associates, Campbell, CA. An object library of the required GMR series software is included on the distribution media. IMAGEP requires 1Mb of RAM for execution. The standard distribution medium for this program is a 1600 BPI 9-track magnetic tape in VAX FILES-11 format. It is also available on a TK50 tape cartridge in VAX FILES-11 format. This program was developed in 1991. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation.
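IMAGEP itself is VAX FORTRAN tied to dedicated hardware, but its contrast-expansion function is a generic operation that can be sketched in a few lines. The linear grey-level stretch below is an illustration of the technique, not the program's actual code.

```python
def contrast_expand(img, out_min=0, out_max=255):
    """Linear contrast expansion: stretch the image's grey-level range
    onto [out_min, out_max]. A constant image maps to out_min."""
    flat = [v for row in img for v in row]
    lo, hi = min(flat), max(flat)
    if hi == lo:
        return [[out_min] * len(row) for row in img]
    scale = (out_max - out_min) / (hi - lo)
    return [[round((v - lo) * scale) + out_min for v in row] for row in img]
```

A low-contrast image occupying grey levels 10..40 is expanded to use the full 0..255 range, which is what makes faint detail visible on a display.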

  15. Incidental Thyroid Pathologies in Forensic Autopsies (Adli Otopsilerde Rastlantısal Tiroit Patolojileri)

    Directory of Open Access Journals (Sweden)

    Gülden Çengel


    Full Text Available Aim: It is known that thyroid hyperplasias are among the incidental lesions most frequently encountered at autopsy, and that occult thyroid microcarcinomas are also found. Our study aimed to describe the thyroid lesions found in child- and adult-age cases in our region whose deaths were evaluated medicolegally at autopsy and from whom thyroid samples could be obtained. Method: In this study, tissue samples were taken prospectively from the thyroid glands of cases undergoing forensic autopsy at the İzmir Council of Forensic Medicine Morgue Department between April 2009 and April 2010. Thyroid samples were obtained from 210 cases showing no signs of decomposition. We recorded age, sex, cause and manner of death, thyroid gland weight and morphology, and thyroid pathologies, and investigated whether any identified lesion was the primary cause of death or potentially related to death. Data were evaluated with SPSS 11.0 for Windows. Results: The mean age of the 210 cases was 49.44±18.25 years, and 76.7% (n=161) were male. The mean thyroid weight was 40.71±27.95 g, and histopathological examination revealed a lesion in 96 cases. In our region, where iodine intake is considered adequate, the prevalence of thyroid gland pathologies among the sampled forensic autopsies was 45%. Thyroid gland weight showed a weak correlation with age (P=0.002, r=0.219) and no association with sex. The most frequent lesions were nodular hyperplasia (29.5%), lymphocytic thyroiditis (5.7%) and Hashimoto's thyroiditis (5.7%); in addition, one case showed a metastasis of small-cell lung cancer and one a congenital neck mass

  16. Recent developments at JPL in the application of digital image processing techniques to astronomical images (United States)

    Lorre, J. J.; Lynn, D. J.; Benton, W. D.


    Several techniques of a digital image-processing nature are illustrated which have proved useful in visual analysis of astronomical pictorial data. Processed digital scans of photographic plates of Stephan's Quintet and NGC 4151 are used as examples to show how faint nebulosity is enhanced by high-pass filtering, how foreground stars are suppressed by linear interpolation, and how relative color differences between two images recorded on plates with different spectral sensitivities can be revealed by generating ratio images. Analyses are outlined which are intended to compensate partially for the blurring effects of the atmosphere on images of Stephan's Quintet and to obtain more detailed information about Saturn's ring structure from low- and high-resolution scans of the planet and its ring system. The employment of a correlation picture to determine the tilt angle of an average spectral line in a low-quality spectrum is demonstrated for a section of the spectrum of Uranus.
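Two of the operations named above, high-pass filtering and ratio imaging, can be sketched generically. The versions below are minimal illustrations (a 3×3 local-mean subtraction and a pixel-wise ratio), not the authors' JPL pipeline.

```python
def high_pass(img):
    """Enhance faint structure by subtracting a 3x3 local mean from each
    interior pixel; border pixels are left at 0 for simplicity."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            local = sum(img[i + di][j + dj]
                        for di in (-1, 0, 1) for dj in (-1, 0, 1)) / 9.0
            out[i][j] = img[i][j] - local
    return out

def ratio_image(a, b, eps=1e-9):
    """Pixel-wise ratio of two registered images; regions with different
    relative color stand out as values away from 1."""
    return [[a[i][j] / (b[i][j] + eps) for j in range(len(a[0]))]
            for i in range(len(a))]
```

On a uniform field the high-pass output is zero and the ratio is 1 everywhere; only deviations from the local background or from the second plate survive.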

  17. Computed tomography image source identification by discriminating CT-scanner image reconstruction process. (United States)

    Duan, Y; Coatrieux, G; Shu, H Z


    In this paper, we focus on the identification of the Computed Tomography (CT) scanner that has produced a CT image. To do so, we propose to discriminate CT-Scanner systems based on their reconstruction process, the footprint or the signature of which can be established based on the way they modify the intrinsic sensor noise of X-ray detectors. After having analyzed how the sensor noise is modified in the reconstruction process, we define a set of image features so as to serve as CT acquisition system footprint. These features are used to train a SVM based classifier. Experiments conducted on images issued from 15 different CT-Scanner models of 4 distinct manufacturers show it is possible to identify the origin of one CT image with high accuracy.

  18. Automatic calculation of tree diameter from stereoscopic image pairs using digital image processing. (United States)

    Yi, Faliu; Moon, Inkyu


    Automatic operations play an important role in societies by saving time and improving efficiency. In this paper, we apply the digital image processing method to the field of lumbering to automatically calculate tree diameters in order to reduce culler work and enable a third party to verify tree diameters. To calculate the cross-sectional diameter of a tree, the image was first segmented by the marker-controlled watershed transform algorithm based on the hue saturation intensity (HSI) color model. Then, the tree diameter was obtained by measuring the area of every isolated region in the segmented image. Finally, the true diameter was calculated by multiplying the diameter computed in the image and the scale, which was derived from the baseline and disparity of correspondence points from stereoscopic image pairs captured by rectified configuration cameras.
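The geometry of the final step can be sketched as follows, under two stated assumptions: the cross-section is treated as circular (diameter from segmented area), and the stereo pair is rectified with known focal length and baseline, so depth follows from disparity and sets the metres-per-pixel scale. This is an illustration of the measurement chain, not the authors' implementation.

```python
import math

def diameter_from_area(area_px):
    """Equivalent circular diameter (in pixels) of a segmented region,
    assuming a roughly circular cross-section."""
    return 2.0 * math.sqrt(area_px / math.pi)

def pixel_scale(focal_px, baseline_m, disparity_px):
    """Metres per pixel at the object plane of a rectified stereo pair:
    depth Z = f*B/d, and one pixel subtends Z/f = B/d metres."""
    depth = focal_px * baseline_m / disparity_px
    return depth / focal_px

def true_diameter(area_px, focal_px, baseline_m, disparity_px):
    """Combine the image-space diameter with the stereo-derived scale."""
    return diameter_from_area(area_px) * pixel_scale(
        focal_px, baseline_m, disparity_px)
```

For example, a region of area pi*50^2 pixels (equivalent diameter 100 px) seen with a 0.2 m baseline at 40 px disparity yields a true diameter of 0.5 m.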

  19. High-resolution imaging methods in array signal processing

    DEFF Research Database (Denmark)

    Xenaki, Angeliki

    The purpose of this study is to develop methods in array signal processing which achieve accurate signal reconstruction from limited observations, resulting in high-resolution imaging. The focus is on underwater acoustic applications and sonar signal processing, both in active (transmit and receive) and passive mode. One application is active sonar signal processing for detection and imaging of submerged oil contamination in sea water from a deep-water oil leak; the submerged oil field is modeled as a fluid medium exhibiting spatial perturbations in the acoustic parameters from their mean ambient values, which cause weak scattering. Another is estimation of the direction-of-arrival (DOA) of the associated wavefronts from a limited number of observations. Usually, there are only a few sources generating the acoustic wavefield, such that DOA estimation is essentially a sparse signal reconstruction problem; conventional methods for DOA estimation (i.e., beamforming) suffer from limited resolution.


    Directory of Open Access Journals (Sweden)

    Petr Cížek


    Full Text Available In visual navigation tasks, the lack of computational resources is one of the main limitations of micro robotic platforms deployed in autonomous missions, because most current visual navigation techniques rely on the detection of salient points, which is computationally very demanding. In this paper, FPGA-assisted acceleration of image processing is considered to overcome the limitations of the computational resources available on board and to enable high processing speeds while potentially lowering the power consumption of the system. The paper reports on a performance evaluation of CPU-based and FPGA-based implementations of a visual teach-and-repeat navigation system based on detection and tracking of FAST image salient points. The results indicate that even the computationally efficient FAST algorithm can benefit from a parallel (low-cost) FPGA-based implementation, which has a competitive processing time but, more importantly, is more power efficient.

  1. Image Pre-processing in Vertical Traffic Signs Detection System

    Directory of Open Access Journals (Sweden)

    Dávid Solus


    Full Text Available The aim of this paper is to present the first steps of the design of a system that will be able to detect vertical traffic signs, and to describe the applied data processing methods. This system includes various functional blocks that are described in this paper. The basis of the Vertical Traffic Signs Detection System is the pre-processing of a captured traffic scene ahead of the vehicle. The main part of this paper contains a description of a user-friendly software interface for image pre-processing.

  2. Measurement of Stomatal Aperture by Digital Image Processing


    Kenji, Omasa; Morio, Onoe; Division of Engineering The National Institute for Environmental Studies; Institute of Industrial Science, University of Tokyo


    We developed a new digital image processing technique for exactly measuring the degree of stomatal opening, that is, the ratio of the width to the maximum length of a stomatal pore, and the pore area. We applied this technique to evaluate responses to SO_2 of neighboring stomata in a small region of an intact attached leaf, with the following results: 1) The pore region could be exactly extracted even when the original digital image was of poor quality. The standard errors in the evaluation o...
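The quantities defined above (pore area, maximum length, width, and the degree of opening as their ratio) can be sketched from a binary pore mask. The version below uses axis-aligned extents as a simplification; the paper measures along the pore's true major axis.

```python
def pore_metrics(mask):
    """Area, length, width and degree of stomatal opening from a binary
    pore mask (2D list of 0/1). Assumes the pore's major axis is roughly
    axis-aligned -- a simplification for illustration."""
    pixels = [(i, j) for i, row in enumerate(mask)
              for j, v in enumerate(row) if v]
    area = len(pixels)
    rows = [p[0] for p in pixels]
    cols = [p[1] for p in pixels]
    extent_r = max(rows) - min(rows) + 1
    extent_c = max(cols) - min(cols) + 1
    length = max(extent_r, extent_c)   # pore major axis
    width = min(extent_r, extent_c)    # pore minor axis
    return {"area": area, "length": length, "width": width,
            "opening": width / length}
```

A 2×6-pixel pore thus has a degree of opening of 1/3; as the stoma closes, the width and therefore the ratio shrinks toward zero while the length stays nearly constant.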

  3. Computational information geometry for image and signal processing

    CERN Document Server

    Critchley, Frank; Dodson, Christopher


    This book focuses on the application and development of information geometric methods in the analysis, classification and retrieval of images and signals. It provides introductory chapters to help those new to information geometry and applies the theory to several applications. This area has developed rapidly over recent years, propelled by the major theoretical developments in information geometry, efficient data and image acquisition and the desire to process and interpret large databases of digital information. The book addresses both the transfer of methodology to practitioners involved in database analysis and in its efficient computational implementation.

  4. Algorithm-Architecture Matching for Signal and Image Processing

    CERN Document Server

    Gogniat, Guy; Morawiec, Adam; Erdogan, Ahmet


    Advances in signal and image processing together with increasing computing power are bringing mobile technology closer to applications in a variety of domains like automotive, health, telecommunication, multimedia, entertainment and many others. The development of these leading applications, involving a large diversity of algorithms (e.g. signal, image, video, 3D, communication, cryptography) is classically divided into three consecutive steps: a theoretical study of the algorithms, a study of the target architecture, and finally the implementation. Such a linear design flow is reaching its limits.

  5. Application of digital image processing techniques to astronomical imagery, 1979 (United States)

    Lorre, J. J.


    Several areas of applications of image processing to astronomy were identified and discussed. These areas include: (1) deconvolution for atmospheric seeing compensation; a comparison between maximum entropy and conventional Wiener algorithms; (2) polarization in galaxies from photographic plates; (3) time changes in M87 and methods of displaying these changes; (4) comparing emission line images in planetary nebulae; and (5) log intensity, hue saturation intensity, and principal component color enhancements of M82. Examples are presented of these techniques applied to a variety of objects.

  6. Image processing for safety assessment in civil engineering. (United States)

    Ferrer, Belen; Pomares, Juan C; Irles, Ramon; Espinosa, Julian; Mas, David


    Behavior analysis of construction safety systems is of fundamental importance to avoid accidental injuries. Traditionally, measurements of dynamic actions in civil engineering have been done through accelerometers, but high-speed cameras and image processing techniques can play an important role in this area. Here, we propose using morphological image filtering and the Hough transform on high-speed video sequences as tools for dynamic measurements in this field. The presented method is applied to obtain the trajectory and acceleration of a cylindrical ballast falling from a building and trapped by a thread net. Results show that safety recommendations given in construction codes can be potentially dangerous for workers.
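Once the ballast positions have been extracted frame by frame (in the paper, via morphological filtering and the Hough transform), acceleration follows from simple kinematics. The sketch below covers only that last step, estimating acceleration from sampled positions by second central differences.

```python
def acceleration(y, dt):
    """Estimate acceleration from a sampled trajectory y(t) with frame
    interval dt, using the second central difference; returns the mean
    over all interior samples. Exact for uniformly accelerated motion."""
    acc = [(y[i + 1] - 2 * y[i] + y[i - 1]) / dt ** 2
           for i in range(1, len(y) - 1)]
    return sum(acc) / len(acc)
```

For a synthetic free fall sampled at 100 frames per second, the estimate recovers g, which is how a high-speed camera can validate accelerometer-style measurements.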

  7. Lunar and Planetary Science XXXV: Image Processing and Earth Observations (United States)


    The titles in this section include: 1) Expansion in Geographic Information Services for PIGWAD; 2) Modernization of the Integrated Software for Imagers and Spectrometers; 3) Science-based Region-of-Interest Image Compression; 4) Topographic Analysis with a Stereo Matching Tool Kit; 5) Central Avra Valley Storage and Recovery Project (CAVSARP) Site, Tucson, Arizona: Floodwater and Soil Moisture Investigations with Extraterrestrial Applications; 6) ASE Floodwater Classifier Development for EO-1 HYPERION Imagery; 7) Autonomous Sciencecraft Experiment (ASE) Operations on EO-1 in 2004; 8) Autonomous Vegetation Cover Scene Classification of EO-1 Hyperion Hyperspectral Data; 9) Long-Term Continental Areal Reduction Produced by Tectonic Processes.

  8. SalHUD--A Graphical Interface to Public Health Data in Puerto Rico. (United States)

    Ortiz-Zuazaga, Humberto G; Arce-Corretjer, Roberto; Solá-Sloan, Juan M; Conde, José G


    This paper describes SalHUD, a prototype web-based application for visualizing health data from Puerto Rico. Our initial focus was to provide interactive maps displaying years of potential life lost (YPLL). The public-use mortality file for year 2008 was downloaded from the Puerto Rico Institute of Statistics website. Data was processed with R, Python and EpiInfo to calculate years of potential life lost for the leading causes of death in each of the 78 municipalities on the island. Death records were classified according to ICD-10 codes. YPLL for each municipality was integrated into AtlasPR, a D3 Javascript map library. Additional Javascript, HTML and CSS programming was required to display maps as a web-based interface. YPLL for all municipalities are displayed on a map of Puerto Rico for each of the ten leading causes of death and for all causes combined, so users may dynamically explore the impact of premature mortality. This work is the first step in providing the general public in Puerto Rico with user-friendly, interactive, visual access to public health data that is usually published in numerical, text-based media.
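The YPLL statistic driving the maps is a simple sum. A minimal sketch, assuming the common convention of a fixed reference age (75 is widely used; the paper's exact reference age is not stated here):

```python
def ypll(ages_at_death, reference_age=75):
    """Years of potential life lost: each death before the reference age
    contributes (reference_age - age); later deaths contribute zero."""
    return sum(max(0, reference_age - a) for a in ages_at_death)
```

Deaths at ages 70, 80 and 25 contribute 5, 0 and 50 years respectively, so YPLL weights premature deaths far more heavily than a raw death count does, which is the point of mapping it per municipality.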

  9. Inspection of surface-mount device images using wavelet processing (United States)

    Carillo, Gerardo; Cabrera, Sergio D.; Portillo, Angel


    In this paper, the wavelet transform is used on surface mount device (SMD) images to devise a system used to inspect the presence of SMDs in printed circuit boards. The complete system involves preprocessing, feature extraction, and classification. The images correspond to three cases: SMD present (SMD), SMD not present with a speck of glue (GLUE), and SMD not present (noSMD). For each case, two images are collected using top and side illuminations, but these are first combined into one image before proceeding to further processing. Preprocessing is done by applying the wavelet transform to the images to expose details. Using 500 images for each of the three cases, various features are considered from different wavelet subbands, using one or several transform levels, to find four good discriminating parameters. Classification is performed sequentially using a two-level binary decision tree. Two features are combined into a two-component feature vector and fed into the first level, which compares the SMD vs noSMD cases. The second level uses another feature vector produced by combining two other features and then compares the SMD and GLUE cases. The features used give no cluster overlap on the training set, and a simple parallelepiped classifier devised at each level of the tree produces no errors on this set. Results give 99.6% correct classification when applied to a separate testing set consisting of 500 images for each case. All errors occur at level 2, where six SMD images are erroneously classified as GLUE.
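The subband features feeding the decision tree come from a wavelet decomposition. As a stand-in (the record does not say which wavelet the authors used), one level of the 2D Haar transform below splits an image into approximation and three detail subbands, whose energies can serve as discriminating features.

```python
def haar2d_level1(img):
    """One level of the 2D Haar transform for an image with even
    dimensions: returns (LL, LH, HL, HH) subbands of half size."""
    h, w = len(img), len(img[0])
    LL = [[0.0] * (w // 2) for _ in range(h // 2)]
    LH = [[0.0] * (w // 2) for _ in range(h // 2)]
    HL = [[0.0] * (w // 2) for _ in range(h // 2)]
    HH = [[0.0] * (w // 2) for _ in range(h // 2)]
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            a, b = img[i][j], img[i][j + 1]
            c, d = img[i + 1][j], img[i + 1][j + 1]
            LL[i // 2][j // 2] = (a + b + c + d) / 4.0  # approximation
            LH[i // 2][j // 2] = (a - b + c - d) / 4.0  # horizontal detail
            HL[i // 2][j // 2] = (a + b - c - d) / 4.0  # vertical detail
            HH[i // 2][j // 2] = (a - b - c + d) / 4.0  # diagonal detail
    return LL, LH, HL, HH

def subband_energy(band):
    """Sum of squared coefficients -- a typical scalar subband feature."""
    return sum(v * v for row in band for v in row)
```

A featureless region (e.g. bare board with no SMD edges) produces near-zero detail energies, while a component's edges concentrate energy in LH/HL, which is the kind of separation the decision tree exploits.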

  10. Extension of SALS to transmural quantitative structural analysis of planar tissues (United States)

    Sacks, Michael S.; Lin, Xiaotong


    Planar fibrous connective tissues are composed of a dense extra-cellular network of collagen and elastin fibers embedded in a ground matrix. Thus, quantification of fiber architecture is an important step in developing an understanding of the mechanics of planar tissues. We have extensively used small angle light scattering (SALS) to map the gross fiber orientation of several soft membrane connective tissues using a custom-built high-speed mapping instrument. However, the current technique is limited to total through-thickness tissue structural analysis. The current study was undertaken to determine the feasibility of obtaining transmural tissue structural information from 2D SALS data. Methods: The basic approach is to utilize precisely aligned serial histological sections cut en-face through a tissue block and obtain 2D fiber structure from each section using SALS. Transmural fiber structure information is then derived by integration of the 2D data to form a single data set containing the complete transmural fiber structure. To demonstrate the feasibility of the method, both explanted bioprosthetic heart valve (BHV) and native bovine pericardium (BP) tissues were evaluated. Results: For explanted BHV, the transmural SALS technique revealed preferential damage in the fibrosa layer, while for BP, variations in transmural fiber architecture were found consistent with optical histology. Conclusions: The transmural SALS technique successfully demonstrated quantitative transmural variations in fiber architecture in two dense collagenous tissues in a rapid, cost-effective approach.

  11. Study of optical techniques for the Ames unitary wind tunnel: Digital image processing, part 6 (United States)

    Lee, George


    A survey of digital image processing techniques and processing systems for aerodynamic images has been conducted. These images covered many types of flows and were generated by many types of flow diagnostics. These include laser vapor screens, infrared cameras, laser holographic interferometry, Schlieren, and luminescent paints. Some general digital image processing systems, imaging networks, optical sensors, and image computing chips were briefly reviewed. Possible digital imaging network systems for the Ames Unitary Wind Tunnel were explored.

  12. How to choose the best embedded processing platform for on-board UAV image processing ?


    Hulens, Dries; Goedemé, Toon; Verbeke, Jon


    Hulens D., Goedemé T., Verbeke J., ''How to choose the best embedded processing platform for on-board UAV image processing ?'', Proceedings 10th international conference on computer vision theory and applications - VISAPP 2015, 10 pp., March 11-14, 2015, Berlin, Germany.

  13. Qualitative and quantitative interpretation of SEM image using digital image processing. (United States)

    Saladra, Dawid; Kopernik, Magdalena


    The aim of this study is the improvement of qualitative and quantitative analysis of scanning electron microscope micrographs through the development of a computer program which enables automatic crack analysis of scanning electron microscopy (SEM) micrographs. Micromechanical tests of pneumatic ventricular assist devices result in a large number of micrographs; therefore, the analysis must be automatic. Tests for athrombogenic titanium nitride/gold coatings deposited on polymeric substrates (Bionate II) are performed. These tests include microshear, microtension and fatigue analysis. Anisotropic surface defects observed in the SEM micrographs require support for qualitative and quantitative interpretation. Improvement of the qualitative analysis of scanning electron microscope images was achieved by a set of computational tools that includes binarization, simplified expanding, expanding, simple image statistic thresholding, Laplacian 1 and Laplacian 2 filters, Otsu thresholding and reverse binarization. Several modifications of the known image processing techniques and combinations of the selected image processing techniques were applied. The introduced quantitative analysis of digital scanning electron microscope images enables computation of stereological parameters such as area, crack angle, crack length, and total crack length per unit area. This study also compares the functionality of the developed computer program of digital image processing with existing applications. The described pre- and postprocessing may be helpful in scanning electron microscopy and transmission electron microscopy surface investigations. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
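Among the tools listed, Otsu thresholding is a standard, fully specified algorithm, so it can be sketched exactly: choose the grey level that maximizes the between-class variance of the resulting foreground/background split.

```python
def otsu_threshold(pixels, levels=256):
    """Otsu's global threshold for a flat list of integer grey levels:
    return the level t maximizing between-class variance of the split
    into {<= t} and {> t}."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w0, sum0 = 0, 0.0
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w0 += hist[t]           # background pixel count
        if w0 == 0:
            continue
        w1 = total - w0         # foreground pixel count
        if w1 == 0:
            break
        sum0 += t * hist[t]
        m0 = sum0 / w0          # background mean
        m1 = (sum_all - sum0) / w1  # foreground mean
        between = w0 * w1 * (m0 - m1) ** 2
        if between > best_var:
            best_var, best_t = between, t
    return best_t
```

For a strongly bimodal micrograph (e.g. dark cracks on a bright coating), the returned threshold lands between the two intensity clusters, which is what makes the subsequent binarization and crack measurement automatic.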

  14. Design flow for implementing image processing in FPGAs (United States)

    Trakalo, M.; Giles, G.


    A design flow for implementing a dynamic gamma algorithm in an FPGA is described. Real-time video processing makes enormous demands on processing resources. An FPGA solution offers some advantages over commercial video chip and DSP implementation alternatives. The traditional approach to FPGA development involves a system engineer designing, modeling and verifying an algorithm and writing a specification. A hardware engineer uses the specification as a basis for coding in VHDL and testing the algorithm in the FPGA with supporting electronics. This process is work-intensive, and verification of the image processing algorithm executing on the FPGA does not occur until late in the program. The described design process allows the system engineer to design and verify a true VHDL version of the algorithm, executing in an FPGA. This process yields reduced risk and development time. The process is achieved by using Xilinx System Generator in conjunction with Simulink® from The MathWorks. System Generator is a tool that bridges the gap between the high level modeling environment and the digital world of the FPGA. System Generator is used to develop the dynamic gamma algorithm for the contrast enhancement of a candidate display product. The result of this effort is an increase in the dynamic range of the displayed video, yielding a more useful image for the user.
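The underlying operation is a gamma lookup table, which is also how such an algorithm is typically realized in FPGA block RAM. The LUT below is standard; the `dynamic_gamma` mapping from frame brightness to gamma is a hypothetical choice for illustration, since the paper does not disclose its actual adaptation rule.

```python
def gamma_lut(gamma, levels=256):
    """Lookup table mapping each 8-bit input level to its gamma-corrected
    output level; gamma < 1 brightens mid-tones, gamma > 1 darkens them."""
    return [round(255.0 * (i / 255.0) ** gamma) for i in range(levels)]

def dynamic_gamma(frame_mean, lo=0.5, hi=2.0):
    """Hypothetical dynamic rule: interpolate gamma linearly from the
    frame's mean brightness, so dark frames get gamma < 1 (brightening)
    and bright frames get gamma > 1."""
    return lo + (hi - lo) * frame_mean / 255.0
```

In hardware, only the LUT contents change per frame; each pixel is then a single table lookup, which is why the approach suits real-time video.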

  15. Kamusal Karar Alma Süreçlerinde Sosyal Tercih Duyarlılığı ve Sürece İlişkin Yapısal Çözümlemeler (The Sensitivity of Social Preferences in the Public Decision-Making Process and Structural Analyses of the Process)

    Directory of Open Access Journals (Sweden)

    A. Niyazi ÖZKER


    Full Text Available In this study, we aim to highlight the place of social preferences, which strongly influence the socio-economic components of the public decision-making process, together with the dynamics that decision makers should weigh as the balancing component between central-government priorities and social preferences. We also analyze, with respect to the participation process, the negative effects that the political strategies of central governments can have on socially shaped public choices. It appears that structural models of decision analysis are very important for ensuring genuine participation in decision making, and that the participatory sensitivity which lays the groundwork for overcoming certain paradoxes in the decision-making process must be taken into consideration and examined as a primary concern, especially in countries with structural political problems.

  16. Slide Set: Reproducible image analysis and batch processing with ImageJ. (United States)

    Nanes, Benjamin A


    Most imaging studies in the biological sciences rely on analyses that are relatively simple. However, manual repetition of analysis tasks across multiple regions in many images can complicate even the simplest analysis, making record keeping difficult, increasing the potential for error, and limiting reproducibility. While fully automated solutions are necessary for very large data sets, they are sometimes impractical for the small- and medium-sized data sets common in biology. Here we present the Slide Set plugin for ImageJ, which provides a framework for reproducible image analysis and batch processing. Slide Set organizes data into tables, associating image files with regions of interest and other relevant information. Analysis commands are automatically repeated over each image in the data set, and multiple commands can be chained together for more complex analysis tasks. All analysis parameters are saved, ensuring transparency and reproducibility. Slide Set includes a variety of built-in analysis commands and can be easily extended to automate other ImageJ plugins, reducing the manual repetition of image analysis without the set-up effort or programming expertise required for a fully automated solution.

  17. Research on Image processing in laser triangulation system

    Energy Technology Data Exchange (ETDEWEB)

    Liu Kai; Wang Qianqian; Wang Yang; Liu Chenrui, E-mail: [School of Optoelectronics, Beijing Institute of Technology, 100081 Beijing (China)


    Laser triangulation ranging is a displacement and distance measurement method based on the principle of optical triangulation, using a laser as the light source. It offers a simple structure, high speed, high accuracy, strong anti-jamming capability and good adaptability, and is therefore widely used in various fields such as industrial production, road testing and three-dimensional face detection. In the current study, the features of the spot images acquired by the CCD in a laser triangulation system were analyzed, and appropriate algorithms for the spot images were discussed. Experimental results showed that the precision and stability of the spot location were enhanced significantly after applying these image processing algorithms.
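The core of spot location in such a system is usually a subpixel centroid of the CCD spot image; the intensity-weighted centroid below is a generic sketch of that step (the paper's specific algorithms are not detailed in this record).

```python
def spot_centroid(img):
    """Intensity-weighted centroid (row, col) of a laser spot image.
    Subpixel accuracy of this position is what sets the precision of
    the triangulated distance."""
    total = sum(sum(row) for row in img)
    r = sum(i * sum(row) for i, row in enumerate(img)) / total
    c = sum(j * v for row in img for j, v in enumerate(row)) / total
    return r, c
```

For a symmetric spot, the centroid lands exactly on the peak pixel; noise and asymmetry shift it by fractions of a pixel, which is why preprocessing (thresholding, filtering) before the centroid improves stability.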

  18. Radar image processing for rock-type discrimination (United States)

    Blom, R. G.; Daily, M.


    Image processing and enhancement techniques for improving the geologic utility of digital satellite radar images are reviewed. Preprocessing techniques are described, including mean and variance correction on a range or azimuth line-by-line basis to provide uniformly illuminated swaths, median value filtering for four-look imagery to eliminate speckle, and geometric rectification using a priori elevation data. Examples are presented of the application of preprocessing methods to Seasat and Landsat data, and Seasat SAR imagery was coregistered with Landsat imagery to form composite scenes. A polynomial was developed to distort the radar picture to fit the Landsat image of a 90 x 90 km grid, combining Landsat color ratios with Seasat intensities. Subsequent linear discriminant analysis was employed to discriminate rock types from known areas. Adding Seasat data to the Landsat data improved rock identification by 7%.
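The first two preprocessing steps can be sketched generically: per-line mean/variance correction normalizes each azimuth line to the global statistics, and a small median filter suppresses speckle outliers. These are minimal illustrations, not the authors' code.

```python
def equalize_lines(img):
    """Per-line mean/variance correction: rescale every line so its mean
    and standard deviation match those of the whole image, giving a
    uniformly illuminated swath."""
    flat = [v for row in img for v in row]
    n = len(flat)
    g_mean = sum(flat) / n
    g_std = (sum((v - g_mean) ** 2 for v in flat) / n) ** 0.5
    out = []
    for row in img:
        m = sum(row) / len(row)
        s = (sum((v - m) ** 2 for v in row) / len(row)) ** 0.5 or 1.0
        out.append([(v - m) / s * g_std + g_mean for v in row])
    return out

def median3(row):
    """Width-3 median filter along a line for speckle suppression;
    border samples are copied through unchanged."""
    return ([row[0]]
            + [sorted(row[i - 1:i + 2])[1] for i in range(1, len(row) - 1)]
            + [row[-1]])
```

After `equalize_lines`, every line shares the image-wide mean, removing the bright/dark banding that line-dependent gain produces; `median3` removes isolated speckle spikes while preserving edges better than averaging.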


    Directory of Open Access Journals (Sweden)

    S Jeyalakshmi


    Full Text Available Plants need 13 mineral nutrients for their growth and survival. Toxicity or deficiency in any one or more of these nutrients affects the growth of the plant and may even cause its destruction. Hence, a system for constantly monitoring the nutrient status of plants becomes essential for increasing both the quantity and quality of yield. A diagnostic system using digital image processing can detect deficiency symptoms much earlier than the human eye can recognize them, enabling farmers to take appropriate remedial action in time. This paper reviews work using image processing techniques for diagnosing nutrient deficiency in plants.

  20. Recent Advances in Techniques for Hyperspectral Image Processing (United States)

    Plaza, Antonio; Benediktsson, Jon Atli; Boardman, Joseph W.; Brazile, Jason; Bruzzone, Lorenzo; Camps-Valls, Gustavo; Chanussot, Jocelyn; Fauvel, Mathieu; Gamba, Paolo; Gualtieri, Anthony; hide


    Imaging spectroscopy, also known as hyperspectral imaging, has been transformed in less than 30 years from being a sparse research tool into a commodity product available to a broad user community. Currently, there is a need for standardized data processing techniques able to take into account the special properties of hyperspectral data. In this paper, we provide a seminal view on recent advances in techniques for hyperspectral image processing. Our main focus is on the design of techniques able to deal with the high-dimensional nature of the data, and to integrate the spatial and spectral information. Performance of the discussed techniques is evaluated in different analysis scenarios. To satisfy time-critical constraints in specific applications, we also develop efficient parallel implementations of some of the discussed algorithms. Combined, these parts provide an excellent snapshot of the state-of-the-art in those areas, and offer a thoughtful perspective on future potentials and emerging challenges in the design of robust hyperspectral imaging algorithms.

  1. Evolving matched filter transform pairs for satellite image processing (United States)

    Peterson, Michael R.; Horner, Toby; Moore, Frank


    Wavelets provide an attractive method for efficient image compression. For transmission across noisy or bandwidth limited channels, a signal may be subjected to quantization in which the signal is transcribed onto a reduced alphabet in order to save bandwidth. Unfortunately, the performance of the discrete wavelet transform (DWT) degrades at increasing levels of quantization. In recent years, evolutionary algorithms (EAs) have been employed to optimize wavelet-inspired transform filters to improve compression performance in the presence of quantization. Wavelet filters consist of a pair of real-valued coefficient sets; one set represents the compression filter while the other set defines the image reconstruction filter. The reconstruction filter is defined as the biorthogonal inverse of the compression filter. Previous research focused upon two approaches to filter optimization. In one approach, the original wavelet filter is used for image compression while the reconstruction filter is evolved by an EA. In the second approach, both the compression and reconstruction filters are evolved. In both cases, the filters are not biorthogonally related to one another. We propose a novel approach to filter evolution. The EA optimizes a compression filter. Rather than using a wavelet filter or evolving a second filter for reconstruction, the reconstruction filter is computed as the biorthogonal inverse of the evolved compression filter. The resulting filter pair retains some of the mathematical properties of wavelets. This paper compares this new approach to existing filter optimization approaches to determine its suitability for the optimization of image filters appropriate for defense applications of image processing.

  2. Assessment of banana fruit maturity by image processing technique. (United States)

    Surya Prabha, D; Satheesh Kumar, J


    Maturity stage of fresh banana fruit is an important factor that affects fruit quality during ripening and marketability after ripening. The ability to identify the maturity of fresh banana fruit is a great support for farmers in optimizing the harvesting phase, helping them avoid harvesting either under-matured or over-matured bananas. This study attempted to use image processing techniques to detect the maturity stage of fresh banana fruit precisely from the color and size values of its images. A total of 120 images, 40 from each of the stages under-mature, mature and over-mature, were used for algorithm development and accuracy prediction. The mean color intensity from the histogram, and the area, perimeter, major axis length and minor axis length from the size values, were extracted from the calibration images. Analysis of variance between maturity stages on these features indicated that the mean color intensity and area features were the more significant predictors of banana fruit maturity. Hence, two classifier algorithms, a mean color intensity algorithm and an area algorithm, were developed and their accuracy in maturity detection was assessed. The mean color intensity algorithm showed 99.1% accuracy in classifying banana fruit maturity. The area algorithm classified under-mature fruit with 85% accuracy. The maturity assessment technique proposed in this paper could therefore be used commercially to develop a field-based, fully automatic detection system that helps banana growers decide on the right time to harvest.
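The decision rule behind such a classifier can be sketched as simple thresholding on mean intensity. The threshold values and the synthetic grey images below are illustrative assumptions, not the values fitted from the study's 120 calibration images:

```python
import numpy as np

def classify_maturity(image, t_under=120.0, t_over=180.0):
    """Classify a banana image as under-mature, mature or over-mature from
    its mean colour intensity (thresholds here are illustrative only)."""
    mean_intensity = float(np.mean(image))
    if mean_intensity < t_under:
        return "under-mature"
    if mean_intensity > t_over:
        return "over-mature"
    return "mature"

# Synthetic 8-bit grey images standing in for calibration photographs.
dark = np.full((64, 64), 90, dtype=np.uint8)
mid = np.full((64, 64), 150, dtype=np.uint8)
bright = np.full((64, 64), 210, dtype=np.uint8)
```

A real system would fit the thresholds from labelled images, e.g. from the per-stage intensity distributions identified by the analysis of variance.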

  3. Investigations in optoelectronic image processing in scanning laser microscopy (United States)

    Chaliha, Hiranya Kumar

    A considerable amount of work has been done on scanning laser microscopy since its applications were first pointed out by Roberts and Young [1], Minsky [2] and Davidovits et al. [3]. The advent of the laser has made it possible to focus an intense beam of light in a scanning optical microscope (SOM) [4, 5] and hence explore regions of microscopy [6] not covered by conventional microscopy. In the simple SOM [7, 8, 9], the upper spatial frequency in amplitude transmittance or reflectance of an object for which the transfer function is nonzero is the same as that in a conventional optical microscope. However, in the Type II SOM [7], or confocal SOM, which employs a coherent or point detector, the spatial frequency bandwidth is twice that obtained in a conventional microscope. Besides, this confocal set-up is found to be very useful in optical sectioning and consequently in 3-D image processing [10, 11, 12], especially of biological specimens. Such systems are also suitable for studies of semiconductor materials [13], super-resolution [14] and various imaginative ways of image processing [15, 16, 17], including phase imaging [18]. A brief survey of related advances in scanning optical microscopy is given in chapter 1 of the thesis. The performance of a SOM may also be investigated by concentrating on the signal derived from a one-dimensional scan of the object specimen. This simplified mode can yield a wealth of information for biological and semiconductor specimens. Hence we have investigated the design of a scanning laser system suited specifically to studies of line-scan image signals of microscopic specimens probed by a focused laser spot. An electro-mechanical method of scanning the object specimen has been designed with this aim in mind. Chapter 2, Part A of the thesis deals with the design considerations of such a system.
For analysis of scan signals at a later instant of time, so as to facilitate further processing, an arrangement of microprocessor

  4. VIP: Vortex Image Processing Package for High-contrast Direct Imaging (United States)

    Gomez Gonzalez, Carlos Alberto; Wertz, Olivier; Absil, Olivier; Christiaens, Valentin; Defrère, Denis; Mawet, Dimitri; Milli, Julien; Absil, Pierre-Antoine; Van Droogenbroeck, Marc; Cantalloube, Faustine; Hinz, Philip M.; Skemer, Andrew J.; Karlsson, Mikael; Surdej, Jean


    We present the Vortex Image Processing (VIP) library, a Python package dedicated to astronomical high-contrast imaging. Our package relies on the extensive Python stack of scientific libraries and aims to provide a flexible framework for high-contrast data and image processing. In this paper, we describe the capabilities of VIP related to processing image sequences acquired using the angular differential imaging (ADI) observing technique. VIP implements functionalities for building high-contrast data processing pipelines, encompassing pre- and post-processing algorithms, potential source position and flux estimation, and sensitivity curve generation. Among the reference point-spread function subtraction techniques for ADI post-processing, VIP includes several flavors of principal component analysis (PCA) based algorithms, such as annular PCA and incremental PCA algorithms capable of processing big datacubes (of several gigabytes) on a computer with limited memory. Also, we present a novel ADI algorithm based on non-negative matrix factorization, which comes from the same family of low-rank matrix approximations as PCA and provides fairly similar results. We showcase the ADI capabilities of the VIP library using a deep sequence on HR 8799 taken with the LBTI/LMIRCam and its recently commissioned L-band vortex coronagraph. Using VIP, we investigated the presence of additional companions around HR 8799 and did not find any significant additional point source beyond the four known planets. VIP is available at and is accompanied by Jupyter notebook tutorials illustrating the main functionalities of the library.
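A minimal sketch of PCA-based speckle subtraction of the kind VIP implements (this is a generic illustration, not VIP's actual API): project the mean-subtracted frames of an ADI cube onto their first principal components and subtract the resulting low-rank PSF model.

```python
import numpy as np

def pca_psf_subtract(cube, ncomp=2):
    """Subtract a low-rank PSF model from an ADI cube (n_frames, ny, nx).
    A minimal sketch of PCA speckle subtraction, not VIP's implementation."""
    n, ny, nx = cube.shape
    X = cube.reshape(n, ny * nx).astype(float)
    mean = X.mean(axis=0)
    Xc = X - mean
    # Principal components capture the frame-to-frame speckle pattern.
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:ncomp]                   # (ncomp, ny*nx) basis of the speckle field
    model = Xc @ V.T @ V + mean      # low-rank reconstruction of each frame
    residuals = X - model
    return residuals.reshape(n, ny, nx)

# A cube of identical speckle frames should be removed entirely.
rng = np.random.default_rng(0)
speckles = rng.normal(size=(1, 32 * 32))
cube = (np.ones((10, 1)) @ speckles).reshape(10, 32, 32)
res = pca_psf_subtract(cube, ncomp=1)
```

In real ADI data, a planet rotates through the frames while speckles stay quasi-static, so it survives the subtraction and is recovered after derotation and stacking.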

  5. Work Around Distributed Image Processing and Workflow Management (United States)

    Schaaff, A.; Bonnarel, F.; Claudon, J.-J.; Louys, M.; Pestel, C.; David, R.; Genaud, S.; Wolf, C.


    Many people develop tools for image processing in various languages (C, C++, FORTRAN, MATLAB, etc.) but do not distribute them. Among the reasons are portability and the difficulty of making them collaborate with other tools. We have developed an architecture in which such tools can be wrapped and accessed in a standardized way (CGI and Web Services to use them in other applications, a Java applet to use them directly). We have also developed workflow libraries (client and server sides) to enable the easy creation and management of more complex tasks. The initially isolated tasks can now be combined into complex workflows. We are now working on the distribution of this architecture to enable the creation of image processing nodes.


    Directory of Open Access Journals (Sweden)

    A. N. Chichko


    Full Text Available The paper proposes a mathematical apparatus for processing images of pearlitic cast-iron microstructures with randomly distributed graphite inclusions. Software has been developed that determines statistical distribution functions of graphite-inclusion characteristics: areas, perimeters and distances between inclusions. The paper shows that computer processing of grey cast-iron microstructure images makes it possible to classify, on the basis of the statistical distribution of the graphite phase, microstructures that are indistinguishable by conventional metallographic methods; this has practical significance for investigating the interrelation between workability and cast-iron microstructure.
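The inclusion statistics described can be sketched with thresholding and connected-component labelling; the threshold value and the synthetic micrograph below are illustrative assumptions, not the authors' calibration.

```python
import numpy as np

def inclusion_areas(image, threshold=128):
    """Areas (in pixels) of dark graphite inclusions in a grey micrograph,
    via thresholding and 4-connected flood fill. The threshold is
    illustrative; real micrographs need calibrated segmentation."""
    mask = image < threshold          # graphite appears dark on a bright matrix
    seen = np.zeros_like(mask, dtype=bool)
    areas = []
    rows, cols = mask.shape
    for r in range(rows):
        for c in range(cols):
            if mask[r, c] and not seen[r, c]:
                stack, area = [(r, c)], 0
                seen[r, c] = True
                while stack:          # flood-fill one inclusion
                    y, x = stack.pop()
                    area += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                areas.append(area)
    return areas

# Synthetic micrograph: bright matrix (200) with two dark inclusions.
img = np.full((20, 20), 200, dtype=np.uint8)
img[2:5, 2:5] = 50      # 3x3 inclusion, area 9
img[10:14, 10:12] = 40  # 4x2 inclusion, area 8
areas = inclusion_areas(img)
```

Perimeters and inter-inclusion distances would follow the same pattern: compute per-label boundary pixels and centroid spacings, then build the empirical distribution functions.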

  7. Characterization of Periodically Poled Nonlinear Materials Using Digital Image Processing (United States)


    [Fragment of the report's list of figures: Figure 25, calculated and measured threshold of the PPLN crystal; Figure 26, representative diagram (using a motorized stage or manually) of an etched periodically poled lithium niobate (PPLN) sample, used to validate the accuracy of the image processing routines, in which a 1.064 µm laser beam reaches the PPLN crystal.]

  8. A study of a photoelectrooptical light modulator for image processing

    Energy Technology Data Exchange (ETDEWEB)

    Bun, A.Z.; Feldvush, V.I.; Merkin, S.U.; Mezhevenko, E.S.; Oparin, A.N.; Potaturkin, O.P.; Sherbakov, G.N.


    The operation of a photoelectrooptical spatial modulator as one element of an optoelectronic system is studied during the real-time input and initial processing of images. Possible contouring variants for use with this scheme are analyzed. The use of such a modulator in conjunction with a holographic intensity correlator, upon which the system is based, makes it possible to realize quasioptimal recognition algorithms. Experimental results are given.


    Directory of Open Access Journals (Sweden)

    Rajdeep Mitra


    Full Text Available White sponge nevus is a rare hereditary disease in humans that causes incurable white lesions of the oral mucosa. An appropriate history and clinical examination, along with biopsy and cytological studies, are helpful for diagnosing this disorder. Identification can also be made in an alternative way by applying an image processing technique, watershed segmentation, in MATLAB. The applied techniques are effective and reliable for early, accurate detection of the disease, as an alternative to expert clinical examination and time-consuming laboratory investigations.

  10. Application of digital image processing for pot plant grading


    Dijkstra, J.


    The application of digital image processing to the grading of pot plants has been studied. Different techniques, e.g. plant-part identification based on knowledge-based segmentation, have been developed to measure features of plants at different growth stages. Growth experiments were performed to identify grading features and to test whether it is possible to grade pot plants into homogeneous groups. Judgement experiments were performed to test whether it is possible to grade plants as good...

  11. Digital image processing of earth observation sensor data (United States)

    Bernstein, R.


    This paper describes digital image processing techniques that were developed to precisely correct Landsat multispectral earth observation data and gives illustrations of the results achieved, e.g., geometric corrections with an error of less than one picture element, a relative error of one-fourth picture element, and no radiometric error effect. Techniques for enhancing the sensor data, digitally mosaicking multiple scenes, and extracting information are also illustrated.

  12. Mørtelegenskaber og billedbehandling (Mortar properties and image processing)

    DEFF Research Database (Denmark)

    Nielsen, Anders


    The properties of lime mortars can be essentially improved by adding fillers to the mortars in an intelligent way. This is shown in the thesis of Thorborg von Konow (1997). The changes in the pore structure and the resulting changes in properties can be treated by means of the rules of materials mechanics developed by Lauge Fuglsang Nielsen at this institute. The necessary pore characteristics are measured by means of image processing.

  13. Mass Processing of Sentinel-1 Images for Maritime Surveillance

    Directory of Open Access Journals (Sweden)

    Carlos Santamaria


    Full Text Available The free, full and open data policy of the EU's Copernicus programme has vastly increased the amount of remotely sensed data available to both operational and research activities. However, this huge amount of data calls for new ways of accessing and processing such "big data". This paper focuses on the use of Copernicus's Sentinel-1 radar satellite for maritime surveillance. It presents a study in which ship positions have been automatically extracted from more than 11,500 Sentinel-1A images collected over the Mediterranean Sea, and compared with ship position reports from the Automatic Identification System (AIS). These images account for almost all the Sentinel-1A acquisitions taken over the area during the two-year period from the start of the operational phase in October 2014 until September 2016. A number of tools and platforms developed at the European Commission's Joint Research Centre (JRC) that have been used in the study are described in the paper. They are: (1) Search for Unidentified Maritime Objects (SUMO), a tool for ship detection in Synthetic Aperture Radar (SAR) images; (2) the JRC Earth Observation Data and Processing Platform (JEODPP), a platform for efficient storage and processing of large amounts of satellite images; and (3) Blue Hub, a maritime surveillance GIS and data fusion platform. The paper presents the methodology and results of the study, giving insights into the new maritime surveillance knowledge that can be gained by analysing such a large dataset, and the lessons learnt in terms of handling and processing the big dataset.
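The core detection step in a tool such as SUMO is CFAR-style thresholding of bright scatterers against sea-clutter statistics. The global-statistics variant below is a deliberately crude sketch (operational detectors estimate the background locally in a sliding window), and the synthetic scene is an assumption:

```python
import numpy as np

def detect_bright_targets(img, k=5.0):
    """Flag pixels much brighter than the sea-clutter background: a crude,
    global-statistics stand-in for a CFAR ship detector."""
    mu, sigma = float(img.mean()), float(img.std())
    return img > mu + k * sigma

# Synthetic SAR-like scene: Gaussian clutter plus one ship-like scatterer.
rng = np.random.default_rng(1)
sea = rng.normal(10.0, 1.0, size=(100, 100))
sea[50, 50] = 100.0
mask = detect_bright_targets(sea, k=8.0)
```

A real pipeline would follow detection with clustering of flagged pixels into targets, land masking, and correlation of target positions against AIS reports.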

  14. Obtainment of equilibrium times and diffusion coefficients of acid and salt to design the marinating process of Engraulis anchoita fillets

    Directory of Open Access Journals (Sweden)

    María Rosa Casales


    Full Text Available Simultaneous diffusion of sodium chloride and acetic acid in anchovy fillets was studied. The results show that equilibrium conditions between the fillets and the marinating solution were reached, according to the K values obtained close to unity. The marinating solution was stirred to prevent the formation of a diluted surface layer on the fish fillets. The equilibrium times for acid and salt were higher for the samples stirred during marinating. The sodium chloride and acetic acid uptake during marinating of anchovy fillets can be explained by Fick's law. The values of the diffusion coefficient for acetic acid at 20 °C were 3.39 × 10⁻⁶ and 3.49 × 10⁻⁶ cm²/s for marinating with and without agitation, respectively. The value of the diffusion coefficient for sodium chloride for marinating without agitation was 2.39 × 10⁻⁷ cm²/s, while it could not be obtained for marinating with agitation.
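The kind of model fitted here is the series solution of Fick's second law for a plane sheet (Crank's solution for fractional uptake). The sketch below uses a diffusion coefficient of the order reported for acetic acid and an assumed fillet half-thickness; it illustrates the model, not the authors' fitting code.

```python
import math

def fractional_uptake(D, t, half_thickness, terms=50):
    """Fractional solute uptake M_t/M_inf for diffusion into a plane sheet
    (Crank's series solution of Fick's second law).
    D in cm^2/s, t in s, half_thickness in cm."""
    l = half_thickness
    s = 0.0
    for n in range(terms):
        a = (2 * n + 1) ** 2
        s += (8.0 / (a * math.pi ** 2)) * \
            math.exp(-a * math.pi ** 2 * D * t / (4 * l ** 2))
    return 1.0 - s

# Acetic acid: D ~ 3.4e-6 cm^2/s (order of the reported values);
# assumed fillet half-thickness of 0.5 cm.
u_early = fractional_uptake(3.4e-6, 600, 0.5)        # after 10 minutes
u_late = fractional_uptake(3.4e-6, 48 * 3600, 0.5)   # after 2 days
```

Evaluating the series over time gives the uptake curve from which an equilibrium time (e.g. the time to reach 99% of the equilibrium concentration) can be read off.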

  15. The evolutionary history of the SAL1 gene family in eutherian mammals

    Directory of Open Access Journals (Sweden)

    Callebaut Isabelle


    Full Text Available Abstract Background SAL1 (salivary lipocalin) is a member of the OBP (odorant binding protein) family and is involved in chemical sexual communication in the pig. SAL1 and its relatives may be involved in pheromone and olfactory receptor binding and in pre-mating behaviour. The evolutionary history and the selective pressures acting on SAL1 and its orthologous genes have not yet been exhaustively described. The aim of the present work was to study the evolution of these genes, to elucidate the role of selective pressures in their evolution and the consequences for their functions. Results Here, we present the evolutionary history of the SAL1 gene and its orthologous genes in mammals. We found that (1) SAL1 and its related genes arose in eutherian mammals with lineage-specific duplications in rodents, horse and cow, and were lost in human, mouse lemur, bushbaby and orangutan; (2) the evolution of the duplicated genes of horse, rat, mouse and guinea pig is driven by concerted evolution, with extensive gene conversion events in mouse and guinea pig, and by positive selection acting mainly on paralogous genes in horse and guinea pig; (3) positive selection was detected for amino acids involved in pheromone binding and amino acids putatively involved in olfactory receptor binding; (4) positive selection was also found to vary across lineages, indicating a species-specific strategy for amino acid selection. Conclusions This work provides new insights into the evolutionary history of SAL1 and its orthologs. On the one hand, some genes are subject to concerted evolution and to an increase in dosage, suggesting the need for homogeneity of sequence and function in certain species. On the other hand, positive selection plays a role in the diversification of the functions of the family and across lineages, suggesting adaptive evolution, with possible consequences for speciation and for the reinforcement of prezygotic barriers.

  16. Comparison of Small Unmanned Aerial Vehicles Performance Using Image Processing

    Directory of Open Access Journals (Sweden)

    Esteban Cano


    Full Text Available Precision agriculture is a farm management technology that involves sensing and then responding to the observed variability in the field. Remote sensing is one of the tools of precision agriculture. The emergence of small unmanned aerial vehicles (sUAVs) has paved the way to accessible remote sensing tools for farmers. This paper describes the development of an image processing approach to compare two popular off-the-shelf sUAVs: the 3DR Iris+ and the DJI Phantom 2. Both units are equipped with a camera gimbal carrying a GoPro camera. The comparison of the two sUAVs involves a hovering test and a rectilinear motion test. In the hovering test, the sUAV was allowed to hover over a known object and images were taken every quarter of a second for two minutes. For the image processing evaluation, the position of the object in the images was measured and used to assess the stability of the sUAV while hovering. In the rectilinear test, the sUAV was allowed to follow a straight path and images of a lined track were acquired. The lines on the images were then measured to determine how accurately the sUAV followed the path. The hovering test results show that the 3DR Iris+ had a maximum position deviation of 0.64 m (0.126 m root-mean-square (RMS) displacement) while the DJI Phantom 2 had a maximum deviation of 0.79 m (0.150 m RMS displacement). In the rectilinear motion test, the maximum displacements for the 3DR Iris+ and the DJI Phantom 2 were 0.85 m (0.134 m RMS displacement) and 0.73 m (0.372 m RMS displacement), respectively. These results demonstrate that the two sUAVs performed well in both tests and can therefore be used for civilian applications such as agricultural monitoring. The results also show that the developed image processing approach can be used to evaluate the performance of an sUAV and has the potential to be used as another feedback control parameter for autonomous navigation.
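The hovering metric can be reproduced as the RMS displacement of the tracked object's centre from its mean position across frames; the short track below is synthetic:

```python
import numpy as np

def rms_displacement(positions):
    """Root-mean-square displacement of a tracked object from its mean
    position, as used to score hovering stability from an image series."""
    p = np.asarray(positions, dtype=float)
    d = p - p.mean(axis=0)                      # deviation per frame
    return float(np.sqrt(np.mean(np.sum(d ** 2, axis=1))))

# Object centres (x, y) in metres extracted from four hypothetical frames.
track = [(0.0, 0.0), (0.2, 0.0), (0.0, 0.2), (0.2, 0.2)]
rms = rms_displacement(track)
```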

  17. Quantitative assessment of susceptibility weighted imaging processing methods (United States)

    Li, Ningzhi; Wang, Wen-Tung; Sati, Pascal; Pham, Dzung L.; Butman, John A.


    Purpose To evaluate different susceptibility weighted imaging (SWI) phase processing methods and parameter selection, thereby improving understanding of potential artifacts, as well as facilitating choice of methodology in clinical settings. Materials and Methods Two major phase processing methods, Homodyne-filtering and phase unwrapping-high pass (HP) filtering, were investigated with various phase unwrapping approaches, filter sizes, and filter types. Magnitude and phase images were acquired from a healthy subject and brain injury patients on a 3T clinical Siemens MRI system. Results were evaluated based on image contrast to noise ratio and presence of processing artifacts. Results When using a relatively small filter size (32 pixels for the matrix size 512 × 512 pixels), all Homodyne-filtering methods were subject to phase errors leading to 2% to 3% masked brain area in lower and middle axial slices. All phase unwrapping-filtering/smoothing approaches demonstrated fewer phase errors and artifacts compared to the Homodyne-filtering approaches. For performing phase unwrapping, Fourier-based methods, although less accurate, were 2–4 orders of magnitude faster than the PRELUDE, Goldstein and Quality-guide methods. Conclusion Although Homodyne-filtering approaches are faster and more straightforward, phase unwrapping followed by HP filtering approaches perform more accurately in a wider variety of acquisition scenarios. PMID:24923594
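The homodyne-filtering branch of the comparison can be sketched as follows: low-pass the complex image with a central k-space window, then take the phase of the original image times the conjugate of the low-passed copy. The window width plays the role of the filter size the study varies. This is a generic sketch on a synthetic phase image, not the authors' code:

```python
import numpy as np

def homodyne_phase(complex_img, filt=8):
    """High-pass phase via homodyne filtering: divide out a low-pass-filtered
    copy of the complex image, keeping only fine phase structure."""
    n = complex_img.shape[0]
    k = np.fft.fftshift(np.fft.fft2(complex_img))
    win = np.zeros_like(k)
    c, h = n // 2, filt // 2
    win[c - h:c + h, c - h:c + h] = k[c - h:c + h, c - h:c + h]  # keep centre
    low = np.fft.ifft2(np.fft.ifftshift(win))
    return np.angle(complex_img * np.conj(low))

# A smooth (low-frequency) phase ramp should be almost entirely removed.
n = 64
y, x = np.mgrid[0:n, 0:n]
smooth_phase = 0.02 * (x + y) / n
img = np.exp(1j * smooth_phase)
hp = homodyne_phase(img, filt=16)
```

The residual that survives near the image edges illustrates the kind of filter-size-dependent artifact the study quantifies; the unwrap-then-high-pass alternative avoids some of these errors at higher computational cost.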

  18. Image processing software for enhanced visualization of faint or noisy autoradiographic images. (United States)

    Yee, T


    A computer program for digital image processing is described which can be implemented using the scanning densitometer hardware already present in most biology departments, plus computer video hardware that may either already be available or represent a moderate upgrade over an already planned computer purchase. The primary purpose of the program is to provide contrast enhancement of faint or low-contrast autoradiograph images and to implement background subtraction and digital smoothing methods that permit visualization of blurry electrophoresis bands against noisy backgrounds. The program also has modest editing capabilities that allow its use in the routine preparation of images for publication. Finally, the program has facilities for deblurring, edge enhancement and multiple-image averaging, which make it useful in other forms of photographic analysis.
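The background-subtraction-plus-smoothing step can be sketched as a quantile background estimate followed by a box filter; the parameter choices and the synthetic band image are illustrative assumptions:

```python
import numpy as np

def enhance(img, bg_quantile=0.5, kernel=3):
    """Background subtraction followed by box-filter smoothing: the two
    operations combined to pull faint bands out of noisy autoradiographs."""
    x = img.astype(float)
    # Subtract a global background level (the image's median by default).
    x = np.clip(x - np.quantile(x, bg_quantile), 0, None)
    # Box-filter smoothing with edge padding.
    pad = kernel // 2
    xp = np.pad(x, pad, mode="edge")
    out = np.zeros_like(x)
    for dy in range(kernel):
        for dx in range(kernel):
            out += xp[dy:dy + x.shape[0], dx:dx + x.shape[1]]
    return out / (kernel * kernel)

noisy = np.full((10, 10), 20.0)
noisy[4:6, :] += 30.0          # a faint horizontal band above background
flat = enhance(noisy)
```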

  19. Wages in the work of Frederick Winslow Taylor


    Silva, Victor Paulo Gomes da


    This article analyses and explains Frederick Winslow Taylor's perspective on wages, as set out in his two major works: Shop Management (1903) and Principles of Scientific Management (1911). The first part presents relevant economic aspects that characterized the time in which he lived and the extent to which they influenced his works. In the second part, an analysis is made of how wages are presented in Taylor's two works. The article...

  20. Striving for Diversity, Accessibility and Quality: Evaluating SiSAL Journal

    Directory of Open Access Journals (Sweden)

    Jo Mynard


    Full Text Available After establishing a journal, it is important to evaluate its progress to ensure that the principles that underpin its existence continue to be a priority. In this article, the author reports on measures that were used to evaluate Studies in Self-Access Learning (SiSAL) Journal. The research was designed to investigate the three principles that the journal values: diversity, accessibility and quality. The results identified some successful factors, such as accessibility and favourable perceptions of SiSAL Journal's quality. However, the results also identified areas that could be improved to further increase diversity and to encourage submissions from more authors based in different locations.

  1. Optimized Laplacian image sharpening algorithm based on graphic processing unit (United States)

    Ma, Tinghuai; Li, Lu; Ji, Sai; Wang, Xin; Tian, Yuan; Al-Dhelaan, Abdullah; Al-Rodhaan, Mznah


    In classical Laplacian image sharpening, all pixels are processed one by one, which leads to a large amount of computation. Traditional Laplacian sharpening on a CPU is considerably time-consuming, especially for large pictures. In this paper, we propose a parallel implementation of Laplacian sharpening based on the Compute Unified Device Architecture (CUDA), a computing platform for Graphics Processing Units (GPUs), and analyze the impact of picture size on performance and the relationship between data-transfer time and parallel computing time. Further, according to the different features of different memories, an improved scheme of our method is developed which exploits shared memory in the GPU instead of global memory and further increases efficiency. Experimental results prove that the two novel algorithms outperform the traditional sequential method based on OpenCV in terms of computing speed.
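A serial NumPy reference of the classical sharpening step being parallelized (the paper's contribution is the CUDA port of this per-pixel computation, which is not shown here):

```python
import numpy as np

def laplacian_sharpen(img):
    """Classical Laplacian sharpening: subtract the 4-neighbour Laplacian
    from the image, boosting edges and points against flat regions."""
    x = img.astype(float)
    p = np.pad(x, 1, mode="edge")
    # 4-neighbour Laplacian: up + down + left + right - 4*centre.
    lap = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4 * x
    return np.clip(x - lap, 0, 255)

img = np.full((5, 5), 100.0)
img[2, 2] = 130.0      # a bright point on a flat background
sharp = laplacian_sharpen(img)
```

In the CUDA version each output pixel is computed by one thread; the shared-memory refinement the paper describes caches each image tile so neighbouring threads do not re-read global memory.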

  2. Interpretation of medical imaging data with a mobile application: a mobile digital imaging processing environment. (United States)

    Lin, Meng Kuan; Nicolini, Oliver; Waxenegger, Harald; Galloway, Graham J; Ullmann, Jeremy F P; Janke, Andrew L


    Digital Imaging Processing (DIP) requires data extraction and output from a visualization tool to be consistent. Data handling and transmission between the server and a user is a systematic process in service interpretation. The use of integrated medical services for management and viewing of imaging data, in combination with a mobile visualization tool, can be greatly facilitated by data analysis and interpretation. This paper presents an integrated mobile application and DIP service called M-DIP. The objectives of the system are to (1) automate the direct data tiling, conversion and pre-tiling of brain images from Medical Imaging NetCDF (MINC) and Neuroimaging Informatics Technology Initiative (NIfTI) to RAW formats; (2) speed up querying of imaging measurements; and (3) display images at multiple zoom levels in three dimensions in real-world coordinates. In addition, M-DIP provides the ability to work on a mobile or tablet device without any software installation, using web-based protocols. M-DIP implements a three-level architecture with a relational middle-layer database, a stand-alone DIP server, and a mobile application logic middle level realizing user interpretation for direct querying and communication. This imaging software can display biological imaging data at multiple zoom levels and increase its quality to meet users' expectations. Interpretation of bioimaging data is facilitated by an interface analogous to online mapping services using real-world coordinate browsing. This allows mobile devices to display multiple datasets simultaneously from a remote site. M-DIP can be used as a measurement repository that can be accessed from any network environment, such as a portable mobile or tablet device. In addition, this system, in combination with mobile applications, establishes a virtualization tool in the neuroinformatics field to speed interpretation services.


  4. Marine geophysical data—Point Sal to Refugio State Beach, southern California (United States)

    Johnson, Samuel Y.; Hartwell, Stephen; Beeson, Jeffrey


    This data release includes approximately 1,032 km of marine single-channel seismic-reflection data collected by the U.S. Geological Survey on a research cruise (USGS survey 2014-632-FA) in July and August 2014, between Point Sal and Refugio State Beach. The dataset includes 168 profiles, most of which were collected on tracklines roughly perpendicular to the coast at 1 km line spacing; additional profiles were collected on coast-parallel tie lines. These data were acquired to support the California Seafloor Mapping Program and USGS Geologic Hazards projects. Seismic-reflection data were collected using a minisparker system that creates an acoustic signal by discharging an electrical pulse between electrodes and a ground, generating a frequency spectrum roughly between 200 and 1,600 Hz. At boat speeds of 4 to 4.5 knots, seismic traces were collected roughly every 1 to 2 meters. Water depths were generally between 50 m and 150 m, but as shallow as 10 m near the shoreline and as deep as 480 m for profiles crossing the Santa Barbara Basin. Acoustic pulses were generated at 0.5-second intervals on most profiles; a 1-second interval was used on the few profiles collected in deeper water. Standard SEG-Y files were generated using a Triton Subbottom Logger (SBL). Seismic data processing was accomplished using Sioseis, public-domain software developed at the Scripps Institution of Oceanography (part of the University of California, San Diego). The processing of these data consisted of a bandpass filter, mute function, automatic gain control, water-bottom detect, swell correction, and scaling/plotting. Both raw data in SEG-Y format and processed data ("Corrected SEG-Y") are provided in this data release.


    Directory of Open Access Journals (Sweden)

    Murinto


    Full Text Available Histogram equalization and Logarithmic Image Processing (LIP) are two methods of image enhancement. The two methods use different algorithms, and the strengths and weaknesses of each had not yet been established. This study compares the performance of histogram equalization and LIP in improving image brightness quality. The images used are 24-bit *.bmp (bitmap) files with no restriction on pixel dimensions. Each image is loaded into the program and processed with both histogram equalization and LIP. The parameters used are the resulting image, its histogram, running time, and signal-to-noise ratio (SNR). Testing was performed using black-box and alpha tests. The results for the sample images tested show that redistributing pixel intensity values with LIP yields visually better image quality, although it requires a longer processing time than histogram equalization; in terms of SNR, the Logarithmic Image Processing method is also superior.
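The first of the two compared methods, histogram equalization of an 8-bit grey image via the cumulative distribution function, can be sketched as:

```python
import numpy as np

def equalize(img):
    """Histogram equalization of an 8-bit grey image: map each grey level
    through the normalized cumulative distribution function."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(float)
    low = cdf[cdf > 0].min()                      # first occupied level
    cdf_norm = (cdf - low) / (cdf[-1] - low + 1e-12)
    lut = np.round(255 * np.clip(cdf_norm, 0, 1)).astype(np.uint8)
    return lut[img]

# A low-contrast image using only grey levels 100..139.
low_contrast = np.tile(np.arange(100, 140, dtype=np.uint8), (8, 1))
eq = equalize(low_contrast)
```

LIP, by contrast, combines grey levels through a logarithmic (multiplicative) model rather than a rank-based remapping, which is what the study credits for its better visual quality at higher computational cost.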

  6. Measurement of smaller colon polyp in CT colonography images using morphological image processing. (United States)

    Manjunath, K N; Siddalingaswamy, P C; Prabhu, G K


    Automated measurement of the size and shape of colon polyps is one of the challenges in computed tomography colonography (CTC). The objective of this retrospective study was to improve the sensitivity and specificity of smaller-polyp measurement in CTC using image processing techniques. A domain-knowledge-based method has been implemented combining a hybrid method of colon segmentation, morphological image processing operators for detecting the colonic structures, and a decision-making system for delineating the smaller polyps based on a priori knowledge. The method was applied to 45 CTC datasets. The key finding was that the smaller polyps were accurately measured; in addition to polyps in the 6-9 mm range, even smaller polyps could be delineated. It takes [Formula: see text] min to measure the smaller polyps in a dataset of 500 CTC images. With this method, [Formula: see text] and [Formula: see text] were achieved. The domain-based approach with morphological image processing gave good results. The smaller polyps were measured accurately, which helps in making the right clinical decisions. Qualitatively and quantitatively the results were acceptable when compared to the ground truth at [Formula: see text].
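A minimal sketch of the kind of morphological operation such a method relies on: an opening (erosion then dilation) that suppresses structures smaller than the structuring element. The toy binary mask and function names are illustrative, not the authors' pipeline:

```python
import numpy as np

def _shifted(mask):
    """All nine 3x3-neighbourhood views of a zero-padded binary mask."""
    h, w = mask.shape
    padded = np.pad(mask, 1, constant_values=0)
    return [padded[i:i + h, j:j + w] for i in range(3) for j in range(3)]

def erode(mask):
    return np.logical_and.reduce(_shifted(mask))

def dilate(mask):
    return np.logical_or.reduce(_shifted(mask))

def opening(mask):
    """Erosion followed by dilation: removes structures smaller than 3x3."""
    return dilate(erode(mask))

# Toy scene: one polyp-scale 5x5 blob plus single-pixel speckle noise
mask = np.zeros((20, 20), dtype=bool)
mask[5:10, 5:10] = True
mask[15, 15] = True
cleaned = opening(mask)
```

The opening keeps the 5x5 candidate structure intact while removing the isolated pixel, the basic size-selectivity that morphological detection exploits.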

  7. Laser doppler blood flow imaging using a CMOS imaging sensor with on-chip signal processing. (United States)

    He, Diwei; Nguyen, Hoang C; Hayes-Gill, Barrie R; Zhu, Yiqun; Crowe, John A; Gill, Cally; Clough, Geraldine F; Morgan, Stephen P


    The first fully integrated 2D CMOS imaging sensor with on-chip signal processing for applications in laser Doppler blood flow (LDBF) imaging has been designed and tested. To obtain a space efficient design over 64 × 64 pixels means that standard processing electronics used off-chip cannot be implemented. Therefore the analog signal processing at each pixel is a tailored design for LDBF signals with balanced optimization for signal-to-noise ratio and silicon area. This custom made sensor offers key advantages over conventional sensors, viz. the analog signal processing at the pixel level carries out signal normalization; the AC amplification in combination with an anti-aliasing filter allows analog-to-digital conversion with a low number of bits; low resource implementation of the digital processor enables on-chip processing and the data bottleneck that exists between the detector and processing electronics has been overcome. The sensor demonstrates good agreement with simulation at each design stage. The measured optical performance of the sensor is demonstrated using modulated light signals and in vivo blood flow experiments. Images showing blood flow changes with arterial occlusion and an inflammatory response to a histamine skin-prick demonstrate that the sensor array is capable of detecting blood flow signals from tissue.
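The flow estimate such a sensor computes is conventionally the first moment of the Doppler power spectrum normalized by the DC photocurrent power. An off-chip reference sketch with synthetic signals (band limits, names and parameters are illustrative, not the chip's design values):

```python
import numpy as np

def flow_index(signal, fs, band=(20.0, 3000.0)):
    """First moment of the Doppler power spectrum over the given band (Hz),
    normalized by DC power: the conventional laser Doppler perfusion estimate."""
    ac = signal - signal.mean()
    power = np.abs(np.fft.rfft(ac)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return float(np.sum(freqs[in_band] * power[in_band])
                 / (signal.mean() ** 2 * len(signal)))

fs, n = 8192, 8192
t = np.arange(n) / fs
slow = 1.0 + 0.1 * np.sin(2 * np.pi * 100 * t)    # low Doppler shift: slow flow
fast = 1.0 + 0.1 * np.sin(2 * np.pi * 1000 * t)   # high Doppler shift: fast flow
```

Because the index weights spectral power by frequency, the higher Doppler-shift signal yields a proportionally larger flow value.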


  9. Post-processing strategies in image scanning microscopy. (United States)

    McGregor, J E; Mitchell, C A; Hartell, N A


    Image scanning microscopy (ISM) coupled with pixel reassignment offers a resolution improvement of √2 over standard widefield imaging. By scanning point-wise across the specimen and capturing an image of the fluorescent signal generated at each scan position, additional information about specimen structure is recorded and the highest accessible spatial frequency is doubled. Pixel reassignment can be achieved optically in real time or computationally a posteriori and is frequently combined with the use of a physical or digital pinhole to reject out of focus light. Here, we simulate an ISM dataset using a test image and apply standard and non-standard processing methods to address problems typically encountered in computational pixel reassignment and pinholing. We demonstrate that the predicted improvement in resolution is achieved by applying standard pixel reassignment to a simulated dataset and explore the effect of realistic displacements between the reference and true excitation positions. By identifying the position of the detected fluorescence maximum using localisation software and centring the digital pinhole on this co-ordinate before scaling around translated excitation positions, we can recover signal that would otherwise be degraded by the use of a pinhole aligned to an inaccurate excitation reference. This strategy is demonstrated using experimental data from a multiphoton ISM instrument. Finally we investigate the effect that imaging through tissue has on the positions of excitation foci at depth and observe a global scaling with respect to the applied reference grid. Using simulated and experimental data we explore the impact of a globally scaled reference on the ISM image and, by pinholing around the detected maxima, recover the signal across the whole field of view. Copyright © 2015 Elsevier Inc. All rights reserved.
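Pixel reassignment itself reduces to accumulating each detected photon at the midpoint of the scan and detection coordinates. A 1-D toy simulation (all parameters invented) reproduces the expected narrowing by about a factor of sqrt(2) relative to the unpinholed sum:

```python
import numpy as np

sigma = 4.0                      # PSF standard deviation, grid units (invented)
x = np.arange(-40, 41)           # 1-D scan/camera grid; point emitter at 0
psf = lambda r: np.exp(-r ** 2 / (2 * sigma ** 2))

ism = np.zeros(2 * len(x))       # half-pixel grid: (s + d)/2 lands between pixels
open_pinhole = np.zeros(len(x))
for i, s in enumerate(x):                      # scan positions
    for j, d in enumerate(x):                  # camera pixels per position
        signal = psf(s) * psf(d)               # excitation x detection at emitter
        open_pinhole[i] += signal              # summing all pixels: widefield-like
        ism[i + j] += signal                   # reassign photon to (s + d)/2

def rms_width(profile, coords):
    p = profile / profile.sum()
    mu = (coords * p).sum()
    return float(np.sqrt(((coords - mu) ** 2 * p).sum()))

w_open = rms_width(open_pinhole, x)
w_ism = rms_width(ism, np.arange(2 * len(x)) / 2.0 - 40.0)
print(w_ism / w_open)            # close to 1/sqrt(2)
```

The ratio follows because (s + d)/2 for two independent Gaussian-weighted coordinates has a standard deviation of sigma/sqrt(2).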

  10. Airborne Laser Scanning and Image Processing Techniques for Archaeological Prospection (United States)

    Faltýnová, M.; Nový, P.


    Aerial photography was, for decades, an invaluable tool for archaeological prospection, in spite of the limitation of this method to deforested areas. The airborne laser scanning (ALS) method can nowadays be used to map complex areas and suitably complement earlier findings. This article describes visualization and image processing methods that can be applied to digital terrain models (DTMs) to highlight objects hidden in the landscape. Thanks to the analysis of the visualized DTM it is possible to understand the landscape evolution, including the differentiation between natural processes and human interventions. Different visualization methods were applied to a case study area. A system of parallel tracks hidden in a forest and its surroundings, part of an old route called "Devil's Furrow" near the town of Sázava, was chosen. The whole area around the well-known part of Devil's Furrow has not yet been prospected systematically. The data from airborne laser scanning acquired by the Czech Office for Surveying, Mapping and Cadastre were used. The average density of the point cloud was approximately 1 point/m2. The goal of the project was to visualize even the smallest terrain discontinuities, e.g. tracks and erosion furrows, some of which were not wholly preserved. Generally we were interested in objects that are clearly not visible in DTMs displayed in the form of shaded relief. Some of the typical visualization methods were tested (shaded relief, aspect and slope image). To get better results we applied image-processing methods that were successfully used on aerial photographs or hyperspectral images in the past. The usage of different visualization techniques on one site allowed us to verify the natural character of the southern part of Devil's Furrow and find formations up to now hidden in the forests.
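Shaded relief, the baseline visualization mentioned above, can be sketched with the standard Lambertian hillshade formula used in common GIS tools (azimuth measured from north, illumination defaults of 315/45 degrees; this is a generic sketch, not the authors' code):

```python
import numpy as np

def hillshade(dtm, azimuth_deg=315.0, altitude_deg=45.0, cellsize=1.0):
    """Lambertian shaded relief of a DTM; returns values in 0..1."""
    zen = np.radians(90.0 - altitude_deg)          # solar zenith angle
    az = np.radians(azimuth_deg)
    dz_dy, dz_dx = np.gradient(dtm, cellsize)      # terrain slope components
    slope = np.arctan(np.hypot(dz_dx, dz_dy))
    aspect = np.arctan2(dz_dy, -dz_dx)
    shaded = (np.cos(zen) * np.cos(slope) +
              np.sin(zen) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shaded, 0.0, 1.0)
```

Shallow linear features such as tracks and furrows show up as paired bright/dark edges in the shaded output, which is why alternative visualizations are needed when the features run parallel to the light direction.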

  11. 76 FR 55944 - In the Matter of Certain Electronic Devices With Image Processing Systems, Components Thereof... (United States)


    ... COMMISSION In the Matter of Certain Electronic Devices With Image Processing Systems, Components Thereof, and... importation of certain electronic devices with image processing systems, components thereof, and associated... having graphics processing units (``GPUs'') supplied by NVIDIA Corporation (``NVIDIA'') infringe any...

  12. Image processing methods and architectures in diagnostic pathology.

    Directory of Open Access Journals (Sweden)

    Oscar Déniz


    Full Text Available Grid technology has enabled the clustering and the efficient and secure access to and interaction among a wide variety of geographically distributed resources such as supercomputers, storage systems, data sources, instruments and special devices and services. Its main applications include large-scale computational and data-intensive problems in science and engineering. General grid structures and methodologies, for both software and hardware, in image analysis for virtual tissue-based diagnosis have been considered in this paper. These methods focus on the user-level middleware. The article describes the distributed programming system developed by the authors for virtual slide analysis in diagnostic pathology. The system supports different image analysis operations commonly done in anatomical pathology, and it takes into account security aspects and specialized infrastructures with high-level services designed to meet application requirements. Grids are likely to have a deep impact on health-related applications, and therefore they seem to be suitable for tissue-based diagnosis too. The implemented system is a joint application that mixes both Web and Grid Service Architecture around a distributed architecture for image processing. It has shown to be a successful solution to analyze a large and heterogeneous group of histological images on an architecture of massively parallel processors using message passing and non-shared memory.


  14. Automated Coronal Loop Identification Using Digital Image Processing Techniques (United States)

    Lee, Jong K.; Gary, G. Allen; Newman, Timothy S.


    The results of a master's thesis project on computer algorithms for automatic identification of optically thin, 3-dimensional solar coronal loop centers from extreme ultraviolet and X-ray 2-dimensional images will be presented. These center splines are proxies of associated magnetic field lines. The project is a pattern-recognition problem in which there are no unique shapes or edges and in which photon and detector noise heavily influence the images. The study explores extraction techniques using: (1) linear feature recognition of local patterns (related to the inertia-tensor concept), (2) parameter space via the Hough transform, and (3) topological adaptive contours (snakes) that constrain curvature and continuity, as possible candidates for digital loop detection schemes. We have developed synthesized images of the coronal loops to test the various loop identification algorithms. Since the topology of these solar features is dominated by the magnetic field structure, a first-order magnetic field approximation using multiple dipoles provides a priori information in the identification process. Results from both synthesized and solar images will be presented.
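Of the three techniques, the Hough transform is the simplest to sketch: each feature pixel votes for all (rho, theta) lines that pass through it, and peaks in the accumulator mark candidate linear features. A minimal NumPy version, illustrative rather than the thesis code:

```python
import numpy as np

def hough_lines(feature_mask, n_theta=180):
    """Vote accumulation in (rho, theta) space for a binary feature mask."""
    h, w = feature_mask.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    ys, xs = np.nonzero(feature_mask)
    acc = np.zeros((2 * diag + 1, n_theta), dtype=np.int64)
    for k, theta in enumerate(thetas):
        rho = np.round(xs * np.cos(theta) + ys * np.sin(theta)).astype(int) + diag
        np.add.at(acc, (rho, k), 1)            # every feature pixel votes
    return acc, thetas, diag

# A horizontal "loop segment" along y = 7 votes strongest near theta = 90 deg
mask = np.zeros((20, 20), dtype=bool)
mask[7, :] = True
acc, thetas, diag = hough_lines(mask)
rho_idx, theta_idx = np.unravel_index(acc.argmax(), acc.shape)
```

For curved loops the same idea generalizes to parametric curves, at the cost of a higher-dimensional accumulator, which is one reason the thesis also considers snakes.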

  15. Ameliorating mammograms by using novel image processing algorithms (United States)

    Pillai, A.; Kwartowitz, D.


    Mammography is one of the most important tools for the early detection of breast cancer, typically through detection of characteristic masses and/or microcalcifications. Digital mammography has become commonplace in recent years. High quality mammogram images are large in size, providing high-resolution data. Estimates of the false negative rate for cancers in mammography are approximately 10%-30%. This may be due to observation error, but more frequently it is because the cancer is hidden by other dense tissue in the breast and, even after retrospective review of the mammogram, cannot be seen. In this study, we report on the results of novel image processing algorithms that will enhance the images, providing decision support to reading physicians. Techniques such as Butterworth high-pass filtering and Gabor filters will be applied to enhance images, followed by segmentation of the region of interest (ROI). Subsequently, textural features will be extracted from the ROI and used to classify the ROIs as either masses or non-masses. Among the statistical methods most used for the characterization of textures, the co-occurrence matrix makes it possible to determine the frequency of appearance of two pixels separated by a given distance, at an angle from the horizontal. This matrix contains a very large amount of complex information. Therefore, it is not used directly but through measurements known as texture indices, such as average, variance, energy, contrast, correlation, normalized correlation and entropy.
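The co-occurrence indices named above can be sketched directly: build the normalized gray-level co-occurrence matrix (GLCM) for one displacement, then reduce it to scalar texture measures. This is a generic sketch, not the study's implementation; quantization to 8 levels is an assumption:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Normalized gray-level co-occurrence matrix for displacement (dx, dy).
    Assumes a nonnegative image with img.max() > 0; quantizes to `levels` bins."""
    q = (img.astype(float) / img.max() * (levels - 1)).astype(int)
    h, w = q.shape
    a = q[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    b = q[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)]
    m = np.zeros((levels, levels))
    np.add.at(m, (a.ravel(), b.ravel()), 1)
    return m / m.sum()

def texture_indices(p):
    """Scalar texture measures computed from a normalized GLCM."""
    i, j = np.indices(p.shape)
    nz = p[p > 0]
    return {
        "energy": float((p ** 2).sum()),
        "contrast": float(((i - j) ** 2 * p).sum()),
        "entropy": float(-(nz * np.log2(nz)).sum()),
    }
```

A flat region yields maximal energy and zero contrast, while a fine checkerboard texture yields high contrast; mass versus non-mass classification exploits exactly such differences.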

  16. Computerized image processing in the Reginald Denny beating trial (United States)

    Morrison, Lawrence C.


    New image processing techniques may have significant benefits to law enforcement officials but need to be legally admissible in court. Courts have different tests for determining the admissibility of new scientific procedures, requiring their reliability to be established by expert testimony. The first test developed was whether there has been general acceptance of the new procedure within the scientific community. In 1993 the U.S. Supreme Court loosened the requirements for admissibility of new scientific techniques, although the California Supreme Court later retained the general acceptance test. What the proper standard is for admission of such evidence is important to both the technical community and to the legal community because of the conflict between benefits of rapidly developing technology, and the dangers of 'junk science.' The Reginald Denny beating case from the 1992 Los Angeles riots proved the value of computerized image processing in identifying persons committing crimes on videotape. The segmentation process was used to establish the presence of a tattoo on one defendant, which was key in his identification. Following the defendant's conviction, the California Court of Appeal approved the use of the evidence involving the segmentation process. This published opinion may be cited as legal precedent.

  17. Quantitative analysis of histopathological findings using image processing software. (United States)

    Horai, Yasushi; Kakimoto, Tetsuhiro; Takemoto, Kana; Tanaka, Masaharu


    In evaluating pathological changes in drug efficacy and toxicity studies, morphometric analysis can be quite robust. In this experiment, we examined whether morphometric changes of major pathological findings in various tissue specimens stained with hematoxylin and eosin could be recognized and quantified using image processing software. Using Tissue Studio, hypertrophy of hepatocytes and adrenocortical cells could be quantified based on the method of a previous report, but the regions of red pulp, white pulp, and marginal zones in the spleen could not be recognized when using one setting condition. Using Image-Pro Plus, lipid-derived vacuoles in the liver and mucin-derived vacuoles in the intestinal mucosa could be quantified using two criteria (area and/or roundness). Vacuoles derived from phospholipid could not be quantified when small lipid deposition coexisted in the liver and adrenal cortex. Mononuclear inflammatory cell infiltration in the liver could be quantified to some extent, except for specimens with many clustered infiltrating cells. Adipocyte size and the mean linear intercept could be quantified easily and efficiently using morphological processing and the macro tool equipped in Image-Pro Plus. These methodologies are expected to form a base system that can recognize morphometric features and quantitatively analyze pathological findings through the use of information technology.
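The area/roundness criteria used for vacuoles can be sketched for a single binary object: area is the pixel count and roundness the classic 4*pi*A/P^2 circularity, with the perimeter approximated by counting exposed pixel edges. This is illustrative, not the Image-Pro Plus macro:

```python
import numpy as np

def area_and_roundness(mask):
    """Area (pixel count) and 4*pi*A/P^2 circularity of one binary object.
    The perimeter P is approximated by counting exposed 4-neighbour edges."""
    area = int(mask.sum())
    padded = np.pad(mask, 1)
    core = padded[1:-1, 1:-1]
    perim = 0
    for axis, shift in ((0, 1), (0, -1), (1, 1), (1, -1)):
        neighbour = np.roll(padded, shift, axis=axis)[1:-1, 1:-1]
        perim += int((core & ~neighbour).sum())
    return area, 4.0 * np.pi * area / perim ** 2

yy, xx = np.mgrid[:41, :41]
disc = (xx - 20) ** 2 + (yy - 20) ** 2 <= 15 ** 2          # round vacuole-like object
bar = np.zeros((41, 41), dtype=bool)
bar[19:22, 2:39] = True                                     # elongated non-vacuole
```

Thresholding roundness separates compact vacuole-like regions (high circularity) from elongated structures (low circularity), the same logic the two-criteria filter applies.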

  18. Wear monitoring of protective nitride coatings using image processing

    DEFF Research Database (Denmark)

    Rasmussen, Inge Lise; Guibert, M.; Belin, M.


    A double-layer model system, consisting of a thin layer of tribological titanium aluminum nitride (TiAlN) on top of titanium nitride (TiN), was deposited on polished 100Cr6 steel substrates. The TiAlN top-coatings were exposed to abrasive wear by a reciprocating wear process in a linear tribometer with up to 10^5 repetitive cycles, eventually leaving the embedded TiN signal layer uncovered at the bottom of the wear scar. The worn surface was characterized by subsequent image processing. A color detection of the wear scar with the exposed TiN layer by a simple optical imaging system showed a significant increase, up to a factor of 2, of the relative color values from the TiAlN top layers to the embedded TiN signal layers. This behavior agrees well with the results of a reflectance detection experiment with a red laser optical system on the same system. Thus we have demonstrated that image...
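The color-detection step amounts to a relative color value: the mean of a channel inside the wear-scar mask divided by its mean outside it. The image, mask and RGB values below are synthetic stand-ins, not measured coating colors:

```python
import numpy as np

def relative_color(rgb, scar_mask, channel=0):
    """Mean of one channel inside the wear-scar mask divided by its mean
    outside; an exposed signal layer of a different colour shifts this ratio."""
    inside = rgb[scar_mask, channel].mean()
    outside = rgb[~scar_mask, channel].mean()
    return float(inside / outside)

# Synthetic frame: a golden TiN-like patch exposed in a darker TiAlN-like field
img = np.zeros((32, 32, 3))
img[...] = (0.35, 0.30, 0.32)                 # invented TiAlN-ish RGB
mask = np.zeros((32, 32), dtype=bool)
mask[12:20, 10:22] = True
img[mask] = (0.75, 0.60, 0.25)                # invented TiN-ish RGB
```

With these synthetic values the red-channel ratio comes out near 2, mirroring the factor-of-2 increase the abstract reports for real coatings.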

  19. Thermal and Visible Satellite Image Fusion Using Wavelet in Remote Sensing and Satellite Image Processing (United States)

    Ahrari, A. H.; Kiavarz, M.; Hasanlou, M.; Marofi, M.


    The multimodal remote sensing approach is based on merging different data from different portions of the electromagnetic spectrum, which improves the accuracy of satellite image processing and interpretation. Visible and thermal infrared bands independently contain valuable spatial and spectral information. Visible bands provide rich spatial information, while thermal bands provide radiometric and spectral information not available in the visible range. However, low spatial resolution is the most important limitation of thermal infrared bands. Using satellite image fusion, it is possible to merge them into a single thermal image that contains high spectral and spatial information at the same time. The aim of this study is a quantitative and qualitative performance assessment of thermal and visible image fusion with the wavelet transform and different filters. In this research, the Haar wavelet algorithm and different decomposition filters (mean, linear, ma, min and rand) were applied to the thermal and panchromatic bands of the Landsat 8 satellite as a shortwave and longwave fusion method. Finally, quality assessment was done with quantitative and qualitative approaches. Quantitative parameters such as Entropy, Standard Deviation, Cross Correlation, Q Factor and Mutual Information were used. For thermal and visible image fusion accuracy assessment, all parameters (quantitative and qualitative) must be analysed with respect to each other. Among all relevant statistical factors, correlation has the most meaningful result and the greatest similarity to the qualitative assessment. Results showed that the mean and linear filters produce better fused images than the other filters in the Haar algorithm. The linear and mean filters have the same performance, and there is no difference between their qualitative and quantitative results.
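A one-level Haar fusion of the kind evaluated here can be sketched in NumPy: decompose both images, average the approximation bands, and keep the larger-magnitude detail coefficients. The fusion rule is a common textbook choice, not necessarily the paper's exact filters:

```python
import numpy as np

def haar2(a):
    """One-level 2-D Haar transform: returns (LL, LH, HL, HH) quarter-bands."""
    rl = (a[:, ::2] + a[:, 1::2]) / 2
    rh = (a[:, ::2] - a[:, 1::2]) / 2
    ll, hl = (rl[::2] + rl[1::2]) / 2, (rl[::2] - rl[1::2]) / 2
    lh, hh = (rh[::2] + rh[1::2]) / 2, (rh[::2] - rh[1::2]) / 2
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    """Exact inverse of haar2 (even-sized images)."""
    h2, w2 = ll.shape
    out = np.empty((h2 * 2, w2 * 2))
    rl_e, rl_o = ll + hl, ll - hl          # undo the row-pairing step
    rh_e, rh_o = lh + hh, lh - hh
    out[::2, ::2] = rl_e + rh_e            # undo the column-pairing step
    out[::2, 1::2] = rl_e - rh_e
    out[1::2, ::2] = rl_o + rh_o
    out[1::2, 1::2] = rl_o - rh_o
    return out

def fuse(visible, thermal):
    """Mean-fuse approximations, keep max-magnitude details (a common rule)."""
    va, td = haar2(visible), haar2(thermal)
    ll = (va[0] + td[0]) / 2
    details = [np.where(np.abs(v) >= np.abs(t), v, t)
               for v, t in zip(va[1:], td[1:])]
    return ihaar2(ll, *details)
```

The max-magnitude rule tends to carry the visible band's sharp spatial detail into the fused product while the averaged approximation preserves the thermal radiometry.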

  20. MultiPADDI-2 board for image processing (United States)

    Srini, Vason P.; Chow, Nelson; Sutton, Roy A.; Rabaey, Jan M.


    We have constructed a prototype image processing board containing 384 processors in 8 VLSI chips. The goal of the prototype is to show how the fine-grain parallelism present in image processing applications can be exploited by using many simple processors interconnected in clever ways. Each processor has a 16-bit data path, a simple instruction set containing 12 instructions, a simple control unit, and a scan chain for loading data and program. Each VLSI chip, called PADDI-2, contains 48 processors. The programming model used for the processors is MIMD. Each processor has 8 words in the instruction memory. There are internal registers and queues in a processor for storing data and partial results. Data is assumed to enter the system as a stream and be processed by the processors. Each VLSI chip is connected to an external memory (64 K by 16). A hardware synchronization mechanism is used for communication between processors, memory, and the external environment. If a sender and receiver are within the same chip, communication can be done in one cycle over the hierarchical interconnect bus structure. Programming the processors and the interconnections is done at compile time. The board is interfaced to a Sun SPARCstation using the SBus. Video input and output are supported by the board, and field buffers are used for buffering. Software tools for checking the board, running test programs at the assembly language level, and libraries for application development have been produced. Image processing applications are currently under development. The board is available for experimentation over the Internet. Further details are available from the project web page.

  1. Youpi: A Web-based Astronomical Image Processing Pipeline (United States)

    Monnerville, M.; Sémah, G.


    Youpi stands for “YOUpi is your processing PIpeline”. It is a portable, easy-to-use web application providing high-level functionalities to perform data reduction on scientific FITS images. It is built on top of open-source processing tools released to the community by Terapix, in order to organize your data on a computer cluster, to manage your processing jobs in real time, and to facilitate teamwork by allowing fine-grain sharing of results and data. On the server side, Youpi is written in the Python programming language and uses the Django web framework. On the client side, Ajax techniques are used along with Prototype and other JavaScript libraries.

  2. IBIS - A geographic information system based on digital image processing and image raster datatype (United States)

    Bryant, N. A.; Zobrist, A. L.


    IBIS (Image Based Information System) is a geographic information system which makes use of digital image processing techniques to interface existing geocoded data sets and information management systems with thematic maps and remotely sensed imagery. The basic premise is that geocoded data sets can be referenced to a raster scan that is equivalent to a grid cell data set. The first applications (St. Tammany Parish, Louisiana, and Los Angeles County) have been restricted to the design of a land resource inventory and analysis system. It is thought that the algorithms and the hardware interfaces developed will be readily applicable to other Landsat imagery.

  3. Automatic Road Pavement Assessment with Image Processing: Review and Comparison

    Directory of Open Access Journals (Sweden)

    Sylvie Chambon


    Full Text Available In the field of noninvasive sensing techniques for civil infrastructures monitoring, this paper addresses the problem of crack detection, in the surface of the French national roads, by automatic analysis of optical images. The first contribution is a state of the art of the image-processing tools applied to civil engineering. The second contribution is about fine-defect detection in pavement surface. The approach is based on a multi-scale extraction and a Markovian segmentation. Third, an evaluation and comparison protocol which has been designed for evaluating this difficult task—the road pavement crack detection—is introduced. Finally, the proposed method is validated, analysed, and compared to a detection approach based on morphological tools.

  4. In-Process Thermal Imaging of the Electron Beam Freeform Fabrication Process (United States)

    Taminger, Karen M.; Domack, Christopher S.; Zalameda, Joseph N.; Taminger, Brian L.; Hafley, Robert A.; Burke, Eric R.


    Researchers at NASA Langley Research Center have been developing the Electron Beam Freeform Fabrication (EBF3) metal additive manufacturing process for the past 15 years. In this process, an electron beam is used as a heat source to create a small molten pool on a substrate into which wire is fed. The electron beam and wire feed assembly are translated with respect to the substrate to follow a predetermined tool path. This process is repeated in a layer-wise fashion to fabricate metal structural components. In-process imaging has been integrated into the EBF3 system using a near-infrared (NIR) camera. The images are processed to provide thermal and spatial measurements that have been incorporated into a closed-loop control system to maintain consistent thermal conditions throughout the build. Other information in the thermal images is being used to assess quality in real time by detecting flaws in prior layers of the deposit. NIR camera incorporation into the system has improved the consistency of the deposited material and provides the potential for real-time flaw detection which, ultimately, could lead to the manufacture of better, more reliable components using this additive manufacturing process.
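The closed-loop idea, NIR-derived melt-pool temperature feeding back into beam power, can be caricatured with an integral controller acting on a toy first-order plant. Every number, gain and limit below is invented for illustration; the real EBF3 controller is not public in this abstract:

```python
def run_control(target_c=1600.0, steps=1000, dt=0.05):
    """Integral control of beam power toward a target melt-pool temperature.
    The first-order plant model, gains and limits are all invented."""
    power, temp = 0.0, 25.0          # beam power (W) and NIR-derived temp (C)
    ki = 0.05                        # integral gain, W per (C * s)
    for _ in range(steps):
        temp += dt * (3.0 * power - temp) / 1.5   # toy melt-pool response
        error = target_c - temp                   # from the NIR measurement
        power = min(2000.0, max(0.0, power + ki * error * dt))
    return temp

final = run_control()
```

The integral action drives the steady-state error to zero regardless of the plant gain, which is why per-image temperature estimates are enough to hold consistent thermal conditions layer after layer.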

  5. Optimization of image processing algorithms on mobile platforms (United States)

    Poudel, Pramod; Shirvaikar, Mukul


    This work presents a technique to optimize popular image processing algorithms on mobile platforms such as cell phones, netbooks and personal digital assistants (PDAs). The increasing demand for video applications like context-aware computing on mobile embedded systems requires the use of computationally intensive image processing algorithms. The system engineer has a mandate to optimize them so as to meet real-time deadlines. A methodology to take advantage of the asymmetric dual-core processor, which includes an ARM and a DSP core supported by shared memory, is presented with implementation details. The target platform chosen is the popular OMAP 3530 processor for embedded media systems. It has an asymmetric dual-core architecture with an ARM Cortex-A8 and a TMS320C64x Digital Signal Processor (DSP). The development platform was the BeagleBoard with 256 MB of NAND RAM and 256 MB SDRAM memory. The basic image correlation algorithm is chosen for benchmarking as it finds widespread application for various template matching tasks such as face recognition. The basic algorithm prototypes conform to OpenCV, a popular computer vision library. OpenCV algorithms can be easily ported to the ARM core, which runs a popular operating system such as Linux or Windows CE. However, the DSP is architecturally more efficient at handling DFT algorithms. The algorithms are tested on a variety of images and performance results are presented measuring the speedup obtained due to dual-core implementation. A major advantage of this approach is that it allows the ARM processor to perform important real-time tasks, while the DSP addresses performance-hungry algorithms.
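The benchmark workload, DFT-based image correlation for template matching, can be sketched platform-independently in NumPy (on the OMAP port the FFTs are what gets mapped to the DSP; the names and test data here are illustrative):

```python
import numpy as np

def correlate_fft(image, template):
    """Circular cross-correlation of an image with a zero-mean template via
    the DFT. Returns the correlation surface and the (row, col) of its peak,
    i.e. the best template position."""
    h, w = image.shape
    t = template - template.mean()
    f_img = np.fft.rfft2(image, s=(h, w))
    f_tpl = np.fft.rfft2(t, s=(h, w))          # template zero-padded to image size
    corr = np.fft.irfft2(f_img * np.conj(f_tpl), s=(h, w))
    peak = np.unravel_index(int(np.argmax(corr)), corr.shape)
    return corr, peak

rng = np.random.default_rng(1)
scene = rng.normal(size=(64, 64))
patch = scene[20:32, 33:45]          # 12x12 template cut out of the scene
corr, peak = correlate_fft(scene, patch)
```

Replacing the O(N^2 M^2) spatial correlation with three FFTs is exactly the arithmetic pattern the DSP core accelerates.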

  6. Microtomographic imaging in the process of bone modeling and simulation (United States)

    Mueller, Ralph


    Micro-computed tomography (μCT) is an emerging technique to nondestructively image and quantify trabecular bone in three dimensions. Where the early implementations of μCT focused more on technical aspects of the systems and required equipment not normally available to the general public, a more recent development emphasized practical aspects of micro-tomographic imaging. That system is based on a compact fan-beam type of tomograph, also referred to as desktop μCT. Desktop μCT has been used extensively for the investigation of osteoporosis-related health problems, gaining new insight into the organization of trabecular bone and the influence of osteoporotic bone loss on bone architecture and the competence of bone. Osteoporosis is a condition characterized by excessive bone loss and deterioration in bone architecture. The reduced quality of bone increases the risk of fracture. Current imaging technologies do not allow accurate in vivo measurements of bone structure over several decades or the investigation of the local remodeling stimuli at the tissue level. Therefore, computer simulations and new experimental modeling procedures are necessary for determining the long-term effects of age, menopause, and osteoporosis on bone. Microstructural bone models allow us to study not only the effects of osteoporosis on the skeleton but also to assess and monitor the effectiveness of new treatment regimens. The basis for such approaches are realistic models of bone and a sound understanding of the underlying biological and mechanical processes in bone physiology. In this article, strategies for new approaches to bone modeling and simulation in the study and treatment of osteoporosis and age-related bone loss are presented. The focus is on the bioengineering and imaging aspects of osteoporosis research. With the introduction of desktop μCT, a new generation of imaging instruments has entered the arena allowing easy and relatively inexpensive access to

  7. Pharmacokinetics of the phage endolysin-based candidate drug SAL200 in monkeys and its appropriate intravenous dosing period. (United States)

    Jun, Soo Youn; Jung, Gi Mo; Yoon, Seong Jun; Youm, So Young; Han, Hyoung-Yun; Lee, Jong-Hwa; Kang, Sang Hyeon


    SAL200 is a new phage endolysin-based candidate drug for the treatment of staphylococcal infections. An intravenous administration study was conducted in monkeys to obtain pharmacokinetic information on SAL200 and to assess the safety of a short SAL200 dosing period (<1 week). Maximum serum drug concentrations and systemic SAL200 exposure were proportional to the dose and comparable in male and female monkeys. SAL200 was well tolerated, and no adverse events or laboratory abnormalities were detected after injection of a single dose of up to 80 mg/kg per day, or injection of multiple doses of up to 40 mg/kg per day. © 2016 John Wiley & Sons Australia, Ltd.

  8. Image processing in standing-wave fluorescence microscopy (United States)

    Krishnamurthi, Vijaykumar

    ) have to be first combined to generate a composite data set. This thesis describes the mathematical theory and procedure to combine the images obtained by the SWFM. Also, processing the combined data using a non-linear algorithm recovers the information lost in the gaps. This leads to improved resolution, both axial and transverse, after processing, and is demonstrated by the results shown in the thesis, from simulated as well as biological data.

  9. Image processing in radiology; Bildverarbeitung in der Radiologie

    Energy Technology Data Exchange (ETDEWEB)

    Dammann, F. [Tuebingen Univ. (Germany). Abt. fuer Radiologische Diagnost


    Medical image processing and analysis methods have improved significantly during recent years and are now being used increasingly in clinical applications. Preprocessing algorithms are used to influence image contrast and noise. Three-dimensional visualization techniques, including volume rendering and virtual endoscopy, are increasingly available to evaluate sectional imaging data sets. Registration techniques have been developed to merge different examination modalities. Structures of interest can be extracted from the image data sets by various segmentation methods. Segmented structures are used for automated quantification analysis as well as for three-dimensional therapy planning, simulation and intervention guidance, including medical modelling, virtual reality environments, surgical robots and navigation systems. These newly developed methods require specialized skills for the production and postprocessing of radiological imaging data as well as new definitions of the roles of the traditional specialities. The aim of this article is to give an overview of the state of the art of medical image processing methods, the practical implications for the radiologist's daily work, and future aspects. (orig.) [German abstract, translated] Medical image processing has made considerable progress in recent years. It is now increasingly used in clinical practice, predominantly on the basis of radiological methods. From the first preparation of the digital image data to the end use, a sequence of successive working areas can be differentiated. In preprocessing, filters are used to influence contrast or noise. By means of segmentation, structures of interest can be extracted from the image. Such structures can be analyzed with automated quantification procedures. Registration serves the anatomically correct superimposition of different or identical examination modalities.

  10. Economic contribution of participatory agroforestry program to poverty alleviation: a case from Sal forests, Bangladesh

    NARCIS (Netherlands)

    Islam, K.K.; Hoogstra, M.A.; Ullah, M.O.; Sato, N.


    In the Forest Department of Bangladesh, a Participatory Agroforestry Program (PAP) was initiated at a denuded Sal forests area to protect the forest resources and to alleviate poverty amongst the local poor population. We explored whether the PAP reduced poverty and what factors might be responsible

  11. Een onwrikbaar geloof in zijn gelijk : Sal Tas (1905-1976): journalist van de wereld

    NARCIS (Netherlands)

    de Vries, Tity

    Biography of the Dutchman Sal Tas, activist, political writer, and reporter/foreign correspondent in Paris for the newspaper Het Parool and the American non-communist left journal The New Leader during the 1950s. Tas was a controversial man with outspoken opinions. Radical left in his young years, later

  12. Plastic fats from sal, mango and palm oil by lipase catalyzed interesterification. (United States)

    Shankar Shetty, Umesha; Sunki Reddy, Yella Reddy; Khatoon, Sakina


    Speciality plastic fats with no trans fatty acids, suitable for use in bakery products and as a vanaspati substitute, were prepared by interesterification of blends of palm stearin (PSt) with sal and mango fats using Lipozyme TL IM lipase as catalyst. The blends containing PSt/sal or PSt/mango showed a short melting range and hence are not suitable as bakery shortenings. Lipase-catalysed interesterification extended the plasticity, or melting range, of all the blends. The blends containing a higher proportion of PSt with sal fat (50/50) were harder, having high solids at and above body temperature, and hence cannot be used as bakery shortenings. The blends with PSt/sal (30-40/60-70) after interesterification showed melting profiles similar to those of commercial hydrogenated bakery fats. Similarly, the blends containing PSt/mango (30-40/60-70) after interesterification also showed melting profiles similar to those of commercial hydrogenated shortenings. The slip melting point and solidification characteristics also confirm the plastic nature of these samples. The improvement in plasticity after interesterification is due to the formation of both higher-melting and lower-melting triglycerides during lipase-catalysed interesterification.

  13. Diferencial de salários da mão de obra terceirizada no Brasil

    Directory of Open Access Journals (Sweden)

    Guilherme Stein

    Full Text Available Abstract: This article compares the wages of outsourced workers in Brazil with those of workers hired directly by firms. A simple comparison of the average pay of the two groups indicates that outsourced workers' wages are 17% lower, but when the differential is controlled for worker fixed effects, the difference falls to 3.6%. Moreover, the evidence points to large heterogeneity in the wage differential. Workers in occupations such as telemarketing earn on average 8% less when outsourced. Conversely, occupations such as security and surveillance offer statistically higher wages, on average, to outsourced workers. The evidence also indicates that the differential unfavorable to outsourced workers increased between 2007 and 2012 and has declined since then.

  14. Not an "Ugly American" : Sal Tas, a Dutch Reporter as Agent of the West in Africa

    NARCIS (Netherlands)

    de Vries, Tity; van Dongen, Luc; Roulin, Stephanie; Scott-Smith, Giles


    During the 1950s and 1960s, Dutch Parool reporter Sal Tas (1905-1976) served as a link between American liberal intellectuals and politicians and the young African states, through his articles in The New Leader and his activities for an American training and information center in Rome. Not being

  15. Image processing analysis of traditional Gestalt vision experiments (United States)

    McCann, John J.


    In the late 19th century, Gestalt psychology rebelled against the popular new science of psychophysics. The Gestalt revolution used many fascinating visual examples to illustrate that the whole is greater than the sum of all the parts. Color constancy was an important example. The physical interpretation of sensations and their quantification by JNDs and Weber fractions were met with innumerable examples in which two 'identical' physical stimuli did not look the same. The fact that large changes in the color of the illumination failed to change color appearance in real scenes demanded something more than quantifying the psychophysical response of a single pixel. The debate continues today with proponents of both physical, pixel-based colorimetry and perceptual, image-based cognitive interpretations. Modern instrumentation has made colorimetric pixel measurement universal. As well, new examples of unconscious inference continue to be reported in the literature. Image processing provides a new way of analyzing familiar Gestalt displays. Since the pioneering experiments by Fergus Campbell and Land, we know that human vision has independent spatial channels and independent color channels. Color matching data from color constancy experiments agree with spatial comparison analysis. In this analysis, simple spatial processes can explain the different appearances of 'identical' stimuli by analyzing the multiresolution spatial properties of their surrounds. Benary's Cross, White's Effect, the Checkerboard Illusion and the Dungeon Illusion can all be understood by the analysis of their low-spatial-frequency components. Just as with color constancy, these Gestalt images are most simply described by the analysis of spatial components. Simple spatial mechanisms account for the appearance of 'identical' stimuli in complex scenes. It does not require complex, cognitive processes to calculate appearances in familiar Gestalt experiments.

  16. Remote sensing models and methods for image processing

    CERN Document Server

    Schowengerdt, Robert A


    Remote sensing is a technology that engages electromagnetic sensors to measure and monitor changes in the earth's surface and atmosphere, normally accomplished through the use of a satellite or aircraft. This book, in its 3rd edition, seamlessly connects the art and science of earth remote sensing with the latest interpretative tools and techniques of computer-aided image processing. Newly expanded and updated, this edition delivers more of the applied scientific theory and practical results that helped the previous editions earn wide acclaim and become classroom and industry standards.

  17. Three Dimensional Digital Image Processing using Edge Detectors

    Directory of Open Access Journals (Sweden)

    John Schmeelk


    Full Text Available This paper provides an introduction to three-dimensional image edge detection and its relationship to partial derivatives, convolutions and wavelets. We especially address the notion of edge detection because it has far-reaching applications in all areas of research, including medical research. A patient can be diagnosed as having an aneurysm by studying an angiogram. An angiogram is the visual view of the blood vessels, whereby the edges are highlighted through the implementation of edge detectors. This process is completed through convolution, wavelets and matrix techniques. The illustrations included are of vertical, horizontal, Sobel and wavelet edge detectors.
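    The Sobel detector named above is a pair of 3×3 convolution kernels whose responses combine into a gradient magnitude. A minimal sketch in plain NumPy (the kernels are standard; the step-edge test image is an illustrative assumption, not taken from the paper):

```python
import numpy as np

# 3x3 Sobel kernels for horizontal and vertical gradients
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def convolve2d(image, kernel):
    """Valid-mode 2-D convolution (kernel flipped, as in true convolution)."""
    k = np.flipud(np.fliplr(kernel))
    kh, kw = k.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * k)
    return out

def sobel_magnitude(image):
    """Gradient magnitude combining the two Sobel responses."""
    gx = convolve2d(image, SOBEL_X)
    gy = convolve2d(image, SOBEL_Y)
    return np.hypot(gx, gy)

# A synthetic image with a vertical step edge
img = np.zeros((8, 8))
img[:, 4:] = 1.0
edges = sobel_magnitude(img)  # strongest response along the step
```

The same separable-kernel idea extends to three dimensions by adding a depth axis to the kernels.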

  18. Simulating SAL formation and aerosol size distribution during SAMUM-I

    KAUST Repository

    Khan, Basit Ali


    To understand the formation mechanisms of the Saharan Air Layer (SAL), we combine model simulations and dust observations collected during the first stage of the Saharan Mineral Dust Experiment (SAMUM-I), which sampled dust events that extended from Morocco to Portugal, and investigated the spatial distribution and the microphysical, optical, chemical, and radiative properties of Saharan mineral dust. We employed the Weather Research and Forecasting model coupled with the Chemistry/Aerosol module (WRF-Chem) to reproduce the meteorological environment and the spatial and size distributions of dust. The experimental domain covers northwest Africa including the southern Sahara, Morocco and part of the Atlantic Ocean with 5 km horizontal grid spacing and 51 vertical layers. The experiments were run from 20 May to 9 June 2006, covering the period of the most intensive dust outbreaks. Comparisons of model results with available airborne and ground-based observations show that WRF-Chem reproduces observed meteorological fields as well as aerosol spatial distribution across the entire region and along the airplane's tracks. We evaluated several aerosol uplift processes and found that orographic lifting, aerosol transport through the land/sea interface with steep gradients of meteorological characteristics, and interaction of sea breezes with the continental outflow are key mechanisms that form a surface-detached aerosol plume over the ocean. Comparisons of simulated dust size distributions with airplane and ground-based observations are generally good, but suggest that more detailed treatment of microphysics in the model is required to capture the full-scale effect of large aerosol particles.

  19. The effect of image processing on the detection of cancers in digital mammography. (United States)

    Warren, Lucy M; Given-Wilson, Rosalind M; Wallis, Matthew G; Cooke, Julie; Halling-Brown, Mark D; Mackenzie, Alistair; Chakraborty, Dev P; Bosmans, Hilde; Dance, David R; Young, Kenneth C


    OBJECTIVE. The objective of our study was to investigate the effect of image processing on the detection of cancers in digital mammography images. MATERIALS AND METHODS. Two hundred seventy pairs of breast images (both breasts, one view) were collected from eight systems using Hologic amorphous selenium detectors: 80 image pairs showed breasts containing subtle malignant masses; 30 image pairs, biopsy-proven benign lesions; 80 image pairs, simulated calcification clusters; and 80 image pairs, no cancer (normal). The 270 image pairs were processed with three types of image processing: standard (full enhancement), low contrast (intermediate enhancement), and pseudo-film-screen (no enhancement). Seven experienced observers inspected the images, locating and rating regions they suspected to be cancer for likelihood of malignancy. The results were analyzed using a jackknife-alternative free-response receiver operating characteristic (JAFROC) analysis. RESULTS. The detection of calcification clusters was significantly affected by the type of image processing: The JAFROC figure of merit (FOM) decreased from 0.65 with standard image processing to 0.63 with low-contrast image processing (p = 0.04) and from 0.65 with standard image processing to 0.61 with film-screen image processing (p = 0.0005). The detection of noncalcification cancers was not significantly different among the image-processing types investigated (p > 0.40). CONCLUSION. These results suggest that image processing has a significant impact on the detection of calcification clusters in digital mammography. For the three image-processing versions and the system investigated, standard image processing was optimal for the detection of calcification clusters. The effect on cancer detection should be considered when selecting the type of image processing in the future.

  20. Digital image database processing to simulate image formation in ideal lighting conditions of the human eye (United States)

    Castañeda-Santos, Jessica; Santiago-Alvarado, Agustin; Cruz-Félix, Angel S.; Hernández-Méndez, Arturo


    The pupil size of the human eye has a large effect on image quality due to inherent aberrations. Several studies have calculated its size relative to luminance, as well as considering other factors, i.e., age, size of the adapting field, and monocular versus binocular vision. Moreover, although ideal lighting conditions are known, software suited to our specific requirements (low cost and low computational consumption) for simulating radiation adaptation and image formation in the retina under ideal lighting conditions has not yet been developed. In this work, a database was created consisting of 70 photographs of the same scene with a fixed target at different times of the day. Using this database, characteristics of the photographs are obtained by measuring the average luminance initial threshold value of each photograph by means of an image histogram. We also present the implementation of a digital filter, both for image processing on the threshold values of our database and for generating output images with the threshold values reported for the human eye in ideal cases. Potential applications for this kind of filter include artificial vision systems.
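    The histogram-based luminance measurement and remapping filter described above can be sketched roughly as follows; the function names, the constant scene, and the target level are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def luminance_threshold(image):
    """Mean luminance of a grayscale image, computed from its histogram."""
    hist, bin_edges = np.histogram(image, bins=256, range=(0.0, 1.0))
    centers = (bin_edges[:-1] + bin_edges[1:]) / 2
    return float(np.sum(hist * centers) / hist.sum())

def adapt_to_target(image, target):
    """Linearly rescale the image so its mean luminance matches `target`."""
    current = luminance_threshold(image)
    return np.clip(image * (target / current), 0.0, 1.0)

# A dim synthetic scene remapped toward a brighter "ideal" level
scene = np.full((4, 4), 0.2)
adapted = adapt_to_target(scene, 0.5)
```

In practice the target level would come from the reported ideal-lighting thresholds for the human eye rather than a fixed constant.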

  1. Human movement analysis with image processing in real time (United States)

    Fauvet, Eric; Paindavoine, Michel; Cannard, F.


    In the field of the human sciences, many applications need to know the kinematic characteristics of human movements. Psychology associates these characteristics with control mechanisms; sport and biomechanics associate them with the performance of the athlete or the patient, so that trainers or doctors can correct the subject's gesture to obtain a better performance once the motion properties are known. Roherton's studies show the evolution of children's motion. Several investigation methods are able to measure human movement, but most current studies are based on image processing. Often the systems work at the TV standard (50 frames per second), which permits the study of only very slow gestures. A human operator analyzing the digitized film sequence manually makes for a very expensive, especially long and imprecise operation. On these grounds, many human movement analysis systems have been implemented. They consist of: markers fixed to the anatomically interesting points on the subject in motion, and image compression, the art of coding picture data. Generally the compression is limited to calculating the centroid coordinates of each marker. These systems differ from one another in image acquisition and marker detection.

  2. Satellite Data Visualization, Processing and Mapping using VIIRS Imager Data (United States)

    Phyu, A. N.


    A satellite is a manmade machine that is launched into space and orbits the Earth. Satellites are used for various purposes, for example: environmental satellites help us monitor and protect our environment; navigation (GPS) satellites provide accurate time and position information; and communication satellites allow us to interact with each other over long distances. Suomi NPP is part of the Joint Polar Satellite System (JPSS) fleet of satellites, an environmental satellite constellation, and carries the Visible Infrared Imaging Radiometer Suite (VIIRS) instrument. VIIRS is a scanning radiometer that takes high-resolution images of the Earth, making visible, infrared and radiometric measurements of the land, oceans, atmosphere and cryosphere. These high-resolution images provide information that helps weather prediction and environmental forecasting of extreme events such as forest fires, ice jams, thunderstorms and hurricanes. This project describes how VIIRS instrument data are processed, mapped, and visualized using a variety of software and applications. It focuses on extreme events like Hurricane Sandy and demonstrates how to use the satellite to map the extent of a storm. Data from environmental satellites such as Suomi NPP-VIIRS are important for monitoring climate change, sea level rise, and land surface temperature changes, as well as extreme weather events.

  3. Retrieval of air quality information using image processing technique. (United States)

    Lim, H. S.; MatJafri, M. Z.; Abdullah, K.; Saleh, N. M.


    This paper presents and describes an approach to retrieve the concentration of particulate matter of size less than 10 microns (PM10) from Landsat TM data over Penang Island. The objective of this study is to test the feasibility of using Landsat TM for PM10 mapping with our proposed algorithm, which was developed based on the characteristics of aerosols in the atmosphere. PM10 measurements were collected using a DustTrak Aerosol Monitor 8520 simultaneously with the image acquisition. The station locations of the PM10 measurements were determined using a handheld GPS. The digital numbers corresponding to the ground-truth locations were extracted for each band and then converted into radiance and reflectance values. The reflectance measured by the satellite [reflectance at the top of the atmosphere, ρ(TOA)] was reduced by the surface reflectance to obtain the atmospheric reflectance, which was then related to PM10 using regression analysis. The surface reflectance values were created using the ATCOR2 image-correction software in the PCI Geomatica 9.1.8 image processing package. The proposed algorithm produced high accuracy and showed good agreement (R = 0.8406) between the measured and estimated PM10. This study indicates that it is feasible to map PM10 from Landsat TM data using the proposed algorithm.
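    The final regression step, relating atmospheric reflectance to PM10, can be sketched as below. The station values are synthetic and the single-predictor linear form is an illustrative assumption; the paper's actual algorithm may use more terms:

```python
import numpy as np

def atmospheric_reflectance(rho_toa, rho_surface):
    """Atmospheric component: top-of-atmosphere minus surface reflectance."""
    return rho_toa - rho_surface

def fit_pm10_model(atm_reflectance, pm10):
    """Least-squares fit PM10 = a * reflectance + b, returning (a, b, R)."""
    X = np.column_stack([atm_reflectance, np.ones_like(atm_reflectance)])
    (a, b), *_ = np.linalg.lstsq(X, pm10, rcond=None)
    R = np.corrcoef(pm10, a * atm_reflectance + b)[0, 1]
    return a, b, R

# Hypothetical station data: reflectances and co-located PM10 (ug/m^3)
rho_atm = atmospheric_reflectance(
    np.array([0.12, 0.14, 0.16, 0.18, 0.20]),
    np.array([0.10, 0.10, 0.10, 0.10, 0.10]))
pm10 = np.array([21.0, 39.0, 62.0, 78.0, 103.0])
a, b, R = fit_pm10_model(rho_atm, pm10)
```

The fitted coefficients would then be applied pixel-by-pixel to the atmospheric-reflectance image to produce the PM10 map.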


    Directory of Open Access Journals (Sweden)

    V. L. Kozlov


    Full Text Available Correlation processing of optical digital images of objects under expert investigation is a promising way to improve the quality, reliability and representativeness of such research. The purpose of this work is the development of computer algorithms for expert investigations that use correlation-analysis methods to solve such problems of criminology as the comparison of the color-tone image parameters of seal and stamp impressions, and the measurement of the profile of the rifling traces left by the barrel on a bullet. A method and software application were developed for measuring the linear, angular and height characteristics of the profile (micro-relief) of the rifling traces of the barrel on the bullet for judicial-ballistic tests. Experimental results testify to the high overall performance of the developed application and confirm the required accuracy of the measurements performed. A technique and specialized application were also developed for comparing the color-tone image parameters of seal and stamp impressions, reflecting the degree and character of the distribution of the coloring substance in the strokes; this improves the presentation and objectivity of the tests and also shortens the time needed to carry them out. A technique for expert interpretation of the correlation-analysis results is offered. The reliability of the results obtained has been confirmed by experimental research and verified by means of other methods.

  5. A method of camera calibration based on image processing (United States)

    Duan, Jin; Kong, Chuiliu; Zhang, Dan; Jing, Wenbo


    According to the principles of optical measurement, an effective and simple method to measure the distortion of a CCD camera and lens is presented in this paper. The method is based on computer active vision and digital image processing technology. The radial distortion of the camera lens is considered, while camera parameters such as the pixel interval and focal length are calibrated. An optoelectronic theodolite is used in our experimental system, and its light spot is imaged by the CCD camera. When the optoelectronic theodolite is rotated by an angle, the position of the light spot changes without any rotation of the camera. All view reference points in the image are worked out by computing the angle between the actual point and the optical center, where the distortion can be ignored. The error-correction parameters are computed, and then the camera parameters are calibrated. A sub-pixel subdivision method is used to improve the point-detection precision. The experimental results show that our method is effective, simple and practical.


    Directory of Open Access Journals (Sweden)

    K. Sujatha


    Full Text Available Combustion quality in power station boilers plays an important role in minimizing flue gas emissions. In the present work, various intelligent schemes are proposed to infer the flue gas emissions by monitoring the flame colour at the furnace of the boiler. Flame image monitoring involves capturing the flame video over a period of time together with the measurement of various parameters: carbon dioxide (CO2), excess oxygen (O2), nitrogen dioxide (NOx), sulphur dioxide (SOx) and carbon monoxide (CO) emissions, plus the flame temperature at the core of the fireball, the air/fuel ratio and the combustion quality. The higher the quality of combustion, the lower the flue gas emissions at the exhaust. The flame video was captured using an infrared camera and then split into frames for further analysis; a video splitter is used for progressive extraction of the flame images from the video. The flame images are then pre-processed to reduce noise. The conventional classification and clustering techniques include the Euclidean distance (L2 norm) classifier. The intelligent classifiers include the Radial Basis Function network (RBF), the Back Propagation Algorithm (BPA) and a parallel architecture combining RBF and BPA (PRBFBPA). The results of the validation are supported by the above-mentioned performance measures, whose values are in the optimal range. The temperatures, combustion quality, SOx, NOx, CO and CO2 concentrations, and the air and fuel supplied corresponding to the images were obtained, thereby indicating the necessary control action to increase or decrease the air supply so as to ensure complete combustion. In this work, by continuously monitoring the flame images, the combustion quality was inferred (complete/partial/incomplete combustion) and the air/fuel ratio can be automatically varied. Moreover, in the existing set-up, measurements like NOx, CO and CO2 are inferred from samples that are collected periodically or by

  7. A method of image multi-resolution processing based on FPGA + DSP architecture (United States)

    Peng, Xiaohan; Zhong, Sheng; Lu, Hongqiang


    In real-time image processing, as the resolution and frame rate of camera imaging improve, not only the required processing capacity but also the need to optimize the process increases. For an FPGA + DSP image processing architecture, there are three common ways to meet this challenge. The first is using a higher-performance DSP, for example one with a higher core frequency or with more cores. The second is optimizing the processing method, making the algorithm accomplish the same results in less time. Last but not least, pre-processing in the FPGA can make the image processing more efficient. A method of multi-resolution pre-processing by the FPGA, based on the FPGA + DSP architecture, is proposed here. It takes advantage of a built-in first-in-first-out buffer (FIFO) and external synchronous dynamic random access memory (SDRAM) to buffer the images coming from the image detector, and provides down-sampled or cropped images to the DSP flexibly and efficiently according to the request parameters sent by the DSP. The DSP can thus process the degraded image instead of the whole image, greatly shortening the processing and transmission time. The method alleviates the image-processing burden of the DSP and also solves the problem that a single method of image-resolution reduction cannot meet the requirements of the DSP's image processing tasks.
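    The FPGA pre-processing described, serving a down-sampled or cropped frame on request instead of the full image, can be modeled in software. This sketch uses 2×2 average pooling and a dictionary request format as illustrative assumptions, not the paper's actual interface:

```python
import numpy as np

def downsample(frame, factor):
    """Average-pool the frame by an integer factor (as FPGA pre-processing might)."""
    h, w = frame.shape
    h2, w2 = h - h % factor, w - w % factor
    view = frame[:h2, :w2].reshape(h2 // factor, factor, w2 // factor, factor)
    return view.mean(axis=(1, 3))

def crop(frame, top, left, height, width):
    """Cut a region of interest out of the frame."""
    return frame[top:top + height, left:left + width]

def serve_request(frame, request):
    """Return the degraded image a DSP might request instead of the full frame."""
    if request["mode"] == "downsample":
        return downsample(frame, request["factor"])
    return crop(frame, *request["roi"])

frame = np.arange(16.0).reshape(4, 4)
small = serve_request(frame, {"mode": "downsample", "factor": 2})  # 2x2 output
```

A real FPGA implementation would stream rows through the FIFO/SDRAM buffers rather than hold whole frames, but the request-driven reduction is the same idea.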

  8. Three-dimensional imaging of atomic four-body processes

    CERN Document Server

    Schulz, M; Fischer, D; Kollmus, H; Madison, D H; Jones, S; Ullrich, J


    To understand the physical processes that occur in nature, we need to obtain a solid concept of the 'fundamental' forces acting between pairs of elementary particles. It is also necessary to describe the temporal and spatial evolution of many mutually interacting particles under the influence of these forces. This latter step, known as the few-body problem, remains an important unsolved problem in physics. Experiments involving atomic collisions represent a useful testing ground for studying the few-body problem. For the single ionization of a helium atom by charged-particle impact, kinematically complete experiments have been performed since 1969. The theoretical analysis of such experiments was thought to yield a complete picture of the basic features of the collision process, at least for large collision energies. These conclusions are, however, almost exclusively based on studies of restricted electron-emission geometries. We report three-dimensional images of the complete electron emission pattern for...

  9. Applied Fourier analysis from signal processing to medical imaging

    CERN Document Server

    Olson, Tim


    The first of its kind, this focused textbook serves as a self-contained resource for teaching from scratch the fundamental mathematics of Fourier analysis and illustrating some of its most current, interesting applications, including medical imaging and radar processing. Developed by the author from extensive classroom teaching experience, it provides a breadth of theory that allows students to appreciate the utility of the subject, but at as accessible a depth as possible. With myriad applications included, this book can be adapted to a one- or two-semester course in Fourier analysis or serve as the basis for independent study. Applied Fourier Analysis assumes no prior knowledge of analysis from its readers, and begins by making the transition from linear algebra to functional analysis. It goes on to cover basic Fourier series and Fourier transforms before delving into applications in sampling and interpolation theory, digital communications, radar processing, medical imaging, and heat and wave equations. Fo...

  10. Image processing applied to measurement of particle size (United States)

    Vega, Fabio; Lasso, Willian; Torres, Cesar


    Five different types of aggregates have been analyzed, measuring the size of particles in samples immersed in distilled water, such as silicon dioxide, titanium dioxide, styrenes and crushed silica particles. Applying the digital image processing (DIP) technique to particle-size analysis, we developed a system for measuring microparticles using a microscope, a CCD camera, and acquisition and video-processing software developed in MATLAB. These studies are combined with laser-light measurements by diffractometry to calibrate the implemented system; in this work we achieved particle-size measurements on the order of 4 to 6 micrometers. The study demonstrates that DIP is a fast, convenient, versatile and accurate technique for particle-size analysis; the limitations of the implemented setup will also be discussed.
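    A typical DIP particle-sizing pipeline of this kind thresholds the image, labels connected components, and converts pixel areas into equivalent-circle diameters. A self-contained sketch (the flood-fill labeling and the calibration constant are illustrative assumptions, not the authors' MATLAB code):

```python
import numpy as np

def label_particles(binary):
    """4-connected component labeling by iterative flood fill (small images)."""
    labels = np.zeros(binary.shape, dtype=int)
    current = 0
    for i in range(binary.shape[0]):
        for j in range(binary.shape[1]):
            if binary[i, j] and labels[i, j] == 0:
                current += 1
                stack = [(i, j)]
                while stack:
                    y, x = stack.pop()
                    if (0 <= y < binary.shape[0] and 0 <= x < binary.shape[1]
                            and binary[y, x] and labels[y, x] == 0):
                        labels[y, x] = current
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return labels, current

def particle_diameters(image, threshold, microns_per_pixel):
    """Equivalent-circle diameter of each particle above the threshold."""
    labels, n = label_particles(image > threshold)
    areas = np.bincount(labels.ravel())[1:n + 1]  # pixels per particle
    return 2 * np.sqrt(areas / np.pi) * microns_per_pixel

img = np.zeros((10, 10))
img[1:4, 1:4] = 1.0   # one 3x3 particle
img[6:8, 6:8] = 1.0   # one 2x2 particle
diam = particle_diameters(img, 0.5, microns_per_pixel=2.0)
```

The microns-per-pixel factor is exactly what the diffractometry measurements would calibrate in the described setup.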

  11. Leveraging the Cloud for Robust and Efficient Lunar Image Processing (United States)

    Chang, George; Malhotra, Shan; Wolgast, Paul


    The Lunar Mapping and Modeling Project (LMMP) is tasked to aggregate lunar data, from the Apollo era to the latest instruments on the LRO spacecraft, into a central repository accessible by scientists and the general public. A critical function of this task is to provide users with the best solution for browsing the vast amounts of imagery available. The image files LMMP manages range from a few gigabytes to hundreds of gigabytes in size, with new data arriving every day. Despite this ever-increasing amount of data, LMMP must make the data readily available in a timely manner for users to view and analyze. This is accomplished by tiling large images into smaller images using Hadoop, a distributed computing software platform implementing the MapReduce framework, running on a small cluster of machines locally. Additionally, the software is implemented to use Amazon's Elastic Compute Cloud (EC2) facility. We also developed a hybrid solution to serve images to users by leveraging cloud storage, using Amazon's Simple Storage Service (S3) for public data while keeping private information on our own data servers. By using cloud computing, we improve upon our local solution by reducing the need to manage our own hardware and computing infrastructure, thereby reducing costs. Further, by using a hybrid of local and cloud storage, we are able to provide data to our users more efficiently and securely. This paper examines the use of a distributed approach with Hadoop to tile images, an approach that provides significant improvements in image processing time, from hours to minutes. This paper describes the constraints imposed on the solution and the resulting techniques developed for the hybrid solution of a customized Hadoop infrastructure over local and cloud resources in managing this ever-growing data set. It examines the performance trade-offs of using the more plentiful resources of the cloud, such as those provided by S3, against the bandwidth limitations such use
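
    The core tiling operation that the Hadoop job distributes can be illustrated with a minimal single-machine sketch. `tile_image` is a hypothetical name; the real LMMP pipeline runs this logic as MapReduce tasks over Hadoop rather than in one process.

```python
def tile_image(pixels, tile_size):
    """Split a 2-D image (list of rows) into tile_size x tile_size tiles.

    Mirrors the 'map' step of the tiling job described above: each
    (row, col) key identifies one output tile. Edge tiles may be smaller.
    """
    h, w = len(pixels), len(pixels[0])
    tiles = {}
    for r0 in range(0, h, tile_size):
        for c0 in range(0, w, tile_size):
            tile = [row[c0:c0 + tile_size] for row in pixels[r0:r0 + tile_size]]
            tiles[(r0 // tile_size, c0 // tile_size)] = tile
    return tiles
```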

  12. Coherent radar imaging: Signal processing and statistical properties (United States)

    Woodman, Ronald F.


    The recently developed technique for imaging radar scattering irregularities has opened great scientific potential for ionospheric and atmospheric coherent radars. These images are obtained by processing the diffraction pattern of the backscattered electromagnetic field at a finite number of sampling points on the ground. In this paper, we review the mathematical relationship between the statistical covariance of these samples, ⟨ff†⟩, and that of the radiating object field to be imaged, ⟨FF†⟩, in a self-contained and comprehensive way. It is shown that these matrices are related in a linear way by ⟨ff†⟩ = aM⟨FF†⟩M†a*, where M is a discrete Fourier transform operator and a is a matrix operator representing the discrete and limited sampling of the field. The image, or brightness distribution, is the diagonal of ⟨FF†⟩. The equation can be linearly inverted only in special cases. In most cases, inversion algorithms which make use of a priori information or maximum entropy constraints must be used. A naive (biased) "image" can be estimated in a manner analogous to an optical camera by simply applying an inverse DFT operator to the sampled field f and evaluating the average power of the elements of the resulting vector g. Such a transformation can be obtained either digitally or in an analog way. For the latter we can use a Butler matrix consisting of properly interconnected transmission lines. The case of radar targets in the near field is included as a new contribution. This case involves an additional matrix operator b, which is an analog of an optical lens used to compensate for the curvature of the phase fronts of the backscattered field. This "focusing" can be done after the statistics have been obtained. The formalism is derived for brightness distributions representing total powers. However, the derived expressions have been extended to include "color" images for each of the frequency components of the sampled time series. The frequency filtering
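
    The naive camera-like estimate described above, an inverse DFT of each sampled field snapshot followed by averaging the power of the resulting elements, can be sketched as follows. The snapshot layout and function name are assumptions for illustration.

```python
import numpy as np

def naive_brightness(field_samples):
    """Naive ("camera-like") radar image from complex field samples.

    `field_samples` has shape (n_snapshots, n_antennas): each row is one
    snapshot of the field sampled on the ground. An inverse DFT maps each
    snapshot to an angular spectrum; averaging the squared magnitude over
    snapshots gives a biased estimate of the brightness distribution.
    """
    g = np.fft.ifft(field_samples, axis=1)
    return (np.abs(g) ** 2).mean(axis=0)
```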

  13. Making the PACS workstation a browser of image processing software: a feasibility study using inter-process communication techniques. (United States)

    Wang, Chunliang; Ritter, Felix; Smedby, Orjan


    To enhance the functional expandability of a picture archiving and communication system (PACS) workstation and to facilitate the integration of third-party image-processing modules, we propose a browser-server style method. In the proposed solution, the PACS workstation shows the front-end user interface defined in an XML file while the image processing software runs in the background as a server. Inter-process communication (IPC) techniques allow an efficient exchange of image data, parameters, and user input between the PACS workstation and stand-alone image-processing software. Using a predefined communication protocol, the PACS workstation developer or image processing software developer does not need detailed information about the other system, but will still be able to achieve seamless integration between the two systems, and the IPC procedure is totally transparent to the final user. A browser-server style solution was built between OsiriX (PACS workstation software) and MeVisLab (image processing software). Ten example image-processing modules were easily added to OsiriX by converting existing MeVisLab image processing networks. Image data transfer using shared memory added processing time, while the other IPC methods cost 1-5 s in our experiments. The browser-server style communication based on IPC techniques is an appealing method that allows PACS workstation developers and image processing software developers to cooperate while focusing on different interests.
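
    Shared memory is the cheapest of the IPC channels compared in the abstract. A minimal sketch of such an image hand-off, using Python's standard `multiprocessing.shared_memory` module rather than the actual OsiriX/MeVisLab protocol, might look like this; function names are illustrative.

```python
import numpy as np
from multiprocessing import shared_memory

def share_image(image):
    """Publish a NumPy image in a shared-memory block (the 'server' side).

    A second process (the PACS 'browser') can attach by name and view the
    same pixels without serializing them, which is why shared memory is far
    cheaper than socket- or file-based IPC for large images.
    """
    shm = shared_memory.SharedMemory(create=True, size=image.nbytes)
    view = np.ndarray(image.shape, dtype=image.dtype, buffer=shm.buf)
    view[:] = image                      # one copy into the shared block
    return shm, shm.name

def attach_image(name, shape, dtype):
    """Attach to an existing block; returns (handle, zero-copy array view)."""
    shm = shared_memory.SharedMemory(name=name)
    return shm, np.ndarray(shape, dtype=dtype, buffer=shm.buf)
```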



    S Jeyalakshmi; R Radha


    Plants need 13 mineral nutrients for their growth and survival. Toxicity or deficiency in any one or more of these nutrients affects the growth of the plant and may even cause its destruction. Hence, a constant monitoring system for tracking the nutrient status in plants becomes essential for increasing both production and quality of yield. A diagnostic system using digital image processing would diagnose deficiency symptoms much earlier than human eyes could recognize them. This...

  15. Diversification in an image retrieval system based on text and image processing

    Directory of Open Access Journals (Sweden)

    Adrian Iftene


    Full Text Available In this paper we present an image retrieval system created within the research project MUCKE (Multimedia and User Credibility Knowledge Extraction), a CHIST-ERA research project where UAIC{\\footnote{"Alexandru Ioan Cuza" University of Iasi}} is one of the partners{\\footnote{Together with the Technical University of Vienna, Austria, the CEA-LIST Institute from Paris, France and BILKENT University from Ankara, Turkey}}. Our discussion in this work focuses mainly on the components of the image retrieval system proposed in MUCKE, and we present the work done by the UAIC group. MUCKE incorporates modules for processing multimedia content in different modes and languages (such as English, French, German and Romanian), and UAIC is responsible for the text processing tasks (for Romanian and English). One of the problems addressed by our work is search results diversification. To solve this problem, we first process the user queries in both languages and then create clusters of similar images.

  16. Application of Six Sigma methodology to a diagnostic imaging process. (United States)

    Taner, Mehmet Tolga; Sezen, Bulent; Atwat, Kamal M


    This paper aims to apply the Six Sigma methodology to improve workflow by eliminating the causes of failure in the medical imaging department of a private Turkish hospital. The define, measure, analyse, improve and control (DMAIC) improvement cycle, workflow charts, fishbone diagrams and Pareto charts were employed, together with rigorous data collection in the department. The identification of root causes of repeat sessions and delays was followed by failure mode and effect analysis, hazard analysis and decision tree analysis. The most frequent causes of failure were malfunction of the RIS/PACS system and improper positioning of patients. Subsequent to extensive training of professionals, the sigma level was increased from 3.5 to 4.2. The data were collected over only four months. Six Sigma's data measurement and process improvement methodology is an impetus for health care organisations to rethink their workflow and reduce malpractice. It involves measuring, recording and reporting data on a regular basis, which enables the administration to monitor workflow continuously. The improvements in the workflow under study, made by determining the failures and potential risks associated with radiologic care, will have a positive impact on society in terms of patient safety. By eliminating repeat examinations, the risk of exposure to additional radiation was also minimised. This paper supports the need to apply Six Sigma and presents an evaluation of the process in an imaging department.

  17. Remote Sensing Image Classification With Large-Scale Gaussian Processes (United States)

    Morales-Alvarez, Pablo; Perez-Suay, Adrian; Molina, Rafael; Camps-Valls, Gustau


    Current remote sensing image classification problems have to deal with an unprecedented amount of heterogeneous and complex data sources. Upcoming missions will soon provide large data streams that will make land cover/use classification difficult. Machine learning classifiers can help here, and many methods are currently available. A popular kernel classifier is the Gaussian process classifier (GPC), since it approaches the classification problem with a solid probabilistic treatment, yielding confidence intervals for the predictions as well as results very competitive with state-of-the-art neural networks and support vector machines. However, its computational cost is prohibitive for large-scale applications and constitutes the main obstacle precluding its wide adoption. This paper tackles this problem by introducing two novel efficient methodologies for Gaussian process (GP) classification. We first include the standard random Fourier features approximation into GPC, which largely decreases its computational cost and permits large-scale remote sensing image classification. In addition, we propose a model which avoids randomly sampling a number of Fourier frequencies and instead learns the optimal ones within a variational Bayes approach. The performance of the proposed methods is illustrated in complex problems of cloud detection from multispectral imagery and infrared sounding data. Excellent empirical results support the proposal in both computational cost and accuracy.
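
    The random Fourier features idea, replacing the exact RBF kernel with an explicit low-dimensional feature map so the classifier scales linearly in the number of samples, can be sketched as follows. This is a generic Rahimi-Recht style construction, not the authors' variational variant.

```python
import numpy as np

def rff_features(X, n_features, gamma, rng):
    """Random Fourier features approximating the RBF kernel
    k(x, y) = exp(-gamma * ||x - y||^2).

    z(x)^T z(y) converges to k(x, y) as n_features grows, so an O(n^3)
    GP/kernel classifier can be replaced by a linear model on z(X).
    """
    d = X.shape[1]
    # frequencies drawn from the kernel's spectral density: N(0, 2*gamma*I)
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, n_features))
    b = rng.uniform(0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)
```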

  18. Postnatal brain development: Structural imaging of dynamic neurodevelopmental processes (United States)

    Jernigan, Terry L.; Baaré, William F. C.; Stiles, Joan; Madsen, Kathrine Skak


    After birth, there is striking biological and functional development of the brain’s fiber tracts as well as remodeling of cortical and subcortical structures. Behavioral development in children involves a complex and dynamic set of genetically guided processes by which neural structures interact constantly with the environment. This is a protracted process, beginning in the third week of gestation and continuing into early adulthood. Reviewed here are studies using structural imaging techniques, with a special focus on diffusion weighted imaging, describing age-related brain maturational changes in children and adolescents, as well as studies that link these changes to behavioral differences. Finally, we discuss evidence for effects on the brain of several factors that may play a role in mediating these brain–behavior associations in children, including genetic variation, behavioral interventions, and hormonal variation associated with puberty. At present, longitudinal studies are few, and we do not yet know how variability in individual trajectories of biological development in specific neural systems maps onto similar variability in behavioral trajectories. PMID:21489384

  19. Processing Ocean Images to Detect Large Drift Nets (United States)

    Veenstra, Tim


    A computer program processes the digitized outputs of a set of downward-looking video cameras aboard an aircraft flying over the ocean. The purpose served by this software is to facilitate the detection of large drift nets that have been lost, abandoned, or jettisoned. The development of this software and of the associated imaging hardware is part of a larger effort to develop means of detecting and removing large drift nets before they cause further environmental damage to the ocean and to shores on which they sometimes impinge. The software is capable of near-real-time processing of as many as three video feeds at a rate of 30 frames per second. After a user sets the parameters of an adjustable algorithm, the software analyzes each video stream, detects any anomaly, issues a command to point a high-resolution camera toward the location of the anomaly, and, once the camera has been so aimed, issues a command to trigger the camera shutter. The resulting high-resolution image is digitized, and the resulting data are automatically uploaded to the operator's computer for analysis.
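
    The adjustable anomaly-detection step, flagging pixels that deviate strongly from the background and aiming the high-resolution camera at them, might be sketched as below. The k-sigma rule and the function name are illustrative assumptions, not the actual flight software.

```python
import numpy as np

def find_anomaly(frame, k=3.0):
    """Flag pixels deviating more than k sigma from the frame mean and
    return the (row, col) centroid of the flagged region, i.e. the point
    toward which the high-resolution camera would be aimed, or None if
    nothing stands out. `k` plays the role of the operator-adjustable
    parameter mentioned above.
    """
    dev = np.abs(frame - frame.mean())
    mask = dev > k * frame.std()
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    return float(ys.mean()), float(xs.mean())
```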

  20. Image processing for identification and quantification of filamentous bacteria in in situ acquired images. (United States)

    Dias, Philipe A; Dunkel, Thiemo; Fajado, Diego A S; Gallegos, Erika de León; Denecke, Martin; Wiedemann, Philipp; Schneider, Fabio K; Suhr, Hajo


    In the activated sludge process, problems of filamentous bulking and foaming can occur due to overgrowth of certain filamentous bacteria. Nowadays, these microorganisms are typically monitored by means of light microscopy, commonly combined with staining techniques. As drawbacks, these methods are susceptible to human error and subjectivity and are limited by the use of discontinuous microscopy. The in situ microscope appears to be a suitable tool for continuous monitoring of filamentous bacteria, providing real-time examination and automated analysis and eliminating sampling, preparation and transport of samples. In this context, a proper image processing algorithm is proposed for automated recognition and measurement of filamentous objects. This work introduces a method for real-time evaluation of images without any staining, phase-contrast or dilution techniques, unlike the studies present in the literature. Moreover, we introduce an algorithm which estimates the total extended filament length based on geodesic distance calculation. For a period of twelve months, samples from an industrial activated sludge plant were collected weekly and imaged without any prior conditioning, replicating real environment conditions. Trends of the filament growth rate, the most important parameter for decision making, are correctly identified. For reference images whose filaments were marked by specialists, the algorithm correctly recognized 72% of the filament pixels, with a false positive rate of at most 14%. An average execution time of 0.7 s per image was achieved. Experiments have shown that the designed algorithm provided a suitable quantification of filaments when compared with human perception and standard methods. The algorithm's average execution time proved its suitability for being optimally mapped into a computational architecture to provide real-time monitoring.
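
    A simplified stand-in for the extended-filament-length estimate, measuring an already-skeletonized binary image by counting unit and diagonal pixel steps instead of the paper's geodesic-distance computation, could look like this; it is adequate for the kind of trend monitoring the abstract describes.

```python
import numpy as np

def skeleton_length(skel, px_len=1.0):
    """Approximate total filament length from a binary skeleton image.

    Counts horizontal/vertical neighbor pairs as 1 px and diagonal pairs
    as sqrt(2) px, a simple substitute for geodesic distance along the
    filament. `px_len` converts pixels to physical units.
    """
    skel = skel.astype(bool)
    straight = (skel[:, :-1] & skel[:, 1:]).sum() + (skel[:-1, :] & skel[1:, :]).sum()
    diag = (skel[:-1, :-1] & skel[1:, 1:]).sum() + (skel[:-1, 1:] & skel[1:, :-1]).sum()
    return (straight + np.sqrt(2) * diag) * px_len
```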

  1. Real-time image-processing algorithm for markerless tumour tracking using X-ray fluoroscopic imaging. (United States)

    Mori, S


    To ensure accuracy in respiratory-gating treatment, X-ray fluoroscopic imaging is used to detect tumour position in real time. Detection accuracy is strongly dependent on image quality, particularly on positional differences between the patient and the treatment couch. We developed a new algorithm to improve the quality of images obtained in X-ray fluoroscopic imaging and report the preliminary results. Two oblique X-ray fluoroscopic images were acquired using a dynamic flat panel detector (DFPD) for two patients with lung cancer. A weighting factor was applied to each column of the DFPD image, because most anatomical structures, as well as the treatment couch and port cover edge, are aligned in the superior-inferior direction when the patient lies on the treatment couch. The weighting factors for the respective columns were varied until the standard deviation of the pixel values within the image region was minimized. Once the weighting factors were calculated, the quality of the DFPD image was improved by applying the factors to multiframe images. Applying the image-processing algorithm produced substantial improvement in the quality of images, and the image contrast was increased. The treatment couch and irradiation port edge, which are not related to the patient's position, were removed. The average image-processing time was 1.1 ms, showing that this fast image processing can be applied to real-time tumour-tracking systems. These findings indicate that this image-processing algorithm improves the image quality in patients with lung cancer and successfully removes objects not related to the patient. Our image-processing algorithm might be useful in improving gated-treatment accuracy.
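
    A closed-form shortcut with the same intent as the column-weighting search described above, suppressing structures aligned with the columns by flattening per-column means, might be sketched as follows. This is a deliberate simplification under stated assumptions, not the authors' standard-deviation minimization.

```python
import numpy as np

def column_weights(frame):
    """Per-column weighting factors that flatten column-aligned structures.

    Scaling every column to the global mean suppresses objects aligned in
    the superior-inferior (column) direction, such as couch edges and port
    covers, while anatomy that varies across columns is kept. Assumes
    strictly positive pixel values.
    """
    return frame.mean() / frame.mean(axis=0)     # one factor per column

def apply_weights(frames, w):
    """Apply precomputed factors to a stack of multiframe images
    (shape: n_frames x rows x cols); broadcasting covers both cases."""
    return frames * w
```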

  2. iMAGE cloud: medical image processing as a service for regional healthcare in a hybrid cloud environment. (United States)

    Liu, Li; Chen, Weiping; Nie, Min; Zhang, Fengjuan; Wang, Yu; He, Ailing; Wang, Xiaonan; Yan, Gen


    To handle the emergence of the regional healthcare ecosystem, physicians and surgeons in various departments and healthcare institutions must process medical images securely, conveniently, and efficiently, and must integrate them with electronic medical records (EMRs). In this manuscript, we propose a software as a service (SaaS) cloud called the iMAGE cloud. A three-layer hybrid cloud was created to provide medical image processing services in the smart city of Wuxi, China, in April 2015. In the first step, medical images and EMR data were received and integrated via the hybrid regional healthcare network. Then, traditional and advanced image processing functions were proposed and computed in a unified manner in the high-performance cloud units. Finally, the image processing results were delivered to regional users using the virtual desktop infrastructure (VDI) technology. Security infrastructure was also taken into consideration. Integrated information query and many advanced medical image processing functions, such as coronary extraction, pulmonary reconstruction, vascular extraction, intelligent detection of pulmonary nodules, image fusion, and 3D printing, were available to local physicians and surgeons in various departments and healthcare institutions. Implementation results indicate that the iMAGE cloud can provide convenient, efficient, compatible, and secure medical image processing services in regional healthcare networks. The iMAGE cloud has been proven to be valuable in applications in the regional healthcare system, and it could have a promising future in the healthcare system worldwide.

  3. From acoustic segmentation to language processing: evidence from optical imaging

    Directory of Open Access Journals (Sweden)

    Hellmuth Obrig


    Full Text Available During language acquisition in infancy and when learning a foreign language, the segmentation of the auditory stream into words and phrases is a complex process. Intuitively, learners use ‘anchors’ to segment the acoustic speech stream into meaningful units like words and phrases. Regularities on a segmental (e.g., phonological) or suprasegmental (e.g., prosodic) level can provide such anchors. Regarding the neuronal processing of these two kinds of linguistic cues, a left hemispheric dominance for segmental and a right hemispheric bias for suprasegmental information has been reported in adults. Though lateralization is common in a number of higher cognitive functions, its prominence in language may also be a key to understanding the rapid emergence of the language network in infants and the ease with which we master our language in adulthood. One question here is whether the hemispheric lateralization is driven by linguistic input per se or whether non-linguistic, especially acoustic factors, ‘guide’ the lateralization process. Methodologically, fMRI provides unsurpassed anatomical detail for such an enquiry. However, instrumental noise, experimental constraints and interference with EEG assessment limit its applicability, particularly in infants and also when investigating the link between auditory and linguistic processing. Optical methods have the potential to fill this gap. Here we review a number of recent studies using optical imaging to investigate hemispheric differences during segmentation and basic auditory feature analysis in language development.

  4. A giant polyaluminum species S-Al32 and two aluminum polyoxocations involving coordination by sulfate ions S-Al32 and S-K-Al13. (United States)

    Sun, Zhong; Wang, Hui; Tong, Honggeer; Sun, Shaofan


    The giant polyaluminum species [Al32O8(OH)60(H2O)28(SO4)2](16+) (S-Al32) and [Al13O4(OH)25(H2O)10(SO4)](4+) (S-K-Al13) [S means that sulfate ions take part in coordination of the aluminum polycation; K represents the Keggin structure] were obtained in the structures of [Al32O8(OH)60(H2O)28(SO4)2][SO4]7[Cl]2·30H2O and [Al13O4(OH)25(H2O)10(SO4)]4[SO4]8·20H2O, respectively. They are the first two aluminum polyoxocations coordinated by sulfate ions. The "core-shell" structure of S-Al32 is similar to that of Al30, but the units are linked by two [Al(OH)2(H2O)3(SO4)](-) groups with replacement of four η(1)-H2O molecules. The structure of S-K-Al13 is similar to the well-known structure of ε-K-Al13, but the units are linked by two (SO4(2-))0.5 with replacement of a H3O(+) ion. It was shown that strong interaction exists between the polyoxocations and counterions. On the basis of their structural features and preparation conditions, a formation and evolution mechanism (from ε-K-Al13 to S-K-Al13 and S-Al32) has been proposed. A local basification degree symmetrical equalization principle was extracted based on a comparison of the calculated results of the local basification degree for each central Al(3+) ion included in a polycation. They can be used to explain how the two aluminum species are formed and evolved and why the sulfate ions can coordinate to them and to predict where the OH-bridging positions will be upon further hydrolysis.


    Directory of Open Access Journals (Sweden)

    A. H. Ahrari


    Full Text Available The multimodal remote sensing approach is based on merging different data from different portions of the electromagnetic spectrum, which improves accuracy in satellite image processing and interpretation. Visible and thermal infrared bands independently contain valuable spatial and spectral information: visible bands carry rich spatial information, while thermal bands carry different radiometric and spectral information. However, low spatial resolution is the most important limitation of thermal infrared bands. Using satellite image fusion, it is possible to merge them into a single thermal image that contains high spectral and spatial information at the same time. The aim of this study is a quantitative and qualitative performance assessment of thermal and visible image fusion with the wavelet transform and different filters. In this research, the wavelet algorithm (Haar) and different decomposition filters (mean, linear, ma, min and rand) were applied to the thermal and panchromatic bands of the Landsat 8 satellite as a shortwave and longwave fusion method. Finally, quality assessment was done with quantitative and qualitative approaches. Quantitative parameters such as Entropy, Standard Deviation, Cross Correlation, Q Factor and Mutual Information were used. For thermal and visible image fusion accuracy assessment, all parameters (quantitative and qualitative) must be analysed with respect to each other. Among all relevant statistical factors, correlation gives the most meaningful result and the closest agreement with the qualitative assessment. Results showed that the mean and linear filters produce better fused images than the other filters in the Haar algorithm. The linear and mean filters have the same performance, and there is no difference between their qualitative and quantitative results.
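
    A single-level Haar fusion of co-registered thermal and visible images, keeping the thermal approximation band and applying a 'mean' rule to the detail subbands, can be sketched in plain NumPy. The fusion rule and equal-size assumption are illustrative; the study's actual filter set and quality metrics are not reproduced here.

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar decomposition into (LL, LH, HL, HH) subbands.
    Image sides must be even."""
    a = (img[0::2, :] + img[1::2, :]) / 2    # row-pair averages
    d = (img[0::2, :] - img[1::2, :]) / 2    # row-pair differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2
    LH = (a[:, 0::2] - a[:, 1::2]) / 2
    HL = (d[:, 0::2] + d[:, 1::2]) / 2
    HH = (d[:, 0::2] - d[:, 1::2]) / 2
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    """Exact inverse of haar2d."""
    h, w = LL.shape
    a = np.empty((h, 2 * w))
    d = np.empty((h, 2 * w))
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    out = np.empty((2 * h, 2 * w))
    out[0::2, :], out[1::2, :] = a + d, a - d
    return out

def fuse(thermal, visible):
    """Keep the thermal approximation (radiometric content) and average
    the detail subbands (spatial content): a 'mean' fusion rule, assuming
    both images are co-registered and equal-sized."""
    tL, tLH, tHL, tHH = haar2d(thermal)
    _, vLH, vHL, vHH = haar2d(visible)
    return ihaar2d(tL, (tLH + vLH) / 2, (tHL + vHL) / 2, (tHH + vHH) / 2)
```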

  6. Results from PIXON-Processed HRC Images of Pluto (United States)

    Young, E. F.; Buie, M. W.; Young, L. A.


    We examine the 384 dithered images of Pluto and Charon taken with the Hubble Space Telescope's High Resolution Camera (HRC) under program GO-9391. We have deconvolved the individual images with synthetic point spread functions (PSF) generated with TinyTim v6.3 using PIXON processing (Puetter and Yahil 1999). We reconstruct a surface albedo map of Pluto using a backprojection algorithm. At present, this algorithm does not include Hapke phase function or backscattering parameters. We compare this albedo map to earlier maps based on HST and mutual event observations (e.g., Stern et al. 1997, Young et al. 2001), looking for changes in albedo distribution and B-V color distribution. Pluto's volatile surface ices are closely tied to its atmospheric column abundance, which has doubled in the interval between 1989 and 2002 (Sicardy et al. 2003, Elliot et al. 2003). A slight rise (1.5 K) in the temperature of nitrogen ice would support the thicker atmosphere. We examine the albedo distribution in the context of Pluto's changing atmosphere. Finally, a side effect of the PIXON processing is that we are better able to search for additional satellites in the Pluto-Charon system. We find no satellites within a 12 arcsec radius of Pluto brighter than a 5-sigma upper limit of B=25.9. In between Pluto and Charon this upper limit is degraded to B=22.8 within one Rp of Pluto's surface, improving to B=25.1 at 10 Rp (Charon's semimajor axis). This research was supported by a grant from NASA's Planetary Astronomy Program (NAG5-12516) and STScI grant GO-9391. Elliot, J.L., and 28 co-authors (2003), ``The recent expansion of Pluto's atmosphere," Nature 424, 165-168. R. C. Puetter and A. Yahil (1999), ``The Pixon Method of Image Reconstruction" in Astronomical Data Analysis Software and Systems VIII, D. M. Mehringer, R. L. Plante & D. A. Roberts, eds., ASP Conference Series, 172, pp. 307-316. Sicardy, B. and 40 co-authors (2003), ``Large changes in Pluto's atmosphere as revealed by recent

  7. I'm sorry to say, but your understanding of image processing fundamentals is absolutely wrong


    Diamant, Emanuel


    In this paper, I have proposed a few ideas that are entirely new and therefore might look suspicious. All the novelties come as a natural extension of a new definition of information that is sequentially applied to various aspects of image processing. The most important innovation is positing information image processing as the prime mode of image processing (in contrast to traditionally dominant data image processing). The next novelty is the dissociation between physical and semantic inform...

  8. Developing image processing meta-algorithms with data mining of multiple metrics. (United States)

    Leung, Kelvin; Cunha, Alexandre; Toga, A W; Parker, D Stott


    People often use multiple metrics in image processing, but here we take a novel approach of mining the values of batteries of metrics on image processing results. We present a case for extending image processing methods to incorporate automated mining of multiple image metric values. Here by a metric we mean any image similarity or distance measure, and in this paper we consider intensity-based and statistical image measures and focus on registration as an image processing problem. We show how it is possible to develop meta-algorithms that evaluate different image processing results with a number of different metrics and mine the results in an automated fashion so as to select the best results. We show that the mining of multiple metrics offers a variety of potential benefits for many image processing problems, including improved robustness and validation.
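
    A toy version of the meta-algorithm idea, scoring candidate registration results with a battery of metrics and letting the metrics vote for the best result, might look like the sketch below. The two metrics and the majority-vote rule are illustrative choices, not the paper's full data-mining procedure.

```python
import numpy as np

def mse(a, b):
    """Mean squared error (lower is better)."""
    return float(((a - b) ** 2).mean())

def ncc(a, b):
    """Normalized cross-correlation (higher is better)."""
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))

def best_result(reference, candidates):
    """Rank candidate registration results with a battery of metrics.

    Each metric casts one vote (lowest MSE, highest NCC); the candidate
    with the most votes wins. Returns the winning candidate's index.
    """
    votes = np.zeros(len(candidates), dtype=int)
    votes[np.argmin([mse(reference, c) for c in candidates])] += 1
    votes[np.argmax([ncc(reference, c) for c in candidates])] += 1
    return int(np.argmax(votes))
```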

  9. Digital image analysis in breast pathology-from image processing techniques to artificial intelligence. (United States)

    Robertson, Stephanie; Azizpour, Hossein; Smith, Kevin; Hartman, Johan


    Breast cancer is the most common malignant disease in women worldwide. In recent decades, earlier diagnosis and better adjuvant therapy have substantially improved patient outcome. Diagnosis by histopathology has proven to be instrumental to guide breast cancer treatment, but new challenges have emerged as our increasing understanding of cancer over the years has revealed its complex nature. As patient demand for personalized breast cancer therapy grows, we face an urgent need for more precise biomarker assessment and more accurate histopathologic breast cancer diagnosis to make better therapy decisions. The digitization of pathology data has opened the door to faster, more reproducible, and more precise diagnoses through computerized image analysis. Software to assist diagnostic breast pathology through image processing techniques have been around for years. But recent breakthroughs in artificial intelligence (AI) promise to fundamentally change the way we detect and treat breast cancer in the near future. Machine learning, a subfield of AI that applies statistical methods to learn from data, has seen an explosion of interest in recent years because of its ability to recognize patterns in data with less need for human instruction. One technique in particular, known as deep learning, has produced groundbreaking results in many important problems including image classification and speech recognition. In this review, we will cover the use of AI and deep learning in diagnostic breast pathology, and other recent developments in digital image analysis.

  10. Quantitative immunocytochemistry using an image analyzer. I. Hardware evaluation, image processing, and data analysis. (United States)

    Mize, R R; Holdefer, R N; Nabors, L B


    In this review we describe how video-based image analysis systems are used to measure immunocytochemically labeled tissue. The general principles underlying hardware and software procedures are emphasized. First, the characteristics of image analyzers are described, including the densitometric measure, spatial resolution, gray scale resolution, dynamic range, and acquisition and processing speed. The errors produced by these instruments are described and methods for correcting or reducing the errors are discussed. Methods for evaluating image analyzers are also presented, including spatial resolution, photometric transfer function, short- and long-term temporal variability, and measurement error. The procedures used to measure immunocytochemically labeled cells and fibers are then described. Immunoreactive profiles are imaged and enhanced using an edge sharpening operator and then extracted using segmentation, a procedure which captures all labeled profiles above a threshold gray level. Binary operators, including erosion and dilation, are applied to separate objects and to remove artifacts. The software then automatically measures the geometry and optical density of the extracted profiles. The procedures are rapid and efficient methods for measuring simultaneously the position, geometry, and labeling intensity of immunocytochemically labeled tissue, including cells, fibers, and whole fields. A companion paper describes non-biological standards we have developed to estimate antigen concentration from the optical density produced by antibody labeling (Nabors et al., 1988).
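
    The segmentation-plus-binary-operator pipeline described here, thresholding above a gray level and then applying erosion and dilation to remove artifacts and separate touching objects, can be sketched with SciPy. The structuring element, threshold, and function name are assumptions for illustration.

```python
import numpy as np
from scipy import ndimage

def extract_profiles(image, threshold):
    """Segment labeled profiles above a gray-level threshold, then clean
    up with a binary opening (erosion followed by dilation): erosion
    removes one-pixel artifacts and separates touching objects, dilation
    restores the surviving profiles to near-original size. Returns the
    label image and the number of extracted profiles.
    """
    binary = image >= threshold
    opened = ndimage.binary_opening(binary, structure=np.ones((3, 3)))
    labels, n = ndimage.label(opened)
    return labels, n
```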

  11. Image processing of liver computed tomography angiographic (CTA) images for laser induced thermotherapy (LITT) planning (United States)

    Li, Yue; Gao, Xiang; Tang, Qingyu; Gao, Shangkai


    Analysis of patient images is highly desired for simulating and planning laser-induced thermotherapy (LITT) to study the cooling effect of large vessels around tumors during the procedure. In this paper, we present an image processing solution for simulating and planning LITT on liver cancer using computed tomography angiography (CTA) images. This involves first performing 3D anisotropic filtering on the data to remove noise. The liver region is then segmented with a level-set based contour tracking method, and a 3D level-set based surface evolution driven by boundary statistics is used to segment the surfaces of vessels and tumors. The medial lines of the vessels are then extracted by a thinning algorithm. Finally, the vessel tree is found on the thinning result by first constructing a shortest-path spanning tree with Dijkstra's algorithm and then pruning the unnecessary branches. From the segmentation and vessel skeletonization results, important geometric parameters of the vessels and tumors are calculated for simulation and surgery planning. The proposed method was applied to a patient's images and the result is shown.
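
    The shortest-path spanning tree step can be illustrated with a generic Dijkstra implementation over a weighted adjacency list; in the paper this runs on the skeletonized vessel voxels, here on a toy graph, and the function name is an assumption.

```python
import heapq

def shortest_path_tree(adj, root):
    """Dijkstra shortest-path spanning tree.

    `adj` maps node -> list of (neighbor, weight). Returns parent pointers
    and distances from `root`; pruning short side branches of this tree
    yields the vessel tree as in the pipeline above.
    """
    dist = {root: 0.0}
    parent = {root: None}
    heap = [(0.0, root)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                      # stale heap entry, skip
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                parent[v] = u
                heapq.heappush(heap, (nd, v))
    return parent, dist
```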

  12. Visual processing in rapid-chase systems: Image processing, attention, and awareness

    Directory of Open Access Journals (Sweden)

    Thomas eSchmidt


    Full Text Available Visual stimuli can be classified so rapidly that their analysis may be based on a single sweep of feedforward processing through the visuomotor system. Behavioral criteria for feedforward processing can be evaluated in response priming tasks where speeded pointing or keypress responses are performed towards target stimuli which are preceded by prime stimuli. We apply this method to several classes of complex stimuli. (1) When participants classify natural images into animals or non-animals, the time course of their pointing responses indicates that prime and target signals remain strictly sequential throughout all processing stages, meeting stringent behavioral criteria for feedforward processing (rapid-chase criteria). (2) Such priming effects are boosted by selective visual attention for positions, shapes, and colors, in a way consistent with bottom-up enhancement of visuomotor processing, even when primes cannot be consciously identified. (3) Speeded processing of phobic images is observed in participants specifically fearful of spiders or snakes, suggesting enhancement of feedforward processing by long-term perceptual learning. (4) When the perceived brightness of primes in complex displays is altered by means of illumination or transparency illusions, priming effects in speeded keypress responses can systematically contradict subjective brightness judgments, such that one prime appears brighter than the other but activates motor responses as if it were darker. We propose that response priming captures the output of the first feedforward pass of visual signals through the visuomotor system, and that this output lacks some characteristic features of more elaborate, recurrent processing. This way, visuomotor measures may become dissociated from several aspects of conscious vision. We argue that "fast" visuomotor measures predominantly driven by feedforward processing should supplement "slow" psychophysical measures predominantly based on visual

  13. Evaluation of Lip Prints on Different Supports Using a Batch Image Processing Algorithm and Image Superimposition. (United States)

    Herrera, Lara Maria; Fernandes, Clemente Maia da Silva; Serra, Mônica da Costa


    This study aimed to develop and assess an algorithm to facilitate lip print visualization and to digitally analyze lip prints on different supports by superimposition. It also aimed to classify lip prints according to sex. A batch image processing algorithm was developed, which facilitated the identification and extraction of information about lip grooves; however, it performed better for lip print images with a uniform background. Paper and glass slab allowed more correct identifications than glass and both sides of compact disks. There was no significant association between the type of support and the number of matching structures located in the middle area of the lower lip. There was no evidence of an association between types of lip grooves and sex. Lip groove patterns of type III and type I were the most common for both sexes. The development of systems for lip print analysis, especially digital methods, is necessary. © 2017 American Academy of Forensic Sciences.
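The batch-processing idea above can be sketched with a simple, hypothetical enhancement step (the paper's actual algorithm is not specified here): histogram equalization applied uniformly to every image in a batch, which tends to make groove detail more visible against low-contrast backgrounds.

```python
import numpy as np

def equalize(gray):
    """Histogram-equalize one 8-bit grayscale image (H x W uint8 array)."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    # Map each intensity level through the normalized cumulative distribution.
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[gray]

def batch_enhance(images):
    """Apply the same enhancement step to a whole batch of lip-print images."""
    return [equalize(img) for img in images]
```

A real pipeline would follow this with groove segmentation and superimposition; the point here is only that one deterministic transform is applied identically across the batch.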

  14. Los Jardines del Salón de Palencia: Un espacio entre la naturaleza y la cultura


    Alario Trigueros, María Teresa


    Los Jardines del Salón constituyen el más antiguo espacio verde de la ciudad de Palencia que ha pervivido hasta la actualidad. Nacidos en 1837, a comienzos del período isabelino sobre los terrenos de las huertas del extinto convento del Carmen, siguiendo el habitual modelo de "salón" que se dio en muchas las ciudades españolas, como todo espacio vivo han sido sometidos a lo largo de casi dos siglos de existencia a diversas ampliaciones y modificaciones en su trazado original. The Salón Gar...

  15. Photogrammetric Processing of Apollo 15 Metric Camera Oblique Images (United States)

    Edmundson, K. L.; Alexandrov, O.; Archinal, B. A.; Becker, K. J.; Becker, T. L.; Kirk, R. L.; Moratto, Z. M.; Nefian, A. V.; Richie, J. O.; Robinson, M. S.


    The integrated photogrammetric mapping system flown on the last three Apollo lunar missions (15, 16, and 17) in the early 1970s incorporated a Metric (mapping) Camera, a high-resolution Panoramic Camera, and a star camera and laser altimeter to provide support data. In an ongoing collaboration, the U.S. Geological Survey's Astrogeology Science Center, the Intelligent Robotics Group of the NASA Ames Research Center, and Arizona State University are working to achieve the most complete cartographic development of Apollo mapping system data into versatile digital map products. These will enable a variety of scientific/engineering uses of the data including mission planning, geologic mapping, geophysical process modelling, slope dependent correction of spectral data, and change detection. Here we describe efforts to control the oblique images acquired from the Apollo 15 Metric Camera.

  16. Enabling customer self service through image processing on mobile devices (United States)

    Kliche, Ingmar; Hellmann, Sascha; Kreutel, Jörn


    Our paper outlines the results of a research project that employs image processing for the automatic diagnosis of technical devices whose internal state is communicated through visual displays. In particular, we developed a method for detecting exceptional states of retail wireless routers by analysing the state and blinking behaviour of the LEDs that make up most routers' user interfaces. The method was made configurable by abstracting away from a particular device's display properties, so that it can analyse a whole range of different devices whose displays are covered by our abstraction. The method of analysis and its configuration mechanism were implemented as a native mobile application for the Android platform. It employs the camera of the mobile device to capture a router's state and uses overlaid visual hints to guide the user toward a perspective from which analysis is possible.
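As an illustration of the LED analysis described above, the following minimal sketch (not the authors' implementation; the function name, threshold, and toggle count are assumptions) classifies a per-frame brightness trace for a single LED into off, on, or blinking by counting threshold crossings across the capture window:

```python
def classify_led(samples, threshold=0.5, min_toggles=2):
    """Classify a time series of normalized LED brightness samples (0..1).

    Returns "off", "on", or "blinking" based on how often the
    thresholded state changes across the capture window.
    """
    states = [s >= threshold for s in samples]
    toggles = sum(1 for a, b in zip(states, states[1:]) if a != b)
    if toggles >= min_toggles:
        return "blinking"
    return "on" if states[-1] else "off"
```

In a full system, one such trace would be extracted per LED region located in the camera frames, and the resulting states matched against a per-device diagnosis table.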

  17. Counterfeit Electronics Detection Using Image Processing and Machine Learning (United States)

    Asadizanjani, Navid; Tehranipoor, Mark; Forte, Domenic


    Counterfeiting is an increasing concern for businesses and governments as greater numbers of counterfeit integrated circuits (ICs) infiltrate the global market. There is an ongoing effort in experimental and national labs in the United States to detect and prevent such counterfeits as quickly as possible. However, a means of automatically detecting counterfeit ICs and properly keeping records of them is still missing. Here, we introduce a web application that allows users to share previous examples of counterfeits through an online database and to obtain statistics regarding the prevalence of known defects. We also investigate automated techniques based on image processing and machine learning to detect different physical defects and to determine whether or not an IC is counterfeit.
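A toy version of such an image-processing/machine-learning pipeline (purely illustrative; the features and classifier are assumptions, not the authors' method) might extract simple surface-texture features from grayscale package images and classify them with a nearest-centroid rule, since sanded or remarked counterfeit packages often differ in surface texture:

```python
import numpy as np

def surface_features(img):
    """Crude texture features from a grayscale package image:
    mean intensity, intensity variance, and mean gradient magnitude."""
    img = img.astype(float)
    gy, gx = np.gradient(img)
    return np.array([img.mean(), img.var(), np.hypot(gx, gy).mean()])

class NearestCentroid:
    """Minimal nearest-centroid classifier over feature vectors."""
    def fit(self, X, y):
        self.labels = sorted(set(y))
        self.centroids = {c: np.mean([x for x, l in zip(X, y) if l == c], axis=0)
                          for c in self.labels}
        return self

    def predict(self, x):
        # Assign the label whose class centroid is closest in feature space.
        return min(self.labels,
                   key=lambda c: np.linalg.norm(x - self.centroids[c]))
```

A production system would of course use far richer features (e.g. from X-ray or microscopy images) and a stronger classifier; the sketch only shows the overall shape of the pipeline.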

  18. Parallel Digital Watermarking Process on Ultrasound Medical Images in Multicores Environment

    Directory of Open Access Journals (Sweden)

    Hui Liang Khor


    Full Text Available Advances in communication network technology have made it easy to transmit digital medical images to healthcare professionals via internal or public networks (e.g., the Internet), but they also expose the transmitted images to security threats, such as tampering or the insertion of false data, which may cause inaccurate diagnosis and treatment. Distortion of medical images cannot be tolerated for diagnostic purposes; thus, digital watermarking of medical images has been introduced. So far, most watermarking research has been done on single-frame medical images, which is impractical in real environments. In this paper, digital watermarking of multiframe medical images is proposed. To reduce the time needed to watermark multiple frames, parallel watermark processing utilizing multicore technology is introduced. Experimental results show that the elapsed time of parallel watermarking is much shorter than that of sequential watermarking.
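The frame-level parallelism described above can be sketched as follows (a hypothetical LSB embedding, not the paper's watermarking scheme; a process pool would give true multicore speedup, while a thread pool keeps the sketch short and portable):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def embed_lsb(frame, watermark):
    """Embed a binary watermark into the least-significant bitplane
    of one 8-bit frame (distortion is at most 1 intensity level)."""
    return (frame & 0xFE) | watermark.astype(np.uint8)

def extract_lsb(frame):
    """Recover the embedded binary watermark from a frame."""
    return frame & 0x01

def watermark_frames(frames, watermark, workers=4):
    """Watermark every frame of a multiframe image concurrently,
    one task per frame."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda f: embed_lsb(f, watermark), frames))
```

Because each frame is watermarked independently, the work partitions naturally across cores, which is the property the paper's multicore speedup relies on.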

  19. Development of a Reference Image Collection Library for Histopathology Image Processing, Analysis and Decision Support Systems Research. (United States)

    Kostopoulos, Spiros; Ravazoula, Panagiota; Asvestas, Pantelis; Kalatzis, Ioannis; Xenogiannopoulos, George; Cavouras, Dionisis; Glotsos, Dimitris


    Histopathology image processing, analysis and computer-aided diagnosis have been shown as effective assisting tools towards reliable and intra-/inter-observer invariant decisions in traditional pathology. Especially for cancer patients, decisions need to be as accurate as possible in order to increase the probability of optimal treatment planning. In this study, we propose a new image collection library (HICL-Histology Image Collection Library) comprising 3831 histological images of three different diseases, for fostering research in histopathology image processing, analysis and computer-aided diagnosis. Raw data comprised 93, 116 and 55 cases of brain, breast and laryngeal cancer respectively collected from the archives of the University Hospital of Patras, Greece. The 3831 images were generated from the most representative regions of the pathology, specified by an experienced histopathologist. The HICL Image Collection is free for access under an academic license at . Potential exploitations of the proposed library may span a broad spectrum, such as in image processing to improve visualization, in segmentation for nuclei detection, in decision support systems for second opinion consultations, in statistical analysis for investigation of potential correlations between clinical annotations and imaging findings and, generally, in fostering research on histopathology image processing and analysis. To the best of our knowledge, the HICL constitutes the first attempt towards creation of a reference image collection library in the field of traditional histopathology, publicly and freely available to the scientific community.

  20. Real time polarization sensor image processing on an embedded FPGA/multi-core DSP system (United States)

    Bednara, Marcus; Chuchacz-Kowalczyk, Katarzyna


    Most embedded image processing SoCs available on the market are highly optimized for typical consumer applications such as video encoding/decoding, motion estimation, or the image enhancement processes used in DSLRs and digital video cameras. For non-consumer applications, on the other hand, optimized embedded hardware is rarely available, so PC-based image processing systems are often used instead. We show how a real-time-capable image processing system for a non-consumer application, namely polarization image data processing, can be efficiently implemented on an FPGA and multi-core DSP based embedded hardware platform.
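A core step in polarization sensor image processing, computing the linear Stokes parameters from four polarizer orientations, can be sketched as follows (a generic textbook formulation, not the authors' FPGA/DSP implementation):

```python
import numpy as np

def polarization_images(i0, i45, i90, i135):
    """Compute total intensity, degree of linear polarization (DoLP)
    and angle of linear polarization (AoLP) from intensity images
    taken behind linear polarizers at 0/45/90/135 degrees."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity (Stokes S0)
    s1 = i0 - i90                        # Stokes S1
    s2 = i45 - i135                      # Stokes S2
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-9)
    aolp = 0.5 * np.arctan2(s2, s1)      # radians
    return s0, dolp, aolp
```

On division-of-focal-plane polarization sensors the four orientations come from a 2x2 superpixel pattern, so this per-pixel arithmetic is exactly the kind of regular, streaming workload that maps well onto FPGA or multi-core DSP hardware.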