WorldWideScience

Sample records for deep f814w images

  1. VizieR Online Data Catalog: Ultradiffuse galaxies found in deep HST images of HFF (Lee+, 2017)

    Science.gov (United States)

    Lee, M. G.; Kang, J.; Lee, J. H.; Jang, I. S.

    2018-03-01

    Abell S1063 and Abell 2744 are located at redshift z=0.348 and z=0.308, respectively, so their HST fields cover a relatively large fraction of each cluster. They are part of the target galaxy clusters in the Hubble Frontier Fields (HFF) Program, for which deep Hubble Space Telescope (HST) images are available (Lotz+ 2017ApJ...837...97L). We used ACS/F814W(I) and WFC3/F105W(Y) images for Abell S1063 and Abell 2744 in the HFF. The effective wavelengths of the F814W and F105W filters for the redshifts of Abell S1063 and Abell 2744 (6220 and 8030Å) correspond approximately to SDSS r' and Cousins I (or SDSS i') in the rest frame, respectively. Figure 1 displays color images of the HST fields for Abell S1063 and Abell 2744. In this study we adopt the cosmological parameters H0=73km/s/Mpc, ΩM=0.27, and ΩΛ=0.73. For these parameters, the luminosity distance moduli of Abell S1063 and Abell 2744 are (m-M)0=41.25 (d=1775Mpc) and 40.94 (d=1540Mpc), and the angular diameter distances are 978 and 901Mpc, respectively. (5 data files).
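
    The adopted cosmology and the quoted distances can be cross-checked in a few lines. Below is a minimal sketch, assuming astropy is available (the catalog itself does not name any tooling):

    ```python
    # Reproduce the quoted distances from the adopted cosmology
    # H0=73 km/s/Mpc, Omega_M=0.27, Omega_Lambda=0.73.
    from astropy.cosmology import LambdaCDM

    cosmo = LambdaCDM(H0=73, Om0=0.27, Ode0=0.73)

    for name, z in [("Abell S1063", 0.348), ("Abell 2744", 0.308)]:
        mu = cosmo.distmod(z)                     # luminosity distance modulus (m-M)0
        d_L = cosmo.luminosity_distance(z)        # luminosity distance
        d_A = cosmo.angular_diameter_distance(z)  # angular diameter distance
        print(f"{name}: (m-M)0 = {mu:.2f}, d_L = {d_L:.0f}, d_A = {d_A:.0f}")

    # Expected output is close to the catalog values:
    # (m-M)0 ~ 41.25 / 40.94, d_L ~ 1775 / 1540 Mpc, d_A ~ 978 / 901 Mpc.
    ```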

  2. ACS/WFC Sky Flats from Frontier Fields Imaging

    Science.gov (United States)

    Mack, J.; Lucas, R. A.; Grogin, N. A.; Bohlin, R. C.; Koekemoer, A. M.

    2018-04-01

    Parallel imaging data from the HST Frontier Fields campaign (Lotz et al. 2017) have been used to compute sky flats for the ACS/WFC detector in order to verify the accuracy of the current set of flat field reference files. By masking sources and then co-adding many deep frames, the F606W and F814W filters accumulate enough combined background signal that the Poisson uncertainty is small, and the resulting sky flats track the thickness variation of the two WFC chips. Observations of blue and red calibration standards measured at various positions on the detector (Bohlin et al. 2017) confirm the fidelity of the F814W flat, with aperture photometry consistent to 1% across the FOV, regardless of spectral type. At bluer wavelengths, the total sky background is substantially lower, and the F435W sky flat shows a combination of both flat errors and detector artifacts. Aperture photometry of the red standard star shows a maximum deviation of 1.4% across the array in this filter. Larger residuals up to 2.5% are found for the blue standard, suggesting that the spatial sensitivity in F435W depends on spectral type.
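
    As a rough illustration of the masking-and-co-adding step described above, here is a minimal sketch; the function name and the sigma-clipping approach are assumptions rather than details from the report, and the actual pipeline is considerably more involved:

    ```python
    # Build a crude sky flat from a stack of deep, background-dominated frames.
    import numpy as np
    from astropy.stats import sigma_clip

    def sky_flat(frames, sigma=3.0):
        """frames: iterable of 2-D calibrated images of blank sky fields."""
        stack = np.array([f / np.median(f) for f in frames])  # normalize each frame
        # Sigma-clipping along the stack axis rejects pixels covered by sources.
        clipped = sigma_clip(stack, sigma=sigma, axis=0)
        flat = clipped.mean(axis=0).filled(1.0)  # co-add the surviving sky pixels
        return flat / np.median(flat)            # renormalize to unity
    ```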

  3. THE 2012 HUBBLE ULTRA DEEP FIELD (UDF12): OBSERVATIONAL OVERVIEW

    Energy Technology Data Exchange (ETDEWEB)

    Koekemoer, Anton M. [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States); Ellis, Richard S.; Schenker, Matthew A. [Department of Astrophysics, California Institute of Technology, MS 249-17, Pasadena, CA 91125 (United States); McLure, Ross J.; Dunlop, James S.; Bowler, Rebecca A. A.; Rogers, Alexander B.; Curtis-Lake, Emma; Cirasuolo, Michele; Wild, V.; Targett, T. [Institute for Astronomy, University of Edinburgh, Royal Observatory, Edinburgh EH9 3HJ (United Kingdom); Robertson, Brant E.; Schneider, Evan; Stark, Daniel P. [Department of Astronomy and Steward Observatory, University of Arizona, Tucson, AZ 85721 (United States); Ono, Yoshiaki; Ouchi, Masami [Institute for Cosmic Ray Research, University of Tokyo, Kashiwa City, Chiba 277-8582 (Japan); Charlot, Stephane [UPMC-CNRS, UMR7095, Institut d'Astrophysique de Paris, F-75014, Paris (France); Furlanetto, Steven R. [Department of Physics and Astronomy, University of California, Los Angeles, CA 90095 (United States)

    2013-11-01

    We present the 2012 Hubble Ultra Deep Field campaign (UDF12), a large 128 orbit Cycle 19 Hubble Space Telescope program aimed at extending previous Wide Field Camera 3 (WFC3)/IR observations of the UDF by quadrupling the exposure time in the F105W filter, imaging in an additional F140W filter, and extending the F160W exposure time by 50%, as well as adding an extremely deep parallel field with the Advanced Camera for Surveys (ACS) in the F814W filter with a total exposure time of 128 orbits. The principal scientific goal of this project is to determine whether galaxies reionized the universe; our observations are designed to provide a robust determination of the star formation density at z ≳ 8, improve measurements of the ultraviolet continuum slope at z ∼ 7-8, facilitate the construction of new samples of z ∼ 9-10 candidates, and enable the detection of sources up to z ∼ 12. For this project we committed to combining these and other WFC3/IR imaging observations of the UDF area into a single homogeneous dataset to provide the deepest near-infrared observations of the sky. In this paper we present the observational overview of the project and describe the procedures used in reducing the data as well as the final products that were produced. We present the details of several special procedures that we implemented to correct calibration issues in the data for both the WFC3/IR observations of the main UDF field and our deep 128 orbit ACS/WFC F814W parallel field image, including treatment for persistence, correction for time-variable sky backgrounds, and astrometric alignment to an accuracy of a few milliarcseconds. We release the full, combined mosaics comprising a single, unified set of mosaics of the UDF, providing the deepest near-infrared blank-field view of the universe currently achievable, reaching magnitudes as deep as AB ∼ 30 mag in the near-infrared, and yielding a legacy dataset on this field.

  4. THE 2012 HUBBLE ULTRA DEEP FIELD (UDF12): OBSERVATIONAL OVERVIEW

    International Nuclear Information System (INIS)

    Koekemoer, Anton M.; Ellis, Richard S.; Schenker, Matthew A.; McLure, Ross J.; Dunlop, James S.; Bowler, Rebecca A. A.; Rogers, Alexander B.; Curtis-Lake, Emma; Cirasuolo, Michele; Wild, V.; Targett, T.; Robertson, Brant E.; Schneider, Evan; Stark, Daniel P.; Ono, Yoshiaki; Ouchi, Masami; Charlot, Stephane; Furlanetto, Steven R.

    2013-01-01

    We present the 2012 Hubble Ultra Deep Field campaign (UDF12), a large 128 orbit Cycle 19 Hubble Space Telescope program aimed at extending previous Wide Field Camera 3 (WFC3)/IR observations of the UDF by quadrupling the exposure time in the F105W filter, imaging in an additional F140W filter, and extending the F160W exposure time by 50%, as well as adding an extremely deep parallel field with the Advanced Camera for Surveys (ACS) in the F814W filter with a total exposure time of 128 orbits. The principal scientific goal of this project is to determine whether galaxies reionized the universe; our observations are designed to provide a robust determination of the star formation density at z ≳ 8, improve measurements of the ultraviolet continuum slope at z ∼ 7-8, facilitate the construction of new samples of z ∼ 9-10 candidates, and enable the detection of sources up to z ∼ 12. For this project we committed to combining these and other WFC3/IR imaging observations of the UDF area into a single homogeneous dataset to provide the deepest near-infrared observations of the sky. In this paper we present the observational overview of the project and describe the procedures used in reducing the data as well as the final products that were produced. We present the details of several special procedures that we implemented to correct calibration issues in the data for both the WFC3/IR observations of the main UDF field and our deep 128 orbit ACS/WFC F814W parallel field image, including treatment for persistence, correction for time-variable sky backgrounds, and astrometric alignment to an accuracy of a few milliarcseconds. We release the full, combined mosaics comprising a single, unified set of mosaics of the UDF, providing the deepest near-infrared blank-field view of the universe currently achievable, reaching magnitudes as deep as AB ∼ 30 mag in the near-infrared, and yielding a legacy dataset on this field.

  5. Deep learning for image classification

    Science.gov (United States)

    McCoppin, Ryan; Rizki, Mateen

    2014-06-01

    This paper provides an overview of deep learning and introduces several of its subfields, including a specific tutorial on convolutional neural networks. Traditional methods for learning image features are compared to deep learning techniques. In addition, we present our preliminary classification results from a basic implementation of a convolutional restricted Boltzmann machine on the Mixed National Institute of Standards and Technology (MNIST) database, and we explain how to use deep learning networks to assist in our development of a robust gender classification system.
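
    For readers new to the model mentioned above, here is a minimal sketch of contrastive-divergence (CD-1) training for a plain, non-convolutional binary RBM, a simplified stand-in for the paper's convolutional variant; shapes assume flattened 28x28 MNIST digits:

    ```python
    # One CD-1 update for a binary restricted Boltzmann machine (numpy only).
    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def cd1_step(v0, W, b_v, b_h, lr=0.05):
        """v0: batch of visible vectors, shape (batch, 784)."""
        p_h0 = sigmoid(v0 @ W + b_h)                        # hidden probs, positive phase
        h0 = (rng.random(p_h0.shape) < p_h0).astype(float)  # sample hidden states
        p_v1 = sigmoid(h0 @ W.T + b_v)                      # one-step reconstruction
        p_h1 = sigmoid(p_v1 @ W + b_h)                      # hidden probs, negative phase
        W += lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / len(v0)
        b_v += lr * (v0 - p_v1).mean(axis=0)
        b_h += lr * (p_h0 - p_h1).mean(axis=0)

    W = 0.01 * rng.standard_normal((784, 256))              # 256 hidden units
    b_v, b_h = np.zeros(784), np.zeros(256)
    ```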

  6. Overview of deep learning in medical imaging.

    Science.gov (United States)

    Suzuki, Kenji

    2017-09-01

    The use of machine learning (ML) has been increasing rapidly in the medical imaging field, including computer-aided diagnosis (CAD), radiomics, and medical image analysis. Recently, an ML area called deep learning emerged in the computer vision field and became very popular in many fields. It started from an event in late 2012, when a deep-learning approach based on a convolutional neural network (CNN) won an overwhelming victory in the best-known worldwide computer vision competition, ImageNet Classification. Since then, researchers in virtually all fields, including medical imaging, have started actively participating in the explosively growing field of deep learning. In this paper, the area of deep learning in medical imaging is overviewed, including (1) what was changed in machine learning before and after the introduction of deep learning, (2) what is the source of the power of deep learning, (3) two major deep-learning models: a massive-training artificial neural network (MTANN) and a convolutional neural network (CNN), (4) similarities and differences between the two models, and (5) their applications to medical imaging. This review shows that ML with feature input (or feature-based ML) was dominant before the introduction of deep learning, and that the major and essential difference between ML before and after deep learning is the learning of image data directly without object segmentation or feature extraction; thus, it is the source of the power of deep learning, although the depth of the model is an important attribute. The class of ML with image input (or image-based ML) including deep learning has a long history, but recently gained popularity due to the use of the new terminology, deep learning. There are two major models in this class of ML in medical imaging, MTANN and CNN, which have similarities as well as several differences. In our experience, MTANNs were substantially more efficient in their development, had a higher performance, and required a

  7. Hello World Deep Learning in Medical Imaging.

    Science.gov (United States)

    Lakhani, Paras; Gray, Daniel L; Pett, Carl R; Nagy, Paul; Shih, George

    2018-05-03

    There is recent popularity in applying machine learning to medical imaging, notably deep learning, which has achieved state-of-the-art performance in image analysis and processing. The rapid adoption of deep learning may be attributed to the availability of machine learning frameworks and libraries to simplify their use. In this tutorial, we provide a high-level overview of how to build a deep neural network for medical image classification, and provide code that can help those new to the field begin their informatics projects.
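
    In the same "hello world" spirit (though this is not the authors' published code), a small Keras CNN for binary medical image classification might be sketched as follows; the input size and layer choices are illustrative assumptions:

    ```python
    # A minimal binary classifier for single-channel medical images.
    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(128, 128, 1)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # P(abnormal)
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    # model.fit(train_images, train_labels, epochs=10, validation_split=0.2)
    ```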

  8. Ultra Deep Wave Equation Imaging and Illumination

    Energy Technology Data Exchange (ETDEWEB)

    Alexander M. Popovici; Sergey Fomel; Paul Sava; Sean Crawley; Yining Li; Cristian Lupascu

    2006-09-30

    In this project we developed and tested a novel technology, designed to enhance seismic resolution and imaging of ultra-deep complex geologic structures by using state-of-the-art wave-equation depth migration and wave-equation velocity model building technology for deeper data penetration and recovery, steeper dip and ultra-deep structure imaging, accurate velocity estimation for imaging and pore pressure prediction and accurate illumination and amplitude processing for extending the AVO prediction window. Ultra-deep wave-equation imaging provides greater resolution and accuracy under complex geologic structures where energy multipathing occurs, than what can be accomplished today with standard imaging technology. The objective of the research effort was to examine the feasibility of imaging ultra-deep structures onshore and offshore, by using (1) wave-equation migration, (2) angle-gathers velocity model building, and (3) wave-equation illumination and amplitude compensation. The effort consisted of answering critical technical questions that determine the feasibility of the proposed methodology, testing the theory on synthetic data, and finally applying the technology for imaging ultra-deep real data. Some of the questions answered by this research addressed: (1) the handling of true amplitudes in the downward continuation and imaging algorithm and the preservation of the amplitude with offset or amplitude with angle information required for AVO studies, (2) the effect of several imaging conditions on amplitudes, (3) non-elastic attenuation and approaches for recovering the amplitude and frequency, (4) the effect of aperture and illumination on imaging steep dips and on discriminating the velocities in the ultra-deep structures. All these effects were incorporated in the final imaging step of a real data set acquired specifically to address ultra-deep imaging issues, with large offsets (12,500 m) and long recording time (20 s).
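
    For orientation, the downward-continuation step at the heart of wave-equation depth migration can be written compactly in its simplest constant-velocity (phase-shift) form; the project's actual operators handle lateral velocity variation, amplitudes, and illumination far more carefully:

    ```latex
    % Phase-shift extrapolation of the recorded wavefield in the (k_x, \omega)
    % domain, followed by the zero-time imaging condition (sum over frequencies):
    P(k_x, z + \Delta z, \omega) = P(k_x, z, \omega)\, e^{i k_z \Delta z},
    \qquad
    k_z = \sqrt{\frac{\omega^2}{v^2} - k_x^2},
    \qquad
    I(x, z) = \int P(x, z, \omega)\, \mathrm{d}\omega .
    ```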

  9. Image Captioning with Deep Bidirectional LSTMs

    OpenAIRE

    Wang, Cheng; Yang, Haojin; Bartz, Christian; Meinel, Christoph

    2016-01-01

    This work presents an end-to-end trainable deep bidirectional LSTM (Long-Short Term Memory) model for image captioning. Our model builds on a deep convolutional neural network (CNN) and two separate LSTM networks. It is capable of learning long term visual-language interactions by making use of history and future context information at high level semantic space. Two novel deep bidirectional variant models, in which we increase the depth of nonlinearity transition in different way, are propose...

  10. Computational ghost imaging using deep learning

    Science.gov (United States)

    Shimobaba, Tomoyoshi; Endo, Yutaka; Nishitsuji, Takashi; Takahashi, Takayuki; Nagahama, Yuki; Hasegawa, Satoki; Sano, Marie; Hirayama, Ryuji; Kakue, Takashi; Shiraki, Atsushi; Ito, Tomoyoshi

    2018-04-01

    Computational ghost imaging (CGI) is a single-pixel imaging technique that exploits the correlation between known random patterns and the measured intensity of light transmitted (or reflected) by an object. Although CGI can obtain two- or three-dimensional images with a single or a few bucket detectors, the quality of the reconstructed images is reduced by noise due to the reconstruction of images from random patterns. In this study, we improve the quality of CGI images using deep learning. A deep neural network is used to automatically learn the features of noise-contaminated CGI images. After training, the network is able to predict low-noise images from new noise-contaminated CGI images.
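
    A minimal sketch of the conventional CGI correlation reconstruction, i.e. the noise-contaminated input that the paper's network then denoises (pattern statistics and counts here are illustrative):

    ```python
    # Correlate known random illumination patterns with bucket-detector values.
    import numpy as np

    rng = np.random.default_rng(1)

    def cgi_reconstruct(obj, n_patterns=2000):
        """obj: 2-D transmittance map; returns a noisy correlation image."""
        patterns = rng.random((n_patterns, *obj.shape))  # known random patterns
        buckets = (patterns * obj).sum(axis=(1, 2))      # single-pixel measurements
        # Ensemble correlation <I*B> - <I><B> recovers a noisy image of the object.
        return (patterns * buckets[:, None, None]).mean(axis=0) \
               - patterns.mean(axis=0) * buckets.mean()
    ```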

  11. The HST/ACS Coma Cluster Survey : VI. Colour gradients in giant and dwarf early-type galaxies

    NARCIS (Netherlands)

    den Brok, M.; Peletier, R. F.; Valentijn, E. A.; Balcells, Marc; Carter, D.; Erwin, P.; Ferguson, H. C.; Goudfrooij, P.; Graham, A. W.; Hammer, D.; Lucey, J. R.; Trentham, N.; Guzman, R.; Hoyos, C.; Kleijn, G. Verdoes; Jogee, S.; Karick, A. M.; Marinova, I.; Mouhcine, M.; Weinzirl, T.

    Using deep, high-spatial-resolution imaging from the Hubble Space Telescope/Advanced Camera for Surveys (HST/ACS) Coma Cluster Treasury Survey, we determine colour profiles of early-type galaxies in the Coma cluster. From 176 galaxies brighter than M_F814W(AB) = -15 mag that are either

  12. The HST/ACS Coma Cluster Survey : II. Data Description and Source Catalogs

    NARCIS (Netherlands)

    Hammer, Derek; Kleijn, Gijs Verdoes; Hoyos, Carlos; den Brok, Mark; Balcells, Marc; Ferguson, Henry C.; Goudfrooij, Paul; Carter, David; Guzman, Rafael; Peletier, Reynier F.; Smith, Russell J.; Graham, Alister W.; Trentham, Neil; Peng, Eric; Puzia, Thomas H.; Lucey, John R.; Jogee, Shardha; Aguerri, Alfonso L.; Batcheldor, Dan; Bridges, Terry J.; Chiboucas, Kristin; Davies, Jonathan I.; del Burgo, Carlos; Erwin, Peter; Hornschemeier, Ann; Hudson, Michael J.; Huxor, Avon; Jenkins, Leigh; Karick, Arna; Khosroshahi, Habib; Kourkchi, Ehsan; Komiyama, Yutaka; Lotz, Jennifer; Marzke, Ronald O.; Marinova, Irina; Matkovic, Ana; Merritt, David; Miller, Bryan W.; Miller, Neal A.; Mobasher, Bahram; Mouhcine, Mustapha; Okamura, Sadanori; Percival, Sue; Phillipps, Steven; Poggianti, Bianca M.; Price, James; Sharples, Ray M.; Tully, R. Brent; Valentijn, Edwin

    The Coma cluster, Abell 1656, was the target of an HST-ACS Treasury program designed for deep imaging in the F475W and F814W passbands. Although our survey was interrupted by the ACS instrument failure in early 2007, the partially completed survey still covers ~50% of the core high-density region in

  13. Deep learning for SAR image formation

    Science.gov (United States)

    Mason, Eric; Yonel, Bariscan; Yazici, Birsen

    2017-04-01

    The recent success of deep learning has led to growing interest in applying these methods to signal processing problems. This paper explores the application of deep learning to synthetic aperture radar (SAR) image formation. We review deep learning from a perspective relevant to SAR image formation. Our objective is to address SAR image formation in the presence of uncertainties in the SAR forward model. We present a recurrent auto-encoder network architecture based on the iterative shrinkage thresholding algorithm (ISTA) that incorporates SAR modeling. We then present an off-line training method using stochastic gradient descent and discuss the challenges and key steps of learning. Lastly, we show experimentally that our method can be used to form focused images in the presence of phase uncertainties. We demonstrate that the resulting algorithm converges faster and has lower reconstruction error than ISTA.
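
    For reference, plain ISTA, the fixed iteration that the paper unrolls into a trainable recurrent auto-encoder, can be sketched as follows; here A stands in for a discretized forward model:

    ```python
    # Iterative shrinkage-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1.
    import numpy as np

    def soft(x, t):
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)  # soft thresholding

    def ista(A, y, lam, n_iter=200):
        L = np.linalg.norm(A, 2) ** 2                       # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            x = soft(x - (A.T @ (A @ x - y)) / L, lam / L)  # gradient step + shrinkage
        return x
    ```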

  14. Deep Learning in Medical Image Analysis.

    Science.gov (United States)

    Shen, Dinggang; Wu, Guorong; Suk, Heung-Il

    2017-06-21

    This review covers computer-assisted analysis of images in the field of medical imaging. Recent advances in machine learning, especially with regard to deep learning, are helping to identify, classify, and quantify patterns in medical images. At the core of these advances is the ability to exploit hierarchical feature representations learned solely from data, instead of features designed by hand according to domain-specific knowledge. Deep learning is rapidly becoming the state of the art, leading to enhanced performance in various medical applications. We introduce the fundamentals of deep learning methods and review their successes in image registration, detection of anatomical and cellular structures, tissue segmentation, computer-aided disease diagnosis and prognosis, and so on. We conclude by discussing research issues and suggesting future directions for further improvement.

  15. Deep Sky Imaging: Workflow 2

    Science.gov (United States)

    Schedler, Johannes

    As astrophotographers we are living in a golden age. In recent years, CCD technology and the quality of amateur telescopes have reached a level of perfection, giving amateurs the chance to produce images rivaling those taken from mountaintops by large professional systems as recently as two decades ago. However, hardware and a good imaging location are only part of the game. A high level of skill with image processing can offer amateurs an edge and a chance to compensate for the limited aperture of our telescopes.

  16. Image quality assessment using deep convolutional networks

    Science.gov (United States)

    Li, Yezhou; Ye, Xiang; Li, Yong

    2017-12-01

    This paper proposes a method of accurately assessing image quality without a reference image by using a deep convolutional neural network. Existing training-based methods usually utilize a compact set of linear filters for learning features of images captured by different sensors to assess their quality. These methods may not be able to learn the semantic features that are intimately related to the features used in human subject assessment. Observing this drawback, this work proposes training a deep convolutional neural network (CNN) with labelled images for image quality assessment. The ReLU in the CNN allows non-linear transformations for extracting high-level image features, providing a more reliable assessment of image quality than linear filters. To enable the neural network to take images of arbitrary size as input, spatial pyramid pooling (SPP) is introduced to connect the top convolutional layer and the fully-connected layer. In addition, the SPP makes the CNN robust to object deformations to a certain extent. The proposed method takes an image as input, carries out an end-to-end learning process, and outputs the quality of the image. It is tested on public datasets. Experimental results show that it outperforms existing methods by a large margin and can accurately assess the quality of images taken by different sensors at varying sizes.
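
    A minimal PyTorch sketch of the spatial pyramid pooling idea (the pool grid sizes are illustrative): pooling the top feature maps onto a few fixed grids yields a fixed-length vector regardless of the input image size:

    ```python
    import torch
    import torch.nn.functional as F

    def spp(features, levels=(1, 2, 4)):
        """Pool (N, C, H, W) feature maps into a (N, C*sum(l*l)) vector."""
        n = features.shape[0]
        pooled = [F.adaptive_max_pool2d(features, l).reshape(n, -1) for l in levels]
        return torch.cat(pooled, dim=1)

    x = torch.randn(2, 64, 13, 17)   # any spatial size works
    print(spp(x).shape)              # torch.Size([2, 1344]) = [2, 64*(1+4+16)]
    ```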

  17. Deep learning in medical imaging: General overview

    Energy Technology Data Exchange (ETDEWEB)

    Lee, June Goo; Jun, Sang Hoon; Cho, Young Won; Lee, Hyun Na; Kim, Guk Bae; Seo, Joon Beom; Kim, Nam Kug [University of Ulsan College of Medicine, Asan Medical Center, Seoul (Korea, Republic of)

    2017-08-01

    The artificial neural network (ANN)–a machine learning technique inspired by the human neuronal synapse system–was introduced in the 1950s. However, the ANN was previously limited in its ability to solve actual problems, due to the vanishing gradient and overfitting problems with training of deep architecture, lack of computing power, and primarily the absence of sufficient data to train the computer system. Interest in this concept has lately resurfaced, due to the availability of big data, enhanced computing power with the current graphics processing units, and novel algorithms to train the deep neural network. Recent studies on this technology suggest its potential to perform better than humans in some visual and auditory recognition tasks, which may portend its applications in medicine and health care, especially in medical imaging, in the foreseeable future. This review article offers perspectives on the history, development, and applications of deep learning technology, particularly regarding its applications in medical imaging.

  18. Deep learning in medical imaging: General overview

    International Nuclear Information System (INIS)

    Lee, June Goo; Jun, Sang Hoon; Cho, Young Won; Lee, Hyun Na; Kim, Guk Bae; Seo, Joon Beom; Kim, Nam Kug

    2017-01-01

    The artificial neural network (ANN)–a machine learning technique inspired by the human neuronal synapse system–was introduced in the 1950s. However, the ANN was previously limited in its ability to solve actual problems, due to the vanishing gradient and overfitting problems with training of deep architecture, lack of computing power, and primarily the absence of sufficient data to train the computer system. Interest in this concept has lately resurfaced, due to the availability of big data, enhanced computing power with the current graphics processing units, and novel algorithms to train the deep neural network. Recent studies on this technology suggest its potential to perform better than humans in some visual and auditory recognition tasks, which may portend its applications in medicine and health care, especially in medical imaging, in the foreseeable future. This review article offers perspectives on the history, development, and applications of deep learning technology, particularly regarding its applications in medical imaging.

  19. Deep Learning in Medical Imaging: General Overview

    Science.gov (United States)

    Lee, June-Goo; Jun, Sanghoon; Cho, Young-Won; Lee, Hyunna; Kim, Guk Bae

    2017-01-01

    The artificial neural network (ANN)–a machine learning technique inspired by the human neuronal synapse system–was introduced in the 1950s. However, the ANN was previously limited in its ability to solve actual problems, due to the vanishing gradient and overfitting problems with training of deep architecture, lack of computing power, and primarily the absence of sufficient data to train the computer system. Interest in this concept has lately resurfaced, due to the availability of big data, enhanced computing power with the current graphics processing units, and novel algorithms to train the deep neural network. Recent studies on this technology suggest its potential to perform better than humans in some visual and auditory recognition tasks, which may portend its applications in medicine and healthcare, especially in medical imaging, in the foreseeable future. This review article offers perspectives on the history, development, and applications of deep learning technology, particularly regarding its applications in medical imaging. PMID:28670152

  20. Deep Learning in Medical Imaging: General Overview.

    Science.gov (United States)

    Lee, June-Goo; Jun, Sanghoon; Cho, Young-Won; Lee, Hyunna; Kim, Guk Bae; Seo, Joon Beom; Kim, Namkug

    2017-01-01

    The artificial neural network (ANN)-a machine learning technique inspired by the human neuronal synapse system-was introduced in the 1950s. However, the ANN was previously limited in its ability to solve actual problems, due to the vanishing gradient and overfitting problems with training of deep architecture, lack of computing power, and primarily the absence of sufficient data to train the computer system. Interest in this concept has lately resurfaced, due to the availability of big data, enhanced computing power with the current graphics processing units, and novel algorithms to train the deep neural network. Recent studies on this technology suggest its potential to perform better than humans in some visual and auditory recognition tasks, which may portend its applications in medicine and healthcare, especially in medical imaging, in the foreseeable future. This review article offers perspectives on the history, development, and applications of deep learning technology, particularly regarding its applications in medical imaging.

  1. Jet-images – deep learning edition

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Luke de [Institute for Computational and Mathematical Engineering, Stanford University,Huang Building 475 Via Ortega, Stanford, CA 94305 (United States); Kagan, Michael [SLAC National Accelerator Laboratory, Stanford University,2575 Sand Hill Rd, Menlo Park, CA 94025 (United States); Mackey, Lester [Department of Statistics, Stanford University,390 Serra Mall, Stanford, CA 94305 (United States); Nachman, Benjamin; Schwartzman, Ariel [SLAC National Accelerator Laboratory, Stanford University,2575 Sand Hill Rd, Menlo Park, CA 94025 (United States)

    2016-07-13

    Building on the notion of a particle physics detector as a camera and the collimated streams of high energy particles, or jets, it measures as an image, we investigate the potential of machine learning techniques based on deep learning architectures to identify highly boosted W bosons. Modern deep learning algorithms trained on jet images can out-perform standard physically-motivated feature driven approaches to jet tagging. We develop techniques for visualizing how these features are learned by the network and what additional information is used to improve performance. This interplay between physically-motivated feature driven tools and supervised learning algorithms is general and can be used to significantly increase the sensitivity to discover new particles and new forces, and gain a deeper understanding of the physics within jets.

  2. Jet-images – deep learning edition

    International Nuclear Information System (INIS)

    Oliveira, Luke de; Kagan, Michael; Mackey, Lester; Nachman, Benjamin; Schwartzman, Ariel

    2016-01-01

    Building on the notion of a particle physics detector as a camera and the collimated streams of high energy particles, or jets, it measures as an image, we investigate the potential of machine learning techniques based on deep learning architectures to identify highly boosted W bosons. Modern deep learning algorithms trained on jet images can out-perform standard physically-motivated feature driven approaches to jet tagging. We develop techniques for visualizing how these features are learned by the network and what additional information is used to improve performance. This interplay between physically-motivated feature driven tools and supervised learning algorithms is general and can be used to significantly increase the sensitivity to discover new particles and new forces, and gain a deeper understanding of the physics within jets.

  3. Photoacoustic image reconstruction via deep learning

    Science.gov (United States)

    Antholzer, Stephan; Haltmeier, Markus; Nuster, Robert; Schwab, Johannes

    2018-02-01

    Applying standard algorithms to sparse data problems in photoacoustic tomography (PAT) yields low-quality images containing severe under-sampling artifacts. To some extent, these artifacts can be reduced by iterative image reconstruction algorithms, which make it possible to include prior knowledge such as smoothness, total variation (TV), or sparsity constraints. These algorithms tend to be time-consuming because the forward and adjoint problems have to be solved repeatedly. Further, iterative algorithms have additional drawbacks; for example, the reconstruction quality strongly depends on a priori model assumptions about the objects to be recovered, which are often not strictly satisfied in practical applications. To overcome these issues, in this paper we develop direct and efficient reconstruction algorithms based on deep learning. As opposed to iterative algorithms, we apply a convolutional neural network whose parameters are trained before the reconstruction process on a set of training data. For actual image reconstruction, a single evaluation of the trained network yields the desired result. Our numerical results (using two different network architectures) demonstrate that the proposed deep learning approach reconstructs images with a quality comparable to state-of-the-art iterative reconstruction methods.

  4. Deep image mining for diabetic retinopathy screening.

    Science.gov (United States)

    Quellec, Gwenolé; Charrière, Katia; Boudi, Yassine; Cochener, Béatrice; Lamard, Mathieu

    2017-07-01

    Deep learning is quickly becoming the leading methodology for medical image analysis. Given a large medical archive, where each image is associated with a diagnosis, efficient pathology detectors or classifiers can be trained with virtually no expert knowledge about the target pathologies. However, deep learning algorithms, including the popular ConvNets, are black boxes: little is known about the local patterns analyzed by ConvNets to make a decision at the image level. A solution is proposed in this paper to create heatmaps showing which pixels in images play a role in the image-level predictions. In other words, a ConvNet trained for image-level classification can be used to detect lesions as well. A generalization of the backpropagation method is proposed in order to train ConvNets that produce high-quality heatmaps. The proposed solution is applied to diabetic retinopathy (DR) screening in a dataset of almost 90,000 fundus photographs from the 2015 Kaggle Diabetic Retinopathy competition and a private dataset of almost 110,000 photographs (e-ophtha). For the task of detecting referable DR, very good detection performance was achieved: Az=0.954 in Kaggle's dataset and Az=0.949 in e-ophtha. Performance was also evaluated at the image level and at the lesion level in the DiaretDB1 dataset, where four types of lesions are manually segmented: microaneurysms, hemorrhages, exudates and cotton-wool spots. For the task of detecting images containing these four lesion types, the proposed detector, which was trained to detect referable DR, outperforms recent algorithms trained to detect those lesions specifically, with pixel-level supervision. At the lesion level, the proposed detector outperforms heatmap generation algorithms for ConvNets. This detector is part of the Messidor® system for mobile eye pathology screening. Because it does not rely on expert knowledge or manual segmentation for detecting relevant patterns, the proposed solution is a promising image
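
    As a simpler relative of the paper's modified backpropagation (not the authors' method itself), a plain gradient saliency heatmap can be sketched in PyTorch; pixels with a large gradient of the image-level score are the ones driving the prediction:

    ```python
    import torch

    def saliency_heatmap(model, image, class_idx):
        """image: (1, C, H, W) tensor; returns an (H, W) heatmap."""
        image = image.clone().requires_grad_(True)
        score = model(image)[0, class_idx]  # image-level class score
        score.backward()                    # backpropagate to the pixels
        return image.grad.abs().max(dim=1)[0].squeeze(0)  # max over channels
    ```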

  5. An Ensemble of Deep Support Vector Machines for Image Categorization

    NARCIS (Netherlands)

    Abdullah, Azizi; Veltkamp, Remco C.; Wiering, Marco

    2009-01-01

    This paper presents the deep support vector machine (D-SVM), inspired by the increasing popularity of deep belief networks for image recognition. Our deep SVM trains an SVM in the standard way and then uses the kernel activations of support vectors as inputs for training another SVM at the next layer.
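
    A minimal scikit-learn sketch of that two-layer idea (parameter values are illustrative): the kernel activations between each sample and the first SVM's support vectors become the feature vector for a second SVM:

    ```python
    from sklearn.svm import SVC
    from sklearn.metrics.pairwise import rbf_kernel

    def fit_deep_svm(X, y, gamma=0.1):
        svm1 = SVC(kernel="rbf", gamma=gamma).fit(X, y)
        # Kernel activations between each sample and the support vectors.
        phi = rbf_kernel(X, svm1.support_vectors_, gamma=gamma)
        svm2 = SVC(kernel="rbf", gamma=gamma).fit(phi, y)
        return svm1, svm2

    def predict_deep_svm(svm1, svm2, X, gamma=0.1):
        return svm2.predict(rbf_kernel(X, svm1.support_vectors_, gamma=gamma))
    ```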

  6. A Survey on Deep Learning in Medical Image Analysis

    NARCIS (Netherlands)

    Litjens, G.J.; Kooi, T.; Ehteshami Bejnordi, B.; Setio, A.A.A.; Ciompi, F.; Ghafoorian, M.; Laak, J.A.W.M. van der; Ginneken, B. van; Sanchez, C.I.

    2017-01-01

    Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year.

  7. DeepInfer: open-source deep learning deployment toolkit for image-guided therapy

    Science.gov (United States)

    Mehrtash, Alireza; Pesteie, Mehran; Hetherington, Jorden; Behringer, Peter A.; Kapur, Tina; Wells, William M.; Rohling, Robert; Fedorov, Andriy; Abolmaesumi, Purang

    2017-03-01

    Deep learning models have outperformed some of the previous state-of-the-art approaches in medical image analysis. Instead of using hand-engineered features, deep models attempt to automatically extract hierarchical representations at multiple levels of abstraction from the data. Therefore, deep models are usually considered to be more flexible and robust solutions for image analysis problems compared to conventional computer vision models. They have demonstrated significant improvements in computer-aided diagnosis and automatic medical image analysis applied to such tasks as image segmentation, classification and registration. However, deploying deep learning models often has a steep learning curve and requires detailed knowledge of various software packages. Thus, many deep models have not been integrated into clinical research workflows, causing a gap between the state-of-the-art machine learning in medical applications and evaluation in clinical research procedures. In this paper, we propose "DeepInfer" - an open-source toolkit for developing and deploying deep learning models within the 3D Slicer medical image analysis platform. Utilizing a repository of task-specific models, DeepInfer allows clinical researchers and biomedical engineers to deploy a trained model selected from the public registry, and apply it to new data without the need for software development or configuration. As two practical use cases, we demonstrate the application of DeepInfer in prostate segmentation for targeted MRI-guided biopsy and identification of the target plane in 3D ultrasound for spinal injections.

  8. Deep kernel learning method for SAR image target recognition

    Science.gov (United States)

    Chen, Xiuyuan; Peng, Xiyuan; Duan, Ran; Li, Junbao

    2017-10-01

    With the development of deep learning, research on image target recognition has made great progress in recent years. Remote sensing detection urgently requires target recognition for military, geographic, and other scientific research. This paper aims to solve the synthetic aperture radar image target recognition problem by combining deep and kernel learning. The model, which has a multilayer multiple kernel structure, is optimized layer by layer with the parameters of Support Vector Machine and a gradient descent algorithm. This new deep kernel learning method improves accuracy and achieves competitive recognition results compared with other learning methods.

  9. Aesthetics and Composition in Deep Sky Imaging

    Science.gov (United States)

    Gendler, Robert

    It's safe to say that many of us began astrophotography feeling overwhelmed by the unnerving task of creating even the simplest astro image. Typically those first successful images were met with a healthy dose of humility as we began to understand the reality of assembling an aesthetically pleasing astronomical image. As we acquired more experience and gradually mastered the fundamentals of image processing our goals and objectives likely evolved and matured.

  10. Recent Hubble Space Telescope Imaging of the Light Echoes of Supernova 2014J in M 82 and Supernova 2016adj in Centaurus A

    Science.gov (United States)

    Lawrence, Stephen S.; Hyder, Ali; Sugerman, Ben; Crotts, Arlin P. S.

    2017-06-01

    We report on our ongoing use of Hubble Space Telescope (HST) imaging to monitor the scattered light echoes of recent heavily-extincted supernovae in two nearby, albeit unusual, galaxies. Supernova 2014J was a highly-reddened Type Ia supernova that erupted in the nearby irregular star-forming galaxy M 82 in 2014 January. It was discovered to have a light echo by Crotts (2016) in early epoch HST imaging and has been further described by Yang et al. (2017) based on HST imaging through late 2014. Our ongoing monitoring in the WFC3 F438W, F555W, and F814W filters shows that, consistent with Crotts (2016) and Yang et al. (2017), throughout 2015 and 2016 the main light echo arc expanded through a dust complex located approximately 230 pc in the foreground of the supernova. This main light echo has, however, faded dramatically in our most recent HST imaging from 2017 March. The supernova itself has also faded to undetectable levels by 2017 March. Supernova 2016adj is a highly-reddened core-collapse supernova that erupted inside the unusual dust lane of the nearby giant elliptical galaxy Centaurus A (NGC 5128) in 2016 February. It was discovered to have a light echo by Sugerman & Lawrence (2016) in early epoch HST imaging in 2016 April. Our ongoing monitoring in the WFC3 F438W, F547M, and F814W filters shows a slightly elliptical series of light echo arc segments hosted by a tilted dust complex ranging approximately 150-225 pc in the foreground of the supernova. The supernova itself has also faded to undetectable levels by 2017 April. References: Crotts, A. P. S., ApJL, 804, L37 (2016); Yang et al., ApJ, 834, 60 (2017); Sugerman, B. and Lawrence, S., ATel #8890 (2016).

  11. VizieR Online Data Catalog: SG1120-1202 members HST imaging & 24um fluxes (Monroe+, 2017)

    Science.gov (United States)

    Monroe, J. T.; Tran, K.-V. H.; Gonzalez, A. H.

    2017-09-01

    We employ HST imaging of an ~8'x12' mosaic across three filters: F390W (WFC3/UVIS), F606W (ACS/WFC), and F814W (ACS/WFC) for a total of 44 pointings (combined primary and parallels) during cycles 14 (GO 10499) and 19 (GO 12470). We use the Spitzer MIPS 24um fluxes from Saintonge+ (2008ApJ...685L.113S) and Tran+ (2009ApJ...705..809T). The 24um observations were retrieved from the Spitzer archive. For details on spectroscopy from multi-band ground-based observations using Magellan (in 2006), MMT, and VLT/VIMOS (in 2003), we refer the reader to Tran+ (2009ApJ...705..809T). (1 data file).

  12. Imaging findings and significance of deep neck space infection

    International Nuclear Information System (INIS)

    Zhuang Qixin; Gu Yifeng; Du Lianjun; Zhu Lili; Pan Yuping; Li Minghua; Yang Shixun; Shang Kezhong; Yin Shankai

    2004-01-01

    Objective: To study the imaging appearance of deep neck space cellulitis and abscess and to evaluate the diagnostic criteria of deep neck space infection. Methods: CT and MRI findings of 28 cases with deep neck space infection proved by clinical manifestation and pathology were analyzed, including 11 cases of retropharyngeal space infection, 5 cases of parapharyngeal space infection, 4 cases of masticator space infection, and 8 cases of multi-space infection. Results: CT and MRI could display the swelling of the soft tissues and the displacement, reduction, or disappearance of lipoid space in cellulitis. In inflammatory tissues, MRI demonstrated hypointense or isointense signal on T1WI and hyperintense signal changes on T2WI. In abscess, CT could display hypodensity in the center and boundary enhancement of the abscess. MRI could display obvious hyperintense signal on T2WI and boundary enhancement. Conclusion: CT and MRI could provide useful information for deep neck space cellulitis and abscess.

  13. Distributed deep learning networks among institutions for medical imaging.

    Science.gov (United States)

    Chang, Ken; Balachandar, Niranjan; Lam, Carson; Yi, Darvin; Brown, James; Beers, Andrew; Rosen, Bruce; Rubin, Daniel L; Kalpathy-Cramer, Jayashree

    2018-03-29

    Deep learning has become a promising approach for automated support for clinical diagnosis. When medical data samples are limited, collaboration among multiple institutions is necessary to achieve high algorithm performance. However, sharing patient data often has limitations due to technical, legal, or ethical concerns. In this study, we propose methods of distributing deep learning models as an attractive alternative to sharing patient data. We simulate the distribution of deep learning models across 4 institutions using various training heuristics and compare the results with a deep learning model trained on centrally hosted patient data. The training heuristics investigated include ensembling single institution models, single weight transfer, and cyclical weight transfer. We evaluated these approaches for image classification in 3 independent image collections (retinal fundus photos, mammography, and ImageNet). We find that cyclical weight transfer resulted in a performance that was comparable to that of centrally hosted patient data. We also found that there is an improvement in the performance of cyclical weight transfer heuristic with a high frequency of weight transfer. We show that distributing deep learning models is an effective alternative to sharing patient data. This finding has implications for any collaborative deep learning study.
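
    A minimal PyTorch sketch of the cyclical weight transfer heuristic (the loader and optimizer details are assumptions, not the study's protocol): a single shared model visits each institution in turn, and only the weights, never the patient data, move between sites:

    ```python
    import torch

    def cyclical_weight_transfer(model, institution_loaders, n_cycles=10, lr=1e-3):
        loss_fn = torch.nn.CrossEntropyLoss()
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        for _ in range(n_cycles):
            for loader in institution_loaders:  # one local "visit" per site
                for images, labels in loader:   # data stays at the institution
                    opt.zero_grad()
                    loss = loss_fn(model(images), labels)
                    loss.backward()
                    opt.step()
        return model
    ```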

  14. A survey on deep learning in medical image analysis.

    Science.gov (United States)

    Litjens, Geert; Kooi, Thijs; Bejnordi, Babak Ehteshami; Setio, Arnaud Arindra Adiyoso; Ciompi, Francesco; Ghafoorian, Mohsen; van der Laak, Jeroen A W M; van Ginneken, Bram; Sánchez, Clara I

    2017-12-01

    Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks. Concise overviews are provided of studies per application area: neuro, retinal, pulmonary, digital pathology, breast, cardiac, abdominal, musculoskeletal. We end with a summary of the current state-of-the-art, a critical discussion of open challenges and directions for future research. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Research on simulated infrared image utility evaluation using deep representation

    Science.gov (United States)

    Zhang, Ruiheng; Mu, Chengpo; Yang, Yu; Xu, Lixin

    2018-01-01

    Infrared (IR) image simulation is an important data source for various target recognition systems. However, whether simulated IR images can be used as training data for classifiers depends on the fidelity and authenticity of the simulated images. For evaluation of IR image features, a deep-representation-based algorithm is proposed. Unlike conventional methods, which usually adopt a priori knowledge or manually designed features, the proposed method can extract essential features and quantitatively evaluate the utility of simulated IR images. First, for data preparation, we employ our IR image simulation system to generate large numbers of IR images. Then, we present the evaluation model of simulated IR images, for which an end-to-end IR feature extraction and target detection model based on a deep convolutional neural network is designed. Finally, the experiments illustrate that our proposed method outperforms other verification algorithms in evaluating simulated IR images. Cross-validation, variable-proportion mixed data validation, and simulation process contrast experiments are carried out to evaluate the utility and objectivity of the images generated by our simulation system. The optimum mixing ratio between simulated and real data is 0.2≤γ≤0.3; mixing simulated data at this ratio is an effective data augmentation method for real IR images.

  16. Searching for prostate cancer by fully automated magnetic resonance imaging classification: deep learning versus non-deep learning.

    Science.gov (United States)

    Wang, Xinggang; Yang, Wei; Weinreb, Jeffrey; Han, Juan; Li, Qiubai; Kong, Xiangchuang; Yan, Yongluan; Ke, Zan; Luo, Bo; Liu, Tao; Wang, Liang

    2017-11-13

    Prostate cancer (PCa) has been a major cause of death since ancient times, as documented in Egyptian Ptolemaic mummy imaging. PCa detection is critical to personalized medicine and varies considerably under an MRI scan. 172 patients with 2,602 morphologic images (axial 2D T2-weighted imaging) of the prostate were obtained. A deep learning method using a deep convolutional neural network (DCNN) and a non-deep learning method using SIFT image features with a bag-of-words (BoW) model, a representative method for image recognition and analysis, were used to distinguish pathologically confirmed PCa patients from patients with prostate benign conditions (BCs) such as prostatitis or benign prostate hyperplasia (BPH). In fully automated detection of PCa patients, deep learning had a statistically higher area under the receiver operating characteristic curve (AUC) than non-deep learning (P = 0.0007); the AUCs were … for the deep learning method and 0.70 (95% CI 0.63-0.77) for the non-deep learning method, respectively. Our results suggest that deep learning with a DCNN is superior to non-deep learning with SIFT image features and a BoW model for fully automated differentiation of PCa patients from prostate BCs patients. Our deep learning method is extensible to image modalities such as MR imaging, CT, and PET of other organs.
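
    For contrast with the deep model, the non-deep baseline can be sketched as follows (assuming opencv-contrib-python and scikit-learn; the cluster count is illustrative): SIFT descriptors are quantized against a learned codebook into a bag-of-words histogram:

    ```python
    import cv2
    import numpy as np
    from sklearn.cluster import KMeans

    def bow_histogram(image_gray, codebook):
        """Quantize an image's SIFT descriptors into a normalized BoW vector."""
        sift = cv2.SIFT_create()
        _, desc = sift.detectAndCompute(image_gray, None)
        if desc is None:
            return np.zeros(codebook.n_clusters)
        words = codebook.predict(desc)  # nearest visual word for each descriptor
        hist, _ = np.histogram(words, bins=np.arange(codebook.n_clusters + 1))
        return hist / hist.sum()

    # codebook = KMeans(n_clusters=500).fit(all_training_descriptors)
    ```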

  17. Deep Corals, Deep Learning: Moving the Deep Net Towards Real-Time Image Annotation

    OpenAIRE

    Lea-Anne Henry; Sankha S. Mukherjee; Neil M. Robertson; Laurence De Clippele; J. Murray Roberts

    2016-01-01

    The mismatch between human capacity and the acquisition of Big Data such as Earth imagery undermines commitments to Convention on Biological Diversity (CBD) and Aichi targets. Artificial intelligence (AI) solutions to Big Data issues are urgently needed as these could prove to be faster, more accurate, and cheaper. Reducing costs of managing protected areas in remote deep waters and in the High Seas is of great importance, and this is a realm where autonomous technology will be transformative.

  18. Color image definition evaluation method based on deep learning method

    Science.gov (United States)

    Liu, Di; Li, YingChun

    2018-01-01

    In order to evaluate different blurring levels of color images and improve image definition evaluation, this paper proposes a no-reference color image clarity evaluation method based on a deep learning framework and a BP neural network classification model. First, VGG16 is used as the feature extractor to extract 4,096-dimensional features from the images; the extracted features and labeled images are then used to train a BP neural network, which finally performs the color image definition evaluation. The method is evaluated using images from the CSIQ database, blurred at different levels, giving 4,000 images after processing. The 4,000 images are divided into three categories, each representing one blur level. 300 out of 400 high-dimensional feature samples are used to train the VGG16 net and BP neural network, and the remaining 100 samples are used for testing. The experimental results show that the method takes full advantage of the learning and characterization capability of deep learning. Unlike most existing image clarity evaluation methods, which manually design and extract features, this method extracts image features automatically and achieves excellent image quality classification accuracy on the test data set, with an accuracy rate of 96%. Moreover, the predicted quality levels of the original color images are consistent with the perception of the human visual system.
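
    A minimal Keras sketch of the pipeline described above, with frozen VGG16 "fc2" features feeding a small fully-connected classifier; the layer name follows the stock Keras VGG16, while the classifier shape is an assumption:

    ```python
    import tensorflow as tf

    vgg = tf.keras.applications.VGG16(weights="imagenet", include_top=True)
    extractor = tf.keras.Model(vgg.input, vgg.get_layer("fc2").output)  # 4096-dim
    extractor.trainable = False

    classifier = tf.keras.Sequential([
        tf.keras.layers.Dense(256, activation="relu", input_shape=(4096,)),
        tf.keras.layers.Dense(3, activation="softmax"),  # three blur levels
    ])
    classifier.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                       metrics=["accuracy"])

    # features = extractor.predict(preprocessed_images)  # images resized to 224x224
    # classifier.fit(features, blur_labels, epochs=20)
    ```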

  19. Down image recognition based on deep convolutional neural network

    Directory of Open Access Journals (Sweden)

    Wenzhu Yang

    2018-06-01

    Because of the scale and the various shapes of down in the image, it is difficult for traditional image recognition methods, and even for the traditional convolutional neural network (TCNN), to correctly recognize the type of down image with the required recognition accuracy. To deal with these problems, a deep convolutional neural network (DCNN) for down image classification is constructed, and a new weight initialization method is proposed. First, the salient regions of a down image are cut from the image using a visual saliency model. Then, these salient regions are used to train a sparse autoencoder and obtain a collection of convolutional filters that accord with the statistical characteristics of the dataset. Finally, a DCNN with an Inception module and its variants is constructed; to improve the recognition accuracy, the depth of the network is increased. The experimental results indicate that the constructed DCNN increases the recognition accuracy by 2.7% compared to the TCNN when recognizing down in images, and that the convergence rate of the proposed DCNN with the new weight initialization method is improved by 25.5% compared to the TCNN. Keywords: Deep convolutional neural network, Weight initialization, Sparse autoencoder, Visual saliency model, Image recognition

  20. Deep radio synthesis images of globular clusters

    International Nuclear Information System (INIS)

    Kulkarni, S.R.; Goss, W.M.; Wolszczan, A.; Middleditch, J.

    1990-01-01

    Results are reported from a program of high-resolution and high-sensitivity imaging of globular clusters at 20 cm. The findings indicate that there is not a large number of pulsars in compact binaries which have escaped detection in single-dish pulse searches. Such binaries have been postulated to result from tidal captures of single main-sequence stars. It is suggested that most tidal captures involving neutron stars ultimately result in the formation of a spun-up single pulsar and the complete disruption of the main-sequence star. 27 refs

  1. Efficient generation of image chips for training deep learning algorithms

    Science.gov (United States)

    Han, Sanghui; Fafard, Alex; Kerekes, John; Gartley, Michael; Ientilucci, Emmett; Savakis, Andreas; Law, Charles; Parhan, Jason; Turek, Matt; Fieldhouse, Keith; Rovito, Todd

    2017-05-01

    Training deep convolutional networks for satellite or aerial image analysis often requires a large amount of training data. For a more robust algorithm, training data need to have variations not only in the background and target, but also radiometric variations in the image such as shadowing, illumination changes, atmospheric conditions, and imaging platforms with different collection geometry. Data augmentation is a commonly used approach to generating additional training data. However, this approach is often insufficient in accounting for real world changes in lighting, location or viewpoint outside of the collection geometry. Alternatively, image simulation can be an efficient way to augment training data that incorporates all these variations, such as changing backgrounds, that may be encountered in real data. The Digital Imaging and Remote Sensing Image Generation (DIRSIG) model is a tool that produces synthetic imagery using a suite of physics-based radiation propagation modules. DIRSIG can simulate images taken from different sensors with variation in collection geometry, spectral response, solar elevation and angle, atmospheric models, target, and background. Simulation of Urban Mobility (SUMO) is a multi-modal traffic simulation tool that explicitly models vehicles that move through a given road network. The output of the SUMO model was incorporated into DIRSIG to generate scenes with moving vehicles. The same approach was used when using helicopters as targets, but with slight modifications. Using the combination of DIRSIG and SUMO, we quickly generated many small images, with the target at the center with different backgrounds. The simulations generated images with vehicles and helicopters as targets, and corresponding images without targets. Using parallel computing, 120,000 training images were generated in about an hour. Some preliminary results show an improvement in the deep learning algorithm when real image training data are augmented with

  2. Study of CT image texture using deep learning techniques

    Science.gov (United States)

    Dutta, Sandeep; Fan, Jiahua; Chevalier, David

    2018-03-01

    For CT imaging, reduction of radiation dose while improving or maintaining image quality (IQ) is currently a very active research and development topic. Iterative Reconstruction (IR) approaches have been suggested to offer a better IQ-to-dose ratio than conventional Filtered Back Projection (FBP) reconstruction. However, it has been widely reported that CT image texture from IR often differs from that of FBP. Researchers have proposed different figures of merit to quantify the texture from different reconstruction methods, but the field still lacks a practical and robust method for texture description. This work applied deep learning to the study of CT image texture. Multiple dose scans of a 20 cm diameter cylindrical water phantom were performed on a Revolution CT scanner (GE Healthcare, Waukesha) and the images were reconstructed with FBP and four different IR reconstruction settings. The training images generated were randomly allotted (80:20) to a training and validation set. An independent test set of 256-512 images/class was collected with the same scan and reconstruction settings. Multiple deep learning (DL) networks with convolution, ReLU activation, max-pooling, fully-connected, global average pooling, and softmax activation layers were investigated. The impact of different image patch sizes for training was investigated. Original pixel data as well as normalized image data were evaluated. DL models were reliably able to classify CT image texture with accuracy up to 99%. The results suggest that CT IR techniques may help lower the radiation dose compared to FBP.

  3. Improving face image extraction by using deep learning technique

    Science.gov (United States)

    Xue, Zhiyun; Antani, Sameer; Long, L. R.; Demner-Fushman, Dina; Thoma, George R.

    2016-03-01

    The National Library of Medicine (NLM) has made a collection of over 1.2 million research articles containing 3.2 million figure images searchable using the Open-i multimodal (text+image) search engine. Many images are visible light photographs, some of which are images containing faces ("face images"). Some of these face images are acquired in unconstrained settings, while others are studio photos. To extract the face regions in the images, we first applied one of the most widely used face detectors, a pre-trained Viola-Jones detector implemented in Matlab and OpenCV. The Viola-Jones detector was trained for unconstrained face image detection, but the results for the NLM database included many false positives, which resulted in a very low precision. To improve this performance, we applied a deep learning technique, which reduced the number of false positives and, as a result, significantly improved the detection precision. (For example, the classification accuracy for identifying whether the face regions output by this Viola-Jones detector are true positives or not in a test set is about 96%.) By combining these two techniques (Viola-Jones and deep learning) we were able to increase the system precision considerably, while avoiding the need to construct a large training set by manual delineation of the face regions.
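
    A minimal OpenCV sketch of the two-stage idea: the fast Viola-Jones cascade proposes candidate boxes, and a deep verifier (represented here by a placeholder callable, cnn_is_face) rejects the false positives:

    ```python
    import cv2

    def detect_faces(image_bgr, cnn_is_face):
        gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        candidates = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        # Keep only the candidate boxes that the CNN verifier accepts.
        return [(x, y, w, h) for (x, y, w, h) in candidates
                if cnn_is_face(image_bgr[y:y + h, x:x + w])]
    ```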

  4. Assessing microscope image focus quality with deep learning.

    Science.gov (United States)

    Yang, Samuel J; Berndl, Marc; Michael Ando, D; Barch, Mariya; Narayanaswamy, Arunachalam; Christiansen, Eric; Hoyer, Stephan; Roat, Chris; Hung, Jane; Rueden, Curtis T; Shankar, Asim; Finkbeiner, Steven; Nelson, Philip

    2018-03-15

    Large image datasets acquired on automated microscopes typically have some fraction of low-quality, out-of-focus images, despite the use of hardware autofocus systems. Identifying these images with high accuracy using automated image analysis is important for obtaining a clean, unbiased image dataset. Complicating this task is the fact that image focus quality is only well-defined in foreground regions of images; as a result, most previous approaches only enable computation of the relative difference in quality between two or more images, rather than an absolute measure of quality. We present a deep neural network model capable of predicting an absolute measure of image focus on a single image in isolation, without any user-specified parameters. The model operates at the image-patch level and also outputs a measure of prediction certainty, enabling interpretable predictions. The model was trained on only 384 in-focus Hoechst (nuclei) stain images of U2OS cells, which were synthetically defocused to one of 11 absolute defocus levels during training. The trained model generalizes to previously unseen real Hoechst stain images, identifying the absolute image focus to within one defocus level (approximately 3 pixel blur diameter difference) with 95% accuracy. On a simpler binary in/out-of-focus classification task, the trained model outperforms previous approaches on both Hoechst and Phalloidin (actin) stain images (F-scores of 0.89 and 0.86, respectively, over 0.84 and 0.83), despite having been presented with only Hoechst stain images during training. Lastly, we observe qualitatively that the model generalizes to two additional stains, Hoechst and Tubulin, of an unseen cell type (Human MCF-7) acquired on a different instrument. Our deep neural network enables classification of out-of-focus microscope images with both higher accuracy and greater precision than previous approaches via interpretable patch-level focus and certainty predictions. The use of
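
    A minimal sketch of turning patch-level predictions into a whole-image focus estimate, weighting each patch by the model's certainty as the abstract describes. The per-patch outputs are assumed: an 11-way distribution over defocus levels plus a scalar certainty; the aggregation rule is illustrative, not necessarily the paper's.

    ```python
    # Sketch only: certainty-weighted aggregation of patch focus predictions.
    import numpy as np

    def aggregate_focus(patch_probs, patch_certainty):
        """patch_probs: (n_patches, 11) softmax over defocus levels.
        patch_certainty: (n_patches,) per-patch confidence weights."""
        levels = np.arange(11)
        per_patch = patch_probs @ levels        # expected defocus per patch
        w = patch_certainty / patch_certainty.sum()
        return float(per_patch @ w)             # certainty-weighted image score
    ```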

  5. Deep learning for tumor classification in imaging mass spectrometry.

    Science.gov (United States)

    Behrmann, Jens; Etmann, Christian; Boskamp, Tobias; Casadonte, Rita; Kriegsmann, Jörg; Maaß, Peter

    2018-04-01

    Tumor classification using imaging mass spectrometry (IMS) data has high potential for future applications in pathology. Due to the complexity and size of the data, automated feature extraction and classification steps are required to fully process it. Since mass spectra exhibit certain structural similarities to image data, deep learning may offer a promising strategy for classification of IMS data, as it has been successfully applied to image classification. Methodologically, we propose an adapted architecture based on deep convolutional networks to handle the characteristics of mass spectrometry data, as well as a strategy to interpret the learned model in the spectral domain based on a sensitivity analysis. The proposed methods are evaluated on two algorithmically challenging tumor classification tasks and compared to a baseline approach. Competitiveness of the proposed methods is shown on both tasks by studying the performance via cross-validation. Moreover, the learned models are analyzed by the proposed sensitivity analysis, revealing biologically plausible effects as well as confounding factors of the considered tasks. Thus, this study may serve as a starting point for further development of deep learning approaches in IMS classification tasks. https://gitlab.informatik.uni-bremen.de/digipath/Deep_Learning_for_Tumor_Classification_in_IMS. jbehrmann@uni-bremen.de or christianetmann@uni-bremen.de. Supplementary data are available at Bioinformatics online.

  6. Opto-ultrasound imaging in vivo in deep tissue

    International Nuclear Information System (INIS)

    Si, Ke; Xu, Yan; Zheng, Yao; Zhu, Xinpei; Gong, Wei

    2016-01-01

    Deep-tissue imaging with high resolution in vivo is of key importance. Here we present an opto-ultrasound imaging method that utilizes ultrasound to confine the laser pulse to a very tiny spot as a guide star. The results show an imaging depth of 2 mm with a resolution of 10 μm. Meanwhile, the excitation power we used is less than 2 mW, which indicates that our method can be applied in vivo without phototoxicity or photobleaching from the excitation power. (paper)

  7. Visualizing deep neural network by alternately image blurring and deblurring.

    Science.gov (United States)

    Wang, Feng; Liu, Haijun; Cheng, Jian

    2018-01-01

    Visualization of trained deep neural networks has drawn massive public attention in recent years. One visualization approach is to train images that maximize the activation of specific neurons. However, directly maximizing the activation leads to unrecognizable images, which cannot provide any meaningful information. In this paper, we introduce a simple but effective technique to constrain the optimization route of the visualization. By adding two mutually inverse transformations, image blurring and deblurring, to the optimization procedure, recognizable images can be created. Our algorithm is good at extracting the details in the images, which are usually filtered out by previous visualization methods. Extensive experiments on AlexNet, VGGNet and GoogLeNet illustrate that we can better understand neural networks by utilizing the knowledge obtained through visualization. Copyright © 2017 Elsevier Ltd. All rights reserved.
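
    A minimal sketch of activation maximization with a periodic blur step in the optimization loop, in the spirit of the blur/deblur constraint described above (the paper's exact alternating scheme may differ). The layer, unit index, and hyperparameters are illustrative.

    ```python
    # Sketch only: optimize an input image to maximize one conv unit's
    # activation, smoothing periodically to keep the image recognizable.
    import torch
    import torchvision
    from torchvision.transforms import functional as TF

    model = torchvision.models.alexnet(weights="DEFAULT").eval()
    img = torch.randn(1, 3, 224, 224, requires_grad=True)
    opt = torch.optim.Adam([img], lr=0.05)

    for step in range(200):
        opt.zero_grad()
        act = model.features(img).mean(dim=(2, 3))[0, 42]  # unit 42 (arbitrary)
        (-act).backward()                                  # maximize activation
        opt.step()
        if step % 4 == 0:                                  # periodic smoothing
            with torch.no_grad():
                img.copy_(TF.gaussian_blur(img, kernel_size=5))
    ```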

  8. Magnetic resonance imaging in deep pelvic endometriosis: iconographic essay

    International Nuclear Information System (INIS)

    Coutinho Junior, Antonio Carlos; Coutinho, Elisa Pompeu Dias; Lima, Claudio Marcio Amaral de Oliveira; Ribeiro, Erica Barreiros; Aidar, Marisa Nassar; Gasparetto, Emerson Leandro

    2008-01-01

    Endometriosis is characterized by the presence of normal endometrial tissue outside the uterine cavity. In patients with deep pelvic endometriosis, the uterosacral ligaments, rectum, rectovaginal septum, vagina or bladder may be involved. Clinical manifestations may be variable, including pelvic pain, dysmenorrhea, dyspareunia, urinary symptoms and infertility. Complete surgical excision is the gold standard for treating this disease, hence the importance of the preoperative work-up, which is usually limited to an evaluation of sonographic and clinical data. Magnetic resonance imaging is of paramount importance in the diagnosis of endometriosis, considering its high accuracy in identifying lesions intermingled with adhesions and in determining the extent of peritoneal lesions. The present pictorial review describes the main magnetic resonance imaging findings in deep pelvic endometriosis. (author)

  9. Magnetic resonance imaging in deep pelvic endometriosis: iconographic essay

    Energy Technology Data Exchange (ETDEWEB)

    Coutinho Junior, Antonio Carlos; Coutinho, Elisa Pompeu Dias; Lima, Claudio Marcio Amaral de Oliveira; Ribeiro, Erica Barreiros; Aidar, Marisa Nassar [Clinica de Diagnostico por Imagem (CDPI), Rio de Janeiro, RJ (Brazil); Clinica Multi-Imagem, Rio de Janeiro, RJ (Brazil); E-mail: cmaol@br.inter.net; Gasparetto, Emerson Leandro [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Dept. de Radiologia

    2008-03-15

    Endometriosis is characterized by the presence of normal endometrial tissue outside the uterine cavity. In patients with deep pelvic endometriosis, the uterosacral ligaments, rectum, rectovaginal septum, vagina or bladder may be involved. Clinical manifestations may be variable, including pelvic pain, dysmenorrhea, dyspareunia, urinary symptoms and infertility. Complete surgical excision is the gold standard for treating this disease, hence the importance of the preoperative work-up, which is usually limited to an evaluation of sonographic and clinical data. Magnetic resonance imaging is of paramount importance in the diagnosis of endometriosis, considering its high accuracy in identifying lesions intermingled with adhesions and in determining the extent of peritoneal lesions. The present pictorial review describes the main magnetic resonance imaging findings in deep pelvic endometriosis. (author)

  10. Deep UV Native Fluorescence Imaging of Antarctic Cryptoendolithic Communities

    Science.gov (United States)

    Storrie-Lombardi, M. C.; Douglas, S.; Sun, H.; McDonald, G. D.; Bhartia, R.; Nealson, K. H.; Hug, W. F.

    2001-01-01

    An interdisciplinary team at the Jet Propulsion Laboratory Center for Life Detection has embarked on a project to provide in situ chemical and morphological characterization of Antarctic cryptoendolithic microbial communities. We present here in situ deep ultraviolet (UV) native fluorescence and environmental scanning electron microscopy images transecting 8.5 mm into a sandstone sample from the Antarctic Dry Valleys. The deep ultraviolet imaging system employs 224.3, 248.6, and 325 nm lasers to elicit differential fluorescence and resonance Raman responses from biomolecules and minerals. The 224.3 and 248.6 nm lasers elicit a fluorescence response from the aromatic amino acids and nucleic acids. Excitation at 325 nm may elicit activity from a variety of biomolecules, but is more likely to elicit mineral fluorescence. The resultant fluorescence images provide in situ chemical and morphological maps of microorganisms and the associated organic matrix. Visible broadband reflectance images provide orientation against the mineral background. Environmental scanning electron micrographs provide detailed morphological information. The technique has made possible the construction of detailed fluorescence maps extending from the surface of an Antarctic sandstone sample to a depth of 8.5 mm. The images show no evidence of microbial life in the superficial 0.2 mm crustal layer. The black lichen component between 0.3 and 0.5 mm deep absorbs all wavelengths of both laser and broadband illumination. Filamentous deep ultraviolet native fluorescence activity dominates in the white layer between 0.6 mm and 5.0 mm from the surface. These filamentous forms are fungi that continue into the red (iron-rich) region of the sample extending from 5.0 to 8.5 mm. Using differential image subtraction techniques it is possible to identify fungal nuclei. The ultraviolet response is markedly attenuated in this region, apparently from the absorption of ultraviolet light by iron-rich particles coating

  11. Deep Joint Rain Detection and Removal from a Single Image

    OpenAIRE

    Yang, Wenhan; Tan, Robby T.; Feng, Jiashi; Liu, Jiaying; Guo, Zongming; Yan, Shuicheng

    2016-01-01

    In this paper, we address a rain removal problem from a single image, even in the presence of heavy rain and rain streak accumulation. Our core ideas lie in the new rain image models and a novel deep learning architecture. We first modify an existing model comprising a rain streak layer and a background layer, by adding a binary map that locates rain streak regions. Second, we create a new model consisting of a component representing rain streak accumulation (where individual streaks cannot b...

  12. Deep Learning MR Imaging-based Attenuation Correction for PET/MR Imaging.

    Science.gov (United States)

    Liu, Fang; Jang, Hyungseok; Kijowski, Richard; Bradshaw, Tyler; McMillan, Alan B

    2018-02-01

    Purpose To develop and evaluate the feasibility of deep learning approaches for magnetic resonance (MR) imaging-based attenuation correction (AC) (termed deep MRAC) in brain positron emission tomography (PET)/MR imaging. Materials and Methods A PET/MR imaging AC pipeline was built by using a deep learning approach to generate pseudo computed tomographic (CT) scans from MR images. A deep convolutional auto-encoder network was trained to identify air, bone, and soft tissue in volumetric head MR images coregistered to CT data for training. A set of 30 retrospective three-dimensional T1-weighted head images was used to train the model, which was then evaluated in 10 patients by comparing the generated pseudo CT scan to an acquired CT scan. A prospective study of simultaneous PET/MR imaging was carried out in five subjects by using the proposed approach. Analysis of covariance and paired-sample t tests were used for statistical analysis to compare PET reconstruction errors with deep MRAC and two existing MR imaging-based AC approaches against CT-based AC. Results Deep MRAC provides an accurate pseudo CT scan with a mean Dice coefficient of 0.971 ± 0.005 for air, 0.936 ± 0.011 for soft tissue, and 0.803 ± 0.021 for bone. Furthermore, deep MRAC provides good PET results, with average errors of less than 1% in most brain regions. Significantly lower PET reconstruction errors were realized with deep MRAC (-0.7% ± 1.1) compared with Dixon-based soft-tissue and air segmentation (-5.8% ± 3.1) and anatomic CT-based template registration (-4.8% ± 2.2). Conclusion The authors developed an automated approach that allows generation of discrete-valued pseudo CT scans (soft tissue, bone, and air) from a single high-spatial-resolution diagnostic-quality three-dimensional MR image and evaluated it in brain PET/MR imaging. This deep learning approach for MR imaging-based AC provided reduced PET reconstruction error relative to a CT-based standard within the brain compared

  13. Psoriasis skin biopsy image segmentation using Deep Convolutional Neural Network.

    Science.gov (United States)

    Pal, Anabik; Garain, Utpal; Chandra, Aditi; Chatterjee, Raghunath; Senapati, Swapan

    2018-06-01

    Development of machine-assisted tools for automatic analysis of psoriasis skin biopsy images plays an important role in clinical assistance. Developing an automatic approach for accurate segmentation of psoriasis skin biopsy images is the initial prerequisite for such a system. However, the complex cellular structure, the presence of imaging artifacts, and uneven staining variation make the task challenging. This paper presents a pioneering attempt at automatic segmentation of psoriasis skin biopsy images. Several deep neural architectures are tried for segmenting psoriasis skin biopsy images. Deep models are used to classify the super-pixels generated by Simple Linear Iterative Clustering (SLIC), and the segmentation performance of these architectures is compared with traditional hand-crafted-feature-based classifiers built on popular classifiers such as K-Nearest Neighbor (KNN), Support Vector Machine (SVM) and Random Forest (RF). A U-shaped Fully Convolutional Neural Network (FCN) is also used in an end-to-end learning fashion, where the input is the original color image and the output is the segmentation class map for the skin layers. An annotated data set of ninety (90) real psoriasis skin biopsy images was developed and used for this research. Segmentation performance is evaluated with two metrics: Jaccard's Coefficient (JC) and the Ratio of Correct Pixel Classification (RCPC) accuracy. The experimental results show that the CNN-based approaches outperform the traditional hand-crafted-feature-based classification approaches. The present research shows that a practical system can be developed for machine-assisted analysis of psoriasis disease. Copyright © 2018 Elsevier B.V. All rights reserved.
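
    A minimal sketch of the superpixel route described above: SLIC superpixels are extracted and each one is assigned a skin-layer class. The patch size and the `classify_patch` callable are placeholders for the paper's trained deep models.

    ```python
    # Sketch only: per-superpixel classification via SLIC and a patch classifier.
    import numpy as np
    from skimage.segmentation import slic

    def segment_by_superpixels(rgb, classify_patch, patch=32):
        """rgb: (H, W, 3) float image. classify_patch: crop -> class id."""
        labels = slic(rgb, n_segments=400, compactness=10)
        out = np.zeros(labels.shape, dtype=int)
        half = patch // 2
        for sp in np.unique(labels):
            ys, xs = np.nonzero(labels == sp)
            cy, cx = int(ys.mean()), int(xs.mean())   # superpixel centroid
            y0, x0 = max(cy - half, 0), max(cx - half, 0)
            crop = rgb[y0:y0 + patch, x0:x0 + patch]
            out[labels == sp] = classify_patch(crop)  # e.g. a trained CNN
        return out
    ```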

  14. Deep Learning in Nuclear Medicine and Molecular Imaging: Current Perspectives and Future Directions.

    Science.gov (United States)

    Choi, Hongyoon

    2018-04-01

    Recent advances in deep learning have impacted various scientific and industrial fields. With the rapid application of deep learning to biomedical data, molecular imaging has also started to adopt the technique. In this regard, deep learning is expected to affect the roles of molecular imaging experts as well as clinical decision making. This review first offers a basic overview of deep learning, particularly for image data analysis, to provide background for nuclear medicine physicians and researchers. Because of the unique characteristics and distinctive aims of the various types of molecular imaging, deep learning applications can differ from those in other fields. In this context, the review deals with current perspectives on deep learning in molecular imaging, particularly in terms of biomarker development. Finally, future challenges of applying deep learning to molecular imaging and the future roles of experts in molecular imaging are discussed.

  15. Quicksilver: Fast predictive image registration - A deep learning approach.

    Science.gov (United States)

    Yang, Xiao; Kwitt, Roland; Styner, Martin; Niethammer, Marc

    2017-09-01

    This paper introduces Quicksilver, a fast deformable image registration method. Quicksilver registration for image pairs works by patch-wise prediction of a deformation model based directly on image appearance. A deep encoder-decoder network is used as the prediction model. While the prediction strategy is general, we focus on predictions for the Large Deformation Diffeomorphic Metric Mapping (LDDMM) model. Specifically, we predict the momentum-parameterization of LDDMM, which facilitates a patch-wise prediction strategy while maintaining the theoretical properties of LDDMM, such as guaranteed diffeomorphic mappings for sufficiently strong regularization. We also provide a probabilistic version of our prediction network, which can be sampled at test time to estimate uncertainties in the predicted deformations. Finally, we introduce a new correction network which greatly increases the prediction accuracy of an already existing prediction network. We show experimental results for uni-modal atlas-to-image as well as uni-/multi-modal image-to-image registrations. These experiments demonstrate that our method accurately predicts registrations obtained by numerical optimization, is very fast, achieves state-of-the-art registration results on four standard validation datasets, and can jointly learn an image similarity measure. Quicksilver is freely available as open-source software. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. Constrained Deep Weak Supervision for Histopathology Image Segmentation.

    Science.gov (United States)

    Jia, Zhipeng; Huang, Xingyi; Chang, Eric I-Chao; Xu, Yan

    2017-11-01

    In this paper, we develop a new weakly supervised learning algorithm to learn to segment cancerous regions in histopathology images. This work operates under a multiple instance learning (MIL) framework with a new formulation, deep weak supervision (DWS); we also propose an effective way to introduce constraints to our neural networks to assist the learning process. The contributions of our algorithm are threefold: 1) we build an end-to-end learning system that segments cancerous regions with fully convolutional networks (FCNs) in which image-to-image weakly-supervised learning is performed; 2) we develop a DWS formulation to exploit multi-scale learning under weak supervision within FCNs; and 3) constraints about positive instances are introduced in our approach to effectively explore additional weakly supervised information that is easy to obtain, providing a significant boost to the learning process. The proposed algorithm, abbreviated as DWS-MIL, is easy to implement and can be trained efficiently. Our system demonstrates state-of-the-art results on large-scale histopathology image data sets and can be applied to various applications in medical imaging beyond histopathology images, such as MRI, CT, and ultrasound images.

  17. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs.

    Science.gov (United States)

    Chen, Liang-Chieh; Papandreou, George; Kokkinos, Iasonas; Murphy, Kevin; Yuille, Alan L

    2018-04-01

    In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First, we highlight convolution with upsampled filters, or 'atrous convolution', as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields of view, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but takes a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed "DeepLab" system sets the new state-of-the-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU on the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.
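
    A minimal sketch of atrous (dilated) convolution and an ASPP-style head in PyTorch: the dilation rate enlarges the receptive field without extra parameters, and parallel rates probe multiple scales. The rates and channel widths below are illustrative, not DeepLab's exact configuration.

    ```python
    # Sketch only: parallel dilated convolutions fused by a 1x1 projection.
    import torch
    from torch import nn

    class ASPP(nn.Module):
        def __init__(self, in_ch=256, out_ch=256, rates=(6, 12, 18)):
            super().__init__()
            self.branches = nn.ModuleList([
                nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r)
                for r in rates
            ])
            self.project = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

        def forward(self, x):
            # Every branch keeps the spatial size because padding == dilation.
            return self.project(torch.cat([b(x) for b in self.branches], dim=1))

    y = ASPP()(torch.randn(1, 256, 33, 33))   # -> (1, 256, 33, 33)
    ```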

  18. Image annotation by deep neural networks with attention shaping

    Science.gov (United States)

    Zheng, Kexin; Lv, Shaohe; Ma, Fang; Chen, Fei; Jin, Chi; Dou, Yong

    2017-07-01

    Image annotation is the task of assigning semantic labels to an image. Recently, deep neural networks with visual attention have been utilized successfully in many computer vision tasks. In this paper, we show that the conventional attention mechanism is easily misled by the salient class, i.e., the attended region always contains part of the image area describing the salient class at different attention iterations. To this end, we propose a novel attention shaping mechanism, which aims to maximize the non-overlapping area between consecutive attention processes by taking into account the history of previous attention vectors. Several weighting policies are studied to utilize the history information in different manners. On two benchmark datasets, PASCAL VOC2012 and MIRFlickr-25k, the average precision is improved by up to 10% in comparison with state-of-the-art annotation methods.
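
    A minimal sketch of an overlap penalty between consecutive attention maps, in the spirit of the shaping mechanism described above: penalizing the dot product with a decayed history of attention vectors pushes each step toward previously unattended regions. The exact weighting policies studied in the paper may differ.

    ```python
    # Sketch only: penalize overlap between each attention step and its history.
    import torch

    def attention_overlap_penalty(attn_steps, decay=0.5):
        """attn_steps: list of (batch, locations) attention distributions."""
        penalty = attn_steps[0].new_zeros(())
        history = torch.zeros_like(attn_steps[0])
        for a in attn_steps:
            penalty = penalty + (a * history).sum(dim=1).mean()  # overlap term
            history = decay * history + a                        # decayed history
        return penalty
    ```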

  19. Deep convolutional networks for pancreas segmentation in CT imaging

    Science.gov (United States)

    Roth, Holger R.; Farag, Amal; Lu, Le; Turkbey, Evrim B.; Summers, Ronald M.

    2015-03-01

    Automatic organ segmentation is an important prerequisite for many computer-aided diagnosis systems. The high anatomical variability of organs in the abdomen, such as the pancreas, prevents many segmentation methods from achieving accuracies as high as state-of-the-art segmentation of organs like the liver, heart or kidneys. Recently, the availability of large annotated training sets and the accessibility of affordable parallel computing resources via GPUs have made it feasible for "deep learning" methods such as convolutional networks (ConvNets) to succeed in image classification tasks. These methods have the advantage that the classification features used are trained directly from the imaging data. We present a fully automated bottom-up method for pancreas segmentation in computed tomography (CT) images of the abdomen. The method is based on hierarchical coarse-to-fine classification of local image regions (superpixels). Superpixels are extracted from the abdominal region using Simple Linear Iterative Clustering (SLIC). An initial probability response map is generated using patch-level confidences and a two-level cascade of random forest classifiers, from which superpixel regions with probabilities larger than 0.5 are retained. These retained superpixels serve as a highly sensitive initial input of the pancreas and its surroundings to a ConvNet that samples a bounding box around each superpixel at different scales (and random non-rigid deformations at training time) in order to assign a more distinct probability of each superpixel region being pancreas or not. We evaluate our method on CT images of 82 patients (60 for training, 2 for validation, and 20 for testing). Using ConvNets we achieve an average Dice score of 68% ± 10% (range, 43%-80%) in testing. This shows promise for accurate pancreas segmentation using a deep learning approach and compares favorably to state-of-the-art methods.

  20. Large-Scale Image Analytics Using Deep Learning

    Science.gov (United States)

    Ganguly, S.; Nemani, R. R.; Basu, S.; Mukhopadhyay, S.; Michaelis, A.; Votava, P.

    2014-12-01

    High-resolution land cover classification maps are needed to increase the accuracy of current land ecosystem and climate model outputs. Few studies demonstrate the state-of-the-art in deriving very high resolution (VHR) land cover products. In addition, most methods rely heavily on commercial software that is difficult to scale given the region of study (e.g., continents to globe). Complexities in present approaches relate to (a) scalability of the algorithm, (b) large image data processing (compute- and memory-intensive), (c) computational cost, (d) massively parallel architecture, and (e) machine learning automation. In addition, VHR satellite datasets are of the order of terabytes and features extracted from these datasets are of the order of petabytes. In our present study, we have acquired the National Agricultural Imaging Program (NAIP) dataset for the Continental United States at a spatial resolution of 1 m. These data come as image tiles (a quarter million image scenes in total, with ~60 million pixels each) and have a total size of ~100 terabytes for a single acquisition. Features extracted from the entire dataset would amount to ~8-10 petabytes. In our proposed approach, we have implemented a novel semi-automated machine learning algorithm rooted in the principles of "deep learning" to delineate the percentage of tree cover. In order to perform image analytics in such a granular system, it is mandatory to devise an intelligent archiving and query system for image retrieval, file structuring, metadata processing and filtering of all available image scenes. Using the Open NASA Earth Exchange (NEX) initiative, which is a partnership with Amazon Web Services (AWS), we have developed an end-to-end architecture for designing the database and the deep belief network (following the DistBelief computing model) to solve the grand challenge of scaling this process across the quarter million NAIP tiles that cover the entire Continental United States. The

  1. In vivo three-photon imaging of deep cerebellum

    Science.gov (United States)

    Wang, Mengran; Wang, Tianyu; Wu, Chunyan; Li, Bo; Ouzounov, Dimitre G.; Sinefeld, David; Guru, Akash; Nam, Hyung-Song; Capecchi, Mario R.; Warden, Melissa R.; Xu, Chris

    2018-02-01

    We demonstrate three-photon microscopy (3PM) of the mouse cerebellum at 1 mm depth by imaging both blood vessels and neurons. We compared 3PM and two-photon microscopy (2PM) in the mouse cerebellum for imaging green (using excitation sources at 1300 nm and 920 nm, respectively) and red fluorescence (using excitation sources at 1680 nm and 1064 nm, respectively). 3PM enabled deeper imaging than 2PM because the longer excitation wavelength reduces scattering in biological tissue and the higher-order nonlinear excitation provides better 3D localization. To illustrate these two advantages quantitatively, we measured the signal decay as well as the signal-to-background ratio (SBR) as a function of depth. We performed two-photon imaging from the brain surface all the way down to the depth where the SBR reaches 1, while at the same depth 3PM still has an SBR above 30. The segmented decay curve shows that the mouse cerebellum has different effective attenuation lengths at different depths, indicating heterogeneous tissue properties in this brain region. We compared the third-harmonic generation (THG) signal, which is used to visualize myelinated fibers, with the decay curve, and found that the regions with shorter effective attenuation lengths correspond to the regions with more fibers. Our results indicate that the widespread, non-uniformly distributed myelinated fibers add heterogeneity to the mouse cerebellum, which poses additional challenges for deep imaging of this brain region.

  2. Enhancing SDO/HMI images using deep learning

    Science.gov (United States)

    Baso, C. J. Díaz; Ramos, A. Asensio

    2018-06-01

    Context. The Helioseismic and Magnetic Imager (HMI) provides continuum images and magnetograms with a cadence better than one per minute. It has been continuously observing the Sun 24 h a day for the past 7 yr. The trade-off between full-disk observations and spatial resolution means that HMI is not adequate for analyzing the smallest-scale events in the solar atmosphere. Aims: Our aim is to develop a new method to enhance HMI data, simultaneously deconvolving and super-resolving images and magnetograms. The resulting images mimic observations with a diffraction-limited telescope twice the diameter of HMI. Methods: Our method, which we call Enhance, is based on two deep, fully convolutional neural networks that take patches of HMI observations as input and output deconvolved and super-resolved data. The neural networks are trained on synthetic data obtained from simulations of the emergence of solar active regions. Results: We have obtained deconvolved and super-resolved HMI images. To solve this ill-posed problem with infinitely many solutions, we used a neural network approach to add prior information from the simulations. We test Enhance against Hinode data degraded to a 28 cm diameter telescope, showing very good consistency. The code is open source.

  3. Automatic tissue image segmentation based on image processing and deep learning

    Science.gov (United States)

    Kong, Zhenglun; Luo, Junyi; Xu, Shengpu; Li, Ting

    2018-02-01

    Image segmentation plays an important role in multimodality imaging, especially in fusing structural images offered by CT and MRI with functional images collected by optical technologies or other novel imaging technologies. Image segmentation also provides a detailed structure description for quantitative visualization of treatment light distribution in the human body when incorporated with a 3D light transport simulation method. Here we used image enhancement, image operators, and morphometry methods to extract accurate contours of different tissues such as skull, cerebrospinal fluid (CSF), grey matter (GM) and white matter (WM) in 5 fMRI head image datasets. We then utilized a convolutional neural network to realize automatic segmentation of the images in a deep learning fashion. We also introduced parallel computing. These approaches greatly reduced processing time compared to manual and semi-automatic segmentation, which is of great importance for improving speed and accuracy as more and more samples are learned. Our results can be used as criteria when diagnosing diseases such as cerebral atrophy, which is caused by pathological changes in gray matter or white matter. We demonstrated the great potential of such combined image processing and deep learning for automatic tissue image segmentation in personalized medicine, especially in monitoring and treatment.

  4. Using Deep Learning Model for Meteorological Satellite Cloud Image Prediction

    Science.gov (United States)

    Su, X.

    2017-12-01

    A satellite cloud image contains much weather information, such as precipitation information. Short-term cloud movement forecasting is important for precipitation forecasting and is the primary means of typhoon monitoring. Traditional methods mostly use cloud feature matching and linear extrapolation to predict cloud movement, which means that nonstationary processes during cloud motion, such as inversion and deformation, are essentially not considered. It is still a hard task to predict cloud movement in a timely and correct manner. As deep learning models perform well in learning spatiotemporal features, we can regard cloud image prediction as a spatiotemporal sequence forecasting problem and introduce a deep learning model to solve it. In this research, we use a variant of the Gated Recurrent Unit (GRU) with convolutional structures to deal with spatiotemporal features and build an end-to-end model to solve this forecasting problem. In this model, both the input and output are spatiotemporal sequences. Compared to the Convolutional LSTM (ConvLSTM) model, this model has fewer parameters. We apply this model to GOES satellite data and it performs well.
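
    A minimal sketch of a convolutional GRU cell of the kind described above: the GRU's matrix multiplications are replaced by convolutions so the hidden state keeps its spatial layout across time steps. Channel counts and kernel size are illustrative, not the paper's configuration.

    ```python
    # Sketch only: a ConvGRU cell stepped over a short image sequence.
    import torch
    from torch import nn

    class ConvGRUCell(nn.Module):
        def __init__(self, in_ch, hid_ch, k=3):
            super().__init__()
            p = k // 2
            self.gates = nn.Conv2d(in_ch + hid_ch, 2 * hid_ch, k, padding=p)
            self.cand = nn.Conv2d(in_ch + hid_ch, hid_ch, k, padding=p)

        def forward(self, x, h):
            zr = torch.sigmoid(self.gates(torch.cat([x, h], dim=1)))
            z, r = torch.chunk(zr, 2, dim=1)                 # update/reset gates
            h_tilde = torch.tanh(self.cand(torch.cat([x, r * h], dim=1)))
            return (1 - z) * h + z * h_tilde                 # gated state update

    cell = ConvGRUCell(1, 16)
    h = torch.zeros(1, 16, 64, 64)
    for frame in torch.randn(10, 1, 1, 64, 64):  # 10-step cloud-image sequence
        h = cell(frame, h)
    ```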

  5. Lyman Break Galaxies in the Hubble Ultra Deep Field through Deep U-Band Imaging

    Science.gov (United States)

    Rafelski, Marc; Wolfe, A. M.; Cooke, J.; Chen, H. W.; Armandroff, T. E.; Wirth, G. D.

    2009-12-01

    We introduce an extremely deep U-band image taken of the Hubble Ultra Deep Field (HUDF), with a one sigma depth of 30.7 mag arcsec-2 and a detection limiting magnitude of 28 mag arcsec-2. The observations were carried out on the Keck I telescope using the LRIS-B detector. The U-band image substantially improves the accuracy of photometric redshift measurements of faint galaxies in the HUDF at z=[2.5,3.5]. The U-band for these galaxies is attenuated by Lyman limit absorption, allowing for a more reliable selection of candidate Lyman Break Galaxies (LBGs) than from photometric redshifts without U-band. We present a reliable sample of 300 LBGs at z=[2.5,3.5] in the HUDF. Accurate redshifts of faint galaxies at z=[2.5,3.5] are needed to obtain empirical constraints on the star formation efficiency of neutral gas at high redshift. Wolfe & Chen (2006) showed that the star formation rate (SFR) density in damped Ly-alpha absorption systems (DLAs) at z=[2.5,3.5] is significantly lower than predicted by the Kennicutt-Schmidt law for nearby galaxies. One caveat to this result that we wish to test is whether LBGs are embedded in DLAs. If in-situ star formation is occurring in DLAs, we would see it as extended low surface brightness emission around LBGs. We shall use the more accurate photometric redshifts to create a sample of LBGs around which we will look for extended emission in the more sensitive and higher resolution HUDF images. The absence of extended emission would put limits on the SFR density of DLAs associated with LBGs at high redshift. On the other hand, detection of faint emission on scales large compared to the bright LBG cores would indicate the presence of in situ star formation in those DLAs. Such gas would presumably fuel the higher star formation rates present in the LBG cores.

  6. Classification of CT brain images based on deep learning networks.

    Science.gov (United States)

    Gao, Xiaohong W; Hui, Rui; Tian, Zengmin

    2017-01-01

    While computerised tomography (CT) may have been the first imaging tool for studying the human brain, it has not yet been implemented in the clinical decision-making process for diagnosis of Alzheimer's disease (AD). On the other hand, being prevalent, inexpensive and non-invasive, CT does present diagnostic features of AD to a great extent. This study explores the significance and impact of applying the burgeoning deep learning techniques to the task of classifying CT brain images, in particular utilising a convolutional neural network (CNN), aiming at providing supplementary information for the early diagnosis of Alzheimer's disease. Towards this end, CT images (N = 285) are clustered into three groups: AD, lesion (e.g. tumour) and normal ageing. In addition, considering the large slice thickness of this collection along the depth (z) direction (~3-5 mm), an advanced CNN architecture is established integrating both 2D and 3D CNN networks. The fusion of the two networks is coordinated based on the average of the softmax scores obtained from both, consolidating 2D images along the spatial axial direction and 3D segmented blocks respectively. As a result, the classification accuracy rates rendered by this elaborated CNN architecture are 85.2%, 80% and 95.3% for the AD, lesion and normal classes respectively, with an average of 87.6%. Additionally, this improved CNN network appears to outperform the 2D-only version of the CNN network as well as a number of state-of-the-art hand-crafted approaches, which deliver accuracy rates (in percent) of 86.3, 85.6 ± 1.10, 86.3 ± 1.04, 85.2 ± 1.60 and 83.1 ± 0.35 for 2D CNN, 2D SIFT, 2D KAZE, 3D SIFT and 3D KAZE respectively. The two major contributions of the paper constitute a new 3-D approach while applying deep learning technique to extract signature information
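
    A minimal sketch of the score-level fusion described above: the 2D network's softmax scores (averaged over axial slices) are averaged with the 3D network's scores (averaged over segmented blocks). The array shapes are assumptions.

    ```python
    # Sketch only: average-of-softmax fusion of 2D and 3D network outputs.
    import numpy as np

    def fuse_predictions(probs_2d_slices, probs_3d_blocks):
        """probs_2d_slices: (n_slices, 3), probs_3d_blocks: (n_blocks, 3)
        softmax scores for the classes (AD, lesion, normal)."""
        p2d = probs_2d_slices.mean(axis=0)   # consolidate axial slices
        p3d = probs_3d_blocks.mean(axis=0)   # consolidate 3D blocks
        fused = (p2d + p3d) / 2.0
        return int(np.argmax(fused)), fused
    ```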

  7. Making beautiful deep-sky images astrophotography with affordable equipment and software

    CERN Document Server

    Parker, Greg

    2007-01-01

    This book is based around the author's beautiful and sometimes awe-inspiring color images and mosaics of deep-sky objects. The book describes how similar "Hubble class" images can be created by amateur astronomers in their back garden.

  8. Deep embedding convolutional neural network for synthesizing CT image from T1-Weighted MR image.

    Science.gov (United States)

    Xiang, Lei; Wang, Qian; Nie, Dong; Zhang, Lichi; Jin, Xiyao; Qiao, Yu; Shen, Dinggang

    2018-07-01

    Recently, increasing attention has been drawn to the field of medical image synthesis across modalities. Among these tasks, the synthesis of a computed tomography (CT) image from a T1-weighted magnetic resonance (MR) image is of great importance, although the mapping between them is highly complex due to the large appearance gap between the two modalities. In this work, we tackle this MR-to-CT synthesis task with a novel deep embedding convolutional neural network (DECNN). Specifically, we generate feature maps from MR images and transform these feature maps forward through the convolutional layers of the network. We can further compute a tentative CT synthesis midway through the flow of feature maps, and then embed this tentative CT synthesis result back into the feature maps. This embedding operation yields better feature maps, which are further transformed forward in the DECNN. After repeating this embedding procedure several times in the network, we eventually synthesize the final CT image at the end of the DECNN. We have validated our proposed method on both brain and prostate imaging datasets, also comparing with state-of-the-art methods. Experimental results suggest that our DECNN (with repeated embedding operations) demonstrates superior performance, in terms of both the perceptive quality of the synthesized CT image and the run-time cost of synthesizing a CT image. Copyright © 2018. Published by Elsevier B.V.
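
    A minimal sketch of the embedding idea described above: a tentative CT is synthesized midway through the network and concatenated back onto the feature maps before further convolutions. Channel counts are illustrative, not the paper's architecture.

    ```python
    # Sketch only: one embedding block; several would be chained in sequence.
    import torch
    from torch import nn

    class EmbeddingBlock(nn.Module):
        def __init__(self, ch=64):
            super().__init__()
            self.to_ct = nn.Conv2d(ch, 1, kernel_size=1)          # tentative CT
            self.refine = nn.Sequential(
                nn.Conv2d(ch + 1, ch, kernel_size=3, padding=1),  # embed it back
                nn.ReLU(inplace=True),
            )

        def forward(self, feats):
            tentative = self.to_ct(feats)
            feats = self.refine(torch.cat([feats, tentative], dim=1))
            return feats, tentative

    feats = torch.randn(1, 64, 128, 128)
    feats, ct_mid = EmbeddingBlock()(feats)  # repeat, then a final 1x1 conv
    ```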

  9. Application of Deep Learning in Automated Analysis of Molecular Images in Cancer: A Survey

    Science.gov (United States)

    Xue, Yong; Chen, Shihui; Liu, Yong

    2017-01-01

    Molecular imaging enables the visualization and quantitative analysis of alterations of biological processes at the molecular and/or cellular level, which is of great significance for early detection of cancer. In recent years, deep learning has been widely used in medical image analysis, as it overcomes the limitations of visual assessment and traditional machine learning techniques by extracting hierarchical features with powerful representation capability. Research on cancer molecular images using deep learning techniques is also increasing dynamically. Hence, in this paper, we review the applications of deep learning in molecular imaging in terms of tumor lesion segmentation, tumor classification, and survival prediction. We also outline some future directions in which researchers may develop more powerful deep learning models for better performance in cancer molecular imaging applications. PMID:29114182

  10. Intelligent Detection of Structure from Remote Sensing Images Based on Deep Learning Method

    Science.gov (United States)

    Xin, L.

    2018-04-01

    Utilizing high-resolution remote sensing images for Earth observation has become a common method of land use monitoring. Traditional image interpretation requires substantial human participation, is inefficient, and makes accuracy difficult to guarantee. At present, artificial intelligence methods such as deep learning have many advantages for image recognition. By means of a large number of remote sensing image samples and deep neural network models, we can rapidly extract objects of interest such as buildings. In terms of both efficiency and accuracy, deep learning methods are superior. This paper describes research on deep learning methods using a large number of remote sensing image samples and verifies the feasibility of building extraction via experiments.

  11. A deep imaging survey of the Pleiades with ROSAT

    Science.gov (United States)

    Stauffer, J. R.; Caillault, J.-P.; Gagne, M.; Prosser, C. F.; Hartmann, L. W.

    1994-01-01

    We have obtained deep ROSAT images of three regions within the Pleiades open cluster. We have detected 317 X-ray sources in these ROSAT Position Sensitive Proportional Counter (PSPC) images, 171 of which we associate with certain or probable members of the Pleiades cluster. We detect nearly all Pleiades members with spectral types later than G0 and within 25 arcminutes of our three field centers, where our sensitivity is highest. This has allowed us to derive for the first time the luminosity function for the G, K, and M dwarfs of an open cluster without the need to use statistical techniques to account for the presence of upper limits in the data sample. Because of our high X-ray detection frequency down to the faint limit of the optical catalog, we suspect that some of our unidentified X-ray sources are previously unknown, very low-mass members of the Pleiades. A large fraction of the Pleiades members detected with ROSAT have published rotational velocities. Plots of L(sub X)/L(sub Bol) versus spectroscopic rotational velocity show tightly correlated `saturation' type relations for stars with ((B - V)(sub 0)) greater than or equal to 0.60. For each of several color ranges, X-ray luminosities rise rapidly with increasing rotation rate until v sin i approximately equal to 15 km/sec, and then remain essentially flat for rotation rates up to at least v sin i approximately equal to 100 km/sec. The dispersion in rotation among low-mass stars in the Pleiades is by far the dominant contributor to the dispersion in L(sub X) at a given mass. Only about 35% of the B, A, and early F stars in the Pleiades are detected as X-ray sources in our survey. There is no correlation between X-ray flux and rotation for these stars. The X-ray luminosity function for the early-type Pleiades stars appears to be bimodal -- with only a few exceptions, we either detect these stars at fluxes in the range found for low-mass stars or we derive X-ray limits below the level found for most Pleiades

  12. Factoring variations in natural images with deep Gaussian mixture models

    OpenAIRE

    van den Oord, Aäron; Schrauwen, Benjamin

    2014-01-01

    Generative models can be seen as the Swiss army knives of machine learning, as many problems can be written probabilistically in terms of the distribution of the data, including prediction, reconstruction, imputation and simulation. One of the most promising directions for unsupervised learning may lie in Deep Learning methods, given their success in supervised learning. However, one of the current problems with deep unsupervised learning methods is that they are often harder to scale. As ...

  13. Deep Keck u-Band Imaging of the Hubble Ultra Deep Field: A Catalog of z ~ 3 Lyman Break Galaxies

    Science.gov (United States)

    Rafelski, Marc; Wolfe, Arthur M.; Cooke, Jeff; Chen, Hsiao-Wen; Armandroff, Taft E.; Wirth, Gregory D.

    2009-10-01

    We present a sample of 407 z ~ 3 Lyman break galaxies (LBGs) to a limiting isophotal u-band magnitude of 27.6 mag in the Hubble Ultra Deep Field. The LBGs are selected using a combination of photometric redshifts and the u-band drop-out technique enabled by the introduction of an extremely deep u-band image obtained with the Keck I telescope and the blue channel of the Low Resolution Imaging Spectrometer. The Keck u-band image, totaling 9 hr of integration time, has a 1σ depth of 30.7 mag arcsec-2, making it one of the most sensitive u-band images ever obtained. The u-band image also substantially improves the accuracy of photometric redshift measurements of ~50% of the z ~ 3 LBGs, significantly reducing the traditional degeneracy of colors between z ~ 3 and z ~ 0.2 galaxies. This sample provides the most sensitive, high-resolution multi-filter imaging of reliably identified z ~ 3 LBGs for morphological studies of galaxy formation and evolution and the star formation efficiency of gas at high redshift.
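
    For readers unfamiliar with drop-out selection, the sketch below shows the general shape of a u-band drop-out color cut. The numeric thresholds are purely illustrative placeholders, not the selection criteria actually used for this catalog.

    ```python
    # Sketch only: a toy u-band drop-out selection on AB magnitudes.
    def is_u_dropout(u, b, v, u_limit=27.6):
        """Undetected u-band sources enter at the limiting magnitude."""
        u = min(u, u_limit)              # cap at the u-band detection limit
        u_minus_b = u - b
        b_minus_v = b - v
        # Strong Lyman-limit break in u-b while still blue in b-v
        # (illustrative cuts only).
        return (u_minus_b > 1.0 and b_minus_v < 1.2
                and u_minus_b > b_minus_v + 0.5)
    ```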

  14. A novel biomedical image indexing and retrieval system via deep preference learning.

    Science.gov (United States)

    Pang, Shuchao; Orgun, Mehmet A; Yu, Zhezhou

    2018-05-01

    Traditional biomedical image retrieval methods, as well as content-based image retrieval (CBIR) methods originally designed for non-biomedical images, either consider only pixel and low-level features to describe an image or use deep features but still leave much room for improving both accuracy and efficiency. In this work, we propose a new approach that exploits deep learning technology to extract high-level, compact features from biomedical images. The deep feature extraction process leverages multiple hidden layers to capture substantial feature structures of high-resolution images and represent them at different levels of abstraction, leading to improved performance for indexing and retrieval of biomedical images. We exploit the currently popular multi-layered deep neural networks, namely stacked denoising autoencoders (SDAE) and convolutional neural networks (CNN), to represent the discriminative features of biomedical images by transferring the feature representations and parameters of deep neural networks pre-trained on another domain. Moreover, in order to index all the images for finding similar reference images, we also introduce preference learning technology to train and learn a preference model for the query image, which can output a similarity ranking list of images from a biomedical image database. To the best of our knowledge, this paper introduces preference learning technology into biomedical image retrieval for the first time. We evaluate the performance of two powerful algorithms based on our proposed system and compare them with popular biomedical image indexing approaches and existing regular image retrieval methods in detailed experiments over several well-known public biomedical image databases. Based on different criteria for the evaluation of retrieval performance, experimental results demonstrate that our proposed algorithms outperform the state-of-the-art.
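
    A minimal sketch of the retrieval backbone: features are transferred from a CNN pre-trained in another domain and database images are ranked by similarity to the query. The paper's preference-learning ranker is replaced here by plain cosine similarity for brevity; the backbone choice is an assumption.

    ```python
    # Sketch only: transferred deep features + cosine-similarity ranking.
    import numpy as np
    import torch
    import torchvision

    backbone = torchvision.models.resnet18(weights="DEFAULT")
    backbone.fc = torch.nn.Identity()    # keep the 512-d penultimate features
    backbone.eval()

    @torch.no_grad()
    def embed(batch):                    # batch: (N, 3, 224, 224), normalized
        f = backbone(batch).numpy()
        return f / np.linalg.norm(f, axis=1, keepdims=True)

    def rank(query_vec, database_vecs):
        scores = database_vecs @ query_vec   # cosine similarity (unit vectors)
        return np.argsort(-scores)           # most similar images first
    ```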

  15. DIRECT IMAGING CONFIRMATION AND CHARACTERIZATION OF A DUST-ENSHROUDED CANDIDATE EXOPLANET ORBITING FOMALHAUT

    Energy Technology Data Exchange (ETDEWEB)

    Currie, Thayne [Department of Astronomy and Astrophysics, University of Toronto, Toronto, ON (Canada); Debes, John [Space Telescope Science Institute, Baltimore, MD (United States); Rodigas, Timothy J. [Steward Observatory, University of Arizona, Tucson, AZ (United States); Burrows, Adam [Department of Astrophysical Sciences, Princeton University, Princeton, NJ (United States); Itoh, Yoichi [Nishi-Harima Observatory, University of Hyogo, Kobe (Japan); Fukagawa, Misato [Department of Earth and Space Sciences, Osaka University, Osaka (Japan); Kenyon, Scott J. [Smithsonian Astrophysical Observatory, Cambridge, MA (United States); Kuchner, Marc [Stellar and Exoplanets Laboratory, NASA-Goddard Space Flight Center, Greenbelt, MD (United States); Matsumura, Soko, E-mail: currie@astro.utoronto.ca [Department of Astronomy, University of Maryland-College Park, College Park, MD (United States)

    2012-12-01

    We present Subaru/IRCS J-band data for Fomalhaut and a (re)reduction of archival 2004-2006 HST/ACS data first presented by Kalas et al. We confirm the existence of a candidate exoplanet, Fomalhaut b, in both the 2004 and 2006 F606W data sets at a high signal-to-noise ratio. Additionally, we confirm the detection at F814W and present a new detection in F435W. Fomalhaut b's space motion may be consistent with it being in an apsidally aligned, non-debris ring-crossing orbit, although new astrometry is required for firmer conclusions. We cannot confirm that Fomalhaut b exhibits 0.7-0.8 mag variability cited as evidence for planet accretion or a semi-transient dust cloud. The new, combined optical spectral energy distribution and IR upper limits confirm that emission identifying Fomalhaut b originates from starlight scattered by small dust, but this dust is most likely associated with a massive body. The Subaru and IRAC/4.5 {mu}m upper limits imply M < 2 M{sub J} , still consistent with the range of Fomalhaut b masses needed to sculpt the disk. Fomalhaut b is very plausibly 'a planet identified from direct imaging' even if current images of it do not, strictly speaking, show thermal emission from a directly imaged planet.

  16. DIRECT IMAGING CONFIRMATION AND CHARACTERIZATION OF A DUST-ENSHROUDED CANDIDATE EXOPLANET ORBITING FOMALHAUT

    International Nuclear Information System (INIS)

    Currie, Thayne; Debes, John; Rodigas, Timothy J.; Burrows, Adam; Itoh, Yoichi; Fukagawa, Misato; Kenyon, Scott J.; Kuchner, Marc; Matsumura, Soko

    2012-01-01

    We present Subaru/IRCS J-band data for Fomalhaut and a (re)reduction of archival 2004-2006 HST/ACS data first presented by Kalas et al. We confirm the existence of a candidate exoplanet, Fomalhaut b, in both the 2004 and 2006 F606W data sets at a high signal-to-noise ratio. Additionally, we confirm the detection at F814W and present a new detection in F435W. Fomalhaut b's space motion may be consistent with it being in an apsidally aligned, non-debris ring-crossing orbit, although new astrometry is required for firmer conclusions. We cannot confirm that Fomalhaut b exhibits 0.7-0.8 mag variability cited as evidence for planet accretion or a semi-transient dust cloud. The new, combined optical spectral energy distribution and IR upper limits confirm that emission identifying Fomalhaut b originates from starlight scattered by small dust, but this dust is most likely associated with a massive body. The Subaru and IRAC/4.5 μm upper limits imply M < 2 M_J, still consistent with the range of Fomalhaut b masses needed to sculpt the disk. Fomalhaut b is very plausibly 'a planet identified from direct imaging' even if current images of it do not, strictly speaking, show thermal emission from a directly imaged planet.

  17. Accelerating Spaceborne SAR Imaging Using Multiple CPU/GPU Deep Collaborative Computing

    Directory of Open Access Journals (Sweden)

    Fan Zhang

    2016-04-01

    Full Text Available With the development of synthetic aperture radar (SAR) technologies in recent years, the huge amount of remote sensing data brings challenges for real-time imaging processing. Therefore, high performance computing (HPC) methods have been presented to accelerate SAR imaging, especially GPU based methods. In the classical GPU based imaging algorithm, the GPU is employed to accelerate image processing by massively parallel computing, and the CPU is only used to perform auxiliary work such as data input/output (IO). However, the computing capability of the CPU is thereby ignored and underestimated. In this work, a new deep collaborative SAR imaging method based on multiple CPUs/GPUs is proposed to achieve real-time SAR imaging. Through the proposed task partitioning and scheduling strategy, the whole image can be generated with deeply collaborative multiple CPU/GPU computing. In the CPU parallel imaging part, the advanced vector extension (AVX) method is introduced into the multi-core CPU parallel method for the first time, for higher efficiency. As for GPU parallel imaging, not only are the bottlenecks of memory limitation and frequent data transfer broken, but various optimization strategies such as streaming and parallel pipelining are also applied. Experimental results demonstrate that the deep CPU/GPU collaborative imaging method enhances the efficiency of SAR imaging over a single-core CPU by 270 times and realizes real-time imaging in that the imaging rate outperforms the raw data generation rate.

  18. Accelerating Spaceborne SAR Imaging Using Multiple CPU/GPU Deep Collaborative Computing.

    Science.gov (United States)

    Zhang, Fan; Li, Guojun; Li, Wei; Hu, Wei; Hu, Yuxin

    2016-04-07

    With the development of synthetic aperture radar (SAR) technologies in recent years, the huge amount of remote sensing data brings challenges for real-time imaging processing. Therefore, high performance computing (HPC) methods have been presented to accelerate SAR imaging, especially GPU based methods. In the classical GPU based imaging algorithm, the GPU is employed to accelerate image processing by massively parallel computing, and the CPU is only used to perform auxiliary work such as data input/output (IO). However, the computing capability of the CPU is thereby ignored and underestimated. In this work, a new deep collaborative SAR imaging method based on multiple CPUs/GPUs is proposed to achieve real-time SAR imaging. Through the proposed task partitioning and scheduling strategy, the whole image can be generated with deeply collaborative multiple CPU/GPU computing. In the CPU parallel imaging part, the advanced vector extension (AVX) method is introduced into the multi-core CPU parallel method for the first time, for higher efficiency. As for GPU parallel imaging, not only are the bottlenecks of memory limitation and frequent data transfer broken, but various optimization strategies such as streaming and parallel pipelining are also applied. Experimental results demonstrate that the deep CPU/GPU collaborative imaging method enhances the efficiency of SAR imaging over a single-core CPU by 270 times and realizes real-time imaging in that the imaging rate outperforms the raw data generation rate.

  19. Application of deep learning to the classification of images from colposcopy.

    Science.gov (United States)

    Sato, Masakazu; Horie, Koji; Hara, Aki; Miyamoto, Yuichiro; Kurihara, Kazuko; Tomio, Kensuke; Yokota, Harushige

    2018-03-01

    The objective of the present study was to investigate whether deep learning could be successfully applied to the classification of images from colposcopy. For this purpose, a total of 158 patients who underwent conization were enrolled, and medical records and data from the gynecological oncology database were retrospectively reviewed. Deep learning was performed with the Keras neural network and TensorFlow libraries. Using preoperative colposcopy images as the input data, the patients were classified into three groups [severe dysplasia, carcinoma in situ (CIS) and invasive cancer (IC)]. A total of 485 images were obtained for the analysis, of which 142 were of severe dysplasia (2.9 images/patient), 257 of CIS (3.3 images/patient), and 86 of IC (4.1 images/patient). Of these, 233 images were captured with a green filter and the remaining 252 without. Following the application of L2 regularization, L1 regularization, dropout and data augmentation, the accuracy on the validation dataset was ~50%. Although the present study is preliminary, the results indicate that deep learning may be applied to classify colposcopy images.
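
    A minimal sketch of the regularization recipe the abstract lists (L2, L1, dropout, data augmentation) in Keras, which the study reports using. The architecture, input size, and hyperparameters are assumptions.

    ```python
    # Sketch only: a 3-class colposcopy classifier with the listed regularizers.
    from tensorflow import keras
    from tensorflow.keras import layers, regularizers

    model = keras.Sequential([
        layers.Input(shape=(128, 128, 3)),
        layers.RandomFlip("horizontal"),                  # data augmentation
        layers.RandomRotation(0.1),
        layers.Conv2D(32, 3, activation="relu",
                      kernel_regularizer=regularizers.l2(1e-4)),  # L2
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu",
                      kernel_regularizer=regularizers.l1(1e-5)),  # L1
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dropout(0.5),                              # dropout
        layers.Dense(3, activation="softmax"),            # dysplasia / CIS / IC
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    ```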

  20. NiftyNet: a deep-learning platform for medical imaging.

    Science.gov (United States)

    Gibson, Eli; Li, Wenqi; Sudre, Carole; Fidon, Lucas; Shakir, Dzhoshkun I; Wang, Guotai; Eaton-Rosen, Zach; Gray, Robert; Doel, Tom; Hu, Yipeng; Whyntie, Tom; Nachev, Parashkev; Modat, Marc; Barratt, Dean C; Ourselin, Sébastien; Cardoso, M Jorge; Vercauteren, Tom

    2018-05-01

    Medical image analysis and computer-assisted intervention problems are increasingly being addressed with deep-learning-based solutions. Established deep-learning platforms are flexible but do not provide specific functionality for medical image analysis, and adapting them for this domain of application requires substantial implementation effort. Consequently, there has been substantial duplication of effort and incompatible infrastructure developed across many research groups. This work presents the open-source NiftyNet platform for deep learning in medical imaging. The ambition of NiftyNet is to accelerate and simplify the development of these solutions, and to provide a common mechanism for disseminating research outputs for the community to use, adapt and build upon. The NiftyNet infrastructure provides a modular deep-learning pipeline for a range of medical imaging applications including segmentation, regression, image generation and representation learning. Components of the NiftyNet pipeline, including data loading, data augmentation, network architectures, loss functions and evaluation metrics, are tailored to, and take advantage of, the idiosyncrasies of medical image analysis and computer-assisted intervention. NiftyNet is built on the TensorFlow framework and supports features such as TensorBoard visualization of 2D and 3D images and computational graphs by default. We present three illustrative medical image analysis applications built using NiftyNet infrastructure: (1) segmentation of multiple abdominal organs from computed tomography; (2) image regression to predict computed tomography attenuation maps from brain magnetic resonance images; and (3) generation of simulated ultrasound images for specified anatomical poses. The NiftyNet infrastructure enables researchers to rapidly develop and distribute deep learning solutions for segmentation, regression, image generation and representation learning applications, or to extend the platform to new applications.
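
    As a schematic of the kind of modular pipeline described above (swappable loader, augmentation, network and loss components), here is a toy sketch in plain Python/TensorFlow; this is emphatically not NiftyNet's actual API, and the loader and shapes are placeholders.

```python
# Toy illustration of a modular medical-imaging pipeline: each stage is a
# pluggable component, mirroring the structure (not the API) of NiftyNet.
import tensorflow as tf

def load_volume(path):
    # Placeholder loader; a real pipeline would read NIfTI medical volumes.
    return tf.zeros((64, 64, 64, 1))

def augment(vol):
    # Toy spatial augmentation stage.
    return tf.image.random_flip_left_right(vol)

def dice_loss(y_true, y_pred, eps=1e-6):
    # Overlap-based loss commonly used for medical image segmentation.
    inter = tf.reduce_sum(y_true * y_pred)
    return 1.0 - (2.0 * inter + eps) / (
        tf.reduce_sum(y_true) + tf.reduce_sum(y_pred) + eps)

pipeline = {
    "loader": load_volume,
    "augmentation": augment,
    "network": tf.keras.Sequential([tf.keras.layers.Conv3D(2, 3, padding="same")]),
    "loss": dice_loss,
}
```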

  1. Blind CT image quality assessment via deep learning strategy: initial study

    Science.gov (United States)

    Li, Sui; He, Ji; Wang, Yongbo; Liao, Yuting; Zeng, Dong; Bian, Zhaoying; Ma, Jianhua

    2018-03-01

    Computed Tomography (CT) is one of the most important medical imaging modalities. CT images can be used to assist in the detection and diagnosis of lesions and to facilitate follow-up treatment. However, CT images are vulnerable to noise; there are two major sources intrinsically causing CT data noise, i.e., X-ray photon statistics and the electronic noise background. Therefore, it is necessary to perform image quality assessment (IQA) in CT imaging before diagnosis and treatment. Most existing CT image IQA methods are based on human observer studies, which are impractical in clinical settings because they are complex and time-consuming. In this paper, we present blind CT image quality assessment via a deep learning strategy. A database of 1500 CT images was constructed, containing 300 high-quality images and 1200 corresponding noisy images; specifically, the high-quality images were used to simulate the corresponding noisy images at four different doses. The images were then scored by experienced radiologists on the following attributes: image noise, artifacts, edge and structure, overall image quality, and tumor size and boundary estimation, with a five-point scale. We trained a network to learn the non-linear mapping from CT images to subjective evaluation scores, and then loaded the pre-trained model to yield predicted scores for test images. To demonstrate the performance of the deep learning network for IQA, two correlation coefficients are utilized: the Pearson Linear Correlation Coefficient (PLCC) and the Spearman Rank-Order Correlation Coefficient (SROCC). The experimental results demonstrate that the presented deep learning based IQA strategy can be used for CT image quality assessment.
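
    The two agreement metrics named above are standard and easy to reproduce; a minimal sketch (the scores below are made-up example values, not the study's data):

```python
# Computing PLCC and SROCC between predicted and subjective quality scores.
import numpy as np
from scipy.stats import pearsonr, spearmanr

subjective = np.array([4.8, 3.9, 2.1, 1.5, 3.2, 4.1])   # radiologist scores (1-5)
predicted  = np.array([4.6, 3.5, 2.4, 1.8, 3.0, 4.3])   # network outputs

plcc, _ = pearsonr(subjective, predicted)    # linear agreement
srocc, _ = spearmanr(subjective, predicted)  # rank-order agreement
print(f"PLCC={plcc:.3f}, SROCC={srocc:.3f}")
```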

  2. StegNet: Mega Image Steganography Capacity with Deep Convolutional Network

    Directory of Open Access Journals (Sweden)

    Pin Wu

    2018-06-01

    Full Text Available Traditional image steganography focuses on safely embedding hidden information into cover images, with payload capacity almost neglected. This paper combines recent deep convolutional neural network methods with image-into-image steganography. It successfully hides images of the same size as the cover, with a decoding rate of 98.2%, or 23.57 bits per pixel (bpp), while changing only 0.76% of the cover image on average. Our method directly learns end-to-end mappings between the cover image and the embedded image, and between the hidden image and the decoded image. We further show that our embedded image, while carrying a mega payload capacity, is still robust to statistical analysis.

  3. Deep Hashing Based Fusing Index Method for Large-Scale Image Retrieval

    Directory of Open Access Journals (Sweden)

    Lijuan Duan

    2017-01-01

    Full Text Available Hashing has been widely deployed to perform Approximate Nearest Neighbor (ANN) search for large-scale image retrieval, to solve the problem of storage and retrieval efficiency. Recently, deep hashing methods have been proposed to perform simultaneous feature learning and hash code learning with deep neural networks. Even though deep hashing has shown better performance than traditional hashing methods with handcrafted features, the compact hash code learned from a single deep hashing network may not provide a full representation of an image. In this paper, we propose a novel hashing indexing method, called the Deep Hashing based Fusing Index (DHFI), to generate a more compact hash code with stronger expression ability and distinction capability. In our method, we train two deep hashing subnetworks with different architectures and fuse the hash codes generated by the two subnetworks to represent images. Experiments on two real datasets show that our method can outperform state-of-the-art image retrieval applications.
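
    The retrieval step behind any such binary-hash index is ranking by Hamming distance; a minimal sketch (the random codes stand in for the fused codes produced by the two subnetworks, and the code length is an assumption):

```python
# Ranking database images by Hamming distance between binary hash codes.
import numpy as np

rng = np.random.default_rng(0)
db_codes = rng.integers(0, 2, size=(1000, 48), dtype=np.uint8)  # 48-bit codes
query = rng.integers(0, 2, size=48, dtype=np.uint8)

hamming = np.count_nonzero(db_codes != query, axis=1)  # per-image bit differences
nearest = np.argsort(hamming)[:10]                     # top-10 candidates
print(nearest, hamming[nearest])
```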

  4. Deep erosions of the palmar aspect of the navicular bone diagnosed by standing magnetic resonance imaging.

    Science.gov (United States)

    Sherlock, C; Mair, T; Blunden, T

    2008-11-01

    Erosion of the palmar (flexor) aspect of the navicular bone is difficult to diagnose with conventional imaging techniques. To review the clinical, magnetic resonance (MR) and pathological features of deep erosions of the palmar aspect of the navicular bone, cases of deep erosions diagnosed by standing low-field MR imaging were selected. Clinical details, results of diagnostic procedures, MR features and pathological findings were reviewed. Deep erosions of the palmar aspect of the navicular bone were diagnosed in 16 mature horses, 6 of which were bilaterally lame. Sudden onset of lameness was recorded in 63%. Radiography prior to MR imaging showed equivocal changes in 7 horses. The MR features consisted of focal areas of intermediate or high signal intensity on T1-, T2*- and T2-weighted images and STIR images, affecting the dorsal aspect of the deep digital flexor tendon, the fibrocartilage of the palmar aspect, and the subchondral compact bone and medulla of the navicular bone. On follow-up, 7/16 horses (44%) had been subjected to euthanasia and only one was being worked at its previous level. Erosions of the palmar aspect of the navicular bone were confirmed post mortem in 2 horses. Histologically, the lesions were characterised by localised degeneration of fibrocartilage with underlying focal osteonecrosis and fibroplasia. The adjacent deep digital flexor tendon showed fibril formation and fibrocartilaginous metaplasia. Deep erosions of the palmar aspect of the navicular bone are more easily diagnosed by standing low-field MR imaging than by conventional radiography. The lesions involve degeneration of the palmar fibrocartilage with underlying osteonecrosis and fibroplasia affecting the subchondral compact bone and medulla, and carry a poor prognosis for return to performance. Diagnosis of shallower erosive lesions of the palmar fibrocartilage may allow therapeutic intervention earlier in the disease process, thereby preventing progression to these deep lesions.

  5. CMOS Image Sensor and System for Imaging Hemodynamic Changes in Response to Deep Brain Stimulation.

    Science.gov (United States)

    Zhang, Xiao; Noor, Muhammad S; McCracken, Clinton B; Kiss, Zelma H T; Yadid-Pecht, Orly; Murari, Kartikeya

    2016-06-01

    Deep brain stimulation (DBS) is a therapeutic intervention used for a variety of neurological and psychiatric disorders, but its mechanism of action is not well understood. It is known that DBS modulates neural activity, which changes metabolic demands and thus the cerebral circulation state. However, it is unclear whether there are correlations between electrophysiological, hemodynamic and behavioral changes, and whether they have any implications for clinical benefits. In order to investigate these questions, we present a miniaturized system for spectroscopic imaging of brain hemodynamics. The system consists of a 144 × 144, [Formula: see text] pixel pitch, high-sensitivity, analog-output CMOS imager fabricated in a standard 0.35 μm CMOS process, along with a miniaturized imaging system comprising illumination, focusing, analog-to-digital conversion and μSD-card-based data storage. This enables stand-alone operation without a computer or electrical or fiber-optic tethers. To achieve high sensitivity, the pixel uses a capacitive transimpedance amplifier (CTIA); the nMOS transistors are in the pixel while the pMOS transistors are column-parallel, resulting in a fill factor (FF) of 26%. Running at 60 fps and exposed to 470 nm light, the CMOS imager has a minimum detectable intensity of 2.3 nW/cm², a maximum signal-to-noise ratio (SNR) of 49 dB at 2.45 μW/cm², leading to a dynamic range (DR) of 61 dB, while consuming 167 μA from a 3.3 V supply. In anesthetized rats, the system was able to detect temporal, spatial and spectral hemodynamic changes in response to DBS.

  6. Automatic Image-Based Plant Disease Severity Estimation Using Deep Learning

    Directory of Open Access Journals (Sweden)

    Guan Wang

    2017-01-01

    Full Text Available Automatic and accurate estimation of disease severity is essential for food security, disease management, and yield loss prediction. Deep learning, the latest breakthrough in computer vision, is promising for fine-grained disease severity classification, as the method avoids the labor-intensive feature engineering and threshold-based segmentation. Using the apple black rot images in the PlantVillage dataset, which are further annotated by botanists with four severity stages as ground truth, a series of deep convolutional neural networks are trained to diagnose the severity of the disease. The performances of shallow networks trained from scratch and deep models fine-tuned by transfer learning are evaluated systemically in this paper. The best model is the deep VGG16 model trained with transfer learning, which yields an overall accuracy of 90.4% on the hold-out test set. The proposed deep learning model may have great potential in disease control for modern agriculture.

  7. Automatic Image-Based Plant Disease Severity Estimation Using Deep Learning.

    Science.gov (United States)

    Wang, Guan; Sun, Yu; Wang, Jianxin

    2017-01-01

    Automatic and accurate estimation of disease severity is essential for food security, disease management, and yield loss prediction. Deep learning, the latest breakthrough in computer vision, is promising for fine-grained disease severity classification, as the method avoids the labor-intensive feature engineering and threshold-based segmentation. Using the apple black rot images in the PlantVillage dataset, which are further annotated by botanists with four severity stages as ground truth, a series of deep convolutional neural networks are trained to diagnose the severity of the disease. The performances of shallow networks trained from scratch and deep models fine-tuned by transfer learning are evaluated systemically in this paper. The best model is the deep VGG16 model trained with transfer learning, which yields an overall accuracy of 90.4% on the hold-out test set. The proposed deep learning model may have great potential in disease control for modern agriculture.
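
    The best-performing model reported above is an ImageNet-pretrained VGG16 fine-tuned by transfer learning; a minimal sketch of that setup follows, in which details such as input size, which layers are frozen, and the classifier head are assumptions rather than the authors' exact configuration.

```python
# Sketch of VGG16 transfer learning for four severity stages.
import tensorflow as tf

base = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                   input_shape=(224, 224, 3))
base.trainable = False  # freeze convolutional features; train only the head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(4, activation="softmax"),  # four severity stages
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```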

  8. Deep Learning- and Transfer Learning-Based Super Resolution Reconstruction from Single Medical Image

    Directory of Open Access Journals (Sweden)

    YiNan Zhang

    2017-01-01

    Full Text Available Medical images play an important role in medical diagnosis and research. In this paper, a transfer learning- and deep learning-based super resolution reconstruction method is introduced. The proposed method contains one bicubic interpolation template layer and two convolutional layers. The bicubic interpolation template layer is prefixed by mathematical deduction, while the two convolutional layers learn from training samples. To save on training medical images, a SIFT-feature-based transfer learning method is proposed, so that not only medical images but also other types of images can be selectively added to the training dataset. In experiments, results on eight distinct medical images show improved image quality and reduced processing time. Further, the proposed method produces slightly sharper edges than other deep learning approaches in less time, and it is projected that the hybrid architecture of a prefixed template layer and unfixed hidden layers has potential in other applications.
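
    The described architecture (a fixed bicubic upsampling stage followed by two learnable convolutional layers) resembles SRCNN-style designs; a minimal sketch follows, where the filter counts, kernel sizes and scale factor are illustrative assumptions.

```python
# Sketch of a fixed bicubic "template" layer followed by two conv layers.
import tensorflow as tf

scale = 2  # assumed upscaling factor

def build_sr_model():
    inp = tf.keras.Input(shape=(None, None, 1))
    # Template layer: fixed bicubic interpolation, prefixed by mathematics
    # rather than learned from data.
    up = tf.keras.layers.Lambda(
        lambda x: tf.image.resize(x, tf.shape(x)[1:3] * scale,
                                  method="bicubic"))(inp)
    x = tf.keras.layers.Conv2D(64, 9, padding="same", activation="relu")(up)
    out = tf.keras.layers.Conv2D(1, 5, padding="same")(x)  # reconstruction
    return tf.keras.Model(inp, out)

model = build_sr_model()
```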

  9. Applying Deep Learning in Medical Images: The Case of Bone Age Estimation.

    Science.gov (United States)

    Lee, Jang Hyung; Kim, Kwang Gi

    2018-01-01

    A diagnostic need often arises to estimate bone age from X-ray images of the hand of a subject during the growth period. Together with measured physical height, such information may be used as an indicator for the subject's height growth prognosis. We present a way to apply deep learning to medical image analysis, using hand bone age estimation as an example. Age estimation was formulated as a regression problem with hand X-ray images as input and estimated age as output. A set of hand X-ray images was used to form a training set with which a regression model was trained. An image preprocessing procedure is described that reduces image variations across data instances unrelated to age-wise variation. The use of Caffe, a deep learning tool, is demonstrated; a rather simple deep learning network was adopted and trained for tutorial purposes. A test set distinct from the training set was formed to assess the validity of the approach. The measured mean absolute difference was 18.9 months, and the concordance correlation coefficient was 0.78. It is shown that the proposed deep learning-based neural network can be used to estimate a subject's age from hand X-ray images, which eliminates the need for tedious atlas look-ups in clinical environments and should improve the time and cost efficiency of the estimation process.
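
    The concordance correlation coefficient reported above (0.78) follows directly from its definition; a minimal sketch computing it and the mean absolute difference (the ages below are made-up examples, not the study's data):

```python
# Concordance correlation coefficient (CCC) and mean absolute difference.
import numpy as np

def concordance_cc(x, y):
    # CCC = 2*cov(x,y) / (var(x) + var(y) + (mean(x) - mean(y))^2)
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (x.var() + y.var() + (mx - my) ** 2)

true_age = np.array([120, 96, 150, 132, 84], dtype=float)  # months
pred_age = np.array([112, 101, 141, 140, 90], dtype=float)
print(f"CCC = {concordance_cc(true_age, pred_age):.3f}")
print(f"MAD = {np.abs(true_age - pred_age).mean():.1f} months")
```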

  10. DeepPIV: Particle image velocimetry measurements using deep-sea, remotely operated vehicles

    Science.gov (United States)

    Katija, Kakani; Sherman, Alana; Graves, Dale; Klimov, Denis; Kecy, Chad; Robison, Bruce

    2015-11-01

    The midwater region of the ocean (below the euphotic zone and above the benthos) is one of the largest ecosystems on our planet, yet remains one of the least explored. Little-known marine organisms that inhabit midwater have developed life strategies that contribute to their evolutionary success, and may inspire engineering solutions for societally relevant challenges. Although significant advances in underwater vehicle technologies have improved access to midwater, small-scale, in situ fluid mechanics measurement methods that seek to quantify the interactions midwater organisms have with their physical environment are lacking. Here we present DeepPIV, an instrumentation package affixed to remotely operated vehicles that quantifies fluid motions from the surface of the ocean down to 4000 m depths. Utilizing ambient suspended particulates, fluid-structure interactions are evaluated for a range of marine organisms in midwater. Initial science targets include larvaceans, biological equivalents of flapping flexible foils, which create mucus houses to filter food. Little is known about the structure of these mucus houses and the role they play in selectively filtering particles, and these dynamics can serve as particle-mucus models for human health. Using DeepPIV, we reveal the complex structures and flows generated within larvacean mucus houses, and elucidate how these structures function. Funding is gratefully acknowledged from the Packard Foundation.

  11. Local Deep Hashing Matching of Aerial Images Based on Relative Distance and Absolute Distance Constraints

    Directory of Open Access Journals (Sweden)

    Suting Chen

    2017-12-01

    Full Text Available Aerial images have high resolution and complex backgrounds, and usually require large amounts of computation; however, most aerial image matching algorithms adopt shallow hand-crafted features expressed as floating-point descriptors (e.g., SIFT (Scale-Invariant Feature Transform) and SURF (Speeded-Up Robust Features)), which may suffer from poor matching speed. Here, we propose a novel Local Deep Hashing Matching (LDHM) method for matching large aerial images with lower complexity and faster matching speed. The basic idea of the proposed algorithm is to apply a deep network model to local areas of the aerial images and to learn the local features as well as the hash function of the images. Firstly, according to the coarse overlap rate of the aerial images, the algorithm extracts local areas for matching to avoid processing redundant information. Secondly, a triplet network structure is proposed to mine deep features of the local image patches, and the learned features are fed to the hash layer, thus obtaining a binary hash code representation. Thirdly, a constraint on the absolute distance of positive samples is added on top of the triplet loss, and a new objective function is constructed to optimize the network parameters and enhance the discriminative power of image patch features. Finally, the deep hash code of each image patch is used for similarity comparison of image patches in Hamming space to complete the matching of aerial images. The proposed LDHM algorithm is evaluated on the UltraCam-D dataset and a set of actual aerial images; simulation results demonstrate that it can significantly outperform state-of-the-art algorithms in terms of efficiency and performance.
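
    A minimal sketch of the loss described above, combining the standard triplet (relative-distance) term with an absolute-distance penalty that pulls positive pairs within a fixed radius; the margin and radius values are illustrative assumptions.

```python
# Triplet loss augmented with an absolute-distance constraint on positives.
import tensorflow as tf

def triplet_abs_loss(anchor, positive, negative, margin=1.0, radius=0.5):
    d_pos = tf.reduce_sum(tf.square(anchor - positive), axis=1)
    d_neg = tf.reduce_sum(tf.square(anchor - negative), axis=1)
    relative = tf.maximum(d_pos - d_neg + margin, 0.0)  # triplet (relative) term
    absolute = tf.maximum(d_pos - radius, 0.0)          # absolute-distance term
    return tf.reduce_mean(relative + absolute)

# Toy usage with random 64-D embeddings for a batch of 8 patches.
a, p, n = (tf.random.normal((8, 64)) for _ in range(3))
print(triplet_abs_loss(a, p, n))
```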

  12. Moving object detection in video satellite image based on deep learning

    Science.gov (United States)

    Zhang, Xueyang; Xiang, Junhua

    2017-11-01

    Moving object detection in video satellite images is studied, and a detection algorithm based on deep learning is proposed. The small-scale characteristics of remote sensing video objects are analyzed. Firstly, a background subtraction algorithm with an adaptive Gaussian mixture model is used to generate region proposals. Then the objects in the region proposals are classified via a deep convolutional neural network, and moving objects of interest are detected in combination with prior information about the sub-satellite point. The deep convolutional neural network employs a 21-layer residual network, and the network parameters are trained by transfer learning. Experimental results on video from the Tiantuo-2 satellite demonstrate the effectiveness of the algorithm.
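
    The region-proposal step can be sketched with OpenCV's adaptive Gaussian-mixture background subtractor; in the paper, each proposal is then classified by the residual CNN. The history length and area threshold below are illustrative assumptions.

```python
# Gaussian-mixture background subtraction as a region-proposal generator.
import cv2
import numpy as np

subtractor = cv2.createBackgroundSubtractorMOG2(history=100, detectShadows=False)

def region_proposals(frame, min_area=4):
    mask = subtractor.apply(frame)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Bounding boxes of connected foreground blobs become candidate objects.
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]

frame = (np.random.rand(256, 256, 3) * 255).astype("uint8")  # toy frame
boxes = region_proposals(frame)
```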

  13. Quantum dots versus organic fluorophores in fluorescent deep-tissue imaging--merits and demerits.

    Science.gov (United States)

    Bakalova, Rumiana; Zhelev, Zhivko; Gadjeva, Veselina

    2008-12-01

    The use of fluorescence in deep-tissue imaging has expanded rapidly over the last several years. Progress in fluorescent molecular probes and fluorescent imaging techniques makes it possible to detect single cells and even molecular targets in live organisms. Highly sensitive, high-speed fluorescent molecular sensors and detection devices allow the application of fluorescence in functional imaging. With the development of novel bright fluorophores based on nanotechnologies, and of 3D fluorescence scanners with high spatial and temporal resolution, fluorescent imaging has the potential to become an alternative to other non-invasive imaging techniques such as magnetic resonance imaging, positron-emission tomography, X-ray, and computed tomography, and to give a real map of human anatomy and physiology. The current review outlines the advantages of fluorescent nanoparticles over conventional organic dyes in deep-tissue imaging in vivo and defines the major requirements for the "perfect fluorophore". The analysis proceeds from the basic principles of fluorescence and the major characteristics of fluorophores, light-tissue interactions, and the major limitations of fluorescent deep-tissue imaging. The article is addressed to a broad readership, from specialists in this field to university students.

  14. Rock images classification by using deep convolution neural network

    Science.gov (United States)

    Cheng, Guojian; Guo, Wenhui

    2017-08-01

    Granularity analysis is one of the most essential issues in rock authentication under the microscope. To improve the efficiency and accuracy of traditional manual work, a convolutional neural network based method is proposed for granularity analysis of thin section images, which selects and extracts features from image samples while building a classifier to recognize the granularity of input image samples. 4800 samples from the Ordos basin are used for experiments in the HSV, YCbCr and RGB colour spaces respectively. On the test dataset, the correct rate in RGB colour space is 98.5%, and the results in the HSV and YCbCr colour spaces are also reliable. The results show that the convolutional neural network can classify rock images with high reliability.

  15. Molecular imaging needles: dual-modality optical coherence tomography and fluorescence imaging of labeled antibodies deep in tissue

    Science.gov (United States)

    Scolaro, Loretta; Lorenser, Dirk; Madore, Wendy-Julie; Kirk, Rodney W.; Kramer, Anne S.; Yeoh, George C.; Godbout, Nicolas; Sampson, David D.; Boudoux, Caroline; McLaughlin, Robert A.

    2015-01-01

    Molecular imaging using optical techniques provides insight into disease at the cellular level. In this paper, we report on a novel dual-modality probe capable of performing molecular imaging by combining simultaneous three-dimensional optical coherence tomography (OCT) and two-dimensional fluorescence imaging in a hypodermic needle. The probe, referred to as a molecular imaging (MI) needle, may be inserted tens of millimeters into tissue. The MI needle utilizes double-clad fiber to carry both imaging modalities, and is interfaced to a 1310-nm OCT system and a fluorescence imaging subsystem using an asymmetrical double-clad fiber coupler customized to achieve high fluorescence collection efficiency. We present, to the best of our knowledge, the first dual-modality OCT and fluorescence needle probe with sufficient sensitivity to image fluorescently labeled antibodies. Such probes enable high-resolution molecular imaging deep within tissue. PMID:26137379

  16. A Plane Target Detection Algorithm in Remote Sensing Images based on Deep Learning Network Technology

    Science.gov (United States)

    Shuxin, Li; Zhilong, Zhang; Biao, Li

    2018-01-01

    Planes are an important target category among remote sensing targets, and it is of great value to detect plane targets automatically. As remote imaging technology develops continuously, the resolution of remote sensing images has become very high, providing more detailed information for detecting remote sensing targets automatically. Deep learning network technology is the most advanced technology in image target detection and recognition, and has provided great performance improvements for target detection and recognition in everyday scenes. We apply this technology to remote sensing target detection and propose an algorithm with an end-to-end deep network, which learns from remote sensing images to detect targets in new images automatically and robustly. Our experiments show that the algorithm can capture the feature information of plane targets and performs better in target detection than the old methods.

  17. Confocal multispot microscope for fast and deep imaging in semicleared tissues

    Science.gov (United States)

    Adam, Marie-Pierre; Müllenbroich, Marie Caroline; Di Giovanna, Antonino Paolo; Alfieri, Domenico; Silvestri, Ludovico; Sacconi, Leonardo; Pavone, Francesco Saverio

    2018-02-01

    Although perfectly transparent specimens are imaged faster with light-sheet microscopy, less transparent samples are often imaged with two-photon microscopy, leveraging its robustness to scattering at the price of increased acquisition times. Clearing methods capable of rendering strongly scattering samples such as brain tissue perfectly transparent are often complex, costly, and time intensive, even though for many applications a slightly lower level of tissue transparency is sufficient and easily achieved with simpler and faster methods. Here, we present a microscope type geared toward the imaging of semicleared tissue, combining multispot two-photon excitation with rolling-shutter wide-field detection to image deep and fast inside semicleared mouse brain. We present a theoretical and experimental evaluation of the point spread function and contrast as a function of shutter size. Finally, we demonstrate microscope performance in fixed brain slices by imaging dendritic spines up to 400 μm deep.

  18. In vivo deep brain imaging of rats using oral-cavity illuminated photoacoustic computed tomography

    Science.gov (United States)

    Lin, Li; Xia, Jun; Wong, Terence T. W.; Zhang, Ruiying; Wang, Lihong V.

    2015-03-01

    We demonstrate, by means of internal light delivery, photoacoustic imaging of the deep brain of rats in vivo. With fiber illumination via the oral cavity, we delivered light directly into the bottom of the brain, much more than can be delivered by external illumination. The study was performed using a photoacoustic computed tomography (PACT) system equipped with a 512-element full-ring transducer array, providing a full two-dimensional view aperture. Using internal illumination, the PACT system provided clear cross sectional photoacoustic images from the palate to the middle brain of live rats, revealing deep brain structures such as the hypothalamus, brain stem, and cerebral medulla.

  19. STUDY ON THE CLASSIFICATION OF GAOFEN-3 POLARIMETRIC SAR IMAGES USING DEEP NEURAL NETWORK

    Directory of Open Access Journals (Sweden)

    J. Zhang

    2018-04-01

    Full Text Available The imaging principle of Polarimetric Synthetic Aperture Radar (POLSAR) means that image quality is affected by speckle noise, which reduces the recognition accuracy of traditional image classification methods. Since their introduction, deep convolutional neural networks have reshaped traditional image processing methods and brought the field of computer vision to a new stage, with a strong ability to learn deep features and an excellent ability to fit large datasets. Based on the basic characteristics of polarimetric SAR images, this paper studies surface cover classification using deep learning. We fuse fully polarimetric SAR features at different scales into RGB images, iteratively train a GoogLeNet convolutional neural network model on them, and then use the trained model to classify a validation dataset. First, referring to an optical image, we label the surface coverage types of a GF-3 POLSAR image with 8 m resolution and collect samples according to the different categories. To meet the GoogLeNet model's requirement of 256 × 256 pixel image input, and taking into account the limited resolution of the SAR data, the original image is pre-processed by resampling. POLSAR image slice samples at different scales, with sampling intervals of 2 m and 1 m, are trained separately and validated on the verification dataset. The training accuracy of the GoogLeNet model trained with POLSAR images resampled at 2 m is 94.89%, and that of the model trained with images resampled at 1 m is 92.65%.

  20. Study on the Classification of GAOFEN-3 Polarimetric SAR Images Using Deep Neural Network

    Science.gov (United States)

    Zhang, J.; Zhang, J.; Zhao, Z.

    2018-04-01

    The imaging principle of Polarimetric Synthetic Aperture Radar (POLSAR) means that image quality is affected by speckle noise, which reduces the recognition accuracy of traditional image classification methods. Since their introduction, deep convolutional neural networks have reshaped traditional image processing methods and brought the field of computer vision to a new stage, with a strong ability to learn deep features and an excellent ability to fit large datasets. Based on the basic characteristics of polarimetric SAR images, this paper studies surface cover classification using deep learning. We fuse fully polarimetric SAR features at different scales into RGB images, iteratively train a GoogLeNet convolutional neural network model on them, and then use the trained model to classify a validation dataset. First, referring to an optical image, we label the surface coverage types of a GF-3 POLSAR image with 8 m resolution and collect samples according to the different categories. To meet the GoogLeNet model's requirement of 256 × 256 pixel image input, and taking into account the limited resolution of the SAR data, the original image is pre-processed by resampling. POLSAR image slice samples at different scales, with sampling intervals of 2 m and 1 m, are trained separately and validated on the verification dataset. The training accuracy of the GoogLeNet model trained with POLSAR images resampled at 2 m is 94.89%, and that of the model trained with images resampled at 1 m is 92.65%.
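
    The patch-preparation step (resample to the target sampling interval, then cut 256 × 256 training slices) can be sketched as follows; the ground-sampling-distance values and the use of scikit-image are assumptions for illustration, not the authors' toolchain.

```python
# Resample a POLSAR-derived RGB image, then cut 256 x 256 training slices.
import numpy as np
from skimage.transform import rescale

def make_slices(img, native_gsd=8.0, target_gsd=2.0, size=256):
    # Resample from the native ~8 m resolution to the 2 m (or 1 m)
    # sampling interval used for the training samples.
    zoom = native_gsd / target_gsd
    resampled = rescale(img, zoom, order=1, channel_axis=-1)
    h, w = resampled.shape[:2]
    return [resampled[r:r + size, c:c + size]
            for r in range(0, h - size + 1, size)
            for c in range(0, w - size + 1, size)]

patches = make_slices(np.random.rand(512, 512, 3))  # toy fused-feature image
print(len(patches), patches[0].shape)
```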

  1. Rotation invariant deep binary hashing for fast image retrieval

    Science.gov (United States)

    Dai, Lai; Liu, Jianming; Jiang, Aiwen

    2017-07-01

    In this paper, we study how to compactly represent an image's characteristics for fast image retrieval. We propose supervised, rotation-invariant, compact, discriminative binary descriptors obtained by combining a convolutional neural network with hashing. In the proposed network, binary codes are learned by employing a hidden layer that represents latent concepts dominating the class labels. A loss function is proposed to minimize the difference between the binary descriptors of a reference image and of its rotated version. Compared with some other supervised methods, the proposed network does not require pair-wise inputs for binary code learning. Experimental results show that our method is effective and achieves state-of-the-art results on the CIFAR-10 and MNIST datasets.

  2. Deep into the Brain: Artificial Intelligence in Stroke Imaging.

    Science.gov (United States)

    Lee, Eun-Jae; Kim, Yong-Hwan; Kim, Namkug; Kang, Dong-Wha

    2017-09-01

    Artificial intelligence (AI), a computer system aiming to mimic human intelligence, is gaining increasing interest and is being incorporated into many fields, including medicine. Stroke medicine is one such area of application of AI, for improving the accuracy of diagnosis and the quality of patient care. For stroke management, adequate analysis of stroke imaging is crucial. Recently, AI techniques have been applied to decipher the data from stroke imaging and have demonstrated some promising results. In the very near future, such AI techniques may play a pivotal role in determining the therapeutic methods and predicting the prognosis for stroke patients in an individualized manner. In this review, we offer a glimpse at the use of AI in stroke imaging, specifically focusing on its technical principles, clinical application, and future perspectives.

  3. Super-nonlinear fluorescence microscopy for high-contrast deep tissue imaging

    Science.gov (United States)

    Wei, Lu; Zhu, Xinxin; Chen, Zhixing; Min, Wei

    2014-02-01

    Two-photon excited fluorescence microscopy (TPFM) offers the highest penetration depth with subcellular resolution in light microscopy, due to its unique advantage of nonlinear excitation. However, a fundamental imaging-depth limit, accompanied by a vanishing signal-to-background contrast, still exists for TPFM when imaging deep into scattering samples. Formally, the focusing depth, at which the in-focus signal and the out-of-focus background are equal to each other, is defined as the fundamental imaging-depth limit. To go beyond this imaging-depth limit of TPFM, we report a new class of super-nonlinear fluorescence microscopy for high-contrast deep tissue imaging, including multiphoton activation and imaging (MPAI) harnessing novel photo-activatable fluorophores, stimulated emission reduced fluorescence (SERF) microscopy by adding a weak laser beam for stimulated emission, and two-photon induced focal saturation imaging with preferential depletion of ground-state fluorophores at focus. The resulting image contrasts all exhibit a higher-order (third- or fourth- order) nonlinear signal dependence on laser intensity than that in the standard TPFM. Both the physical principles and the imaging demonstrations will be provided for each super-nonlinear microscopy. In all these techniques, the created super-nonlinearity significantly enhances the imaging contrast and concurrently extends the imaging depth-limit of TPFM. Conceptually different from conventional multiphoton processes mediated by virtual states, our strategy constitutes a new class of fluorescence microscopy where high-order nonlinearity is mediated by real population transfer.

  4. [Advantages and Application Prospects of Deep Learning in Image Recognition and Bone Age Assessment].

    Science.gov (United States)

    Hu, T H; Wan, L; Liu, T A; Wang, M W; Chen, T; Wang, Y H

    2017-12-01

    Deep learning and neural network models have become new research directions and hot topics in machine learning and artificial intelligence in recent years. Deep learning has achieved breakthroughs in image and speech recognition, and has also been used extensively in face recognition and information retrieval because of its particular advantages. Bone X-ray images show variations in black-white-gray gradation, with characteristic black-and-white contrasts and level differences. Building on these strengths of deep learning in image recognition, we combine it with research on bone age assessment to provide basic data for constructing a forensic automatic system of bone age assessment. This paper reviews the basic concepts and network architectures of deep learning, describes its recent research progress on image recognition in different research fields at home and abroad, and explores its advantages and application prospects in bone age assessment. Copyright© by the Editorial Department of Journal of Forensic Medicine.

  5. Joint Segmentation of Multiple Thoracic Organs in CT Images with Two Collaborative Deep Architectures.

    Science.gov (United States)

    Trullo, Roger; Petitjean, Caroline; Nie, Dong; Shen, Dinggang; Ruan, Su

    2017-09-01

    Computed Tomography (CT) is the standard imaging technique for radiotherapy planning. The delineation of Organs at Risk (OAR) in thoracic CT images is a necessary step before radiotherapy, for preventing irradiation of healthy organs. However, due to low contrast, multi-organ segmentation is a challenge. In this paper, we focus on developing a novel framework for automatic delineation of OARs. Different from previous works in OAR segmentation where each organ is segmented separately, we propose two collaborative deep architectures to jointly segment all organs, including esophagus, heart, aorta and trachea. Since most of the organ borders are ill-defined, we believe spatial relationships must be taken into account to overcome the lack of contrast. The aim of combining two networks is to learn anatomical constraints with the first network, which will be used in the second network, when each OAR is segmented in turn. Specifically, we use the first deep architecture, a deep SharpMask architecture, for providing an effective combination of low-level representations with deep high-level features, and then take into account the spatial relationships between organs by the use of Conditional Random Fields (CRF). Next, the second deep architecture is employed to refine the segmentation of each organ by using the maps obtained on the first deep architecture to learn anatomical constraints for guiding and refining the segmentations. Experimental results show superior performance on 30 CT scans, comparing with other state-of-the-art methods.

  6. Deep learning for objective quality assessment of 3D images

    NARCIS (Netherlands)

    Mocanu, D.C.; Exarchakos, G.; Liotta, A.

    2014-01-01

    Improving the users' Quality of Experience (QoE) in modern 3D Multimedia Systems is a challenging proposition, mainly due to our limited knowledge of 3D image Quality Assessment algorithms. While subjective QoE methods would better reflect the nature of human perception, these are not suitable in

  7. Deep-tissue reporter-gene imaging with fluorescence and optoacoustic tomography: a performance overview.

    Science.gov (United States)

    Deliolanis, Nikolaos C; Ale, Angelique; Morscher, Stefan; Burton, Neal C; Schaefer, Karin; Radrich, Karin; Razansky, Daniel; Ntziachristos, Vasilis

    2014-10-01

    A primary enabling feature of near-infrared fluorescent proteins (FPs) and fluorescent probes is the ability to visualize deeper in tissues than in the visible. The purpose of this work is to determine the optimal visualization method for exploiting the advantages of this novel class of FPs in full-scale pre-clinical molecular imaging studies. Nude mice were stereotactically implanted with near-infrared-FP-expressing glioma cells to form brain tumors. The feasibility and performance metrics of FPs were compared between planar epi-illumination and trans-illumination fluorescence imaging, a hybrid Fluorescence Molecular Tomography (FMT) system combined with X-ray CT, and Multispectral Optoacoustic (or Photoacoustic) Tomography (MSOT). It is shown that deep-seated glioma brain tumors can be visualized with both fluorescence and optoacoustic imaging. Fluorescence imaging is straightforward and has good sensitivity, but lacks resolution. FMT-XCT can provide an improved, though rough, resolution of ∼1 mm in deep tissue, while MSOT achieves 0.1 mm resolution in deep tissue with comparable sensitivity. We show imaging capacity that can shift the visualization paradigm in biological discovery. The results are relevant not only to reporter gene imaging, but stand as a cross-platform comparison for all methods imaging near-infrared fluorescent contrast agents.

  8. Deep Learning Automates the Quantitative Analysis of Individual Cells in Live-Cell Imaging Experiments.

    Science.gov (United States)

    Van Valen, David A; Kudo, Takamasa; Lane, Keara M; Macklin, Derek N; Quach, Nicolas T; DeFelice, Mialy M; Maayan, Inbal; Tanouchi, Yu; Ashley, Euan A; Covert, Markus W

    2016-11-01

    Live-cell imaging has opened an exciting window into the role cellular heterogeneity plays in dynamic, living systems. A major critical challenge for this class of experiments is the problem of image segmentation, or determining which parts of a microscope image correspond to which individual cells. Current approaches require many hours of manual curation and depend on approaches that are difficult to share between labs. They are also unable to robustly segment the cytoplasms of mammalian cells. Here, we show that deep convolutional neural networks, a supervised machine learning method, can solve this challenge for multiple cell types across the domains of life. We demonstrate that this approach can robustly segment fluorescent images of cell nuclei as well as phase images of the cytoplasms of individual bacterial and mammalian cells from phase contrast images without the need for a fluorescent cytoplasmic marker. These networks also enable the simultaneous segmentation and identification of different mammalian cell types grown in co-culture. A quantitative comparison with prior methods demonstrates that convolutional neural networks have improved accuracy and lead to a significant reduction in curation time. We relay our experience in designing and optimizing deep convolutional neural networks for this task and outline several design rules that we found led to robust performance. We conclude that deep convolutional neural networks are an accurate method that require less curation time, are generalizable to a multiplicity of cell types, from bacteria to mammalian cells, and expand live-cell imaging capabilities to include multi-cell type systems.

  9. Classification of radiolarian images with hand-crafted and deep features

    Science.gov (United States)

    Keçeli, Ali Seydi; Kaya, Aydın; Keçeli, Seda Uzunçimen

    2017-12-01

    Radiolarians are planktonic protozoa and are important biostratigraphic and paleoenvironmental indicators for paleogeographic reconstructions. Radiolarian paleontology remains a low-cost and one of the most convenient ways to date deep-ocean sediments. Traditional methods for identifying radiolarians are time-consuming and cannot scale to the granularity or scope necessary for large-scale studies; automated image classification would allow these analyses to be performed promptly. In this study, a method for automatic radiolarian image classification is proposed, using Scanning Electron Microscope (SEM) images of radiolarians to ease species identification of fossilized radiolarians. The proposed method uses both hand-crafted features, such as invariant moments, wavelet moments, Gabor features and basic morphological features, and deep features obtained from a pre-trained Convolutional Neural Network (CNN). Feature selection is applied to the deep features to reduce their high dimensionality. Classification outcomes are analyzed to compare hand-crafted features, deep features, and their combinations. Results show that the deep features obtained from a pre-trained CNN are more discriminative than the hand-crafted ones. Additionally, feature selection reduces the computational cost of the classification algorithms with no negative effect on classification accuracy.

  10. Pitfalls of CT for deep neck abscess imaging assessment: a retrospective review of 162 cases.

    Science.gov (United States)

    Chuang, S Y; Lin, H T; Wen, Y S; Hsu, F J

    2013-01-01

    To investigate the diagnostic value of contrast-enhanced computed tomography (CT) for the prediction of deep neck abscesses in different deep neck spaces and to evaluate the false-positive results. We retrospectively analysed the clinical charts, CT examinations, surgical findings, bacteriology, pathological examinations and complications of hospitalised patients with a diagnosis of deep neck abscess from 2004 to 2010. The positive predictive values (PPV) for the prediction of abscesses by CT scan in different deep neck spaces were calculated individually on the basis of surgical findings. A total of 162 patients were included in this study. All patients received both intravenous antibiotics and surgical drainage. The parapharyngeal space was the most commonly involved space. The overall PPV for the prediction of deep neck abscess with contrast-enhanced CT was 79.6%. The PPV was 91.3% when more than one deep neck space was involved but only 50.0% in patients with isolated retropharyngeal abscesses. In the false-positive group, cellulitis was the most common final result, followed by cystic degeneration of cervical metastases. Five specimens taken intra-operatively revealed malignancy and four of these were not infected. There are some limitations affecting the differentiation of abscesses and cellulitis, particularly in the retropharyngeal space. A central necrotic cervical metastatic lymph node may sometimes also mimic a simple pyogenic deep neck abscess on both clinical pictures and CT images. Routine biopsy of the tissue must be performed during surgical drainage.

  11. Deep learning methods for CT image-domain metal artifact reduction

    Science.gov (United States)

    Gjesteby, Lars; Yang, Qingsong; Xi, Yan; Shan, Hongming; Claus, Bernhard; Jin, Yannan; De Man, Bruno; Wang, Ge

    2017-09-01

    Artifacts resulting from metal objects have been a persistent problem in CT images over the last four decades. A common approach to overcome their effects is to replace corrupt projection data with values synthesized from an interpolation scheme or by reprojection of a prior image. State-of-the-art correction methods, such as the interpolation- and normalization-based algorithm NMAR, often do not produce clinically satisfactory results. Residual image artifacts remain in challenging cases and even new artifacts can be introduced by the interpolation scheme. Metal artifacts continue to be a major impediment, particularly in radiation and proton therapy planning as well as orthopedic imaging. A new solution to the long-standing metal artifact reduction (MAR) problem is deep learning, which has been successfully applied to medical image processing and analysis tasks. In this study, we combine a convolutional neural network (CNN) with the state-of-the-art NMAR algorithm to reduce metal streaks in critical image regions. Training data was synthesized from CT simulation scans of a phantom derived from real patient images. The CNN is able to map metal-corrupted images to artifact-free monoenergetic images to achieve additional correction on top of NMAR for improved image quality. Our results indicate that deep learning is a novel tool to address CT reconstruction challenges, and may enable more accurate tumor volume estimation for radiation therapy planning.

  12. Lesion Detection in CT Images Using Deep Learning Semantic Segmentation Technique

    Science.gov (United States)

    Kalinovsky, A.; Liauchuk, V.; Tarasau, A.

    2017-05-01

    In this paper, the problem of automatic detection of tuberculosis lesion on 3D lung CT images is considered as a benchmark for testing out algorithms based on a modern concept of Deep Learning. For training and testing of the algorithms a domestic dataset of 338 3D CT scans of tuberculosis patients with manually labelled lesions was used. The algorithms which are based on using Deep Convolutional Networks were implemented and applied in three different ways including slice-wise lesion detection in 2D images using semantic segmentation, slice-wise lesion detection in 2D images using sliding window technique as well as straightforward detection of lesions via semantic segmentation in whole 3D CT scans. The algorithms demonstrate superior performance compared to algorithms based on conventional image analysis methods.

  13. Classification of time-series images using deep convolutional neural networks

    Science.gov (United States)

    Hatami, Nima; Gavet, Yann; Debayle, Johan

    2018-04-01

    Convolutional Neural Networks (CNNs) have achieved great success in image recognition tasks by automatically learning a hierarchical feature representation from raw data. While the majority of the Time-Series Classification (TSC) literature is focused on 1D signals, this paper uses Recurrence Plots (RP) to transform time-series into 2D texture images and then takes advantage of a deep CNN classifier. The image representation of time-series introduces feature types that are not available for 1D signals, so TSC can be treated as a texture image recognition task. The CNN model also allows different levels of representation to be learned together with a classifier, jointly and automatically. Therefore, using RP and CNN in a unified framework is expected to boost the recognition rate of TSC. Experimental results on the UCR time-series classification archive demonstrate competitive accuracy of the proposed approach, compared not only to existing deep architectures but also to state-of-the-art TSC algorithms.
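
    A recurrence plot is straightforward to construct from its definition; a minimal sketch follows, in which the threshold value is an illustrative assumption (the resulting 2D image is what the paper feeds to the CNN).

```python
# Recurrence-plot image representation of a 1D time series.
import numpy as np

def recurrence_plot(series, eps=0.1):
    d = np.abs(series[:, None] - series[None, :])  # pairwise distance matrix
    return (d <= eps).astype(np.uint8)             # thresholded recurrence image

t = np.linspace(0, 4 * np.pi, 128)
rp = recurrence_plot(np.sin(t), eps=0.2)
print(rp.shape)  # (128, 128) texture image for the CNN classifier
```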

  14. Deep Convolutional Neural Network-Based Early Automated Detection of Diabetic Retinopathy Using Fundus Image.

    Science.gov (United States)

    Xu, Kele; Feng, Dawei; Mi, Haibo

    2017-11-23

    The automatic detection of diabetic retinopathy is of vital importance, as it is the main cause of irreversible vision loss in the working-age population in the developed world. The early detection of diabetic retinopathy occurrence can be very helpful for clinical treatment; although several different feature extraction approaches have been proposed, the classification task for retinal images is still tedious even for those trained clinicians. Recently, deep convolutional neural networks have manifested superior performance in image classification compared to previous handcrafted feature-based image classification methods. Thus, in this paper, we explored the use of deep convolutional neural network methodology for the automatic classification of diabetic retinopathy using color fundus image, and obtained an accuracy of 94.5% on our dataset, outperforming the results obtained by using classical approaches.

  15. A deep level set method for image segmentation

    OpenAIRE

    Tang, Min; Valipour, Sepehr; Zhang, Zichen Vincent; Cobzas, Dana; MartinJagersand

    2017-01-01

    This paper proposes a novel image segmentation approach that integrates fully convolutional networks (FCNs) with a level set model. Compared with an FCN, the integrated method can incorporate smoothing and prior information to achieve an accurate segmentation. Furthermore, rather than using the level set model as a post-processing tool, we integrate it into the training phase to fine-tune the FCN. This allows the use of unlabeled data during training in a semi-supervised setting. Using two types o...

  16. Deep multi-scale convolutional neural network for hyperspectral image classification

    Science.gov (United States)

    Zhang, Feng-zhe; Yang, Xia

    2018-04-01

    In this paper, we propose a multi-scale convolutional neural network for the hyperspectral image classification task. Firstly, in contrast with conventional convolution, we utilize multi-scale convolutions, which possess larger receptive fields, to extract the spectral features of the hyperspectral image. We design a deep neural network with a multi-scale convolution layer containing three different convolution kernel sizes. Secondly, to avoid overfitting of the deep neural network, dropout is utilized, which randomly deactivates neurons and contributes a small improvement in classification accuracy. In addition, recent deep learning techniques such as ReLU are utilized in this paper. We conduct experiments on the University of Pavia and Salinas datasets and obtain better classification accuracy compared with other methods.
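
    A minimal sketch of a multi-scale convolution layer with three kernel sizes whose outputs are concatenated, followed by dropout; the kernel sizes, filter counts, patch shape and class count (9, as in the University of Pavia dataset) are illustrative assumptions, not the authors' exact network.

```python
# Multi-scale convolution block with dropout for hyperspectral patches.
import tensorflow as tf
from tensorflow.keras import layers

def multi_scale_block(x, filters=32):
    # Three parallel convolutions with different kernel sizes (hence
    # different receptive fields), concatenated along the channel axis.
    branches = [layers.Conv2D(filters, k, padding="same", activation="relu")(x)
                for k in (1, 3, 5)]
    return layers.Concatenate()(branches)

inp = tf.keras.Input(shape=(9, 9, 103))   # assumed spatial-spectral patch shape
x = multi_scale_block(inp)
x = layers.Flatten()(x)
x = layers.Dropout(0.5)(x)                # randomly deactivates neurons in training
out = layers.Dense(9, activation="softmax")(x)  # assumed number of classes
model = tf.keras.Model(inp, out)
```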

  17. [Microsurgery assisted by intraoperative magnetic resonance imaging and neuronavigation for small lesions in deep brain].

    Science.gov (United States)

    Song, Zhi-jun; Chen, Xiao-lei; Xu, Bai-nan; Sun, Zheng-hui; Sun, Guo-chen; Zhao, Yan; Wang, Fei; Wang, Yu-bo; Zhou, Ding-biao

    2012-01-03

    To explore the practicability of resecting small, deep-seated brain lesions with intraoperative magnetic resonance imaging (iMRI) and neuronavigation-assisted microsurgery, and to evaluate its clinical efficacy, a total of 42 cases with small lesions in the deep brain underwent iMRI and neuronavigation-assisted microsurgery. The drift of the neuronavigation system was corrected with images acquired by intraoperative MR rescanning. All lesions were successfully identified, and 40 were totally removed, with no mortality. Only 3 patients developed new neurological deficits post-operatively, and 2 of them returned to normal neurological function over a follow-up of 3 months to 2 years. The application of intraoperative MRI can effectively correct neuronavigation drift and enhance the accuracy of microsurgical neuronavigation for small, deep-seated brain lesions.

  18. Computer-aided classification of lung nodules on computed tomography images via deep learning technique

    Directory of Open Access Journals (Sweden)

    Hua KL

    2015-08-01

    Full Text Available Lung cancer has a poor prognosis when not diagnosed early and unresectable lesions are present. The management of small lung nodules noted on computed tomography scans is controversial due to uncertain tumor characteristics. A conventional computer-aided diagnosis (CAD) scheme requires several image processing and pattern recognition steps to accomplish a quantitative tumor differentiation result. In such an ad hoc image analysis pipeline, every step depends heavily on the performance of the previous step. Accordingly, tuning classification performance in a conventional CAD scheme is very complicated and arduous. Deep learning techniques, on the other hand, have the intrinsic advantage of automatic feature exploitation and performance tuning in a seamless fashion. In this study, we attempted to simplify the image analysis pipeline of conventional CAD with deep learning techniques. Specifically, we introduced models of a deep belief network and a convolutional neural network in the context of nodule classification in computed tomography images. Two baseline methods with feature computing steps were implemented for comparison. The experimental results suggest that deep learning methods could achieve better discriminative results and hold promise in the CAD application domain. Keywords: nodule classification, deep learning, deep belief network, convolutional neural network

  19. Deep machine learning provides state-of-the-art performance in image-based plant phenotyping.

    Science.gov (United States)

    Pound, Michael P; Atkinson, Jonathan A; Townsend, Alexandra J; Wilson, Michael H; Griffiths, Marcus; Jackson, Aaron S; Bulat, Adrian; Tzimiropoulos, Georgios; Wells, Darren M; Murchie, Erik H; Pridmore, Tony P; French, Andrew P

    2017-10-01

    In plant phenotyping, it has become important to be able to measure many features on large image sets in order to aid genetic discovery. The size of the datasets, now often captured robotically, often precludes manual inspection, hence the motivation for finding a fully automated approach. Deep learning is an emerging field that promises unparalleled results on many data analysis problems. Building on artificial neural networks, deep approaches have many more hidden layers in the network, and hence have greater discriminative and predictive power. We demonstrate the use of such approaches as part of a plant phenotyping pipeline. We show the success offered by such techniques when applied to the challenging problem of image-based plant phenotyping and demonstrate state-of-the-art results (>97% accuracy) for root and shoot feature identification and localization. We use fully automated trait identification using deep learning to identify quantitative trait loci in root architecture datasets. The majority (12 out of 14) of manually identified quantitative trait loci were also discovered using our automated approach based on deep learning detection to locate plant features. We have shown deep learning-based phenotyping to have very good detection and localization accuracy in validation and testing image sets. We have shown that such features can be used to derive meaningful biological traits, which in turn can be used in quantitative trait loci discovery pipelines. This process can be completely automated. We predict a paradigm shift in image-based phenotyping bought about by such deep learning approaches, given sufficient training sets. © The Authors 2017. Published by Oxford University Press.

  20. Topography characterization of a deep grating using near-field imaging

    DEFF Research Database (Denmark)

    Gregersen, Niels; Tromborg, Bjarne; Volkov, Valentyn S.

    2006-01-01

    Using near-field optical microscopy at the wavelength of 633 nm, we image light intensity distributions at several distances above an ~2-μm-deep and 1-μm-period glass grating illuminated from below under the condition of total internal reflection. The intensity distributions are numerically mod...

  1. The Image of the Negro in Deep South Public School State History Texts.

    Science.gov (United States)

    McLaurin, Melton

    This report reviews the image portrayed of the Negro, in textbooks used in the deep South. Slavery is painted as a cordial, humane system under kindly masters and the Negro as docile and childlike. Although the treatment of the modern era is relatively more objective, the texts, on the whole, evade treatment of the Civil Rights struggle, violence,…

  2. Comparative study of deep learning methods for one-shot image classification (abstract)

    NARCIS (Netherlands)

    van den Bogaert, J.; Mohseni, H.; Khodier, M.; Stoyanov, Y.; Mocanu, D.C.; Menkovski, V.

    2017-01-01

    Training deep learning models for image classification requires large amounts of labeled data to overcome the challenges of overfitting and underfitting. Usually, in many practical applications, such labeled data are not available. In an attempt to solve this problem, the one-shot learning paradigm

  3. Single myelin fiber imaging in living rodents without labeling by deep optical coherence microscopy

    Science.gov (United States)

    Ben Arous, Juliette; Binding, Jonas; Léger, Jean-François; Casado, Mariano; Topilko, Piotr; Gigan, Sylvain; Claude Boccara, A.; Bourdieu, Laurent

    2011-11-01

    Myelin sheath disruption is responsible for multiple neuropathies in the central and peripheral nervous system. Myelin imaging has thus become an important diagnosis tool. However, in vivo imaging has been limited to either low-resolution techniques unable to resolve individual fibers or to low-penetration imaging of single fibers, which cannot provide quantitative information about large volumes of tissue, as required for diagnostic purposes. Here, we perform myelin imaging without labeling and at micron-scale resolution with >300-μm penetration depth on living rodents. This was achieved with a prototype [termed deep optical coherence microscopy (deep-OCM)] of a high-numerical aperture infrared full-field optical coherence microscope, which includes aberration correction for the compensation of refractive index mismatch and high-frame-rate interferometric measurements. We were able to measure the density of individual myelinated fibers in the rat cortex over a large volume of gray matter. In the peripheral nervous system, deep-OCM allows, after minor surgery, in situ imaging of single myelinated fibers over a large fraction of the sciatic nerve. This allows quantitative comparison of normal and Krox20 mutant mice, in which myelination in the peripheral nervous system is impaired. This opens promising perspectives for myelin chronic imaging in demyelinating diseases and for minimally invasive medical diagnosis.

  4. A Deep Learning Approach to Digitally Stain Optical Coherence Tomography Images of the Optic Nerve Head.

    Science.gov (United States)

    Devalla, Sripad Krishna; Chin, Khai Sing; Mari, Jean-Martial; Tun, Tin A; Strouthidis, Nicholas G; Aung, Tin; Thiéry, Alexandre H; Girard, Michaël J A

    2018-01-01

    To develop a deep learning approach to digitally stain optical coherence tomography (OCT) images of the optic nerve head (ONH). A horizontal B-scan was acquired through the center of the ONH using OCT (Spectralis) for one eye of each of 100 subjects (40 healthy and 60 glaucoma). All images were enhanced using adaptive compensation. A custom deep learning network was then designed and trained with the compensated images to digitally stain (i.e., highlight) six tissue layers of the ONH. The accuracy of our algorithm was assessed (against manual segmentations) using the dice coefficient, sensitivity, specificity, intersection over union (IU), and accuracy. We studied the effect of compensation, number of training images, and performance comparison between glaucoma and healthy subjects. For images it had not yet assessed, our algorithm was able to digitally stain the retinal nerve fiber layer + prelamina, the RPE, all other retinal layers, the choroid, and the peripapillary sclera and lamina cribrosa. For all tissues, the dice coefficient, sensitivity, specificity, IU, and accuracy (mean) were 0.84 ± 0.03, 0.92 ± 0.03, 0.99 ± 0.00, 0.89 ± 0.03, and 0.94 ± 0.02, respectively. Our algorithm performed significantly better when compensated images were used for training (P < 0.05). Our deep learning algorithm can simultaneously stain the neural and connective tissues of the ONH, offering a framework to automatically measure multiple key structural parameters of the ONH that may be critical to improve glaucoma management.

  5. Convolutional deep belief network with feature encoding for classification of neuroblastoma histological images

    Directory of Open Access Journals (Sweden)

    Soheila Gheisari

    2018-01-01

    Full Text Available Background: Neuroblastoma is the most common extracranial solid tumor in children younger than 5 years old. Optimal management of neuroblastic tumors depends on many factors including histopathological classification. The gold standard for classification of neuroblastoma histological images is visual microscopic assessment. In this study, we propose and evaluate a deep learning approach to classify high-resolution digital images of neuroblastoma histology into five different classes determined by the Shimada classification. Subjects and Methods: We apply a convolutional deep belief network (CDBN) combined with a feature encoding algorithm that automatically classifies digital images of neuroblastoma histology into five different classes. We design a three-layer CDBN to extract high-level features from neuroblastoma histological images and combine it with a feature encoding model to extract features that are highly discriminative in the classification task. The extracted features are classified into five different classes using a support vector machine classifier. Data: We constructed a dataset of 1043 neuroblastoma histological images, acquired with an Aperio scanner from 125 patients, representing different classes of neuroblastoma tumors. Results: A weighted average F-measure of 86.01% was obtained from the selected high-level features, outperforming state-of-the-art methods. Conclusion: The proposed computer-aided classification system, which uses the combination of deep architecture and feature encoding to learn high-level features, is highly effective in the classification of neuroblastoma histological images.
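
    A sketch of the deep-features-then-SVM stage described above, using synthetic feature vectors in place of the paper's CDBN output (a CDBN is not available in common libraries, so any deep feature extractor stands in here; all sizes are illustrative):

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        # Assume `deep_features` holds one feature vector per histology image.
        rng = np.random.default_rng(0)
        deep_features = rng.normal(size=(1043, 512))   # 1043 images, 512-D features
        labels = rng.integers(0, 5, size=1043)         # five Shimada classes

        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
        clf.fit(deep_features[:800], labels[:800])
        print("held-out accuracy:", clf.score(deep_features[800:], labels[800:]))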

  6. Single myelin fiber imaging in living rodents without labeling by deep optical coherence microscopy.

    Science.gov (United States)

    Ben Arous, Juliette; Binding, Jonas; Léger, Jean-François; Casado, Mariano; Topilko, Piotr; Gigan, Sylvain; Boccara, A Claude; Bourdieu, Laurent

    2011-11-01

    Myelin sheath disruption is responsible for multiple neuropathies in the central and peripheral nervous system. Myelin imaging has thus become an important diagnosis tool. However, in vivo imaging has been limited to either low-resolution techniques unable to resolve individual fibers or to low-penetration imaging of single fibers, which cannot provide quantitative information about large volumes of tissue, as required for diagnostic purposes. Here, we perform myelin imaging without labeling and at micron-scale resolution with >300-μm penetration depth on living rodents. This was achieved with a prototype [termed deep optical coherence microscopy (deep-OCM)] of a high-numerical aperture infrared full-field optical coherence microscope, which includes aberration correction for the compensation of refractive index mismatch and high-frame-rate interferometric measurements. We were able to measure the density of individual myelinated fibers in the rat cortex over a large volume of gray matter. In the peripheral nervous system, deep-OCM allows, after minor surgery, in situ imaging of single myelinated fibers over a large fraction of the sciatic nerve. This allows quantitative comparison of normal and Krox20 mutant mice, in which myelination in the peripheral nervous system is impaired. This opens promising perspectives for myelin chronic imaging in demyelinating diseases and for minimally invasive medical diagnosis.

  7. Electrical imaging of deep crustal features of Kutch, India

    Science.gov (United States)

    Sastry, R. S.; Nagarajan, Nandini; Sarma, S. V. S.

    2008-03-01

    A regional Magnetotelluric (MT) study was carried out with 55 MT soundings, distributed along five traverses across the Kutch Mainland Unit (KMU), on the west coast of India, a region characterized by a series of successive uplifts and intervening depressions in the form of half grabens, bounded by master faults. We obtain the deeper electrical structure of the crust beneath Kutch from 2-D modelling of MT data along the 5 traverses, in order to evaluate the geo-electrical signatures, if any, of the known primary tectonic structures in this region. The results show that the deeper electrical structure in the Kutch region presents a mosaic of highly resistive crustal blocks separated by deep-rooted conductive features. Two such crustal conductive features spatially correlate with the known tectonic features, viz., the Kutch Mainland Fault (KMF) and the Katrol Hill Fault (KHF). A striking aspect of the geo-electrical sections is an additional, well-defined conductive feature, running between Jakhau and Mundra, located at the southern end of each of the five MT traverses and interpreted to be the electrical signature of yet another hidden fault at the southern margin of the KMU. This new feature is named the Jakhau-Mundra Fault (JMF). It is inferred that the presence of the JMF, together with the Kathiawar Fault (NKF) further south, located at the northern boundary of the Saurashtra Horst, would enhance the possibility of a thick sedimentary column occurring in the Gulf of Kutch. The region between the newly delineated fault (JMF) and the Kathiawar Fault (NKF) could thus be significant for hydrocarbon exploration.

  8. High resolution depth reconstruction from monocular images and sparse point clouds using deep convolutional neural network

    Science.gov (United States)

    Dimitrievski, Martin; Goossens, Bart; Veelaert, Peter; Philips, Wilfried

    2017-09-01

    Understanding the 3D structure of the environment is advantageous for many tasks in the field of robotics and autonomous vehicles. From the robot's point of view, 3D perception is often formulated as a depth image reconstruction problem. In the literature, dense depth images are often recovered deterministically from stereo image disparities. Other systems use an expensive LiDAR sensor to produce accurate, but semi-sparse depth images. With the advent of deep learning there have also been attempts to estimate depth using only monocular images. In this paper we combine the best of both worlds, focusing on a combination of monocular images and low cost LiDAR point clouds. We explore the idea that very sparse depth information accurately captures the global scene structure while variations in image patches can be used to reconstruct local depth to a high resolution. The main contribution of this paper is a supervised learning depth reconstruction system based on a deep convolutional neural network. The network is trained on RGB image patches reinforced with sparse depth information and the output is a depth estimate for each pixel. Using image and point cloud data from the KITTI vision dataset we are able to learn a correspondence between local RGB information and local depth, while at the same time preserving the global scene structure. Our results are evaluated on sequences from the KITTI dataset and our own recordings using a low cost camera and LiDAR setup.
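
    A minimal sketch of the input strategy described above: concatenate an RGB image with a sparse depth map (zeros where no LiDAR return) into a four-channel input and regress a dense depth map. The layer sizes are illustrative, not the paper's network:

        import torch
        import torch.nn as nn

        class DepthCompletionNet(nn.Module):
            def __init__(self):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 1, 3, padding=1),  # one depth value per pixel
                )

            def forward(self, rgb, sparse_depth):
                return self.net(torch.cat([rgb, sparse_depth], dim=1))

        model = DepthCompletionNet()
        rgb = torch.randn(1, 3, 96, 320)      # image patch
        sparse = torch.zeros(1, 1, 96, 320)   # mostly empty LiDAR projection
        sparse[:, :, ::7, ::5] = torch.rand(1, 1, 14, 64)  # a few depth samples
        dense = model(rgb, sparse)
        print(dense.shape)                    # torch.Size([1, 1, 96, 320])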

  9. Deep Multimodal Distance Metric Learning Using Click Constraints for Image Ranking.

    Science.gov (United States)

    Yu, Jun; Yang, Xiaokang; Gao, Fei; Tao, Dacheng

    2017-12-01

    How do we retrieve images accurately? Also, how do we rank a group of images precisely and efficiently for specific queries? These problems are critical for researchers and engineers developing a novel image search engine. First, it is important to obtain an appropriate description that effectively represents the images. In this paper, multimodal features are considered for describing images. The images' unique properties are reflected by visual features, which are correlated with each other. However, semantic gaps always exist between an image's visual features and its semantics. Therefore, we utilize the click feature to reduce the semantic gap. The second key issue is learning an appropriate distance metric to combine these multimodal features. This paper develops a novel deep multimodal distance metric learning (Deep-MDML) method. A structured ranking model is adopted to utilize both visual and click features in distance metric learning (DML). Specifically, images and their related ranking results are first collected to form the training set. Multimodal features, including click and visual features, are collected with these images. Next, a group of autoencoders is applied to obtain an initial distance metric in different visual spaces, and an MDML method is used to assign optimal weights for different modalities. We then conduct alternating optimization to train the ranking model, which is used for the ranking of new queries with click features. Compared with existing image ranking methods, the proposed method adopts a new ranking model to use multimodal features, including click features and visual features, in DML. We conducted experiments to analyze the proposed Deep-MDML on two benchmark data sets, and the results validate the effectiveness of the method.
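
    A toy sketch of distance metric learning over fused multimodal features: learn a linear embedding so that relevant pairs end up closer than irrelevant ones under a margin-based contrastive loss. The concatenation of visual and click features and all sizes are assumptions of this sketch, not the paper's exact ranking model:

        import torch
        import torch.nn as nn

        dim_in, dim_out = 128, 32
        W = nn.Linear(dim_in, dim_out, bias=False)   # the learned metric
        opt = torch.optim.SGD(W.parameters(), lr=0.1)

        x1, x2 = torch.randn(64, dim_in), torch.randn(64, dim_in)  # feature pairs
        y = torch.randint(0, 2, (64,)).float()  # 1 = relevant pair, 0 = irrelevant

        d = (W(x1) - W(x2)).pow(2).sum(dim=1).sqrt()  # embedded distances
        margin = 1.0
        loss = (y * d.pow(2) + (1 - y) * (margin - d).clamp(min=0).pow(2)).mean()
        loss.backward()
        opt.step()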

  10. Neuronal pathology in deep grey matter structures: a multimodal imaging analysis combining PET and MRI

    Energy Technology Data Exchange (ETDEWEB)

    Bosque-Freeman, L.; Leroy, C.; Galanaud, D.; Sureau, F.; Assouad, R.; Tourbah, A.; Papeix, C.; Comtat, C.; Trebossen, R.; Lubetzki, C.; Delforge, J.; Bottlaender, M.; Stankoff, B. [Serv. Hosp. Frederic Joliot, Orsay (France)

    2009-07-01

    Objective: To assess neuronal damage in deep gray matter structures by positron emission tomography (PET) using [11C]-flumazenil (FMZ), a specific central benzodiazepine receptor antagonist, and [18F]-fluorodeoxyglucose (FDG), which reflects neuronal metabolism, and to compare results obtained by PET with those from multimodal magnetic resonance imaging (MRI). Background: It is now accepted that neuronal injury plays a crucial role in the occurrence and progression of neurological disability in multiple sclerosis (MS). To date, available MRI techniques do not specifically assess neuronal damage, but early abnormalities, such as iron deposition or atrophy, have been described in deep gray matter structures. Whether those MRI modifications correspond to neuronal damage remains to be further investigated. Materials and methods: Nine healthy volunteers were compared to 10 progressive and 9 relapsing remitting (RR) MS patients. Each subject underwent two PET examinations, with [11C]-FMZ and [18F]-FDG, on a high resolution research tomograph dedicated to brain imaging (Siemens Medical Solution, spatial resolution of 2.5 mm). Deep gray matter regions were manually segmented on T1-weighted MR images with the mutual information algorithm (www.brainvisa.info), and co-registered with PET images. A multimodal MRI protocol including T1 pre and post gadolinium, T2-proton density sequences, magnetization transfer, diffusion tensor, and proton spectroscopy was also performed for each subject. Results: On PET with [11C]-FMZ, there was a pronounced decrease in receptor density for RR patients in all deep gray matter structures investigated, whereas the density was unchanged or even increased in the same regions for progressive patients. Whether the different patterns between RR and progressive patients reflect distinct pathogenic mechanisms is currently being investigated by comparing PET and multimodal MRI results. Conclusion: Combination of PET and multimodal MR imaging

  11. Atomic force microscopy deep trench and sidewall imaging with an optical fiber probe

    Energy Technology Data Exchange (ETDEWEB)

    Xie, Hui, E-mail: xiehui@hit.edu.cn; Hussain, Danish; Yang, Feng [The State Key Laboratory of Robotics and Systems, Harbin Institute of Technology, 2 Yikuang, 150080 Harbin (China); Sun, Lining [The State Key Laboratory of Robotics and Systems, Harbin Institute of Technology, 2 Yikuang, 150080 Harbin (China); Robotics and Microsystems Center, Soochow University, 215021 Suzhou (China)

    2014-12-15

    We report a method to measure critical dimensions of micro- and nanostructures using the atomic force microscope (AFM) with an optical fiber probe (OFP). This method is capable of scanning narrow and deep trenches thanks to the long and thin OFP tip, as well as of imaging steep sidewalls, with unique profiling possibilities obtained by laterally tilting the OFP without any modification of the optical lever. A switch control scheme is developed to measure the sidewall angle by flexibly transferring feedback control between the Z- and Y-axes for serial scans of the horizontal surface (raster scan in the XY-plane) and the sidewall (raster scan in the YZ-plane). In experiments, a deep trench with tapered walls (243.5 μm deep) and a microhole (about 14.9 μm deep) have been imaged with the orthogonally aligned OFP, and a silicon sidewall (fabricated by deep reactive ion etching) has been characterized with the tilted OFP. Moreover, the sidewall angle of TGZ3 (an AFM calibration grating) was accurately measured using the switchable scan method.

  12. Deep learning based classification for head and neck cancer detection with hyperspectral imaging in an animal model

    Science.gov (United States)

    Ma, Ling; Lu, Guolan; Wang, Dongsheng; Wang, Xu; Chen, Zhuo Georgia; Muller, Susan; Chen, Amy; Fei, Baowei

    2017-03-01

    Hyperspectral imaging (HSI) is an emerging imaging modality that can provide a noninvasive tool for cancer detection and image-guided surgery. HSI acquires high-resolution images at hundreds of spectral bands, providing rich data for differentiating different types of tissue. We propose a deep learning based method for the detection of head and neck cancer with hyperspectral images. Since a deep learning algorithm can learn features hierarchically, the learned features are more discriminative and concise than handcrafted features. In this study, we adopt convolutional neural networks (CNN) to learn deep features of pixels for classifying each pixel into tumor or normal tissue. We evaluated our proposed classification method on a dataset containing hyperspectral images from 12 tumor-bearing mice. Experimental results show that our method achieved an average accuracy of 91.36%. This preliminary study demonstrates that our deep learning method can be applied to hyperspectral images for detecting head and neck tumors in animal models.
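
    A sketch of per-pixel classification of hyperspectral data: each pixel is a spectrum (here 91 bands, an assumed number) and a small 1-D CNN labels it tumor vs. normal. The real network architecture and band count come from the paper:

        import torch
        import torch.nn as nn

        class SpectralPixelCNN(nn.Module):
            def __init__(self, bands=91, num_classes=2):
                super().__init__()
                self.conv = nn.Sequential(
                    nn.Conv1d(1, 16, 5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
                    nn.Conv1d(16, 32, 5, padding=2), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
                )
                self.fc = nn.Linear(32, num_classes)

            def forward(self, spectra):              # spectra: (batch, bands)
                h = self.conv(spectra.unsqueeze(1))  # -> (batch, 32, 1)
                return self.fc(h.squeeze(-1))

        logits = SpectralPixelCNN()(torch.randn(16, 91))
        print(logits.shape)  # torch.Size([16, 2])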

  13. Technical Note: Deep learning based MRAC using rapid ultra-short echo time imaging.

    Science.gov (United States)

    Jang, Hyungseok; Liu, Fang; Zhao, Gengyan; Bradshaw, Tyler; McMillan, Alan B

    2018-05-15

    In this study, we explore the feasibility of a novel framework for MR-based attenuation correction for PET/MR imaging based on deep learning via convolutional neural networks, which enables fully automated and robust estimation of a pseudo CT image based on ultrashort echo time (UTE), fat, and water images obtained by a rapid MR acquisition. MR images for MRAC are acquired using dual echo ramped hybrid encoding (dRHE), where both UTE and out-of-phase echo images are obtained within a short single acquisition (35 sec). Tissue labeling of air, soft tissue, and bone in the UTE image is accomplished via a deep learning network that was pre-trained with T1-weighted MR images. UTE images are used as input to the network, which was trained using labels derived from co-registered CT images. The tissue labels estimated by deep learning are refined by a conditional random field based correction. The soft tissue labels are further separated into fat and water components using the two-point Dixon method. The estimated bone, air, fat, and water images are then assigned appropriate Hounsfield units, resulting in a pseudo CT image for PET attenuation correction. To evaluate the proposed MRAC method, PET/MR imaging of the head was performed on 8 human subjects, where Dice similarity coefficients of the estimated tissue labels and relative PET errors were evaluated through comparison to a registered CT image. Dice coefficients for air (within the head), soft tissue, and bone labels were 0.76±0.03, 0.96±0.006, and 0.88±0.01. In PET quantification, the proposed MRAC method produced relative PET errors less than 1% within most brain regions. The proposed MRAC method utilizing deep learning with transfer learning and an efficient dRHE acquisition enables reliable PET quantification with accurate and rapid pseudo CT generation. This article is protected by copyright. All rights reserved.
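
    A sketch of the final step described above: map per-voxel tissue labels to nominal Hounsfield units to form a pseudo CT. The HU values here are typical textbook numbers assumed for illustration; the paper derives its own assignments:

        import numpy as np

        AIR, FAT, WATER_SOFT, BONE = 0, 1, 2, 3
        HU_BY_LABEL = {AIR: -1000, FAT: -90, WATER_SOFT: 40, BONE: 700}  # assumed values

        labels = np.random.randint(0, 4, size=(4, 4, 4))    # toy label volume
        pseudo_ct = np.vectorize(HU_BY_LABEL.get)(labels)   # map labels to HU
        print(pseudo_ct.min(), pseudo_ct.max())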

  14. Cell segmentation in histopathological images with deep learning algorithms by utilizing spatial relationships.

    Science.gov (United States)

    Hatipoglu, Nuh; Bilgin, Gokhan

    2017-10-01

    In many computerized methods for cell detection, segmentation, and classification in digital histopathology that have recently emerged, the task of cell segmentation remains a chief problem for image processing in designing computer-aided diagnosis (CAD) systems. In research and diagnostic studies on cancer, pathologists can use CAD systems as second readers to analyze high-resolution histopathological images. Since cell detection and segmentation are critical for cancer grade assessments, cellular and extracellular structures should primarily be extracted from histopathological images. In response, we sought to identify a useful cell segmentation approach for histopathological images that uses not only prominent deep learning algorithms (i.e., convolutional neural networks, stacked autoencoders, and deep belief networks), but also spatial relationships, information that is critical for achieving better cell segmentation results. To that end, we collected cellular and extracellular samples from histopathological images by windowing in small patches of various sizes. In our experiments, the segmentation accuracies of the methods used improved as the window sizes increased, due to the addition of local spatial and contextual information. When we compared the effects of training sample size and the influence of window size, the results revealed that the deep learning algorithms, especially convolutional neural networks and partly stacked autoencoders, performed better than conventional methods in cell segmentation.

  15. Boosted Jet Tagging with Jet-Images and Deep Neural Networks

    International Nuclear Information System (INIS)

    Kagan, Michael; Oliveira, Luke de; Mackey, Lester; Nachman, Benjamin; Schwartzman, Ariel

    2016-01-01

    Building on the jet-image based representation of high energy jets, we develop computer vision based techniques for jet tagging through the use of deep neural networks. Jet-images enabled the connection between jet substructure and tagging with the fields of computer vision and image processing. We show how applying such techniques using deep neural networks can improve the performance to identify highly boosted W bosons with respect to state-of-the-art substructure methods. In addition, we explore new ways to extract and visualize the discriminating features of different classes of jets, adding a new capability to understand the physics within jets and to design more powerful jet tagging methods

  16. Deep convolutional neural networks for building extraction from orthoimages and dense image matching point clouds

    Science.gov (United States)

    Maltezos, Evangelos; Doulamis, Nikolaos; Doulamis, Anastasios; Ioannidis, Charalabos

    2017-10-01

    Automatic extraction of buildings from remote sensing data is an attractive research topic, useful for several applications, such as cadastre and urban planning, yet it remains challenging. This is mainly due to the inherent artifacts of the data used and the differences in viewpoint, surrounding environment, and complex shape and size of the buildings. This paper introduces an efficient deep learning framework based on convolutional neural networks (CNNs) for building extraction from orthoimages. In contrast to conventional deep approaches in which the raw image data are fed as input to the deep neural network, in this paper the height information is exploited as an additional feature, derived from the application of a dense image matching algorithm. As test sites, several complex urban regions with various types of buildings, pixel resolutions, and types of data are used, located in Vaihingen in Germany and in Perissa in Greece. Our method is evaluated using the rates of completeness, correctness, and quality and compared with conventional and other "shallow" learning paradigms such as support vector machines. Experimental results indicate that a combination of raw image data with height information, fed as input to a deep CNN model, shows potential for building detection in terms of robustness, flexibility, and efficiency.
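
    For reference, the completeness, correctness, and quality rates mentioned above are standard building-extraction metrics computed from true positives (TP), false positives (FP), and false negatives (FN); a small helper with illustrative counts:

        def extraction_rates(tp: int, fp: int, fn: int):
            completeness = tp / (tp + fn)   # how much of the reference was found
            correctness = tp / (tp + fp)    # how much of the output is right
            quality = tp / (tp + fp + fn)   # combined measure
            return completeness, correctness, quality

        print(extraction_rates(tp=180, fp=20, fn=30))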

  17. Making beautiful deep-sky images astrophotography with affordable equipment and software

    CERN Document Server

    Parker, Greg

    2017-01-01

    In this updated version of his classic on deep-sky imaging, astrophotographer Greg Parker describes how the latest technology can help amateur astronomers process their own beautiful images. Whether you are taking your own images from a backyard system or processing data from space telescopes, this book shows you how to enhance the visuals in the "electronic darkroom" for maximum beauty and impact. The wealth of options in the astrophotography realm has exploded in the recent past, and Parker proves an able guide for the interested imager to improve his or her comfort level against this exciting new technological backdrop. From addressing the latest DSLR equipment to updating the usage of Hyperstar imaging telescopes and explaining the utility of parallel imaging arrays, this edition brings the book fully up-to-date, and includes clear tutorials, helpful references, and gorgeous color astrophotography by one of the experts in the field.

  18. Representation learning with deep extreme learning machines for efficient image set classification

    KAUST Repository

    Uzair, Muhammad

    2016-12-09

    Efficient and accurate representation of a collection of images that belong to the same class is a major research challenge for practical image set classification. Existing methods either make prior assumptions about the data structure, or perform heavy computations to learn structure from the data itself. In this paper, we propose an efficient image set representation that does not make any prior assumptions about the structure of the underlying data. We learn the nonlinear structure of image sets with deep extreme learning machines that are very efficient and generalize well even on a limited number of training samples. Extensive experiments on a broad range of public datasets for image set classification show that the proposed algorithm consistently outperforms state-of-the-art image set classification methods both in terms of speed and accuracy.

  19. Representation learning with deep extreme learning machines for efficient image set classification

    KAUST Repository

    Uzair, Muhammad; Shafait, Faisal; Ghanem, Bernard; Mian, Ajmal

    2016-01-01

    Efficient and accurate representation of a collection of images that belong to the same class is a major research challenge for practical image set classification. Existing methods either make prior assumptions about the data structure, or perform heavy computations to learn structure from the data itself. In this paper, we propose an efficient image set representation that does not make any prior assumptions about the structure of the underlying data. We learn the nonlinear structure of image sets with deep extreme learning machines that are very efficient and generalize well even on a limited number of training samples. Extensive experiments on a broad range of public datasets for image set classification show that the proposed algorithm consistently outperforms state-of-the-art image set classification methods both in terms of speed and accuracy.
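
    A minimal sketch of the extreme learning machine building block named above: a fixed random hidden layer followed by a least-squares-trained output layer. The sizes and ridge term are illustrative; the paper stacks such layers into a deep architecture:

        import numpy as np

        rng = np.random.default_rng(42)
        X = rng.normal(size=(200, 64))                # 200 samples, 64-D features
        y = np.eye(5)[rng.integers(0, 5, size=200)]   # one-hot labels, 5 classes

        W_in = rng.normal(size=(64, 256))             # random, never trained
        H = np.tanh(X @ W_in)                         # hidden activations
        # Output weights via regularized least squares (the only trained part):
        lam = 1e-2
        W_out = np.linalg.solve(H.T @ H + lam * np.eye(256), H.T @ y)

        pred = np.argmax(np.tanh(X @ W_in) @ W_out, axis=1)
        print("train accuracy:", (pred == y.argmax(axis=1)).mean())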

  20. Big-deep-smart data in imaging for guiding materials design

    Science.gov (United States)

    Kalinin, Sergei V.; Sumpter, Bobby G.; Archibald, Richard K.

    2015-10-01

    Harnessing big data, deep data, and smart data from state-of-the-art imaging might accelerate the design and realization of advanced functional materials. Here we discuss new opportunities in materials design enabled by the availability of big data in imaging and data analytics approaches, including their limitations, in material systems of practical interest. We specifically focus on how these tools might help realize new discoveries in a timely manner. Such methodologies are particularly appropriate to explore in light of continued improvements in atomistic imaging, modelling and data analytics methods.

  1. Fine-grained leukocyte classification with deep residual learning for microscopic images.

    Science.gov (United States)

    Qin, Feiwei; Gao, Nannan; Peng, Yong; Wu, Zizhao; Shen, Shuying; Grudtsin, Artur

    2018-08-01

    Leukocyte classification and cytometry have wide applications in the medical domain, and previous research has usually exploited machine learning techniques to classify leukocytes automatically. However, such methods were constrained by the machine learning techniques of the time: extracting distinctive features from raw microscopic images is difficult, and the widely used SVM classifier has relatively few parameters to tune, so these methods cannot efficiently handle fine-grained classification cases in which the white blood cells fall into up to 40 categories. Based on deep learning theory, a systematic study of finer leukocyte classification is conducted in this paper. A deep residual neural network based leukocyte classifier is constructed first, which can imitate the domain expert's cell recognition process and extract salient features robustly and automatically. The deep neural network classifier's topology is then adjusted according to prior knowledge of the white blood cell test. After that, a microscopic image dataset with almost one hundred thousand labeled leukocytes belonging to 40 categories is built, and combined training strategies are adopted so that the designed classifier generalizes well. The proposed deep residual neural network based classifier was tested on this microscopic image dataset with 40 leukocyte categories. It achieves top-1 accuracy of 77.80% and top-5 accuracy of 98.75% during the training procedure. The average accuracy on the test set is nearly 76.84%. This paper presents a fine-grained leukocyte classification method for microscopic images, based on deep residual learning theory and medical domain knowledge. Experimental results validate the feasibility and effectiveness of our approach. Extended experiments support that the fine-grained leukocyte classifier could be used in real medical applications, assisting doctors in diagnosing diseases and significantly reducing human labor. Copyright © 2018 Elsevier B.V. All rights reserved.
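
    A sketch of adapting a residual network to 40 leukocyte categories, with torchvision's ResNet-18 standing in for the paper's custom residual topology, plus the top-5 metric reported above:

        import torch
        import torchvision

        model = torchvision.models.resnet18(weights=None)  # pass pretrained weights for transfer learning
        model.fc = torch.nn.Linear(model.fc.in_features, 40)  # 40 leukocyte classes

        images = torch.randn(4, 3, 224, 224)   # a dummy batch of cell images
        logits = model(images)

        # Top-5 accuracy: prediction counts as correct if the true label is
        # among the five highest-scoring classes.
        labels = torch.randint(0, 40, (4,))
        top5 = logits.topk(5, dim=1).indices
        print((top5 == labels.unsqueeze(1)).any(dim=1).float().mean().item())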

  2. Deep machine learning based Image classification in hard disk drive manufacturing (Conference Presentation)

    Science.gov (United States)

    Rana, Narender; Chien, Chester

    2018-03-01

    A key sensor element in a Hard Disk Drive (HDD) is the read-write head device. The device has a complex 3D shape and its fabrication requires over a thousand process steps, many of them various types of image inspection and critical dimension (CD) metrology steps. In order to have a high yield of devices across a wafer, very tight inspection and metrology specifications are implemented. Many images are collected on a wafer and inspected for various types of defects, and in CD metrology the quality of the image impacts the CD measurements. Metrology noise needs to be minimized in CD metrology to get a better estimate of the process-related variations for implementing robust process controls. Although specialized tools are available for defect inspection and review, allowing classification and statistics, such advanced tools are not always available, and for this and other reasons images often need to be inspected manually. SEM image inspection and CD-SEM metrology tools are separate tools, differing in software and purpose. There have been cases where a significant number of CD-SEM images are blurred or have some artefact, and there is a need for image inspection along with the CD measurement. The tool may not report a practical metric highlighting the quality of the image, and not filtering CDs from these blurred images will add metrology noise to the CD measurement. An image classifier can be helpful here for filtering such data. This paper presents the use of artificial intelligence in classifying SEM images. Deep machine learning is used to train a neural network, which is then used to classify new images as blurred or not blurred. Figure 1 shows the image blur artefact and a contingency table of classification results from the trained deep neural network. Prediction accuracy of 94.9% was achieved in the first model. The paper covers other such applications of the deep neural

  3. On the kinematic separation of field and cluster stars across the bulge globular NGC 6528

    Energy Technology Data Exchange (ETDEWEB)

    Lagioia, E. P.; Bono, G.; Buonanno, R. [Dipartimento di Fisica, Università degli Studi di Roma-Tor Vergata, via della Ricerca Scientifica 1, I-00133 Roma (Italy); Milone, A. P. [Research School of Astronomy and Astrophysics, The Australian National University, Cotter Road, Weston, ACT 2611 (Australia); Stetson, P. B. [Dominion Astrophysical Observatory, Herzberg Institute of Astrophysics, National Research Council, 5071 West Saanich Road, Victoria, BC V9E 2E7 (Canada); Prada Moroni, P. G. [Dipartimento di Fisica, Università di Pisa, I-56127 Pisa (Italy); Dall'Ora, M. [INAF-Osservatorio Astronomico di Capodimonte, Salita Moiariello 16, I-80131 Napoli (Italy); Aparicio, A.; Monelli, M. [Instituto de Astrofísica de Canarias, E-38200 La Laguna, Tenerife, Canary Islands (Spain); Calamida, A.; Ferraro, I.; Iannicola, G. [INAF-Osservatorio Astronomico di Roma, Via Frascati 33, I-00044 Monte Porzio Catone (Italy); Gilmozzi, R. [European Southern Observatory, Karl-Schwarzschild-Straße 2, D-85748 Garching (Germany); Matsunaga, N. [Kiso Observatory, Institute of Astronomy, School of Science, The University of Tokyo, 10762-30 Mitake, Kiso-machi, Kiso-gun, Nagano 397-0101 (Japan); Walker, A., E-mail: eplagioia@roma2.infn.it [Cerro Tololo Inter-American Observatory, National Optical Astronomy Observatory, Casilla 603, La Serena (Chile)

    2014-02-10

    We present deep and precise multi-band photometry of the Galactic bulge globular cluster NGC 6528. The current data set includes optical and near-infrared images collected with ACS/WFC, WFC3/UVIS, and WFC3/IR on board the Hubble Space Telescope. The images cover a time interval of almost 10 yr, and we have been able to carry out a proper-motion separation between cluster and field stars. We performed a detailed comparison in the m_F814W versus m_F606W – m_F814W color-magnitude diagram with two empirical calibrators observed in the same bands. We found that NGC 6528 is coeval with and more metal-rich than 47 Tuc. Moreover, it appears older and more metal-poor than the super-metal-rich open cluster NGC 6791. The current evidence is supported by several diagnostics (red horizontal branch, red giant branch bump, shape of the sub-giant branch, slope of the main sequence) that are minimally affected by uncertainties in reddening and distance. We fit the optical observations with theoretical isochrones based on a scaled-solar chemical mixture and found an age of 11 ± 1 Gyr and an iron abundance slightly above solar ([Fe/H] = +0.20). The iron abundance and the old cluster age further support the recent spectroscopic findings suggesting a rapid chemical enrichment of the Galactic bulge.

  4. Performance evaluation of 2D and 3D deep learning approaches for automatic segmentation of multiple organs on CT images

    Science.gov (United States)

    Zhou, Xiangrong; Yamada, Kazuma; Kojima, Takuya; Takayama, Ryosuke; Wang, Song; Zhou, Xinxin; Hara, Takeshi; Fujita, Hiroshi

    2018-02-01

    The purpose of this study is to evaluate and compare the performance of modern deep learning techniques for automatically recognizing and segmenting multiple organ regions on 3D CT images. CT image segmentation is one of the important tasks in medical image analysis and is still very challenging. Deep learning approaches have demonstrated the capability of scene recognition and semantic segmentation on natural images and have been used to address segmentation problems in medical images. Although several works have shown promising results for CT image segmentation using deep learning approaches, there has been no comprehensive evaluation of deep learning performance in segmenting multiple organs on different portions of CT scans. In this paper, we evaluated and compared the segmentation performance of two different deep learning approaches that used 2D and 3D deep convolutional neural networks (CNN), with and without a pre-processing step. A conventional approach that represents the state-of-the-art performance of CT image segmentation without deep learning was also used for comparison. A dataset that includes 240 CT images scanned on different portions of human bodies was used for performance evaluation. A maximum of 17 types of organ regions in each CT scan were segmented automatically and compared to human annotations using the ratio of intersection over union (IU) as the criterion. The experimental results demonstrated that the IUs of the segmentation results had mean values of 79% and 67%, averaged over the 17 organ types segmented by the 3D and 2D deep CNNs, respectively. All the results of the deep learning approaches showed better accuracy and robustness than the conventional segmentation method that used probabilistic atlas and graph-cut methods. The effectiveness and usefulness of deep learning approaches were demonstrated for solving the multiple-organ segmentation problem on 3D CT images.
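
    For reference, a minimal implementation of the intersection-over-union criterion used for evaluation above, for one binary organ mask (the empty-vs-empty convention is an assumption of this sketch):

        import numpy as np

        def iou(pred: np.ndarray, truth: np.ndarray) -> float:
            inter = np.logical_and(pred, truth).sum()
            union = np.logical_or(pred, truth).sum()
            return inter / union if union else 1.0  # empty vs. empty counts as perfect

        pred = np.zeros((8, 8, 8), bool); pred[2:6, 2:6, 2:6] = True
        truth = np.zeros((8, 8, 8), bool); truth[3:7, 3:7, 3:7] = True
        print(round(iou(pred, truth), 3))  # 27 / 101 ≈ 0.267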

  5. Deep brain two-photon NIR fluorescence imaging for study of Alzheimer's disease

    Science.gov (United States)

    Chen, Congping; Liang, Zhuoyi; Zhou, Biao; Ip, Nancy Y.; Qu, Jianan Y.

    2018-02-01

    Amyloid depositions in the brain represent the characteristic hallmarks of Alzheimer's disease (AD) pathology. The abnormal accumulation of extracellular amyloid-beta (Aβ) and the resulting toxic amyloid plaques are considered to be responsible for the clinical deficits including cognitive decline and memory loss. In vivo two-photon fluorescence imaging of amyloid plaques in a live AD mouse model through a chronic imaging window (thinned skull or craniotomy) provides a means to greatly facilitate the study of the pathological mechanism of AD owing to its high spatial resolution and long-term continuous monitoring. However, the imaging depth for amyloid plaques is largely limited to upper cortical layers due to the short-wavelength fluorescence emission of commonly used amyloid probes. In this work, we report that CRANAD-3, a near-infrared (NIR) probe for amyloid species with excitation wavelength at 900 nm and emission wavelength around 650 nm, has great advantages over conventionally used probes and is well suited for two-photon deep imaging of amyloid plaques in the AD mouse brain. Compared with the commonly used MeO-X04 probe, the imaging depth of CRANAD-3 is largely extended for an open-skull cranial window. Furthermore, by using two-photon excited fluorescence spectroscopic imaging, we characterized the intrinsic fluorescence of the "aging pigment" lipofuscin in vivo, which has spectra distinct from those of CRANAD-3-labeled plaques. This study reveals the unique potential of NIR probes for in vivo, high-resolution, deep imaging of brain amyloid in Alzheimer's disease.

  6. AggNet: Deep Learning From Crowds for Mitosis Detection in Breast Cancer Histology Images.

    Science.gov (United States)

    Albarqouni, Shadi; Baur, Christoph; Achilles, Felix; Belagiannis, Vasileios; Demirci, Stefanie; Navab, Nassir

    2016-05-01

    The lack of publicly available ground-truth data has been identified as the major challenge for transferring recent developments in deep learning to the biomedical imaging domain. Though crowdsourcing has enabled annotation of large-scale databases for real-world images, its application for biomedical purposes requires a deeper understanding, and hence a more precise definition, of the actual annotation task. The fact that expert tasks are being outsourced to non-expert users may lead to noisy annotations introducing disagreement between users. Despite being a valuable resource for learning annotation models from crowdsourcing, conventional machine-learning methods may have difficulties dealing with noisy annotations during training. In this manuscript, we present a new concept for learning from crowds that handles data aggregation directly as part of the learning process of the convolutional neural network (CNN), via an additional crowdsourcing layer (AggNet). In addition, we present an experimental study on learning from crowds designed to answer the following questions. 1) Can a deep CNN be trained with data collected from crowdsourcing? 2) How can the CNN be adapted to train on multiple types of annotation datasets (ground truth and crowd-based)? 3) How does the choice of annotation and aggregation affect the accuracy? Our experimental setup involved Annot8, a self-implemented web platform based on the Crowdflower API realizing image annotation tasks for a publicly available biomedical image database. Our results give valuable insights into the functionality of deep CNN learning from crowd annotations and prove the necessity of integrating data aggregation into the learning process.

  7. CMU DeepLens: deep learning for automatic image-based galaxy-galaxy strong lens finding

    Science.gov (United States)

    Lanusse, François; Ma, Quanbin; Li, Nan; Collett, Thomas E.; Li, Chun-Liang; Ravanbakhsh, Siamak; Mandelbaum, Rachel; Póczos, Barnabás

    2018-01-01

    Galaxy-scale strong gravitational lensing can not only provide a valuable probe of the dark matter distribution of massive galaxies, but also provide valuable cosmological constraints, either by studying the population of strong lenses or by measuring time delays in lensed quasars. Due to the rarity of galaxy-scale strongly lensed systems, fast and reliable automated lens finding methods will be essential in the era of large surveys such as Large Synoptic Survey Telescope, Euclid and Wide-Field Infrared Survey Telescope. To tackle this challenge, we introduce CMU DeepLens, a new fully automated galaxy-galaxy lens finding method based on deep learning. This supervised machine learning approach does not require any tuning after the training step which only requires realistic image simulations of strongly lensed systems. We train and validate our model on a set of 20 000 LSST-like mock observations including a range of lensed systems of various sizes and signal-to-noise ratios (S/N). We find on our simulated data set that for a rejection rate of non-lenses of 99 per cent, a completeness of 90 per cent can be achieved for lenses with Einstein radii larger than 1.4 arcsec and S/N larger than 20 on individual g-band LSST exposures. Finally, we emphasize the importance of realistically complex simulations for training such machine learning methods by demonstrating that the performance of models of significantly different complexities cannot be distinguished on simpler simulations. We make our code publicly available at https://github.com/McWilliamsCenter/CMUDeepLens.

  8. Detecting deep venous thrombosis with limited flip angle gradient refocused MR imaging

    International Nuclear Information System (INIS)

    Spritzer, C.E.; Sussman, S.K.; Herfkens, R.J.; Blinder, R.A.; Saeed, M.; Vogler, J.A.; Baker, M.E.

    1987-01-01

    This study was undertaken to determine if limited flip angle gradient refocused MR pulse sequences (GRASS) could be used to accurately diagnose deep venous thrombosis (DVT). Sixteen patients (17 extremities) with possible DVT were prospectively evaluated with MR imaging and venography. Typical imaging parameters included a 16-msec echo time, a 33-msec repetition time, a 30° flip angle, and 2 NEX. MR imaging correctly disclosed the presence (nine cases) or absence (eight cases) of DVT. In one study, GRASS images overestimated the extent of clot due to slow venous blood flow. Subsequently the flip angle was varied to distinguish between venous thrombus and slow flow. When this technique was used, no false-positive studies occurred in the remaining patients. MR gradient refocused imaging appears to be an accurate aid for the diagnosis of DVT

  9. NutriNet: A Deep Learning Food and Drink Image Recognition System for Dietary Assessment.

    Science.gov (United States)

    Mezgec, Simon; Koroušić Seljak, Barbara

    2017-06-27

    Automatic food image recognition systems are alleviating the process of food-intake estimation and dietary assessment. However, due to the nature of food images, their recognition is a particularly challenging task, which is why traditional approaches in the field have achieved a low classification accuracy. Deep neural networks have outperformed such solutions, and we present a novel approach to the problem of food and drink image detection and recognition that uses a newly-defined deep convolutional neural network architecture, called NutriNet. This architecture was tuned on a recognition dataset containing 225,953 512 × 512 pixel images of 520 different food and drink items from a broad spectrum of food groups, on which we achieved a classification accuracy of 86.72%, along with an accuracy of 94.47% on a detection dataset containing 130,517 images. We also performed a real-world test on a dataset of self-acquired images, combined with images from Parkinson's disease patients, all taken using a smartphone camera, achieving a top-five accuracy of 55%, which is an encouraging result for real-world images. Additionally, we tested NutriNet on the University of Milano-Bicocca 2016 (UNIMIB2016) food image dataset, on which we improved upon the provided baseline recognition result. An online training component was implemented to continually fine-tune the food and drink recognition model on new images. The model is being used in practice as part of a mobile app for the dietary assessment of Parkinson's disease patients.

  10. Two-Stage Approach to Image Classification by Deep Neural Networks

    Directory of Open Access Journals (Sweden)

    Ososkov Gennady

    2018-01-01

    Full Text Available The paper demonstrates the advantages of deep learning networks over ordinary neural networks through their comparative application to image classification. An autoassociative neural network is used as a standalone autoencoder for prior extraction of the most informative features of the input data for the neural networks that are subsequently compared as classifiers. The main effort in working with deep learning networks goes into the painstaking work of optimizing the structures of those networks and their components, such as activation functions and weights, as well as the procedures for minimizing their loss function, in order to improve performance and speed up learning. It is also shown that deep autoencoders develop a remarkable ability to denoise images after special training. Convolutional neural networks are also used to solve a topical problem of protein genetics using the example of durum wheat classification. The results of our comparative study demonstrate the clear advantage of the deep networks, as well as the denoising power of the autoencoders. In our work we use both GPUs and cloud services to speed up the calculations.

  11. Two-Stage Approach to Image Classification by Deep Neural Networks

    Science.gov (United States)

    Ososkov, Gennady; Goncharov, Pavel

    2018-02-01

    The paper demonstrates the advantages of deep learning networks over ordinary neural networks through their comparative application to image classification. An autoassociative neural network is used as a standalone autoencoder for prior extraction of the most informative features of the input data for the neural networks that are subsequently compared as classifiers. The main effort in working with deep learning networks goes into the painstaking work of optimizing the structures of those networks and their components, such as activation functions and weights, as well as the procedures for minimizing their loss function, in order to improve performance and speed up learning. It is also shown that deep autoencoders develop a remarkable ability to denoise images after special training. Convolutional neural networks are also used to solve a topical problem of protein genetics using the example of durum wheat classification. The results of our comparative study demonstrate the clear advantage of the deep networks, as well as the denoising power of the autoencoders. In our work we use both GPUs and cloud services to speed up the calculations.
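
    A minimal two-stage sketch matching the description above: (1) train an autoencoder to compress inputs, (2) reuse its encoder as the front end of a classifier. Layer sizes and the flat 784-pixel input are assumptions of this sketch:

        import torch
        import torch.nn as nn

        encoder = nn.Sequential(nn.Linear(784, 64), nn.ReLU())
        decoder = nn.Sequential(nn.Linear(64, 784))
        autoencoder = nn.Sequential(encoder, decoder)

        x = torch.rand(32, 784)                     # a batch of flattened images
        opt = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
        for _ in range(10):                         # stage 1: reconstruction
            opt.zero_grad()
            loss = nn.functional.mse_loss(autoencoder(x), x)
            loss.backward()
            opt.step()

        # Stage 2: the trained encoder feeds a classification head.
        classifier = nn.Sequential(encoder, nn.Linear(64, 10))
        print(classifier(x).shape)                  # torch.Size([32, 10])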

  12. Deep features for efficient multi-biometric recognition with face and ear images

    Science.gov (United States)

    Omara, Ibrahim; Xiao, Gang; Amrani, Moussa; Yan, Zifei; Zuo, Wangmeng

    2017-07-01

    Recently, multimodal biometric systems have received considerable research interest in many applications, especially in the field of security. Multimodal systems can increase resistance to spoof attacks, provide more details and flexibility, and lead to better performance and a lower error rate. In this paper, we present a multimodal biometric system based on face and ear, and propose how to exploit the deep features extracted by Convolutional Neural Networks (CNNs) from face and ear images to introduce more powerful discriminative features and robust representations. First, the deep features for face and ear images are extracted based on VGG-M Net. Second, the extracted deep features are fused using either traditional concatenation or a Discriminant Correlation Analysis (DCA) algorithm. Third, a multiclass support vector machine is adopted for matching and classification. The experimental results show that the proposed multimodal system based on deep features is efficient and achieves a promising recognition rate of up to 100% using face and ear. In addition, the results indicate that fusion based on DCA is superior to traditional fusion.
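
    A sketch of the fuse-then-classify pipeline above, using plain concatenation of L2-normalized deep features with synthetic data (DCA fusion would replace the concatenation step and is not implemented here; all sizes are illustrative):

        import numpy as np
        from sklearn.svm import SVC

        rng = np.random.default_rng(1)
        face = rng.normal(size=(100, 128))  # deep face features, 100 samples
        ear = rng.normal(size=(100, 128))   # deep ear features for the same samples
        y = rng.integers(0, 10, size=100)   # identity labels

        def l2norm(f):
            return f / np.linalg.norm(f, axis=1, keepdims=True)

        fused = np.hstack([l2norm(face), l2norm(ear)])  # concatenation fusion
        svm = SVC(kernel="linear").fit(fused[:80], y[:80])
        print("accuracy:", svm.score(fused[80:], y[80:]))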

  13. Detection of Thermal Erosion Gullies from High-Resolution Images Using Deep Learning

    Science.gov (United States)

    Huang, L.; Liu, L.; Jiang, L.; Zhang, T.; Sun, Y.

    2017-12-01

    Thermal erosion gullies, one type of thermokarst landform, develop due to thawing of ice-rich permafrost. Mapping the location and extent of thermal erosion gullies can help understand the spatial distribution of thermokarst landforms and their temporal evolution. Remote sensing images provide an effective way to map thermokarst landforms, especially thermokarst lakes. However, thermal erosion gullies are challenging to map from remote sensing images due to their small sizes and significant variations in geometric/radiometric properties. It is feasible to identify these features manually, as a few previous studies have done. However, manual methods are labor-intensive and therefore cannot be used for a large study area. In this work, we conduct automatic mapping of thermal erosion gullies from high-resolution images using deep learning. Our study area is located in Eboling Mountain (Qinghai, China). Within a 6 km2 peatland area underlain by ice-rich permafrost, at least 20 thermal erosion gullies are well developed. The image used is a 15-cm-resolution Digital Orthophoto Map (DOM) generated in July 2016. First, we extracted 14 gully patches and ten non-gully patches as training data and performed image augmentation. Next, we fine-tuned the pre-trained model of DeepLab, a deep-learning algorithm for semantic image segmentation based on Deep Convolutional Neural Networks. Then, we performed inference on the whole DOM and obtained intermediate results in the form of polygons for all identified gullies. Finally, we removed misidentified polygons based on a few pre-set criteria on the size and shape of each polygon. Our final results include 42 polygons. Validated against field measurements using GPS, most of the gullies were detected correctly. There were 20 false detections, due to the small number and low quality of training images. We also found three new gullies that were missed in the field observations. This study shows that (1) despite a challenging

  14. Ship Detection and Classification on Optical Remote Sensing Images Using Deep Learning

    Directory of Open Access Journals (Sweden)

    Liu Ying

    2017-01-01

    Full Text Available Ship detection and classification is critical for national maritime security and national defense. Although some SAR (Synthetic Aperture Radar) image-based ship detection approaches have been proposed and used, they are not able to satisfy the requirements of real-world applications, as the number of SAR sensors is limited, the resolution is low, and the revisit cycle is long. As massive optical remote sensing images of high resolution are available, ship detection and classification on these images is becoming a promising technique, and has attracted great attention in applications including maritime security and traffic control. Some digital image processing methods have been proposed to detect ships in optical remote sensing images, but most of them face difficulty in terms of accuracy, performance, and complexity. Recently, an autoencoder-based deep neural network with an extreme learning machine was proposed, but it cannot meet the requirements of real-world applications as it only works with simple and small-scale data sets. Therefore, in this paper, we propose a novel ship detection and classification approach which utilizes a deep convolutional neural network (CNN) as the ship classifier. The performance of our proposed ship detection and classification approach was evaluated on a set of images downloaded from Google Earth at a resolution of 0.5 m. 99% detection accuracy and 95% classification accuracy were achieved. In model training, a 75× speedup was achieved on one Nvidia Titan X GPU.

  15. DEEP U BAND AND R IMAGING OF GOODS-SOUTH: OBSERVATIONS, DATA REDUCTION AND FIRST RESULTS

    International Nuclear Information System (INIS)

    Nonino, M.; Cristiani, S.; Vanzella, E.; Dickinson, M.; Reddy, N.; Rosati, P.; Grazian, A.; Giavalisco, M.; Kuntschner, H.; Fosbury, R. A. E.; Daddi, E.; Cesarsky, C.

    2009-01-01

    We present deep imaging in the U band covering an area of 630 arcmin^2 centered on the southern field of the Great Observatories Origins Deep Survey (GOODS). The data were obtained with the VIMOS instrument at the European Southern Observatory (ESO) Very Large Telescope. The final images reach a magnitude limit U_lim ∼ 29.8 (AB, 1σ, in a 1'' radius aperture), and have good image quality, with full width at half-maximum ∼0.''8. They are significantly deeper than previous U-band images available for the GOODS fields, and better match the sensitivity of other multiwavelength GOODS photometry. The deeper U-band data yield significantly improved photometric redshifts, especially in key redshift ranges such as 2 < z < 4. We also present deep R-band imaging, reaching R_lim ∼ 29 (AB, 1σ, 1'' radius aperture), with image quality ∼0.''75. We discuss the strategies for the observations and data reduction, and present the first results from the analysis of the co-added images.

  16. Spatial Organization and Molecular Correlation of Tumor-Infiltrating Lymphocytes Using Deep Learning on Pathology Images

    Directory of Open Access Journals (Sweden)

    Joel Saltz

    2018-04-01

    Full Text Available Summary: Beyond sample curation and basic pathologic characterization, the digitized H&E-stained images of TCGA samples remain underutilized. To highlight this resource, we present mappings of tumor-infiltrating lymphocytes (TILs) based on H&E images from 13 TCGA tumor types. These TIL maps are derived through computational staining using a convolutional neural network trained to classify patches of images. Affinity propagation revealed local spatial structure in TIL patterns and correlation with overall survival. TIL map structural patterns were grouped using standard histopathological parameters. These patterns are enriched in particular T cell subpopulations derived from molecular measures. TIL densities and spatial structure were differentially enriched among tumor types, immune subtypes, and tumor molecular subtypes, implying that spatial infiltrate state could reflect particular tumor cell aberration states. Obtaining spatial lymphocytic patterns linked to the rich genomic characterization of TCGA samples demonstrates one use for the TCGA image archives, with insights into the tumor-immune microenvironment. Tumor-infiltrating lymphocytes (TILs) were identified from standard pathology cancer images by a deep-learning-derived “computational stain” developed by Saltz et al., who processed 5,202 digital images from 13 cancer types. The resulting TIL maps were correlated with TCGA molecular data, relating TIL content to survival, tumor subtypes, and immune profiles. Keywords: digital pathology, immuno-oncology, machine learning, lymphocytes, tumor microenvironment, deep learning, tumor-infiltrating lymphocytes, artificial intelligence, bioinformatics, computer vision

  17. Eigenspectra optoacoustic tomography achieves quantitative blood oxygenation imaging deep in tissues

    Science.gov (United States)

    Tzoumas, Stratis; Nunes, Antonio; Olefir, Ivan; Stangl, Stefan; Symvoulidis, Panagiotis; Glasl, Sarah; Bayer, Christine; Multhoff, Gabriele; Ntziachristos, Vasilis

    2016-06-01

    Light propagating in tissue attains a spectrum that varies with location due to wavelength-dependent fluence attenuation, an effect that causes spectral corruption. Spectral corruption has limited the quantification accuracy of optical and optoacoustic spectroscopic methods, and impeded the goal of imaging blood oxygen saturation (sO2) deep in tissues; a critical goal for the assessment of oxygenation in physiological processes and disease. Here we describe light fluence in the spectral domain and introduce eigenspectra multispectral optoacoustic tomography (eMSOT) to account for wavelength-dependent light attenuation, and estimate blood sO2 within deep tissue. We validate eMSOT in simulations, phantoms and animal measurements and spatially resolve sO2 in muscle and tumours, validating our measurements with histology data. eMSOT shows substantial sO2 accuracy enhancement over previous optoacoustic methods, potentially serving as a valuable tool for imaging tissue pathophysiology.

  18. Ship detection in optical remote sensing images based on deep convolutional neural networks

    Science.gov (United States)

    Yao, Yuan; Jiang, Zhiguo; Zhang, Haopeng; Zhao, Danpei; Cai, Bowen

    2017-10-01

    Automatic ship detection in optical remote sensing images has attracted wide attention for its broad applications. Major challenges for this task include interference from clouds, waves, and wakes, as well as the high computational expense. We propose a fast and robust ship detection algorithm to address these issues. The detection framework is designed around deep convolutional neural networks (CNNs), which provide accurate locations of ship targets in an efficient way. First, a deep CNN is designed to extract features. Then, a region proposal network (RPN) is applied to discriminate ship targets and regress the detection bounding boxes, in which the anchors are designed according to the intrinsic shape of ship targets. Experimental results on numerous panchromatic images demonstrate that, in comparison with other state-of-the-art ship detection methods, our method is more efficient and achieves higher detection accuracy and more precise bounding boxes against various complex backgrounds.
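
    A minimal sketch of shape-aware anchor generation of the kind described, assuming elongated aspect ratios to reflect ship geometry; the base size, scales, and ratios below are hypothetical, not the published design.

      # Generate RPN anchors with elongated aspect ratios for ship targets.
      import numpy as np

      def make_anchors(base_size=16, scales=(4, 8, 16), ratios=(1/5, 1/3, 3, 5)):
          """Return (N, 4) anchors as (x1, y1, x2, y2) centred at the origin."""
          anchors = []
          for s in scales:
              area = (base_size * s) ** 2
              for r in ratios:              # r = height / width
                  w = np.sqrt(area / r)     # width shrinks as the ratio grows
                  h = w * r
                  anchors.append([-w / 2, -h / 2, w / 2, h / 2])
          return np.array(anchors)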

  19. Fusion of shallow and deep features for classification of high-resolution remote sensing images

    Science.gov (United States)

    Gao, Lang; Tian, Tian; Sun, Xiao; Li, Hang

    2018-02-01

    Effective spectral and spatial pixel description plays a significant role in the classification of high-resolution remote sensing images. Current approaches to pixel-based feature extraction are of two main kinds: one includes the widely used principal component analysis (PCA) and gray-level co-occurrence matrix (GLCM) as representatives of shallow spectral and shape features, and the other refers to deep learning-based methods, which employ deep neural networks and have greatly improved classification accuracy. However, the former, traditional features are insufficient to depict the complex distributions of high-resolution images, while the deep features demand plenty of samples to train the network; otherwise, overfitting easily occurs when only limited samples are involved in the training. In view of the above, we propose a GLCM-based convolutional neural network (CNN) approach to extract features and perform classification for high-resolution remote sensing images. The GLCM is able to represent the original images while eliminating redundant information and undesired noise. Meanwhile, taking shallow features as the input of the deep network contributes to better guidance and interpretability. In consideration of the limited number of samples, strategies such as L2 regularization and dropout are used to prevent overfitting. A fine-tuning strategy is also used to reduce training time and further enhance the generalization performance of the network. Experiments with popular data sets such as the PaviaU data validate that our proposed method leads to a performance improvement compared with the individual approaches involved.
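
    The GLCM input stage can be sketched with scikit-image as below; the distances, angles, and property set are hypothetical choices, not the paper's configuration.

      # GLCM texture features for one image window; parameters are hypothetical.
      import numpy as np
      from skimage.feature import graycomatrix, graycoprops  # 'greycomatrix' in older skimage

      def glcm_features(patch_u8):
          """patch_u8: 2D uint8 window -> small texture feature vector."""
          glcm = graycomatrix(patch_u8, distances=[1], angles=[0, np.pi / 2],
                              levels=256, symmetric=True, normed=True)
          props = ("contrast", "homogeneity", "energy", "correlation")
          return np.hstack([graycoprops(glcm, p).ravel() for p in props])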

  1. Deep Convolutional Neural Networks for Multi-Modality Isointense Infant Brain Image Segmentation

    Science.gov (United States)

    Zhang, Wenlu; Li, Rongjian; Deng, Houtao; Wang, Li; Lin, Weili; Ji, Shuiwang; Shen, Dinggang

    2015-01-01

    The segmentation of infant brain tissue images into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) plays an important role in studying early brain development in health and disease. In the isointense stage (approximately 6–8 months of age), WM and GM exhibit similar levels of intensity in both T1 and T2 MR images, making tissue segmentation very challenging. Only a small number of existing methods have been designed for tissue segmentation in this isointense stage; however, they used only a single T1 or T2 image, or the combination of T1 and T2 images. In this paper, we propose to use deep convolutional neural networks (CNNs) for segmenting isointense-stage brain tissues using multi-modality MR images. CNNs are a type of deep model in which trainable filters and local neighborhood pooling operations are applied alternately to the raw input images, resulting in a hierarchy of increasingly complex features. Specifically, we used multi-modality information from T1, T2, and fractional anisotropy (FA) images as inputs and then generated the segmentation maps as outputs. The multiple intermediate layers applied convolution, pooling, normalization, and other operations to capture the highly nonlinear mappings between inputs and outputs. We compared the performance of our approach with that of the commonly used segmentation methods on a set of manually segmented isointense-stage brain images. Results showed that our proposed model significantly outperformed prior methods on infant brain tissue segmentation. In addition, our results indicated that integration of multi-modality images led to significant performance improvement. PMID:25562829

  2. Deep Neural Networks Based Recognition of Plant Diseases by Leaf Image Classification

    Directory of Open Access Journals (Sweden)

    Srdjan Sladojevic

    2016-01-01

    Full Text Available The latest generation of convolutional neural networks (CNNs) has achieved impressive results in the field of image classification. This paper is concerned with a new approach to the development of a plant disease recognition model based on leaf image classification, using deep convolutional networks. The novel way of training and the methodology used facilitate a quick and easy system implementation in practice. The developed model is able to recognize 13 different types of plant diseases and to distinguish them from healthy leaves, with the ability to separate plant leaves from their surroundings. To our knowledge, this method for plant disease recognition has been proposed for the first time. All essential steps required for implementing this disease recognition model are fully described throughout the paper, starting from gathering images to create a database assessed by agricultural experts. Caffe, a deep learning framework developed by the Berkeley Vision and Learning Center, was used to perform the deep CNN training. The experimental results on the developed model achieved precision between 91% and 98% for separate class tests, with an average of 96.3%.

  3. Early image acquisition after administration of indium-111 platelets in clinically suspected deep venous thrombosis

    International Nuclear Information System (INIS)

    Farlow, D.C.; Ezekowitz, M.D.; Rao, S.R.; Martinez, C.; Denny, D.F.; Morse, S.S.; Decho, J.S.; Wackers, F.; Strauss, E.

    1989-01-01

    Indium-111 platelet scintigraphy accurately detects acute deep venous thrombosis in asymptomatic high-risk patients and may be used as a surveillance test. However, its value in symptomatic patients and its accuracy early after platelet injection are not satisfactorily established. The latter is important for the timely institution of therapy. Accordingly, 65 patients (67 limbs) with suspected deep venous thrombosis (symptom duration 8 +/- 10 days, mean +/- standard deviation) were prospectively studied with platelet scintigraphy and contrast venography. Platelets were labeled with 405 +/- 101 µCi of indium-111 oxine. The labeling efficiency was 80 +/- 10%. All images were acquired within 120 minutes after intravenous administration of the platelet suspension. Both platelet scintigraphy and venography were interpreted independently by two blinded observers for each technique. Five separate analyses were performed: each scintigraphic reader was compared with each venographic reader, and a fifth analysis, consisting of readings in which both readings of the platelet scans and both readings of the venograms were in blinded agreement, was performed. Interobserver agreement was 92% for venography and 79% for scintigraphy. Excluding anticoagulated patients, the sensitivity of platelet scintigraphy was between 38 and 46% and the specificity was between 92 and 100%. Thus, early imaging of labeled platelets for the diagnosis of symptomatic deep venous thrombosis carries a high specificity but a much lower sensitivity. It is speculated that the low sensitivity is related to the inactivity of the thrombus. This may suggest that early imaging will only be useful in patients whose symptoms are of recent onset.

  4. Evaluation of a deep learning architecture for MR imaging prediction of ATRX in glioma patients

    Science.gov (United States)

    Korfiatis, Panagiotis; Kline, Timothy L.; Erickson, Bradley J.

    2018-02-01

    Predicting mutation/loss of the alpha-thalassemia/mental retardation syndrome X-linked (ATRX) gene from MR imaging is of high importance, since ATRX status is a predictor of response and prognosis in brain tumors. In this study, we compare a deep learning approach based on a residual deep neural network (ResNet) architecture with a classical machine learning approach, and evaluate their ability to predict ATRX mutation status without the need for a distinct tumor segmentation step. We found that the ResNet50 (50-layer) architecture, pre-trained on ImageNet data, was the best performing model, achieving an F1 score of 0.91 on a test set of 35 cases for the classification of a slice as no tumor, ATRX mutated, or ATRX non-mutated. The SVM classifier achieved 0.63 for differentiating the FLAIR signal abnormality regions of the test patients based on their mutation status. We report a method that alleviates the need for extensive preprocessing and acts as a proof of concept that deep neural network architectures can be used to predict molecular biomarkers from routine medical images.
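
    A fine-tuning setup in this spirit can be sketched with Keras/TensorFlow as follows; the input shape, three-class softmax head, and training configuration are illustrative assumptions rather than the authors' exact pipeline.

      # Transfer learning: ImageNet-pretrained ResNet50 with a new 3-class head.
      import tensorflow as tf

      base = tf.keras.applications.ResNet50(weights="imagenet", include_top=False,
                                            input_shape=(224, 224, 3), pooling="avg")
      base.trainable = False                    # freeze ImageNet features first
      head = tf.keras.layers.Dense(3, activation="softmax")(base.output)  # 3 slice classes (assumed)
      model = tf.keras.Model(base.input, head)
      model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                    metrics=["accuracy"])
      # model.fit(train_images, train_labels, validation_data=(val_images, val_labels))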

  5. Attenuation correction for brain PET imaging using deep neural network based on dixon and ZTE MR images.

    Science.gov (United States)

    Gong, Kuang; Yang, Jaewon; Kim, Kyungsang; El Fakhri, Georges; Seo, Youngho; Li, Quanzheng

    2018-05-23

    Positron Emission Tomography (PET) is a functional imaging modality widely used in neuroscience studies. To obtain meaningful quantitative results from PET images, attenuation correction is necessary during image reconstruction. For PET/MR hybrid systems, PET attenuation correction is challenging, as Magnetic Resonance (MR) images do not reflect attenuation coefficients directly. To address this issue, we present deep neural network methods to derive continuous attenuation coefficients for brain PET imaging from MR images. With only Dixon MR images as the network input, the existing U-net structure was adopted, and analysis using forty patient data sets shows it is superior to other Dixon-based methods. When both Dixon and zero echo time (ZTE) images are available, we propose a modified U-net structure, named GroupU-net, to efficiently make use of both Dixon and ZTE information through group convolution modules as the network goes deeper. Quantitative analysis based on fourteen real patient data sets demonstrates that both network approaches perform better than the standard methods, and the proposed network structure can further reduce the PET quantification error compared to the U-net structure. © 2018 Institute of Physics and Engineering in Medicine.
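
    A group-convolution block of the kind used to keep the Dixon and ZTE feature channels in separate groups can be sketched in PyTorch as below; the channel counts are hypothetical.

      # Group convolution: with groups=2, the two modalities' channels are
      # convolved independently within the same layer.
      import torch.nn as nn

      class GroupConvBlock(nn.Module):
          def __init__(self, in_ch=64, out_ch=64, groups=2):
              super().__init__()
              self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, groups=groups)
              self.bn = nn.BatchNorm2d(out_ch)
              self.act = nn.ReLU(inplace=True)

          def forward(self, x):   # x: (N, in_ch, H, W), channels split by modality
              return self.act(self.bn(self.conv(x)))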

  6. 3D printed optical phantoms and deep tissue imaging for in vivo applications including oral surgery

    Science.gov (United States)

    Bentz, Brian Z.; Costas, Alfonso; Gaind, Vaibhav; Garcia, Jose M.; Webb, Kevin J.

    2017-03-01

    Progress in developing optical imaging for biomedical applications requires customizable and often complex objects known as "phantoms" for testing, evaluation, and calibration. This work demonstrates that 3D printing is an ideal method for fabricating such objects, allowing intricate inhomogeneities to be placed at exact locations in complex or anatomically realistic geometries, a process that is difficult or impossible using molds. We show printed mouse phantoms we have fabricated for developing deep tissue fluorescence imaging methods, and measurements of both their optical and mechanical properties. Additionally, we present a printed phantom of the human mouth that we use to develop an artery localization method to assist in oral surgery.

  7. Deep architecture neural network-based real-time image processing for image-guided radiotherapy.

    Science.gov (United States)

    Mori, Shinichiro

    2017-08-01

    To develop real-time image processing for image-guided radiotherapy, we evaluated several neural network models for use with different imaging modalities, including X-ray fluoroscopic image denoising. Setup images of prostate cancer patients were acquired with two oblique X-ray fluoroscopic units. Two types of residual network were designed: a convolutional autoencoder (rCAE) and a convolutional neural network (rCNN). We varied the convolutional kernel size and the number of convolutional layers for both networks, and the number of pooling and upsampling layers for the rCAE. The ground-truth image was produced by applying contrast-limited adaptive histogram equalization (CLAHE) to the input image. Network models were trained to keep the quality of the output image close to that of the ground-truth image, starting from the input image without image processing. For the image denoising evaluation, noisy input images were used for training. More than 6 convolutional layers with convolutional kernels >5×5 improved image quality, but did not allow real-time imaging. After applying a pair of pooling and upsampling layers to both networks, rCAEs with >3 convolutions each and rCNNs with >12 convolutions with a pair of pooling and upsampling layers achieved real-time processing at 30 frames per second (fps) with acceptable image quality. The suggested networks achieved real-time image processing for contrast enhancement and image denoising on a conventional modern personal computer. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
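
    The CLAHE target-generation step can be sketched with OpenCV as follows; the clip limit and tile size are hypothetical values.

      # Produce a contrast-enhanced ground-truth frame from a raw input frame.
      import cv2

      def make_ground_truth(gray_u8):
          """gray_u8: 2D uint8 fluoroscopic frame -> CLAHE-enhanced target."""
          clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
          return clahe.apply(gray_u8)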

  8. Deep Into the Fibers! Postmortem Diffusion Tensor Imaging in Forensic Radiology.

    Science.gov (United States)

    Flach, Patricia Mildred; Schroth, Sarah; Schweitzer, Wolf; Ampanozi, Garyfalia; Slotboom, Johannes; Kiefer, Claus; Germerott, Tanja; Thali, Michael J; El-Koussy, Marwan

    2015-09-01

    In traumatic brain injury, diffusion-weighted and diffusion tensor imaging of the brain are essential techniques for determining the pathology sustained and the outcome. Postmortem cross-sectional imaging is an established adjunct to forensic autopsy in death investigation. The purpose of this prospective study was to evaluate postmortem diffusion tensor imaging in forensics in terms of its feasibility, influencing factors, and correlation with the cause of death compared with autopsy. Postmortem computed tomography, magnetic resonance imaging, and diffusion tensor imaging with fiber tracking were performed in 10 deceased subjects. Likert scale grading of colored fractional anisotropy maps was correlated with body temperature and intracranial pathology to assess the diagnostic feasibility of postmortem diffusion tensor imaging and fiber tracking. Optimal fiber tracking (>15,000 fiber tracts) was achieved at a body temperature of 10°C. Likert scale grading showed no linear correlation (P > 0.7) with fiber tract counts. No statistically significant correlation between total fiber count and postmortem interval was observed (P = 0.122). Postmortem diffusion tensor imaging and fiber tracking allowed radiological diagnosis in cases with shearing injuries but was impaired in cases with pneumencephalon and intracerebral mass hemorrhage. Postmortem diffusion tensor imaging with fiber tracking provides an exceptional in situ insight "deep into the fibers" of the brain, with diagnostic benefit in traumatic brain injury and axonal injuries in the assessment of the underlying cause of death, considering influencing factors for optimal imaging technique.

  9. Robust Single Image Super-Resolution via Deep Networks With Sparse Prior.

    Science.gov (United States)

    Liu, Ding; Wang, Zhaowen; Wen, Bihan; Yang, Jianchao; Han, Wei; Huang, Thomas S

    2016-07-01

    Single image super-resolution (SR) is an ill-posed problem, which tries to recover a high-resolution image from its low-resolution observation. To regularize the solution of the problem, previous methods have focused on designing good priors for natural images, such as sparse representation, or directly learning the priors from a large data set with models, such as deep neural networks. In this paper, we argue that domain expertise from the conventional sparse coding model can be combined with the key ingredients of deep learning to achieve further improved results. We demonstrate that a sparse coding model particularly designed for SR can be incarnated as a neural network with the merit of end-to-end optimization over training data. The network has a cascaded structure, which boosts the SR performance for both fixed and incremental scaling factors. The proposed training and testing schemes can be extended for robust handling of images with additional degradation, such as noise and blurring. A subjective assessment is conducted and analyzed in order to thoroughly evaluate various SR techniques. Our proposed model is tested on a wide range of images, and it significantly outperforms the existing state-of-the-art methods for various scaling factors both quantitatively and perceptually.

  10. Imaging Features of Superficial and Deep Fibromatoses in the Adult Population

    Directory of Open Access Journals (Sweden)

    Eric A. Walker

    2012-01-01

    Full Text Available The fibromatoses are a group of benign fibroblastic proliferations that vary from benign to intermediate in biological behavior. This article will discuss imaging characteristics and patient demographics of the adult-type superficial (fascial) and deep (musculoaponeurotic) fibromatoses. The imaging appearance of these lesions can be characteristic (particularly when using magnetic resonance imaging). Palmar fibromatosis demonstrates multiple nodular or band-like soft tissue masses arising from the proximal palmar aponeurosis and extending along the subcutaneous tissues of the finger in parallel to the flexor tendons. T1 and T2-weighted signal intensity can vary from low (higher collagen) to intermediate (higher cellularity), similar to the other fibromatoses. Plantar fibromatosis manifests as superficial lesions along the deep plantar aponeurosis, which typically blend with the adjacent plantar musculature. Linear tails of extension (“fascial tail sign”) along the aponeurosis are frequent. Extra-abdominal and abdominal wall fibromatoses often appear as heterogeneous lesions with low-signal-intensity bands on all pulse sequences and linear fascial extensions (“fascial tail” sign) on MR imaging. Mesenteric fibromatosis usually demonstrates soft tissue density on CT, with radiating strands projecting into the adjacent mesenteric fat. When imaging is combined with patient demographics, a diagnosis can frequently be obtained.

  11. PIV-DCNN: cascaded deep convolutional neural networks for particle image velocimetry

    Science.gov (United States)

    Lee, Yong; Yang, Hua; Yin, Zhouping

    2017-12-01

    Velocity estimation (extracting the displacement vector information) from particle image pairs is of critical importance for particle image velocimetry. This problem is mostly transformed into finding the sub-pixel peak in a correlation map. To address the original displacement extraction problem, we propose a different evaluation scheme (PIV-DCNN) with four-level regression deep convolutional neural networks. At each level, the networks are trained to predict a vector from two input image patches. The low-level network is skilled at large displacement estimation, and the high-level networks are devoted to improving the accuracy. Outlier replacement and a symmetric window offset operation glue the well-functioning networks together in a cascaded manner. Through comparison with standard PIV methods (a one-pass cross-correlation method and three-pass window deformation), the practicability of the proposed PIV-DCNN is verified by its application to a diversity of synthetic and experimental PIV images.
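
    For reference, the one-pass cross-correlation baseline that PIV-DCNN is compared against can be sketched as below: the displacement between two interrogation windows is taken as the location of the correlation peak (sub-pixel refinement omitted).

      # Integer-pixel displacement between two interrogation windows via FFT.
      import numpy as np

      def displacement(win_a, win_b):
          """win_a, win_b: 2D float arrays of equal shape -> (dy, dx) shift."""
          fa = np.fft.rfft2(win_a - win_a.mean())
          fb = np.fft.rfft2(win_b - win_b.mean())
          corr = np.fft.irfft2(fa.conj() * fb, s=win_a.shape)
          peak = np.unravel_index(np.argmax(corr), corr.shape)
          # wrap indices above half the window size to negative shifts
          return tuple(p - s if p > s // 2 else p for p, s in zip(peak, win_a.shape))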

  12. In Vivo Deep Tissue Fluorescence and Magnetic Imaging Employing Hybrid Nanostructures.

    Science.gov (United States)

    Ortgies, Dirk H; de la Cueva, Leonor; Del Rosal, Blanca; Sanz-Rodríguez, Francisco; Fernández, Nuria; Iglesias-de la Cruz, M Carmen; Salas, Gorka; Cabrera, David; Teran, Francisco J; Jaque, Daniel; Martín Rodríguez, Emma

    2016-01-20

    Breakthroughs in nanotechnology have made it possible to integrate different nanoparticles in one single hybrid nanostructure (HNS), constituting multifunctional nanosized sensors, carriers, and probes with great potential in the life sciences. In addition, such nanostructures could also offer therapeutic capabilities to achieve a wider variety of multifunctionalities. In this work, the encapsulation of both magnetic and infrared emitting nanoparticles into a polymeric matrix leads to a magnetic-fluorescent HNS with multimodal magnetic-fluorescent imaging abilities. The magnetic-fluorescent HNS are capable of simultaneous magnetic resonance imaging and deep tissue infrared fluorescence imaging, overcoming the tissue penetration limits of classical visible-light based optical imaging as reported here in living mice. Additionally, their applicability for magnetic heating in potential hyperthermia treatments is assessed.

  13. Intelligent Image Recognition System for Marine Fouling Using Softmax Transfer Learning and Deep Convolutional Neural Networks

    Directory of Open Access Journals (Sweden)

    C. S. Chin

    2017-01-01

    Full Text Available The control of biofouling on marine vessels is challenging and costly. Early detection, before hull performance is significantly affected, is desirable, especially if “grooming” is an option. Here, a system is described to detect marine fouling at an early stage of development. In this study, an image of fouling can be transferred wirelessly via a mobile network for analysis. The proposed system utilizes transfer learning and a deep convolutional neural network (CNN) to perform image recognition on the fouling image, classifying the detected fouling species and the density of fouling on the surface. Transfer learning using Google’s Inception V3 model with a Softmax final layer was carried out on a fouling database of 10 categories and 1825 images. Experimental results gave acceptable accuracies for fouling detection and recognition.

  14. Deep learning and three-compartment breast imaging in breast cancer diagnosis

    Science.gov (United States)

    Drukker, Karen; Huynh, Benjamin Q.; Giger, Maryellen L.; Malkov, Serghei; Avila, Jesus I.; Fan, Bo; Joe, Bonnie; Kerlikowske, Karla; Drukteinis, Jennifer S.; Kazemi, Leila; Pereira, Malesa M.; Shepherd, John

    2017-03-01

    We investigated whether deep learning has the potential to aid in the diagnosis of breast cancer when applied to mammograms and biologic tissue composition images derived from three-compartment (3CB) imaging. The dataset contained diagnostic mammograms and 3CB images (water, lipid, and protein content) of biopsy-sampled BIRADS 4 and 5 lesions in 195 patients. In 58 patients, the lesion manifested as a mass (13 malignant vs. 45 benign), in 87 as microcalcifications (19 vs. 68), and in 56 as (focal) asymmetry or architectural distortion (11 vs. 45). Six patients had both a mass and calcifications. For each mammogram and the corresponding 3CB images, a 128x128 region of interest containing the lesion was selected by an expert radiologist and used directly as input to a deep learning method pretrained on a very large independent set of non-medical images. We used a nested leave-one-out-by-case (patient) model selection and classification protocol. The area under the ROC curve (AUC) for the task of distinguishing between benign and malignant lesions was used as the performance metric. For the cases with mammographic masses, the AUC increased from 0.83 (mammograms alone) to 0.89 (mammograms+3CB, p=.162). For the microcalcification and asymmetry/architectural distortion cases, the AUC increased from 0.84 to 0.91 (p=.116) and from 0.61 to 0.87 (p=.006), respectively. Our results indicate great potential for the application of deep learning methods in the diagnosis of breast cancer, and additional knowledge of the biologic tissue composition appeared to improve performance, especially for lesions mammographically manifesting as asymmetries or architectural distortions.

  15. Evaluation of electrode position in deep brain stimulation by image fusion (MRI and CT)

    Energy Technology Data Exchange (ETDEWEB)

    Barnaure, I.; Lovblad, K.O.; Vargas, M.I. [Geneva University Hospital, Department of Neuroradiology, Geneva 14 (Switzerland); Pollak, P.; Horvath, J.; Boex, C.; Burkhard, P. [Geneva University Hospital, Department of Neurology, Geneva (Switzerland); Momjian, S. [Geneva University Hospital, Department of Neurosurgery, Geneva (Switzerland); Remuinan, J. [Geneva University Hospital, Department of Radiology, Geneva (Switzerland)

    2015-09-15

    Imaging has an essential role in the evaluation of correct positioning of electrodes implanted for deep brain stimulation (DBS). Although MRI offers superior anatomic visualization of target sites, there are safety concerns in patients with implanted material, and imaging guidelines are inconsistent and vary. The fusion of postoperative CT with preoperative MRI images can be an alternative for the assessment of electrode positioning. The purpose of this study was to assess the accuracy of measurements realized on fused images (acquired without a stereotactic frame) using manufacturer-provided software. Data from 23 Parkinson's disease patients who underwent bilateral electrode placement for subthalamic nucleus (STN) DBS were acquired. Preoperative high-resolution T2-weighted sequences at 3 T and postoperative CT series were fused using commercially available software. The electrode tip position was measured on the obtained images in three directions (relative to the midline, the AC-PC line, and a line orthogonal to the AC-PC line, respectively) and compared with measurements realized on postoperative 3D T1 images acquired at 1.5 T. Mean differences between measurements carried out on fused images and on postoperative MRI lay between 0.17 and 0.97 mm. Fusion of CT and MRI images provides a safe and fast technique for postoperative assessment of electrode position in DBS. (orig.)

  16. Hyperspectral Image Enhancement and Mixture Deep-Learning Classification of Corneal Epithelium Injuries.

    Science.gov (United States)

    Noor, Siti Salwa Md; Michael, Kaleena; Marshall, Stephen; Ren, Jinchang

    2017-11-16

    In our preliminary study, the reflectance signatures obtained from hyperspectral imaging (HSI) of normal and abnormal porcine corneal epithelium tissues show similar morphology with subtle differences. Here we present image enhancement algorithms that can be used to improve the interpretability of the data, turning it into clinically relevant information to facilitate diagnostics. A total of 25 corneal epithelium images acquired without the application of eye staining were used. Three image feature extraction approaches were applied for image classification: (i) image feature classification from the histogram using a support vector machine with a Gaussian radial basis function (SVM-GRBF); (ii) physical image feature classification using deep-learning Convolutional Neural Networks (CNNs) only; and (iii) the combined classification of CNNs and SVM-Linear. The performance results indicate that our chosen image features, from the histogram and the length-scale parameter, were able to classify with up to 100% accuracy, particularly with the CNNs and CNNs-SVM approaches, when employing 80% of the data for training and 20% for testing. Thus, in the assessment of corneal epithelium injuries, HSI has high potential as a method that could surpass current technologies regarding speed, objectivity, and reliability.

  17. Thyroid Nodule Classification in Ultrasound Images by Fine-Tuning Deep Convolutional Neural Network.

    Science.gov (United States)

    Chi, Jianning; Walia, Ekta; Babyn, Paul; Wang, Jimmy; Groot, Gary; Eramian, Mark

    2017-08-01

    With many thyroid nodules being incidentally detected, it is important to identify as many malignant nodules as possible while excluding those that are highly likely to be benign from fine needle aspiration (FNA) biopsies or surgeries. This paper presents a computer-aided diagnosis (CAD) system for classifying thyroid nodules in ultrasound images. We use a deep learning approach to extract features from thyroid ultrasound images. Ultrasound images are pre-processed to calibrate their scale and remove artifacts. A pre-trained GoogLeNet model is then fine-tuned using the pre-processed image samples, which leads to superior feature extraction. The extracted features of the thyroid ultrasound images are sent to a Cost-sensitive Random Forest classifier to classify the images into "malignant" and "benign" cases. The experimental results show that the proposed fine-tuned GoogLeNet model achieves excellent classification performance, attaining 98.29% classification accuracy, 99.10% sensitivity, and 93.90% specificity for the images in an open-access database (Pedraza et al. 16), and 96.34% classification accuracy, 86% sensitivity, and 99% specificity for the images in our local health region database.

  18. Histopathological Breast Cancer Image Classification by Deep Neural Network Techniques Guided by Local Clustering.

    Science.gov (United States)

    Nahid, Abdullah-Al; Mehrabi, Mohamad Ali; Kong, Yinan

    2018-01-01

    Breast cancer is a serious threat and one of the largest causes of death of women throughout the world. The identification of cancer largely depends on the analysis of digital biomedical images, such as histopathological images, by doctors and physicians. Analyzing histopathological images is a nontrivial task, and decisions from the investigation of these kinds of images always require specialised knowledge. However, Computer Aided Diagnosis (CAD) techniques can help the doctor make more reliable decisions. The state-of-the-art Deep Neural Network (DNN) has recently been introduced for biomedical image analysis. Normally each image contains structural and statistical information. This paper classifies a set of biomedical breast cancer images (the BreakHis dataset) using novel DNN techniques guided by structural and statistical information derived from the images. Specifically, a Convolutional Neural Network (CNN), a Long Short-Term Memory (LSTM) network, and a combination of CNN and LSTM are proposed for breast cancer image classification. Softmax and Support Vector Machine (SVM) layers have been used for the decision-making stage after extracting features utilising the proposed novel DNN models. In this experiment, the best accuracy value of 91.00% is achieved on the 200x dataset, the best precision value of 96.00% is achieved on the 40x dataset, and the best F-Measure value is achieved on both the 40x and 100x datasets.

  19. Applications of two-photon fluorescence microscopy in deep-tissue imaging

    Science.gov (United States)

    Dong, Chen-Yuan; Yu, Betty; Hsu, Lily L.; Kaplan, Peter D.; Blankschstein, D.; Langer, Robert; So, Peter T. C.

    2000-07-01

    Based on the non-linear excitation of fluorescent molecules, two-photon fluorescence microscopy has become a significant new tool for biological imaging. The point-like excitation characteristic of this technique enhances image quality through the virtual elimination of off-focal fluorescence. Furthermore, sample photodamage is greatly reduced because fluorescence excitation is limited to the focal region. For deep tissue imaging, two-photon microscopy has the additional benefit of greatly improved imaging depth penetration. Since the near-infrared laser sources used in two-photon microscopy scatter less than their UV/blue-green counterparts, in-depth imaging of highly scattering specimens can be greatly improved. In this work, we will present data characterizing both the imaging properties (point-spread functions) of this novel technology and images of tissue samples (skin). In particular, we will demonstrate how blind deconvolution can be used to further improve two-photon image quality and how this technique can be used to study mechanisms of chemically enhanced transdermal drug delivery.

  20. Ultrafast ultrasound localization microscopy for deep super-resolution vascular imaging

    Science.gov (United States)

    Errico, Claudia; Pierre, Juliette; Pezet, Sophie; Desailly, Yann; Lenkei, Zsolt; Couture, Olivier; Tanter, Mickael

    2015-11-01

    Non-invasive imaging deep into organs at microscopic scales remains an open quest in biomedical imaging. Although optical microscopy is still limited to surface imaging owing to optical wave diffusion and fast decorrelation in tissue, revolutionary approaches such as fluorescence photo-activated localization microscopy led to a striking increase in resolution by more than an order of magnitude in the last decade. In contrast with optics, ultrasonic waves propagate deep into organs without losing their coherence and are much less affected by in vivo decorrelation processes. However, their resolution is impeded by the fundamental limits of diffraction, which impose a long-standing trade-off between resolution and penetration. This limits clinical and preclinical ultrasound imaging to a sub-millimetre scale. Here we demonstrate in vivo that ultrasound imaging at ultrafast frame rates (more than 500 frames per second) provides an analogue to optical localization microscopy by capturing the transient signal decorrelation of contrast agents—inert gas microbubbles. Ultrafast ultrasound localization microscopy allowed both non-invasive sub-wavelength structural imaging and haemodynamic quantification of rodent cerebral microvessels (less than ten micrometres in diameter) more than ten millimetres below the tissue surface, leading to transcranial whole-brain imaging within short acquisition times (tens of seconds). After intravenous injection, single echoes from individual microbubbles were detected through ultrafast imaging. Their localization, not limited by diffraction, was accumulated over 75,000 images, yielding 1,000,000 events per coronal plane and statistically independent pixels of ten micrometres in size. Precise temporal tracking of microbubble positions allowed us to extract accurately in-plane velocities of the blood flow with a large dynamic range (from one millimetre per second to several centimetres per second). These results pave the way for deep non-invasive microscopy with ultrasound.

  1. Assessment of voluntary deep inspiration breath-hold with CINE imaging for breast radiotherapy.

    Science.gov (United States)

    Estoesta, Reuben Patrick; Attwood, Lani; Naehrig, Diana; Claridge-Mackonis, Elizabeth; Odgers, David; Martin, Darren; Pham, Melissa; Toohey, Joanne; Carroll, Susan

    2017-10-01

    Deep Inspiration Breath-Hold (DIBH) techniques for breast cancer radiation therapy (RT) have reduced cardiac dose compared to Free Breathing (FB). Recently, a voluntary deep inspiration breath-hold (vDIBH) technique was established using in-room lasers and skin tattoos to monitor breath-hold. An in-house quality assessment of positional reproducibility during RT delivery with vDIBH in patients with left-sided breast cancer was performed. The electronic portal imaging device (EPID) was used in cinematographic (CINE) mode to capture a sequence of images during beam delivery. Weekly CINE images were retrospectively assessed for 20 left-sided breast cancer patients receiving RT in vDIBH, and compared with CINE images of 20 patients treated in FB. The intra-beam motion was assessed, and the distance from the beam central axis (CA) to the internal chest wall (ICW) was measured on each CINE image. These measurements were then compared to the planned distance on the digitally reconstructed radiograph (DRR). The maximum intra-beam motion for any one patient measurement was 0.30 cm for vDIBH and 0.20 cm for FB. The mean difference between the distance from the CA to the ICW on the DRR and the equivalent distance on CINE imaging (as treated) was 0.28 cm (SD 0.17) for vDIBH patients and 0.25 cm (SD 0.14) for FB patients (P = 0.458). The measured values were comparable for patients undergoing RT in vDIBH and for those in FB. This quality assessment showed that using in-room lasers and skin tattoos to independently monitor breath-hold in vDIBH, as verified by 'on-treatment' CINE imaging, is safe and effective. © 2017 The Royal Australian and New Zealand College of Radiologists.

  2. Early detection of lung cancer from CT images: nodule segmentation and classification using deep learning

    Science.gov (United States)

    Sharma, Manu; Bhatt, Jignesh S.; Joshi, Manjunath V.

    2018-04-01

    Lung cancer is one of the most common causes of cancer death worldwide. It has a low survival rate, mainly due to late diagnosis. With the hardware advancements in computed tomography (CT) technology, it is now possible to capture high-resolution images of the lung region. However, this needs to be augmented by efficient algorithms that detect lung cancer in the earlier stages using the acquired CT images. To this end, we propose a two-step algorithm for early detection of lung cancer. Given the CT image, we first extract a patch from the center location of the nodule and segment the lung nodule region. We propose to use the Otsu method followed by morphological operations for the segmentation. This step enables accurate segmentation due to the use of a data-driven threshold. Unlike other methods, we perform the segmentation without using the complete contour information of the nodule. In the second step, a deep convolutional neural network (CNN) is used for the better classification (malignant or benign) of the nodule present in the segmented patch. Accurate segmentation of even a tiny nodule, followed by better classification using a deep CNN, enables the early detection of lung cancer. Experiments have been conducted using 6306 CT images from the LIDC-IDRI database. We achieved a test accuracy of 84.13%, with a sensitivity and specificity of 91.69% and 73.16%, respectively, clearly outperforming the state-of-the-art algorithms.
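
    The segmentation step can be sketched with scikit-image as follows; the structuring-element size is a hypothetical choice.

      # Otsu thresholding followed by morphological clean-up of the nodule mask.
      from skimage.filters import threshold_otsu
      from skimage.morphology import binary_opening, binary_closing, disk

      def segment_nodule(patch):
          """patch: 2D float array cropped around the nodule centre -> binary mask."""
          t = threshold_otsu(patch)              # data-driven threshold
          mask = patch > t
          mask = binary_opening(mask, disk(2))   # remove small speckle
          mask = binary_closing(mask, disk(2))   # fill small gaps in the nodule
          return mask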

  3. Visibility Enhancement of Scene Images Degraded by Foggy Weather Conditions with Deep Neural Networks

    Directory of Open Access Journals (Sweden)

    Farhan Hussain

    2016-01-01

    Full Text Available Nowadays many camera-based advanced driver assistance systems (ADAS) have been introduced to assist drivers and ensure their safety under various driving conditions. One of the problems faced by drivers is faded scene visibility and lower contrast while driving in foggy conditions. In this paper, we present a novel approach to this problem employing deep neural networks. We assume that the fog in an image can be mathematically modeled by an unknown complex function, and we utilize a deep neural network to approximate the corresponding mathematical model of the fog. The advantages of our technique are (i) its real-time operation and (ii) being based on minimal input, that is, a single image, while exhibiting robustness/generalization for various unseen image data. Experiments carried out on various synthetic images indicate that our proposed technique can approximate the corresponding fog function reasonably well and remove it for better visibility and safety.

  4. Deep Ly alpha imaging of two z=2.04 GRB host galaxy fields

    DEFF Research Database (Denmark)

    Fynbo, J.P.U.; Møller, Per; Thomsen, Bente

    2002-01-01

    We report on the results of deep narrow-band Lyalpha and broad-band U and I imaging of the fields of two gamma-ray bursts at redshift z = 2.04 (GRB 000301C and GRB 000926). We find that the host galaxy of GRB 000926 is an extended (more than 2 arcsec), strong Lyalpha emitter with a large rest-frame equivalent width. The western component of the host has a redder U - I colour than the eastern component, suggesting the presence of at least some dust. We do not detect the host galaxy of GRB 000301C in either Lyalpha emission or in the U and I broad-band images; the strongest limit comes from combining the narrow-band and U-band imaging.

  5. Deep supervised dictionary learning for no-reference image quality assessment

    Science.gov (United States)

    Huang, Yuge; Liu, Xuesong; Tian, Xiang; Zhou, Fan; Chen, Yaowu; Jiang, Rongxin

    2018-03-01

    We propose a deep convolutional neural network (CNN) for general no-reference image quality assessment (NR-IQA), i.e., accurate prediction of image quality without a reference image. The proposed model consists of three components: a local feature extractor that is a fully convolutional network, an encoding module with an inherent dictionary that aggregates local features to output a fixed-length global quality-aware image representation, and a regression module that maps the representation to an image quality score. Our model can be trained in an end-to-end manner, and all of the parameters, including the weights of the convolutional layers, the dictionary, and the regression weights, are learned simultaneously from the loss function. In addition, the model can predict quality scores for input images of arbitrary sizes in a single step. We tested our method on commonly used image quality databases and showed that its performance is comparable with that of state-of-the-art general-purpose NR-IQA algorithms.

  6. Distinguishing Computer-Generated Graphics from Natural Images Based on Sensor Pattern Noise and Deep Learning

    Directory of Open Access Journals (Sweden)

    Ye Yao

    2018-04-01

    Full Text Available Computer-generated graphics (CGs) are images generated by computer software. The rapid development of computer graphics technologies has made it easier to generate photorealistic computer graphics, and these graphics are quite difficult to distinguish from natural images (NIs) with the naked eye. In this paper, we propose a method based on sensor pattern noise (SPN) and deep learning to distinguish CGs from NIs. Before being fed into our convolutional neural network (CNN)-based model, these images (CGs and NIs) are clipped into image patches. Furthermore, three high-pass filters (HPFs) are used to remove low-frequency signals, which represent the image content. These filters are also used to reveal the residual signal as well as the SPN introduced by the digital camera device. Different from traditional methods of distinguishing CGs from NIs, the proposed method utilizes a five-layer CNN to classify the input image patches. Based on the classification results of the image patches, we deploy a majority vote scheme to obtain the classification results for the full-size images. The experiments demonstrate that (1) the proposed method with three HPFs achieves better results than with only one HPF or no HPF, and (2) the proposed method with three HPFs achieves 100% accuracy, even when the NIs undergo JPEG compression with a quality factor of 75.
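
    The patch-level majority vote can be sketched as below, assuming a trained patch classifier is available; the names are illustrative.

      # Aggregate per-patch decisions into one full-image decision.
      import numpy as np

      def classify_full_image(patches, patch_classifier):
          """patches: iterable of image patches from one image -> 'CG' or 'NI'."""
          votes = [patch_classifier(p) for p in patches]   # each vote: 0 = CG, 1 = NI
          return "NI" if np.mean(votes) > 0.5 else "CG"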

  7. Automatic classification of ovarian cancer types from cytological images using deep convolutional neural networks.

    Science.gov (United States)

    Wu, Miao; Yan, Chuanbo; Liu, Huiqiang; Liu, Qian

    2018-06-29

    Ovarian cancer is one of the most common gynecologic malignancies. Accurate classification of ovarian cancer types (serous carcinoma, mucinous carcinoma, endometrioid carcinoma, clear cell carcinoma) is an essential part of the differential diagnosis. Computer-aided diagnosis (CADx) can provide useful advice for pathologists to determine the diagnosis correctly. In our study, we employed a Deep Convolutional Neural Network (DCNN) based on AlexNet to automatically classify the different types of ovarian cancers from cytological images. The DCNN consists of five convolutional layers, three max pooling layers, and two fully connected layers. We then trained the model with two groups of input data separately: one was the original image data, and the other was augmented image data, including image enhancement and image rotation. The testing results were obtained by 10-fold cross-validation, showing that the accuracy of the classification models was improved from 72.76% to 78.20% by using augmented images as training data. The developed scheme is useful for classifying ovarian cancers from cytological images. © 2018 The Author(s).

  8. Fifty years of computer analysis in chest imaging: rule-based, machine learning, deep learning.

    Science.gov (United States)

    van Ginneken, Bram

    2017-03-01

    Half a century ago, the term "computer-aided diagnosis" (CAD) was introduced in the scientific literature. Pulmonary imaging, with chest radiography and computed tomography, has always been one of the focus areas in this field. In this study, I describe how machine learning became the dominant technology for tackling CAD in the lungs, generally producing better results than do classical rule-based approaches, and how the field is now rapidly changing: in the last few years, we have seen how even better results can be obtained with deep learning. The key differences among rule-based processing, machine learning, and deep learning are summarized and illustrated for various applications of CAD in the chest.

  9. Intracluster light in clusters of galaxies at redshifts 0.4 < z < 0.8

    Science.gov (United States)

    Guennou, L.; Adami, C.; Da Rocha, C.; Durret, F.; Ulmer, M. P.; Allam, S.; Basa, S.; Benoist, C.; Biviano, A.; Clowe, D.; Gavazzi, R.; Halliday, C.; Ilbert, O.; Johnston, D.; Just, D.; Kron, R.; Kubo, J. M.; Le Brun, V.; Marshall, P.; Mazure, A.; Murphy, K. J.; Pereira, D. N. E.; Rabaça, C. R.; Rostagni, F.; Rudnick, G.; Russeil, D.; Schrabback, T.; Slezak, E.; Tucker, D.; Zaritsky, D.

    2012-01-01

    Context. The study of intracluster light (ICL) can help us to understand the mechanisms taking place in galaxy clusters, and to place constraints on the cluster formation history and physical properties. However, owing to the intrinsic faintness of ICL emission, most searches and detailed studies of ICL have been limited to redshifts z < 0.4. Aims: We search for ICL in a subsample of ten galaxy clusters at 0.4 < z < 0.8 from the DAFT/FADA Survey. Methods: We analyze the ICL by applying the OV WAV package, a wavelet-based technique, to deep HST ACS images in the F814W filter and to V-band VLT/FORS2 images of three clusters. Detection levels are assessed as a function of the diffuse-light source surface brightness using simulations. Results: In the F814W filter images, we detect diffuse light sources in all the clusters, with typical sizes of a few tens of kpc (assuming that they are at the cluster redshifts). The ICL detected by stacking the ten F814W images shows an 8σ detection in the source center extending over a ~50 × 50 kpc² area, with a total absolute magnitude of -21.6 in the F814W filter, equivalent to about two L∗ galaxies per cluster. We find a weak correlation between the total F814W absolute magnitude of the ICL and the cluster velocity dispersion and mass. There is no apparent correlation between the cluster mass-to-light ratio (M/L) and the amount of ICL, and no evidence of any preferential orientation in the ICL source distribution. We find no strong variation in the amount of ICL between z = 0 and z = 0.8. In addition, we find wavelet-detected compact objects (WDCOs) in the three clusters for which data in two bands are available; these objects are probably very faint compact galaxies that in some cases are members of the respective clusters and comparable to the faint dwarf galaxies of the Local Group. Conclusions: We show that the ICL is prevalent in clusters at least up to redshift z = 0.8. In the future, we propose to detect the ICL at even higher redshifts, to determine whether there is a particular stage of cluster evolution where it
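
    Distances implicit in such measurements can be computed quickly with astropy, assuming a flat Lambda-CDM cosmology; the H0 and Om0 values below are generic assumptions, not necessarily those adopted by the authors.

      # Distance modulus and angular scale at a cluster redshift of z = 0.8.
      from astropy.cosmology import FlatLambdaCDM
      import astropy.units as u

      cosmo = FlatLambdaCDM(H0=70 * u.km / u.s / u.Mpc, Om0=0.3)  # assumed values
      z = 0.8
      print(cosmo.distmod(z))                     # distance modulus (m - M)
      print(cosmo.luminosity_distance(z))         # luminosity distance in Mpc
      print(cosmo.kpc_proper_per_arcmin(z).to(u.kpc / u.arcsec))  # angular scale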

  10. A novel end-to-end classifier using domain transferred deep convolutional neural networks for biomedical images.

    Science.gov (United States)

    Pang, Shuchao; Yu, Zhezhou; Orgun, Mehmet A

    2017-03-01

    Highly accurate classification of biomedical images is an essential task in the clinical diagnosis of numerous medical diseases identified from those images. Traditional image classification methods, combining hand-crafted image feature descriptors with various classifiers, are not able to effectively improve the accuracy rate and meet the high requirements of classification of biomedical images. The same also holds true for artificial neural network models directly trained with limited biomedical images used as training data, or directly used as a black box to extract deep features based on another, distant dataset. In this study, we propose a highly reliable and accurate end-to-end classifier for all kinds of biomedical images via deep learning and transfer learning. We first apply a domain-transferred deep convolutional neural network to build a deep model, and then develop an overall deep learning architecture based on the raw pixels of the original biomedical images using supervised training. In our model, we do not need to design the feature space manually, seek an effective feature-vector classifier, or segment specific detection objects and image patches, which are the main technological difficulties in the adoption of traditional image classification methods. Moreover, we do not need to be concerned with whether there are large training sets of annotated biomedical images, affordable parallel computing resources featuring GPUs, or long waits to train a perfect deep model, which are the main problems in training deep neural networks for biomedical image classification as observed in recent works. With the utilization of a simple data augmentation method and fast convergence speed, our algorithm can achieve the best accuracy rate and outstanding classification ability for biomedical images. We have evaluated our classifier on several well-known public biomedical datasets and compared it with several state-of-the-art approaches. We propose a robust

  11. Deep learning for the detection of barchan dunes in satellite images

    Science.gov (United States)

    Azzaoui, A. M.; Adnani, M.; Elbelrhiti, H.; Chaouki, B. E. K.; Masmoudi, L.

    2017-12-01

    Barchan dunes are known to be the fastest moving sand dunes in deserts, as they form under unidirectional winds and limited sand supply over a firm coherent basement (Elbelrhiti and Hargitai, 2015). They have been studied in the context of natural hazard monitoring, as they can be a threat to human activities and infrastructure, and as a natural phenomenon occurring on other planetary bodies such as Mars or Venus (Bourke et al., 2010). Our region of interest is located in a desert region in the south of Morocco, in a barchan dune corridor next to the town of Tarfaya. This region, which is part of the Sahara desert, contains thousands of barchans, which limits the number of dunes that can be studied during field missions. Therefore, we chose to monitor barchan dunes with satellite imagery, which can be seen as a complementary approach to field missions. We collected data from the Sentinel platform (https://scihub.copernicus.eu/dhus/) and used a machine learning method as the basis for detecting barchan dune positions in the satellite image. We trained a deep learning model on a mid-sized dataset containing blocks representing images of barchan dunes and images of other desert features, which we collected by cropping and annotating the source image. During testing, we scanned the satellite image with a sliding window that evaluated each block and produced a probability map; a threshold on this map then exposed the locations of barchan dunes. We used a subsample of the data to train the model and gradually incremented the size of the training set to get finer results and avoid overfitting. The positions of barchan dunes were successfully detected, and deep learning proved an effective method for this application. Sentinel-2 images were chosen for their availability and good temporal resolution, which will allow the tracking of barchan dunes in future work. While Sentinel images had sufficient spatial resolution for the
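
    The sliding-window inference can be sketched as follows, assuming a trained block classifier that returns the probability of a barchan; the window size, stride, and threshold are hypothetical.

      # Build a probability map by scanning the scene with a sliding window.
      import numpy as np

      def probability_map(image, classify_block, win=64, stride=32):
          """image: 2D array -> grid of P(barchan) for each window position."""
          rows = (image.shape[0] - win) // stride + 1
          cols = (image.shape[1] - win) // stride + 1
          prob = np.zeros((rows, cols))
          for i in range(rows):
              for j in range(cols):
                  block = image[i * stride:i * stride + win,
                                j * stride:j * stride + win]
                  prob[i, j] = classify_block(block)
          return prob

      # dune_mask = probability_map(scene, model_predict) > 0.9  # threshold step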

  12. Deep Arm/Ear-ECG Image Learning for Highly Wearable Biometric Human Identification.

    Science.gov (United States)

    Zhang, Qingxue; Zhou, Dian

    2018-01-01

    In this study, to advance smart health applications, which have increasing security/privacy requirements, we propose a novel highly wearable ECG-based user identification system empowered by both non-standard convenient ECG lead configurations and deep learning techniques. Specifically, to achieve superior wearability, we suggest situating all the ECG electrodes on the left upper arm, or behind the ears, and successfully obtain weak but distinguishable ECG waveforms. To identify individuals from weak ECG, we then present a two-stage framework comprising ECG imaging and deep feature learning/identification. In the first stage, the ECG heartbeats are projected onto a 2D state space to reveal the heartbeats' trajectory behaviors and produce 2D images by a split-then-hit method. In the second stage, a convolutional neural network automatically learns the intricate patterns directly from the ECG image representations without heavy feature engineering and then performs user identification. Experimental results on two datasets acquired with our wearable prototype show promising identification rates of 98.4% (single-arm ECG) and 91.1% (ear ECG), respectively. To the best of our knowledge, this is the first study on the feasibility of using single-arm ECG or ear ECG for user identification, and it is expected to contribute to pervasive ECG-based user identification in smart health applications.
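
    The paper's "split-then-hit" projection is not spelled out in this abstract, so the sketch below uses a generic delay embedding to turn a 1-D heartbeat into a 2-D trajectory image of the kind a CNN could consume; the lag and bin count are illustrative assumptions.

    ```python
    import numpy as np

    def heartbeat_to_image(beat, lag=8, bins=32):
        """Rasterize the 2-D state-space trajectory (x[t], x[t+lag]) of a
        1-D heartbeat into a bins x bins image (generic delay embedding,
        standing in for the paper's split-then-hit projection)."""
        x, y = beat[:-lag], beat[lag:]
        img, _, _ = np.histogram2d(x, y, bins=bins)
        return img / max(img.max(), 1e-9)  # normalize to [0, 1] for a CNN
    ```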

  13. HEp-2 cell image classification method based on very deep convolutional networks with small datasets

    Science.gov (United States)

    Lu, Mengchi; Gao, Long; Guo, Xifeng; Liu, Qiang; Yin, Jianping

    2017-07-01

    Classification of the staining patterns of Human Epithelial-2 (HEp-2) cell images is widely used to identify autoimmune diseases via the anti-nuclear antibody (ANA) test in the Indirect Immunofluorescence (IIF) protocol. Because the manual test is time consuming, subjective, and labor intensive, image-based Computer Aided Diagnosis (CAD) systems for HEp-2 cell classification are being developed. However, recently proposed methods mostly rely on manual feature extraction and achieve low accuracy. In addition, the available benchmark datasets are small, which is not well suited to deep learning methods and directly limits classification accuracy even after data augmentation. To address these issues, this paper presents a highly accurate automatic HEp-2 cell classification method for small datasets that utilizes very deep convolutional networks (VGGNet). Specifically, the proposed method consists of three main phases: image preprocessing, feature extraction, and classification. Moreover, an improved VGGNet is presented to address the challenges of small-scale datasets. Experimental results on two benchmark datasets demonstrate that the proposed method achieves superior accuracy compared with existing methods.
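
    A minimal sketch of the VGGNet transfer-learning idea, assuming PyTorch/torchvision (the paper presents its own improved VGGNet; the frozen-features recipe, class count, and layer index here are assumptions):

    ```python
    import torch.nn as nn
    from torchvision import models

    def build_hep2_classifier(num_classes=6):
        """Adapt a pretrained VGG16 to HEp-2 staining-pattern classes."""
        net = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
        for p in net.features.parameters():
            p.requires_grad = False              # keep convolutional features
        net.classifier[6] = nn.Linear(4096, num_classes)  # new output head
        return net
    ```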

  14. Deep convective cloud characterizations from both broadband imager and hyperspectral infrared sounder measurements

    Science.gov (United States)

    Ai, Yufei; Li, Jun; Shi, Wenjing; Schmit, Timothy J.; Cao, Changyong; Li, Wanbiao

    2017-02-01

    Deep convective storms have contributed to airplane accidents, making them a threat to aviation safety. The most common method of identifying deep convective clouds (DCCs) uses the brightness temperature difference (BTD) between the atmospheric infrared (IR) window band and the water vapor (WV) absorption band. The effectiveness of the BTD method for DCC detection is highly related to the spectral resolution and signal-to-noise ratio (SNR) of the WV band. To understand the sensitivity of BTD to spectral resolution and SNR for DCC detection, a BTD-to-noise ratio method based on the difference between the WV and IR window radiances is developed to assess the uncertainty of DCC identification for different instruments. We examined the case of AirAsia Flight QZ8501. The brightness temperatures (Tbs) over DCCs from this case are simulated for BTD sensitivity studies with a fast forward radiative transfer model under an opaque-cloud assumption, for both a broadband imager (e.g., the Multifunction Transport Satellite, MTSAT-2, imager) and a hyperspectral IR sounder (e.g., the Atmospheric Infrared Sounder); we also examined the relationship between the simulated Tb and the cloud top height. Results show that despite the coarser spatial resolution, BTDs measured by a hyperspectral IR sounder are much more sensitive to high cloud tops than broadband BTDs. As demonstrated in this study, a hyperspectral IR sounder can identify DCCs with better accuracy.
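
    A sketch of the core BTD test, with a noise-aware margin in the spirit of the BTD-to-noise ratio idea: over DCC tops the water-vapor band becomes as warm as or warmer than the IR window, so BTD approaches or exceeds zero. The NEdT values and the factor k are illustrative assumptions, not the paper's settings.

    ```python
    import numpy as np

    def dcc_mask(tb_wv, tb_irw, nedt_wv=0.2, nedt_irw=0.1, k=3.0):
        """Flag deep convective clouds from brightness temperatures (K).

        BTD = Tb(WV) - Tb(IR window); pixels where BTD is above zero, or
        within k times the combined instrument noise of it, are flagged.
        """
        btd = tb_wv - tb_irw
        noise = np.hypot(nedt_wv, nedt_irw)  # combined noise of the two bands
        return btd >= -k * noise
    ```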

  15. Automatic detection of kidney in 3D pediatric ultrasound images using deep neural networks

    Science.gov (United States)

    Tabrizi, Pooneh R.; Mansoor, Awais; Biggs, Elijah; Jago, James; Linguraru, Marius George

    2018-02-01

    Ultrasound (US) imaging is the routine and safe diagnostic modality for detecting pediatric urology problems, such as hydronephrosis in the kidney. Hydronephrosis is the swelling of one or both kidneys caused by a build-up of urine, and its early detection can lead to a substantial improvement in kidney health outcomes. In general, US imaging is a challenging modality for the evaluation of pediatric kidneys with differing shape, size, and texture characteristics. The aim of this study is to present an automatic detection method to aid kidney analysis in pediatric 3DUS images. The method localizes the kidney based on its minimum-volume oriented bounding box using deep neural networks. Separate deep neural networks are trained to estimate the kidney position, orientation, and scale, making the method computationally efficient by avoiding full parameter training. The performance of the method was evaluated on a dataset of 45 kidneys (18 normal and 27 diseased kidneys diagnosed with hydronephrosis) using leave-one-out cross-validation. Quantitative results show the proposed detection method could extract the kidney position, orientation, and scale ratio with root mean square errors of 1.3 ± 0.9 mm, 6.34 ± 4.32 degrees, and 1.73 ± 0.04, respectively. This method could be helpful in automating kidney segmentation for routine clinical evaluation.
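
    A sketch of the separate-networks idea: one small 3-D regressor per parameter group (position, orientation, scale) rather than a single network for all bounding-box parameters. Layer sizes and kernel choices are illustrative, assuming PyTorch.

    ```python
    import torch.nn as nn

    class BoxRegressor(nn.Module):
        """Small 3-D CNN regressing one group of bounding-box parameters."""
        def __init__(self, out_dim):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
                nn.Conv3d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2))
            self.head = nn.Sequential(
                nn.Flatten(), nn.LazyLinear(64), nn.ReLU(),
                nn.Linear(64, out_dim))

        def forward(self, x):
            return self.head(self.features(x))

    position_net = BoxRegressor(out_dim=3)     # box center (x, y, z)
    orientation_net = BoxRegressor(out_dim=3)  # rotation angles
    scale_net = BoxRegressor(out_dim=1)        # size/scale ratio
    ```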

  16. Deep learning for automatic localization, identification, and segmentation of vertebral bodies in volumetric MR images

    Science.gov (United States)

    Suzani, Amin; Rasoulian, Abtin; Seitel, Alexander; Fels, Sidney; Rohling, Robert N.; Abolmaesumi, Purang

    2015-03-01

    This paper proposes an automatic method for vertebra localization, labeling, and segmentation in multi-slice Magnetic Resonance (MR) images. Prior work in this area on MR images mostly requires user interaction, whereas our method is fully automatic. Cubic intensity-based features are extracted from image voxels. A deep learning approach is used for simultaneous localization and identification of vertebrae. The localized points are refined by local thresholding in the region of the detected vertebral column. Thereafter, a statistical multi-vertebrae model is initialized on the localized vertebrae, and an iterative Expectation Maximization technique is used to register the vertebral bodies of the model to the image edges and obtain a segmentation of the lumbar vertebral bodies. The method is evaluated by applying it to nine volumetric MR images of the spine. The results demonstrate 100% vertebra identification and a mean surface error below 2.8 mm for 3D segmentation. Computation time is less than three minutes per high-resolution volumetric image.

  17. A Deep Convolutional Coupling Network for Change Detection Based on Heterogeneous Optical and Radar Images.

    Science.gov (United States)

    Liu, Jia; Gong, Maoguo; Qin, Kai; Zhang, Puzhao

    2018-03-01

    We propose an unsupervised deep convolutional coupling network for change detection based on two heterogeneous images acquired by optical sensors and radars on different dates. Most existing change detection methods are based on homogeneous images; due to the complementary properties of optical and radar sensors, there is increasing interest in change detection based on heterogeneous images. The proposed network is symmetric, with each side consisting of one convolutional layer and several coupling layers. The two input images, connected to the two sides of the network respectively, are transformed into a feature space where their feature representations become more consistent. In this feature space, the difference map is calculated, which then leads to the final detection map by applying a thresholding algorithm. The network parameters are learned by optimizing a coupling function. The learning process is unsupervised, which distinguishes it from most existing change detection methods based on heterogeneous images. Experimental results on both homogeneous and heterogeneous images demonstrate the promising performance of the proposed network compared with several existing approaches.
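
    Once the two coupled sides have mapped both images into a common feature space, the detection step reduces to a per-pixel difference map plus thresholding. The sketch below uses a Euclidean norm and a simple Otsu threshold as stand-ins for the paper's choices.

    ```python
    import numpy as np

    def change_map(feat_a, feat_b):
        """Binarize the per-pixel distance between two H x W x C feature
        stacks; Otsu's method picks the threshold from the histogram."""
        diff = np.linalg.norm(feat_a - feat_b, axis=-1)
        diff = (diff - diff.min()) / (diff.max() - diff.min() + 1e-9)
        hist, edges = np.histogram(diff, bins=256, range=(0.0, 1.0))
        p = hist / hist.sum()
        w = np.cumsum(p)                      # cumulative class probability
        mu = np.cumsum(p * np.arange(256))    # cumulative mean
        sigma_b = (mu[-1] * w - mu) ** 2 / (w * (1.0 - w) + 1e-12)
        return diff >= edges[np.argmax(sigma_b)]
    ```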

  18. PyDBS: an automated image processing workflow for deep brain stimulation surgery.

    Science.gov (United States)

    D'Albis, Tiziano; Haegelen, Claire; Essert, Caroline; Fernández-Vidal, Sara; Lalys, Florent; Jannin, Pierre

    2015-02-01

    Deep brain stimulation (DBS) is a surgical procedure for treating motor-related neurological disorders. DBS clinical efficacy hinges on precise surgical planning and accurate electrode placement, which in turn call upon several image processing and visualization tasks, such as image registration, image segmentation, image fusion, and 3D visualization. These tasks are often performed by a heterogeneous set of software tools, which adopt differing formats and geometrical conventions and require patient-specific parameterization or interactive tuning. To overcome these issues, we introduce in this article PyDBS, a fully integrated and automated image processing workflow for DBS surgery. PyDBS consists of three image processing pipelines and three visualization modules assisting clinicians through the entire DBS surgical workflow, from the preoperative planning of electrode trajectories to the postoperative assessment of electrode placement. The system's robustness, speed, and accuracy were assessed by means of a retrospective validation based on 92 clinical cases. The complete PyDBS workflow achieved satisfactory results in 92% of tested cases, with a median processing time of 28 min per patient. The results obtained are compatible with the adoption of PyDBS in clinical practice.

  19. Application of Deep Networks to Oil Spill Detection Using Polarimetric Synthetic Aperture Radar Images

    Directory of Open Access Journals (Sweden)

    Guandong Chen

    2017-09-01

    Full Text Available Polarimetric synthetic aperture radar (SAR) remote sensing provides an outstanding tool for oil spill detection and classification, owing to its advantages in distinguishing mineral oil from biogenic lookalikes. Various features can be extracted from polarimetric SAR data, and their large number and correlated nature mean that feature selection and optimization affect the performance of oil spill classification algorithms. In this paper, deep learning algorithms such as the stacked autoencoder (SAE) and deep belief network (DBN) are applied to optimize the polarimetric feature sets and reduce the feature dimension through layer-wise unsupervised pre-training. An experiment was conducted on a RADARSAT-2 quad-polarimetric SAR image acquired during the Norwegian oil-on-water exercise of 2011, in which verified mineral oil, emulsion, and biogenic slicks were analyzed. The results show that oil spill classification by deep networks outperformed both support vector machines (SVM) and traditional artificial neural networks (ANN) with similar parameter settings, especially when the number of training samples is limited.
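
    A minimal sketch of the greedy layer-wise pre-training used to compress a polarimetric feature vector with a stacked autoencoder; the layer dimensions, epochs, and learning rate are illustrative assumptions, written for PyTorch.

    ```python
    import torch
    import torch.nn as nn

    def pretrain_sae(x, dims=(32, 16, 8), epochs=50, lr=1e-3):
        """Greedy layer-wise pre-training: each autoencoder learns to
        reconstruct the codes of the previous layer; the final codes are
        the reduced features handed to a classifier."""
        encoders, inp = [], x
        for d in dims:
            enc, dec = nn.Linear(inp.shape[1], d), nn.Linear(d, inp.shape[1])
            opt = torch.optim.Adam(
                list(enc.parameters()) + list(dec.parameters()), lr=lr)
            for _ in range(epochs):
                opt.zero_grad()
                code = torch.sigmoid(enc(inp))
                loss = nn.functional.mse_loss(dec(code), inp)
                loss.backward()
                opt.step()
            encoders.append(enc)
            inp = torch.sigmoid(enc(inp)).detach()  # codes feed the next layer
        return encoders, inp
    ```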

  20. Combining deep learning and coherent anti-Stokes Raman scattering imaging for automated differential diagnosis of lung cancer

    Science.gov (United States)

    Weng, Sheng; Xu, Xiaoyun; Li, Jiasong; Wong, Stephen T. C.

    2017-10-01

    Lung cancer is the most prevalent type of cancer and the leading cause of cancer-related deaths worldwide. Coherent anti-Stokes Raman scattering (CARS) is capable of providing cellular-level images and resolving pathologically related features of human lung tissues. However, conventional means of analyzing CARS images require extensive image processing, feature engineering, and human intervention. This study demonstrates the feasibility of applying a deep learning algorithm to automatically differentiate normal and cancerous lung tissue images acquired by CARS. We leverage the features learned by pretrained deep neural networks and retrain the model using CARS images as the input. We achieve 89.2% accuracy in classifying normal, small-cell carcinoma, adenocarcinoma, and squamous cell carcinoma lung images. This computational method is a step toward on-the-spot diagnosis of lung cancer and can be further strengthened by efforts to miniaturize the CARS technique for fiber-based microendoscopic imaging.

  1. Spectral characterization in deep UV of an improved imaging KDP acousto-optic tunable filter

    International Nuclear Information System (INIS)

    Gupta, Neelam; Voloshinov, Vitaly

    2014-01-01

    Recently, we developed a number of high quality noncollinear acousto-optic tunable filter (AOTF) cells in different birefringent materials with UV imaging capability. Cells based on a single crystal of KDP (potassium dihydrogen phosphate) had the best transmission efficiency and the optical throughput needed to acquire high quality spectral images at wavelengths above 220 nm. One of the main limitations of these imaging filters was their small angular aperture in air, limited to about 1.0°. In this paper, we describe an improved imaging KDP AOTF operating from the deep UV to the visible region of the spectrum. The linear and angular apertures of the new filter are 10 × 10 mm² and 1.8°, respectively. The spectral tuning range is 205–430 nm with a 60 cm⁻¹ spectral resolution. We describe the filter and present experimental results on imaging using both a broadband source and a number of light emitting diodes (LEDs) in the UV, and include the measured spectra of these LEDs obtained with a collinear SiO₂ filter-based spectrometer operating above 255 nm.

  2. Energy-Looping Nanoparticles: Harnessing Excited-State Absorption for Deep-Tissue Imaging.

    Science.gov (United States)

    Levy, Elizabeth S; Tajon, Cheryl A; Bischof, Thomas S; Iafrati, Jillian; Fernandez-Bravo, Angel; Garfield, David J; Chamanzar, Maysamreza; Maharbiz, Michel M; Sohal, Vikaas S; Schuck, P James; Cohen, Bruce E; Chan, Emory M

    2016-09-27

    Near infrared (NIR) microscopy enables noninvasive imaging in tissue, particularly in the NIR-II spectral range (1000-1400 nm) where attenuation due to tissue scattering and absorption is minimized. Lanthanide-doped upconverting nanocrystals are promising deep-tissue imaging probes due to their photostable emission in the visible and NIR, but these materials are not efficiently excited at NIR-II wavelengths due to the dearth of lanthanide ground-state absorption transitions in this window. Here, we develop a class of lanthanide-doped imaging probes that harness an energy-looping mechanism that facilitates excitation at NIR-II wavelengths, such as 1064 nm, that are resonant with excited-state absorption transitions but not ground-state absorption. Using computational methods and combinatorial screening, we have identified Tm³⁺-doped NaYF₄ nanoparticles as efficient looping systems that emit at 800 nm under continuous-wave excitation at 1064 nm. Using this benign excitation with standard confocal microscopy, energy-looping nanoparticles (ELNPs) are imaged in cultured mammalian cells and through brain tissue without autofluorescence. The 1 mm imaging depths and 2 μm feature sizes are comparable to those demonstrated by state-of-the-art multiphoton techniques, illustrating that ELNPs are a promising class of NIR probes for high-fidelity visualization in cells and tissue.

  3. Planetary Radar Imaging with the Deep-Space Network's 34 Meter Uplink Array

    Science.gov (United States)

    Vilnrotter, Victor; Tsao, P.; Lee, D.; Cornish, T.; Jao, J.; Slade, M.

    2011-01-01

    A coherent Uplink Array consisting of two or three 34-meter antennas of NASA's Deep Space Network has been developed for the primary purpose of increasing EIRP at the spacecraft. Greater EIRP ensures greater reach, higher uplink data rates for command and configuration control, as well as improved search and recovery capabilities during spacecraft emergencies. It has been conjectured that Doppler-delay radar imaging of lunar targets can be extended to planetary imaging, where the long baseline of the uplink array can provide greater resolution than a single antenna, as well as potentially higher EIRP. However, due to the well-known R⁴ loss in radar links, imaging of distant planets is a very challenging endeavor, requiring accurate phasing of the Uplink Array antennas, cryogenically cooled low-noise receiver amplifiers, and sophisticated processing of the received data to extract the weak echoes characteristic of planetary radar. This article describes experiments currently under way to image the planets Mercury and Venus, highlights improvements in equipment and techniques, and presents planetary images obtained to date with two 34-meter antennas configured as a coherently phased Uplink Array.

  4. Deep Convolutional Neural Networks for Classifying Body Constitution Based on Face Image.

    Science.gov (United States)

    Huan, Er-Yang; Wen, Gui-Hua; Zhang, Shi-Jun; Li, Dan-Yang; Hu, Yang; Chang, Tian-Yuan; Wang, Qing; Huang, Bing-Lin

    2017-01-01

    Body constitution classification is the basis and core content of constitution research in traditional Chinese medicine; its goal is to extract the relevant laws from complex constitution phenomena and build a constitution classification system. Traditional identification methods, such as questionnaires, are inefficient and inaccurate. This paper proposes a body constitution recognition algorithm based on a deep convolutional neural network, which can classify individual constitution types from face images. The proposed model first uses the convolutional neural network to extract features from the face image and then combines the extracted features with color features. Finally, the fused features are input to a Softmax classifier to obtain the classification result. Comparison experiments show that the proposed algorithm achieves an accuracy of 65.29% for constitution classification, and its performance was accepted by Chinese medicine practitioners.

  5. Accurate Classification of Protein Subcellular Localization from High-Throughput Microscopy Images Using Deep Learning

    Directory of Open Access Journals (Sweden)

    Tanel Pärnamaa

    2017-05-01

    Full Text Available High-throughput microscopy of many single cells generates high-dimensional data that are far from straightforward to analyze. One important problem is automatically detecting the cellular compartment where a fluorescently-tagged protein resides, a task relatively simple for an experienced human, but difficult to automate on a computer. Here, we train an 11-layer neural network on data from mapping thousands of yeast proteins, achieving per cell localization classification accuracy of 91%, and per protein accuracy of 99% on held-out images. We confirm that low-level network features correspond to basic image characteristics, while deeper layers separate localization classes. Using this network as a feature calculator, we train standard classifiers that assign proteins to previously unseen compartments after observing only a small number of training examples. Our results are the most accurate subcellular localization classifications to date, and demonstrate the usefulness of deep learning for high-throughput microscopy.
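
    The "feature calculator" step lends itself to a short sketch: freeze the trained network, extract late-layer activations, and fit an ordinary classifier on a handful of labeled examples from a new compartment. The `feature_net` interface below is a hypothetical stand-in for the trained localization network.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def few_shot_compartment_classifier(feature_net, images, labels):
        """Fit a standard classifier on deep features of a few examples.

        `feature_net(img)` is assumed to return a 1-D vector of late-layer
        activations from the trained network."""
        feats = np.vstack([feature_net(img) for img in images])
        return LogisticRegression(max_iter=1000).fit(feats, labels)
    ```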

  6. Accurate Classification of Protein Subcellular Localization from High-Throughput Microscopy Images Using Deep Learning.

    Science.gov (United States)

    Pärnamaa, Tanel; Parts, Leopold

    2017-05-05

    High-throughput microscopy of many single cells generates high-dimensional data that are far from straightforward to analyze. One important problem is automatically detecting the cellular compartment where a fluorescently-tagged protein resides, a task relatively simple for an experienced human, but difficult to automate on a computer. Here, we train an 11-layer neural network on data from mapping thousands of yeast proteins, achieving per cell localization classification accuracy of 91%, and per protein accuracy of 99% on held-out images. We confirm that low-level network features correspond to basic image characteristics, while deeper layers separate localization classes. Using this network as a feature calculator, we train standard classifiers that assign proteins to previously unseen compartments after observing only a small number of training examples. Our results are the most accurate subcellular localization classifications to date, and demonstrate the usefulness of deep learning for high-throughput microscopy. Copyright © 2017 Parnamaa and Parts.

  7. Real-time magnetic resonance imaging of deep venous flow during muscular exercise-preliminary experience.

    Science.gov (United States)

    Joseph, Arun Antony; Merboldt, Klaus-Dietmar; Voit, Dirk; Dahm, Johannes; Frahm, Jens

    2016-12-01

    The accurate assessment of peripheral venous flow is important for the early diagnosis and treatment of disorders such as deep-vein thrombosis (DVT), a major cause of post-thrombotic syndrome or even death due to pulmonary embolism. The aim of this work is to quantitatively determine blood flow in deep veins during rest and muscular exercise using a novel real-time magnetic resonance imaging (MRI) method for velocity-encoded phase-contrast (PC) MRI at high spatiotemporal resolution. Real-time PC MRI of eight healthy volunteers and one patient was performed at 3 Tesla (Prisma fit, Siemens, Erlangen, Germany) using a flexible 16-channel receive coil (Variety, NORAS, Hoechberg, Germany). Acquisitions were based on a highly undersampled radial FLASH sequence with image reconstruction by regularized nonlinear inversion at 0.5 × 0.5 × 6 mm³ spatial resolution and 100 ms temporal resolution. Flow was assessed in two cross-sections of the lower leg at the level of the calf muscle and knee using a protocol of 10 s rest, 20 s flexion and extension of the foot, and 10 s rest. Quantitative analyses included through-plane flow in the right posterior tibial, right peroneal, and popliteal veins (PC maps) as well as signal intensity changes due to flow and muscle movements (corresponding magnitude images). Real-time PC MRI successfully monitored the dynamics of venous flow at high spatiotemporal resolution and clearly demonstrated increased flow in deep veins in response to flexion and extension of the foot. In normal subjects, the maximum velocity (averaged across the vessel lumen) during exercise was 9.4 ± 5.7 cm·s⁻¹ for the right peroneal vein, 8.5 ± 4.6 cm·s⁻¹ for the right posterior tibial vein, and 17.8 ± 5.8 cm·s⁻¹ for the popliteal vein. The integrated flow volume per exercise (20 s) was 1.9, 1.6, and 50 mL (mean across subjects) for the right peroneal, right posterior tibial, and popliteal veins, respectively. A patient with DVT presented with peak flow velocities of only

  8. Status of backthinned AlGaN based focal plane arrays for deep-UV imaging

    Science.gov (United States)

    Reverchon, J.-L.; Lehoucq, G.; Truffer, J.-P.; Costard, E.; Frayssinet, E.; Semond, F.; Duboz, J.-Y.; Giuliani, A.; Réfrégiers, M.; Idir, M.

    2017-11-01

    The achievement of deep ultraviolet (UV) focal plane arrays (FPA) is required for both solar physics [1] and the microelectronics industry. The success of solar missions (SOHO, STEREO [2], SDO [3], ...) has shown the value of imaging at wavelengths from 10 nm to 140 nm to reveal effects occurring in the solar corona. Deep-UV steppers at 13 nm are another demanding imaging technology for the microelectronics industry, in terms of uniformity and stability. A third application concerns beam shaping of synchrotron lines [4]. Such wavelengths are therefore of prime importance, yet vacuum UV wavelengths are very difficult to detect due to the strong interaction of light with materials. The fast development of nitrides has given the opportunity to investigate AlGaN as a material for UV detection. Cameras based on AlGaN present an intrinsic spectral selectivity and an extremely low dark current at room temperature. We have previously presented several FPAs dedicated to the deep UV, based on 320 x 256 pixels of Schottky photodiodes with a pitch of 30 μm [4, 5]. AlGaN is grown on a silicon substrate instead of a sapphire substrate, which is transparent only down to 200 nm. After flip-chip hybridization, the silicon substrate and AlGaN basal layer were removed by dry etching. The spectral responsivity of the FPA then presented a quantum efficiency (QE) of 5% to 20% from 50 nm to 290 nm once the highly doped contact layer was removed via selective wet etching. This FPA suffered from a low uniformity incompatible with imaging and a long time response due to conductivity variations in the honeycomb. We also observed a low rejection of visible light, probably due to the same honeycomb conductivity enhancement at wavelengths shorter than 360 nm, i.e., the band gap of GaN. We show hereafter an improved uniformity due to a precisely controlled ICP (Inductively Coupled Plasma) process. The final membrane thickness is limited to the depletion layer. Neither access resistance

  9. Reflection imaging of the Moon's interior using deep-moonquake seismic interferometry

    Science.gov (United States)

    Nishitsuji, Yohei; Rowe, C. A.; Wapenaar, Kees; Draganov, Deyan

    2016-04-01

    The internal structure of the Moon has been investigated over many years using a variety of seismic methods, such as travel time analysis, receiver functions, and tomography. Here we propose to apply body-wave seismic interferometry to deep moonquakes in order to retrieve zero-offset reflection responses (and thus images) beneath the Apollo stations on the nearside of the Moon, from virtual sources colocated with the stations. We call this method deep-moonquake seismic interferometry (DMSI). Our results show a laterally coherent acoustic boundary around 50 km depth beneath all four Apollo stations, which we interpret as the lunar seismic Moho. This depth agrees with the Japan Aerospace Exploration Agency's (JAXA) SELenological and Engineering Explorer (SELENE) result and with previous travel time analysis at the Apollo 12/14 sites. The deeper part of the image obtained from DMSI shows laterally incoherent structures; we interpret this lateral inhomogeneity as a zone characterized by strong scattering and constant apparent seismic velocity at our resolution scale (0.2-2.0 Hz).

  10. Degradation of CMOS image sensors in deep-submicron technology due to γ-irradiation

    Science.gov (United States)

    Rao, Padmakumar R.; Wang, Xinyang; Theuwissen, Albert J. P.

    2008-09-01

    In this work, radiation-induced damage mechanisms in deep-submicron technology are resolved using finger gated-diodes (FGDs) as a radiation-sensitive tool. These simple yet efficient structures are found to resolve radiation-induced damage in advanced CMOS processes. The degradation of CMOS image sensors in deep-submicron technology due to γ-ray irradiation is studied by developing a model for the spectral response of the sensor, and through the dark-signal degradation as a function of STI (shallow-trench isolation) parameters. Threshold shifts at the gate-oxide/silicon interface as well as minority carrier lifetime variations in the silicon bulk are found to be minimal, while the top-layer material properties and the photodiode Si-SiO2 interface quality are degraded by γ-ray irradiation. The results further suggest that p-well passivated structures are indispensable for radiation-hard designs, and that the high electric fields in submicron technologies pose a threat to high quality imaging in harsh environments.

  11. Human fatigue expression recognition through image-based dynamic multi-information and bimodal deep learning

    Science.gov (United States)

    Zhao, Lei; Wang, Zengcai; Wang, Xiaojin; Qi, Yazhou; Liu, Qing; Zhang, Guoxin

    2016-09-01

    Human fatigue is an important cause of traffic accidents. To improve the safety of transportation, we propose in this paper a framework for fatigue expression recognition using image-based facial dynamic multi-information and a bimodal deep neural network. First, the landmarks of the face region and the texture of the eye region, which complement each other in fatigue expression recognition, are extracted from facial image sequences captured by a single camera. Then, two stacked autoencoder neural networks are trained for landmark and texture, respectively. Finally, the two trained neural networks are combined by learning a joint layer on top of them to construct a bimodal deep neural network. The model can be used to extract a unified representation that fuses the landmark and texture modalities and to classify fatigue expressions accurately. The proposed system is tested on a human fatigue dataset obtained from an actual driving environment. The experimental results demonstrate that the proposed method performs stably and robustly, achieving an average accuracy of 96.2%.

  12. DEEP KECK u-BAND IMAGING OF THE HUBBLE ULTRA DEEP FIELD: A CATALOG OF z ∼ 3 LYMAN BREAK GALAXIES

    International Nuclear Information System (INIS)

    Rafelski, Marc; Wolfe, Arthur M.; Cooke, Jeff; Chen, H.-W.; Armandroff, Taft E.; Wirth, Gregory D.

    2009-01-01

    We present a sample of 407 z ∼ 3 Lyman break galaxies (LBGs) to a limiting isophotal u-band magnitude of 27.6 mag in the Hubble Ultra Deep Field. The LBGs are selected using a combination of photometric redshifts and the u-band drop-out technique enabled by the introduction of an extremely deep u-band image obtained with the Keck I telescope and the blue channel of the Low Resolution Imaging Spectrometer. The Keck u-band image, totaling 9 hr of integration time, has a 1σ depth of 30.7 mag arcsec⁻², making it one of the most sensitive u-band images ever obtained. The u-band image also substantially improves the accuracy of photometric redshift measurements of ∼50% of the z ∼ 3 LBGs, significantly reducing the traditional degeneracy of colors between z ∼ 3 and z ∼ 0.2 galaxies. This sample provides the most sensitive, high-resolution multi-filter imaging of reliably identified z ∼ 3 LBGs for morphological studies of galaxy formation and evolution and the star formation efficiency of gas at high redshift.
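
    For readers unfamiliar with the drop-out technique, a generic u-band dropout color cut looks like the sketch below; the numbers are illustrative textbook-style values, not the selection actually used in the paper, which also folds in photometric redshifts.

    ```python
    def is_uband_dropout(u, g, r):
        """Illustrative Lyman-break color cuts for z ~ 3 candidates:
        red in u-g (flux suppressed blueward of the Lyman break) while
        staying blue in g-r (an intrinsically blue star-forming galaxy)."""
        return (u - g) > 1.0 and (g - r) < 1.2 and (u - g) > (g - r) + 1.0
    ```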

  13. A comparative study of deep learning models for medical image classification

    Science.gov (United States)

    Dutta, Suvajit; Manideep, B. C. S.; Rai, Shalva; Vijayarajan, V.

    2017-11-01

    Deep learning (DL) techniques are displacing traditional neural network approaches for applications with huge datasets and complex functions that demand higher accuracy at lower time complexity. Neuroscience has already exploited DL techniques, making it an inspirational source for researchers exploring machine learning. DL work spans vision, speech recognition, motion planning, and NLP, with ideas moving back and forth among these fields; it concerns building models that can successfully solve a variety of tasks requiring intelligence and distributed representations. Access to faster CPUs, the introduction of GPUs performing complex vector and matrix computations, agile network connectivity, and enhanced software infrastructure for distributed computing have all strengthened the case for DL methodologies. This paper compares DL procedures with traditional approaches, performed manually, for classifying medical images. The medical images used for the study are diabetic retinopathy (DR) and computed tomography (CT) emphysema data; diagnosis from both DR and CT data is a difficult task for standard image classification methods. The initial work combined basic image processing with K-means clustering to identify image severity levels. After determining the severity levels, an ANN was applied to the data to obtain a baseline classification result, which was then compared with deep neural networks (DNNs); the DNNs performed efficiently thanks to multiple hidden layers, which increase accuracy, but the vanishing-gradient problem in DNNs motivated considering convolutional neural networks (CNNs) as well for better results. The CNNs were found to provide better outcomes than the other learning models aimed at classification of images. CNNs are

  14. Deep inspiration breath-hold radiotherapy for lung cancer: impact on image quality and registration uncertainty in cone beam CT image guidance

    DEFF Research Database (Denmark)

    Josipovic, Mirjana; Persson, Gitte F; Bangsgaard, Jens Peter

    2016-01-01

    OBJECTIVE: We investigated the impact of deep inspiration breath-hold (DIBH) and tumour baseline shifts on image quality and registration uncertainty in image-guided DIBH radiotherapy (RT) for locally advanced lung cancer. METHODS: Patients treated with daily cone beam CT (CBCT)-guided free...

  15. Using Deep Learning Algorithm to Enhance Image-review Software for Surveillance Cameras

    Energy Technology Data Exchange (ETDEWEB)

    Cui, Yonggang

    2018-05-07

    We propose the development of proven deep learning algorithms to flag objects and events of interest in Next Generation Surveillance System (NGSS) surveillance to make IAEA image review more efficient. Video surveillance is one of the core monitoring technologies used by the IAEA Department of Safeguards when implementing safeguards at nuclear facilities worldwide. The current image review software GARS has limited automated functions, such as scene-change detection, black image detection and missing scene analysis, but struggles with highly cluttered backgrounds. A cutting-edge algorithm to be developed in this project will enable efficient and effective searches in images and video streams by identifying and tracking safeguards relevant objects and detect anomalies in their vicinity. In this project, we will develop the algorithm, test it with the IAEA surveillance cameras and data sets collected at simulated nuclear facilities at BNL and SNL, and implement it in a software program for potential integration into the IAEA’s IRAP (Integrated Review and Analysis Program).

  16. Super-resolution for asymmetric resolution of FIB-SEM 3D imaging using AI with deep learning.

    Science.gov (United States)

    Hagita, Katsumi; Higuchi, Takeshi; Jinnai, Hiroshi

    2018-04-12

    Scanning electron microscopy equipped with a focused ion beam (FIB-SEM) is a promising three-dimensional (3D) imaging technique for nano- and meso-scale morphologies. In FIB-SEM, the specimen surface is stripped by an ion beam and imaged by an SEM installed orthogonally to the FIB. The lateral resolution is governed by the SEM, while the depth resolution, i.e., the FIB milling direction, is determined by the thickness of the stripped thin layer. In most cases, the lateral resolution is superior to the depth resolution; hence, asymmetric resolution is generated in the 3D image. Here, we propose a new approach based on an image-processing or deep-learning-based method for super-resolution of 3D images with such asymmetric resolution, so as to restore the depth resolution to achieve symmetric resolution. The deep-learning-based method learns from high-resolution sub-images obtained via SEM and recovers low-resolution sub-images parallel to the FIB milling direction. The 3D morphologies of polymeric nano-composites are used as test images, which are subjected to the deep-learning-based method as well as conventional methods. We find that the former yields superior restoration, particularly as the asymmetric resolution is increased. Our super-resolution approach for images having asymmetric resolution enables observation time reduction.

  17. Deep Interior Mission: Imaging the Interior of Near-Earth Asteroids Using Radio Reflection Tomography

    Science.gov (United States)

    Safaeinili, A.; Asphaug, E.; Belton, M.; Klaasen, K.; Ostro, S.; Plaut, J.; Yeomans, D.

    2004-12-01

    Near-Earth asteroids are important exploration targets since they provide clues to the evolution of the solar system. They are also of interest because they present a clear danger to Earth in the future. Our mission objective is to image the internal structure of two NEOs using radio reflection tomography (RRT), in order to explore the record of asteroid origin and impact evolution, and to test the fundamental hypothesis that these important members of the solar system are rubble piles rather than consolidated bodies. Our mission's RRT technique is analogous to performing a "CAT scan" of the asteroid from orbit. Closely sampled radar echoes are processed to yield volumetric maps of mechanical and compositional boundaries and to measure interior material dielectric properties. The RRT instrument is a radar that operates at 5 and 15 MHz with two 30-m (tip-to-tip) dipole antennas used in a cross-dipole configuration. The radar transmitter and receiver electronics have heritage from JPL's MARSIS contribution to Mars Express, and the antenna is similar to systems used in the IMAGE and LACE missions. The 5-MHz channel is designed to penetrate >1 km of basaltic rock, and the 15-MHz channel penetrates a few hundred meters or more. In addition to RRT volumetric imaging, we use redundant color cameras to explore the surface expressions of unit boundaries, in order to relate interior radar imaging to what is observable from spacecraft imaging and from Earth. The camera also yields stereo color imaging for geology and RRT-related compositional analysis. Gravity and high-fidelity geodesy are used to explore how interior structure is expressed in shape, density, mass distribution, and spin. Deep Interior has two targets (S-type 1999 ND43 and V-type Nyx) whose compositions bracket the diversity of solar system materials that we are likely to encounter and are richly complementary.

  18. Deep learning for tissue microarray image-based outcome prediction in patients with colorectal cancer

    Science.gov (United States)

    Bychkov, Dmitrii; Turkki, Riku; Haglund, Caj; Linder, Nina; Lundin, Johan

    2016-03-01

    Recent advances in computer vision enable increasingly accurate automated pattern classification. In the current study we evaluate whether a convolutional neural network (CNN) can be trained to predict disease outcome in patients with colorectal cancer based on images of tumor tissue microarray samples. We compare the prognostic accuracy of CNN features extracted from the whole, unsegmented tissue microarray spot image with that of CNN features extracted from the epithelial and non-epithelial compartments, respectively. The prognostic accuracy of visually assessed histologic grade is used as a reference. The image data set consists of digitized hematoxylin-eosin (H&E) stained tissue microarray samples obtained from 180 patients with colorectal cancer. The patient samples represent a variety of histological grades, have data available on a series of clinicopathological variables including long-term outcome, and carry ground truth annotations performed by experts. The CNN features extracted from images of the epithelial tissue compartment significantly predicted outcome (hazard ratio (HR) 2.08; CI95% 1.04-4.16; area under the curve (AUC) 0.66) in a test set of 60 patients, as compared to the CNN features extracted from unsegmented images (HR 1.67; CI95% 0.84-3.31; AUC 0.57) and visually assessed histologic grade (HR 1.96; CI95% 0.99-3.88; AUC 0.61). In conclusion, a deep learning classifier can be trained to predict the outcome of colorectal cancer based on images of H&E stained tissue microarray samples, and the CNN features extracted from the epithelial compartment alone yield a prognostic discrimination comparable to that of visually determined histologic grade.

  19. Classification of the Clinical Images for Benign and Malignant Cutaneous Tumors Using a Deep Learning Algorithm.

    Science.gov (United States)

    Han, Seung Seog; Kim, Myoung Shin; Lim, Woohyung; Park, Gyeong Hun; Park, Ilwoo; Chang, Sung Eun

    2018-02-08

    We tested the use of a deep learning algorithm to classify the clinical images of 12 skin diseases: basal cell carcinoma, squamous cell carcinoma, intraepithelial carcinoma, actinic keratosis, seborrheic keratosis, malignant melanoma, melanocytic nevus, lentigo, pyogenic granuloma, hemangioma, dermatofibroma, and wart. The convolutional neural network (Microsoft ResNet-152 model; Microsoft Research Asia, Beijing, China) was fine-tuned with images from the training portion of the Asan dataset, MED-NODE dataset, and atlas site images (19,398 images in total). The trained model was validated with the testing portion of the Asan, Hallym and Edinburgh datasets. With the Asan dataset, the area under the curve for the diagnosis of basal cell carcinoma, squamous cell carcinoma, intraepithelial carcinoma, and melanoma was 0.96 ± 0.01, 0.83 ± 0.01, 0.82 ± 0.02, and 0.96 ± 0.00, respectively. With the Edinburgh dataset, the area under the curve for the corresponding diseases was 0.90 ± 0.01, 0.91 ± 0.01, 0.83 ± 0.01, and 0.88 ± 0.01, respectively. With the Hallym dataset, the sensitivity for basal cell carcinoma diagnosis was 87.1% ± 6.0%. The tested algorithm performance with 480 Asan and Edinburgh images was comparable to that of 16 dermatologists. To improve the performance of convolutional neural network, additional images with a broader range of ages and ethnicities should be collected. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.

  20. Deep Constrained Siamese Hash Coding Network and Load-Balanced Locality-Sensitive Hashing for Near Duplicate Image Detection.

    Science.gov (United States)

    Hu, Weiming; Fan, Yabo; Xing, Junliang; Sun, Liang; Cai, Zhaoquan; Maybank, Stephen

    2018-09-01

    We construct a new efficient near duplicate image detection method using a hierarchical hash code learning neural network and load-balanced locality-sensitive hashing (LSH) indexing. We propose a deep constrained siamese hash coding neural network combined with deep feature learning. Our neural network is able to extract effective features for near duplicate image detection. The extracted features are used to construct a LSH-based index. We propose a load-balanced LSH method to produce load-balanced buckets in the hashing process. The load-balanced LSH significantly reduces the query time. Based on the proposed load-balanced LSH, we design an effective and feasible algorithm for near duplicate image detection. Extensive experiments on three benchmark data sets demonstrate the effectiveness of our deep siamese hash encoding network and load-balanced LSH.
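
    The hashing-then-bucketing flow can be sketched with plain random-hyperplane LSH; the learned siamese hash codes and the load-balancing mechanism that evens out bucket occupancy are the paper's contributions and are not reproduced here.

    ```python
    import numpy as np

    def lsh_index(codes, n_planes=16, seed=0):
        """Hash N code vectors (rows) into buckets via random hyperplanes.

        Near-duplicate queries then only compare against items sharing a
        bucket key instead of scanning the whole collection."""
        rng = np.random.default_rng(seed)
        planes = rng.standard_normal((codes.shape[1], n_planes))
        bits = (codes @ planes) >= 0          # one sign bit per hyperplane
        keys = np.packbits(bits, axis=1)      # pack bits into bucket keys
        index = {}
        for i, key in enumerate(map(bytes, keys)):
            index.setdefault(key, []).append(i)
        return index, planes
    ```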

  1. Deep linear autoencoder and patch clustering-based unified one-dimensional coding of image and video

    Science.gov (United States)

    Li, Honggui

    2017-09-01

    This paper proposes a unified one-dimensional (1-D) coding framework for image and video that depends on a deep neural network and image patch clustering. First, an improved K-means clustering algorithm for image patches is employed to obtain compact inputs for the deep artificial neural network. Second, for the purpose of best reconstructing the original image patches, a deep linear autoencoder (DLA), a linear version of the classical deep nonlinear autoencoder, is introduced to achieve the 1-D representation of image blocks. With a 1-D representation, the DLA is capable of attaining zero reconstruction error, which is impossible for classical nonlinear dimensionality reduction methods. Third, a unified 1-D coding infrastructure for image, intraframe, interframe, multiview video, three-dimensional (3-D) video, and multiview 3-D video is built by incorporating the different categories of video into the inputs of the patch clustering algorithm. Finally, simulation experiments show that the proposed methods simultaneously attain a higher compression ratio and peak signal-to-noise ratio than state-of-the-art methods in low-bitrate transmission.
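
    For the linear-autoencoder step, a linear encoder/decoder pair is equivalent to a truncated SVD (PCA), which gives the least-squares-optimal linear reconstruction that the DLA builds on. A minimal sketch over one cluster of patches, with k=1 producing the 1-D codes:

    ```python
    import numpy as np

    def linear_autoencode(patches, k=1):
        """Encode N patches (rows) to k dimensions and decode them back
        using the top-k right singular vectors of the centered data."""
        mean = patches.mean(axis=0)
        u, s, vt = np.linalg.svd(patches - mean, full_matrices=False)
        codes = (patches - mean) @ vt[:k].T   # encoder: project onto basis
        recon = codes @ vt[:k] + mean         # decoder: linear reconstruction
        return codes, recon
    ```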

  2. A resolution adaptive deep hierarchical (RADHicaL) learning scheme applied to nuclear segmentation of digital pathology images.

    Science.gov (United States)

    Janowczyk, Andrew; Doyle, Scott; Gilmore, Hannah; Madabhushi, Anant

    2018-01-01

    Deep learning (DL) has recently been successfully applied to a number of image analysis problems. However, DL approaches tend to be inefficient for segmentation on large image data, such as high-resolution digital pathology slide images. For example, typical breast biopsy images scanned at 40× magnification contain billions of pixels, of which usually only a small percentage belong to the class of interest. For a typical naïve deep learning scheme, parsing through and interrogating all the image pixels would represent hundreds if not thousands of hours of compute time in high performance computing environments. In this paper, we present a resolution adaptive deep hierarchical (RADHicaL) learning scheme wherein DL networks at lower resolutions are leveraged to determine whether higher levels of magnification, and thus computation, are necessary to provide precise results. We evaluate our approach on a nuclear segmentation task with a cohort of 141 ER+ breast cancer images and show we can reduce computation time on average by about 85%. Expert annotations of 12,000 nuclei across these 141 images were employed for quantitative evaluation of RADHicaL. A head-to-head comparison with a naïve DL approach operating solely at the highest magnification yielded the following performance metrics: 0.9407 vs. 0.9854 detection rate, 0.8218 vs. 0.8489 F-score, 0.8061 vs. 0.8364 true positive rate, and 0.8822 vs. 0.8932 positive predictive value. Our performance indices compare favourably with state-of-the-art nuclear segmentation approaches for digital pathology images.
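
    The control flow of the resolution-adaptive idea fits in a few lines: run the cheapest network first and ascend to higher magnification only when its prediction is ambiguous. The thresholds and interfaces below are assumptions for illustration, not the paper's settings.

    ```python
    def radhical_predict(patch_pyramid, nets, lo=0.2, hi=0.8):
        """patch_pyramid: the same patch at increasing magnification;
        nets: matched classifiers returning P(patch contains nuclei)."""
        for patch, net in zip(patch_pyramid, nets):
            p = net(patch)
            if p <= lo or p >= hi:   # confident either way -> stop early
                return p
        return p                     # highest magnification is the fallback
    ```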

  3. Applying deep learning technology to automatically identify metaphase chromosomes using scanning microscopic images: an initial investigation

    Science.gov (United States)

    Qiu, Yuchen; Lu, Xianglan; Yan, Shiju; Tan, Maxine; Cheng, Samuel; Li, Shibo; Liu, Hong; Zheng, Bin

    2016-03-01

    Automated high-throughput scanning microscopy is a fast-developing screening technology used in cytogenetic laboratories for the diagnosis of leukemia and other genetic diseases. However, one of the major challenges of using this new technology is how to efficiently detect the analyzable metaphase chromosomes during the scanning process. The purpose of this investigation is to develop a computer-aided detection (CAD) scheme based on deep learning technology that can identify metaphase chromosomes with high accuracy. The CAD scheme includes an eight-layer neural network. The first six layers form an automatic feature extraction module with an architecture of three convolution/max-pooling layer pairs; the first, second, and third pairs contain 30, 20, and 20 feature maps, respectively. The seventh and eighth layers form a multilayer perceptron (MLP) classifier, which is used to identify the analyzable metaphase chromosomes. The performance of the new CAD scheme was assessed by the receiver operating characteristic (ROC) method on 150 regions of interest (ROIs), each containing either an interphase cell or metaphase chromosomes. The results indicate that the new scheme achieves an area under the ROC curve (AUC) of 0.886 ± 0.043. This investigation demonstrates that applying a deep learning technique may significantly improve the accuracy of metaphase chromosome detection with scanning microscopic imaging technology in the future.
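
    The abstract fully specifies the feature-map counts, so the architecture can be written down directly; the kernel sizes and MLP width are not stated and are assumed here (PyTorch sketch).

    ```python
    import torch.nn as nn

    def metaphase_cnn(num_classes=2):
        """Three convolution/max-pooling pairs (30, 20, 20 feature maps)
        followed by an MLP classifier, per the abstract."""
        return nn.Sequential(
            nn.Conv2d(1, 30, 5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(30, 20, 5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(20, 20, 5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.LazyLinear(100), nn.ReLU(),   # MLP hidden width assumed
            nn.Linear(100, num_classes))
    ```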

  4. Automatic segmentation of left ventricle in cardiac cine MRI images based on deep learning

    Science.gov (United States)

    Zhou, Tian; Icke, Ilknur; Dogdas, Belma; Parimal, Sarayu; Sampath, Smita; Forbes, Joseph; Bagchi, Ansuman; Chin, Chih-Liang; Chen, Antong

    2017-02-01

    In developing treatment of cardiovascular diseases, short axis cine MRI has been used as a standard technique for understanding the global structural and functional characteristics of the heart, e.g. ventricle dimensions, stroke volume and ejection fraction. To conduct an accurate assessment, heart structures need to be segmented from the cine MRI images with high precision, which could be a laborious task when performed manually. Herein a fully automatic framework is proposed for the segmentation of the left ventricle from the slices of short axis cine MRI scans of porcine subjects using a deep learning approach. For training the deep learning models, which generally requires a large set of data, a public database of human cine MRI scans is used. Experiments on the 3150 cine slices of 7 porcine subjects have shown that when comparing the automatic and manual segmentations the mean slice-wise Dice coefficient is about 0.930, the point-to-curve error is 1.07 mm, and the mean slice-wise Hausdorff distance is around 3.70 mm, which demonstrates the accuracy and robustness of the proposed inter-species translational approach.

  5. Conversion of Continuous-Valued Deep Networks to Efficient Event-Driven Networks for Image Classification.

    Science.gov (United States)

    Rueckauer, Bodo; Lungu, Iulia-Alexandra; Hu, Yuhuang; Pfeiffer, Michael; Liu, Shih-Chii

    2017-01-01

    Spiking neural networks (SNNs) can potentially offer an efficient way of doing inference because the neurons in the networks are sparsely activated and computations are event-driven. Previous work showed that simple continuous-valued deep Convolutional Neural Networks (CNNs) can be converted into accurate spiking equivalents. These networks did not include certain common operations such as max-pooling, softmax, batch-normalization and Inception-modules. This paper presents spiking equivalents of these operations therefore allowing conversion of nearly arbitrary CNN architectures. We show conversion of popular CNN architectures, including VGG-16 and Inception-v3, into SNNs that produce the best results reported to date on MNIST, CIFAR-10 and the challenging ImageNet dataset. SNNs can trade off classification error rate against the number of available operations whereas deep continuous-valued neural networks require a fixed number of operations to achieve their classification error rate. From the examples of LeNet for MNIST and BinaryNet for CIFAR-10, we show that with an increase in error rate of a few percentage points, the SNNs can achieve more than 2x reductions in operations compared to the original CNNs. This highlights the potential of SNNs in particular when deployed on power-efficient neuromorphic spiking neuron chips, for use in embedded applications.
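
    The conversion principle can be seen in a single neuron: an integrate-and-fire unit driven by a constant input equal to an analog ReLU activation fires at a rate proportional to that activation. The sketch below uses reset-by-subtraction, one common variant; the parameters are illustrative.

    ```python
    def if_neuron_rate(activation, t_steps=200, v_thresh=1.0):
        """Firing rate of an integrate-and-fire neuron under constant
        drive; approximates min(max(activation, 0), 1) for unit threshold."""
        v, spikes = 0.0, 0
        for _ in range(t_steps):
            v += max(activation, 0.0)   # negative drive never spikes (ReLU)
            if v >= v_thresh:
                v -= v_thresh           # reset by subtraction
                spikes += 1
        return spikes / t_steps

    # e.g. if_neuron_rate(0.37) ~= 0.37, mirroring the analog activation
    ```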

  6. Conversion of Continuous-Valued Deep Networks to Efficient Event-Driven Networks for Image Classification

    Directory of Open Access Journals (Sweden)

    Bodo Rueckauer

    2017-12-01

    Full Text Available Spiking neural networks (SNNs) can potentially offer an efficient way of doing inference because the neurons in the networks are sparsely activated and computations are event-driven. Previous work showed that simple continuous-valued deep Convolutional Neural Networks (CNNs) can be converted into accurate spiking equivalents. These networks did not include certain common operations such as max-pooling, softmax, batch-normalization and Inception-modules. This paper presents spiking equivalents of these operations therefore allowing conversion of nearly arbitrary CNN architectures. We show conversion of popular CNN architectures, including VGG-16 and Inception-v3, into SNNs that produce the best results reported to date on MNIST, CIFAR-10 and the challenging ImageNet dataset. SNNs can trade off classification error rate against the number of available operations whereas deep continuous-valued neural networks require a fixed number of operations to achieve their classification error rate. From the examples of LeNet for MNIST and BinaryNet for CIFAR-10, we show that with an increase in error rate of a few percentage points, the SNNs can achieve more than 2x reductions in operations compared to the original CNNs. This highlights the potential of SNNs in particular when deployed on power-efficient neuromorphic spiking neuron chips, for use in embedded applications.

  7. A deep learning framework for supporting the classification of breast lesions in ultrasound images

    Science.gov (United States)

    Han, Seokmin; Kang, Ho-Kyung; Jeong, Ja-Yeon; Park, Moon-Ho; Kim, Wonsik; Bang, Won-Chul; Seong, Yeong-Kyeong

    2017-10-01

    In this research, we exploited the deep learning framework to differentiate the distinctive types of lesions and nodules in breast tissue acquired with ultrasound imaging. A biopsy-proven benchmarking dataset was built from 5151 patient cases containing a total of 7408 ultrasound breast images of semi-automatically segmented lesions associated with masses. The dataset comprised 4254 benign and 3154 malignant lesions. The developed method includes histogram equalization, image cropping, and margin augmentation. The GoogLeNet convolutional neural network was trained on the database to differentiate benign and malignant tumors. Networks were trained on the data both with and without augmentation, and both showed an area under the curve of over 0.9. The networks showed an accuracy of about 0.9 (90%), a sensitivity of 0.86, and a specificity of 0.96. Although target regions of interest (ROIs) were selected by radiologists, meaning that radiologists still have to point out the location of the ROI, the classification of malignant lesions showed promising results. If used by radiologists in clinical situations, this method can classify malignant lesions in a short time and support radiologists' diagnoses in discriminating malignant lesions. The proposed method can therefore work in tandem with human radiologists to improve performance, which is a fundamental purpose of computer-aided diagnosis.

  8. Identifying Medical Diagnoses and Treatable Diseases by Image-Based Deep Learning.

    Science.gov (United States)

    Kermany, Daniel S; Goldbaum, Michael; Cai, Wenjia; Valentim, Carolina C S; Liang, Huiying; Baxter, Sally L; McKeown, Alex; Yang, Ge; Wu, Xiaokang; Yan, Fangbing; Dong, Justin; Prasadha, Made K; Pei, Jacqueline; Ting, Magdalene Y L; Zhu, Jie; Li, Christina; Hewett, Sierra; Dong, Jason; Ziyar, Ian; Shi, Alexander; Zhang, Runze; Zheng, Lianghong; Hou, Rui; Shi, William; Fu, Xin; Duan, Yaou; Huu, Viet A N; Wen, Cindy; Zhang, Edward D; Zhang, Charlotte L; Li, Oulan; Wang, Xiaobo; Singer, Michael A; Sun, Xiaodong; Xu, Jie; Tafreshi, Ali; Lewis, M Anthony; Xia, Huimin; Zhang, Kang

    2018-02-22

    The implementation of clinical-decision support algorithms for medical imaging faces challenges with reliability and interpretability. Here, we establish a diagnostic tool based on a deep-learning framework for the screening of patients with common treatable blinding retinal diseases. Our framework utilizes transfer learning, which trains a neural network with a fraction of the data of conventional approaches. Applying this approach to a dataset of optical coherence tomography images, we demonstrate performance comparable to that of human experts in classifying age-related macular degeneration and diabetic macular edema. We also provide a more transparent and interpretable diagnosis by highlighting the regions recognized by the neural network. We further demonstrate the general applicability of our AI system for diagnosis of pediatric pneumonia using chest X-ray images. This tool may ultimately aid in expediting the diagnosis and referral of these treatable conditions, thereby facilitating earlier treatment, resulting in improved clinical outcomes. VIDEO ABSTRACT. Copyright © 2018 Elsevier Inc. All rights reserved.

  9. Deep Adaptive Log-Demons: Diffeomorphic Image Registration with Very Large Deformations

    Directory of Open Access Journals (Sweden)

    Liya Zhao

    2015-01-01

    This paper proposes a new framework for capturing large and complex deformations in image registration. Traditionally, this challenging problem relies first on a preregistration, usually an affine matrix containing rotation, scale, and translation, and afterwards on a nonrigid transformation. In the preregistration step, the directly calculated affine matrix, which is obtained from limited pixel information, may misregister when large biases exist, subversively misleading the subsequent registration. To address this problem, for two-dimensional (2D) images, the two-layer deep adaptive registration framework proposed in this paper first accurately classifies the rotation parameter through multilayer convolutional neural networks (CNNs) and then identifies the scale and translation parameters separately. For three-dimensional (3D) images, the affine matrix is located through feature correspondences by triplanar 2D CNNs. Deformation removal is then performed iteratively through preregistration and demons registration. Compared with state-of-the-art registration frameworks, our method achieves more accurate registration results on both synthetic and real datasets. In addition, principal component analysis (PCA) is combined with correlation measures such as Pearson and Spearman to form new similarity standards in 2D and 3D registration. Experimental results also show faster convergence speed.
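
    For readers unfamiliar with the two-stage pipeline (preregistration followed by demons refinement), a minimal sketch using SimpleITK's built-in filters is shown below; the paper's CNN-based parameter estimation is replaced here by a simple geometric initializer, and the file names are placeholders.

        import SimpleITK as sitk

        # Placeholder file names; grayscale 2D images assumed.
        fixed = sitk.Cast(sitk.ReadImage("fixed.png"), sitk.sitkFloat32)
        moving = sitk.Cast(sitk.ReadImage("moving.png"), sitk.sitkFloat32)

        # Stage 1: affine preregistration (rotation, scale, translation).
        affine = sitk.CenteredTransformInitializer(
            fixed, moving, sitk.AffineTransform(fixed.GetDimension()),
            sitk.CenteredTransformInitializerFilter.GEOMETRY)
        moving_pre = sitk.Resample(moving, fixed, affine, sitk.sitkLinear, 0.0)

        # Stage 2: non-rigid refinement with diffeomorphic demons.
        demons = sitk.DiffeomorphicDemonsRegistrationFilter()
        demons.SetNumberOfIterations(200)
        demons.SetStandardDeviations(1.5)  # smooths the displacement field
        displacement = demons.Execute(fixed, moving_pre)
        warped = sitk.Resample(moving_pre, fixed,
                               sitk.DisplacementFieldTransform(displacement))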

  10. Magnetic resonance direct thrombus imaging differentiates acute recurrent ipsilateral deep vein thrombosis from residual thrombosis.

    Science.gov (United States)

    Tan, Melanie; Mol, Gerben C; van Rooden, Cornelis J; Klok, Frederikus A; Westerbeek, Robin E; Iglesias Del Sol, Antonio; van de Ree, Marcel A; de Roos, Albert; Huisman, Menno V

    2014-07-24

    Accurate diagnostic assessment of suspected ipsilateral recurrent deep vein thrombosis (DVT) is a major clinical challenge because differentiating between acute recurrent thrombosis and residual thrombosis is difficult with compression ultrasonography (CUS). We evaluated noninvasive magnetic resonance direct thrombus imaging (MRDTI) in a prospective study of 39 patients with symptomatic recurrent ipsilateral DVT (incompressibility of a different proximal venous segment than at the prior DVT) and 42 asymptomatic patients with at least 6-month-old chronic residual thrombi and normal D-dimer levels. All patients were subjected to MRDTI. MRDTI images were judged by 2 independent radiologists blinded for the presence of acute DVT and a third in case of disagreement. The sensitivity, specificity, and interobserver reliability of MRDTI were determined. MRDTI demonstrated acute recurrent ipsilateral DVT in 37 of 39 patients and was normal in all 42 patients without symptomatic recurrent disease for a sensitivity of 95% (95% CI, 83% to 99%) and a specificity of 100% (95% CI, 92% to 100%). Interobserver agreement was excellent (κ = 0.98). MRDTI images were adequate for interpretation in 95% of the cases. MRDTI is a sensitive and reproducible method for distinguishing acute ipsilateral recurrent DVT from 6-month-old chronic residual thrombi in the leg veins.

  11. Reconstruction of initial pressure from limited view photoacoustic images using deep learning

    Science.gov (United States)

    Waibel, Dominik; Gröhl, Janek; Isensee, Fabian; Kirchner, Thomas; Maier-Hein, Klaus; Maier-Hein, Lena

    2018-02-01

    Quantification of tissue properties with photoacoustic (PA) imaging typically requires a highly accurate representation of the initial pressure distribution in tissue. Almost all PA scanners reconstruct the PA image only from a partial scan of the emitted sound waves. Especially handheld devices, which have become increasingly popular due to their versatility and ease of use, only provide limited view data because of their geometry. Owing to such limitations in hardware as well as to the acoustic attenuation in tissue, state-of-the-art reconstruction methods deliver only approximations of the initial pressure distribution. To overcome the limited view problem, we present a machine learning-based approach to the reconstruction of initial pressure from limited view PA data. Our method involves a fully convolutional deep neural network based on a U-Net-like architecture with pixel-wise regression loss on the acquired PA images. It is trained and validated on in silico data generated with Monte Carlo simulations. In an initial study we found an increase in accuracy over the state-of-the-art when reconstructing simulated linear-array scans of blood vessels.
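
    A heavily reduced sketch of such a network, in PyTorch, illustrates the idea of a U-Net-like regressor trained with a pixel-wise loss; the channel widths and single skip connection are simplifications assumed here, not the authors' architecture.

        import torch
        import torch.nn as nn

        class TinyUNet(nn.Module):
            """Toy U-Net-style network for pixel-wise regression
            (input height/width must be even)."""
            def __init__(self):
                super().__init__()
                self.enc = nn.Sequential(
                    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
                self.down = nn.MaxPool2d(2)
                self.mid = nn.Sequential(
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
                self.up = nn.Upsample(scale_factor=2, mode="bilinear",
                                      align_corners=False)
                # After the skip connection: 16 + 32 = 48 input channels.
                self.dec = nn.Sequential(
                    nn.Conv2d(48, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 1, 1))  # initial-pressure map

            def forward(self, x):
                e = self.enc(x)
                m = self.up(self.mid(self.down(e)))
                return self.dec(torch.cat([e, m], dim=1))

        model = TinyUNet()
        criterion = nn.MSELoss()  # pixel-wise regression against simulation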

  12. Diagnosis of deep endometriosis: clinical examination, ultrasonography, magnetic resonance imaging, and other techniques.

    Science.gov (United States)

    Bazot, Marc; Daraï, Emile

    2017-12-01

    The aim of the present review was to evaluate the contribution of clinical examination and imaging techniques, mainly transvaginal sonography and magnetic resonance imaging (MRI), to the diagnosis of deep infiltrating endometriosis (DE) locations, following the PRISMA statement recommendations. Clinical examination has relatively low sensitivity and specificity for diagnosing DE. Independently of DE location, transvaginal sonography techniques show a pooled sensitivity and specificity of 79% and 94%, approaching the criteria for a triage test. Whatever the protocol and MRI device, the pooled sensitivity and specificity for pelvic endometriosis diagnosis were 94% and 77%, respectively. For rectosigmoid endometriosis, the pooled sensitivity and specificity of MRI were 92% and 96%, respectively, fulfilling the criteria for a replacement test. In conclusion, advances in imaging techniques offer high sensitivity and specificity for diagnosing DE, with at least triage value overall and replacement value for rectosigmoid endometriosis, calling for a revision of the concept of laparoscopy as the gold standard.

  13. Optical Performance of Breadboard Amon-Ra Imaging Channel Instrument for Deep Space Albedo Measurement

    Directory of Open Access Journals (Sweden)

    Won Hyun Park

    2007-03-01

    The Amon-Ra instrument, the primary payload of the international EARTHSHINE mission, is designed for measurement of deep space albedo from an L1 halo orbit. We report the optical design, tolerance analysis, and optical performance of the breadboard Amon-Ra imaging channel instrument optimized for the mission science requirements. In particular, an advanced wavefront feedback process control technique was used for the instrumentation process, including part fabrication, system alignment, and integration. The measured performances for the complete breadboard system are an RMS wavefront error of 0.091 waves (test wavelength: 632.8 nm), an ensquared energy of 61.7% (in 14 μm), and an MTF of 35.3% (Nyquist frequency: 35.7 mm^-1) at the center field. These optical system performances prove that the breadboard Amon-Ra instrument, as built, satisfies the science requirements of the EARTHSHINE mission.

  14. A Physics-Based Deep Learning Approach to Shadow Invariant Representations of Hyperspectral Images.

    Science.gov (United States)

    Windrim, Lloyd; Ramakrishnan, Rishi; Melkumyan, Arman; Murphy, Richard J

    2018-02-01

    This paper proposes the Relit Spectral Angle-Stacked Autoencoder, a novel unsupervised feature learning approach for mapping pixel reflectances to illumination invariant encodings. This work extends the Spectral Angle-Stacked Autoencoder so that it can learn a shadow-invariant mapping. The method is inspired by a deep learning technique, Denoising Autoencoders, with the incorporation of a physics-based model for illumination such that the algorithm learns a shadow invariant mapping without the need for any labelled training data, additional sensors, a priori knowledge of the scene or the assumption of Planckian illumination. The method is evaluated using datasets captured from several different cameras, with experiments to demonstrate the illumination invariance of the features and how they can be used practically to improve the performance of high-level perception algorithms that operate on images acquired outdoors.
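
    The underlying idea, corrupting inputs with a simulated shadow and training an autoencoder to map shadowed and lit spectra to the same representation, can be sketched in PyTorch as below; the uniform scaling is a crude stand-in for the paper's physics-based illumination model, and plain MSE replaces the spectral-angle objective for brevity.

        import torch
        import torch.nn as nn

        bands = 100  # number of spectral bands (illustrative)
        enc = nn.Sequential(nn.Linear(bands, 32), nn.Sigmoid())
        dec = nn.Sequential(nn.Linear(32, bands), nn.Sigmoid())
        opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()))

        def shadow_corrupt(spectra):
            # Crude shadow model: shaded pixels receive a reduced fraction
            # of the illumination (a real model would be wavelength-dependent).
            factor = 0.2 + 0.8 * torch.rand(spectra.shape[0], 1)
            return spectra * factor

        def train_step(lit_spectra):
            opt.zero_grad()
            recon = dec(enc(shadow_corrupt(lit_spectra)))
            loss = nn.functional.mse_loss(recon, lit_spectra)
            loss.backward()
            opt.step()
            return loss.item()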

  15. On the Multi-Modal Object Tracking and Image Fusion Using Unsupervised Deep Learning Methodologies

    Science.gov (United States)

    LaHaye, N.; Ott, J.; Garay, M. J.; El-Askary, H. M.; Linstead, E.

    2017-12-01

    The number of different modalities of remote sensors has been on the rise, resulting in large datasets with different complexity levels. Such complex datasets can provide valuable information separately, yet there is greater value in having a comprehensive view of them combined, as hidden information can be deduced by applying data-mining techniques to the fused data. The curse of dimensionality of such fused data, due to the potentially vast dimension space, hinders our ability to understand them deeply, because each dataset requires instrument-specific and dataset-specific knowledge for optimum and meaningful usage. Once a user decides to use multiple datasets together, a deeper understanding of how to translate and combine these datasets in a correct and effective manner is needed. Although data-centric techniques exist, generic automated methodologies that could solve this problem completely do not. Here we are developing a system that aims to gain a detailed understanding of different data modalities and will provide an analysis environment that gives the user useful feedback and can aid in research tasks. In our current work, we show the initial outputs of our system implementation, which leverages unsupervised deep learning techniques so as not to burden the user with the task of labeling input data, while still allowing for a detailed machine understanding of the data. Our goal is to be able to track objects, like cloud systems or aerosols, across different image-like data modalities. The proposed system is flexible, scalable, and robust enough to understand complex likenesses within multi-modal data in a similar spatio-temporal range, and to co-register and fuse these images when needed.

  16. Deep Interior: Radio Reflection Tomographic Imaging of Earth-Crossing Asteroids

    Science.gov (United States)

    Asphaug, E.; Belton, M.; Safaeinili, A.; Klaasen, K.; Ostro, S.; Yeomans, D.; Plaut, J.

    2004-12-01

    Near-Earth Objects (NEOs) present an important scientific question and an intriguing space hazard. They are scrutinized by a number of large, dedicated ground-based telescopes, and their diverse compositions are represented by thousands of well-studied meteorites. A successful program of NEO spacecraft exploration has begun, and we are proposing Deep Interior as the next logical step. Our mission objective is to image the deep interior structure of two NEOs using radio reflection tomography (RRT), in order to explore the record of asteroid origin and impact evolution, and to test the fundamental hypothesis that these important members of the solar system are rubble piles rather than consolidated bodies. Asteroid Interiors. Our mission's RRT technique is like a CAT scan from orbit. Closely sampled radar echoes yield volumetric maps of mechanical and compositional boundaries, and measure interior material dielectric properties. Exteriors. We use color imaging to explore the surface expressions of unit boundaries, in order to relate interior radar imaging to what is observable from spacecraft imaging and from Earth. Gravity and high-fidelity geodesy are used to explore how interior structure is expressed in shape, density, mass distribution, and spin. Diversity. We first visit a common, primitive, S-type asteroid. We next visit an asteroid that was perhaps blasted from the surface of a differentiated asteroid. We attain an up-close and inside look at two taxonomic archetypes spanning an important range of NEO mass and spin rate. Scientific focus is achieved by keeping our payload simple: Radar. A 30-m (tip-to-tip) cross-dipole antenna system operates at 5 and 15 MHz, with electronics heritage from JPL's MARSIS contribution to Mars Express, and antenna heritage from IMAGE and LACE. The 5-MHz channel is designed to penetrate >1 km of basaltic rock, and the 15-MHz channel penetrates a few hundred meters or more. They bracket the diversity of solar system materials that we are likely to

  17. Multi-Epoch Hubble Space Telescope Observations of IZw18: Characterization of Variable Stars at Ultra-Low Metallicities

    NARCIS (Netherlands)

    Fiorentino, G.; Ramos, R. Contreras; Clementini, G.; Marconi, M.; Musella, I.; Aloisi, A.; Annibali, F.; Saha, A.; Tosi, M.; van der Marel, R. P.

    2010-01-01

    Variable stars have been identified for the first time in the very metal-poor blue compact dwarf galaxy IZw18, using deep multi-band (F606W, F814W) time-series photometry obtained with the Advanced Camera for Surveys on board the Hubble Space Telescope. We detected 34 candidate variable stars in the

  18. DEEP HST/STIS VISIBLE-LIGHT IMAGING OF DEBRIS SYSTEMS AROUND SOLAR ANALOG HOSTS

    Energy Technology Data Exchange (ETDEWEB)

    Schneider, Glenn; Gaspar, Andras [Steward Observatory and the Department of Astronomy, The University of Arizona, 933 North Cherry Avenue, Tucson, AZ 85721 (United States); Grady, Carol A. [Eureka Scientific, 2452 Delmer, Suite 100, Oakland, CA 96002 (United States); Stark, Christopher C.; Kuchner, Marc J. [NASA/Goddard Space Flight Center, Exoplanets and Stellar Astrophysics Laboratory, Code 667, Greenbelt, MD 20771 (United States); Carson, Joseph [Department of Physics and Astronomy, College of Charleston, 66 George Street, Charleston, SC 29424 (United States); Debes, John H.; Hines, Dean C.; Perrin, Marshall [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States); Henning, Thomas [Max-Planck-Institut für Astronomie, Königstuhl 17, D-69117, Heidelberg (Germany); Jang-Condell, Hannah [Department of Physics and Astronomy, University of Wyoming, Laramie, WY 82071 (United States); Rodigas, Timothy J. [Department of Terrestrial Magnetism, Carnegie Institute of Washington, 5241 Branch Road, NW, Washington, DC 20015 (United States); Tamura, Motohide [The University of Tokyo, National Astronomical Observatory of Japan, 2-21-1 Osawa, Mitaka, Tokyo, 181-8588 (Japan); Wisniewski, John P., E-mail: gschneider@as.arizona.edu [H. L. Dodge Department of Physics and Astronomy, University of Oklahoma, 440 West Brooks Street, Norman, OK 73019 (United States)

    2016-09-01

    We present new Hubble Space Telescope observations of three a priori known starlight-scattering circumstellar debris systems (CDSs) viewed at intermediate inclinations around nearby close-solar analog stars: HD 207129, HD 202628, and HD 202917. Each of these CDSs possesses ring-like components that are more massive analogs of our solar system's Edgeworth–Kuiper Belt. These systems were chosen for follow-up observations to provide imaging with higher fidelity and better sensitivity for the sparse sample of solar-analog CDSs that range over two decades in systemic ages, with HD 202628 and HD 207129 (both ∼2.3 Gyr) currently the oldest CDSs imaged in visible or near-IR light. These deep (10–14 ks) observations, made with six-roll point-spread-function template visible-light coronagraphy using the Space Telescope Imaging Spectrograph, were designed to better reveal their angularly large debris rings of diffuse/low surface brightness, and for all targets probe their exo-ring environments for starlight-scattering materials that present observational challenges for current ground-based facilities and instruments. Contemporaneously also observing with a narrower occulter position, these observations additionally probe the CDS endo-ring environments that are seen to be relatively devoid of scatterers. We discuss the morphological, geometrical, and photometric properties of these CDSs also in the context of other CDSs hosted by FGK stars that we have previously imaged as a homogeneously observed ensemble. From this combined sample we report a general decay in quiescent-disk F_disk/F_star optical brightness ∼ t^(−0.8), similar to what is seen at thermal IR wavelengths, and CDSs with a significant diversity in scattering phase asymmetries and spatial distributions of their starlight-scattering grains.

  19. Automated image quality evaluation of T2 -weighted liver MRI utilizing deep learning architecture.

    Science.gov (United States)

    Esses, Steven J; Lu, Xiaoguang; Zhao, Tiejun; Shanbhogue, Krishna; Dane, Bari; Bruno, Mary; Chandarana, Hersh

    2018-03-01

    To develop and test a deep learning approach, a convolutional neural network (CNN), for automated screening of T2-weighted (T2WI) liver acquisitions for nondiagnostic images, and to compare this automated approach to evaluation by two radiologists. We evaluated 522 liver magnetic resonance imaging (MRI) exams performed at 1.5T and 3T at our institution between November 2014 and May 2016 for CNN training and validation. The CNN consisted of an input layer, a convolutional layer, a fully connected layer, and an output layer. 351 T2WI were anonymized for training. Each case was annotated with a label of being diagnostic or nondiagnostic for detecting lesions and assessing liver morphology. Another independently collected 171 cases were sequestered for a blind test. These 171 T2WI were assessed independently by two radiologists and annotated as being diagnostic or nondiagnostic, then presented to the CNN algorithm, and the image quality (IQ) output of the algorithm was compared to that of the two radiologists. There was concordance in IQ label between Reader 1 and the CNN in 79% of cases and between Reader 2 and the CNN in 73%. The sensitivity and specificity of the CNN algorithm in identifying nondiagnostic IQ were 67% and 81% with respect to Reader 1, and 47% and 80% with respect to Reader 2. The negative predictive value of the algorithm for identifying nondiagnostic IQ was 94% and 86% (relative to Readers 1 and 2). We demonstrate a CNN algorithm that yields a high negative predictive value when screening for nondiagnostic T2WI of the liver. J. Magn. Reson. Imaging 2018;47:723-728.

  20. Targeting of deep-brain structures in nonhuman primates using MR and CT Images

    Science.gov (United States)

    Chen, Antong; Hines, Catherine; Dogdas, Belma; Bone, Ashleigh; Lodge, Kenneth; O'Malley, Stacey; Connolly, Brett; Winkelmann, Christopher T.; Bagchi, Ansuman; Lubbers, Laura S.; Uslaner, Jason M.; Johnson, Colena; Renger, John; Zariwala, Hatim A.

    2015-03-01

    In vivo gene delivery in the central nervous system of nonhuman primates (NHP) is an important approach for gene therapy and for developing animal models of human disease. To achieve more accurate delivery of genetic probes, precise stereotactic targeting of brain structures is required. However, even with assistance from multi-modality 3D imaging techniques (e.g., MR and CT), precise targeting is often challenging due to difficulties in identifying deep brain structures, e.g., the striatum, which consists of multiple substructures, and the nucleus basalis of Meynert (NBM), which often lack clear boundaries relative to supporting anatomical landmarks. Here we demonstrate a 3D-image-based intracranial stereotactic approach for reproducible targeting of the bilateral NBM and striatum of rhesus macaques. For the targeting, we discuss the feasibility of an atlas-based automatic approach. Delineated originally on a high-resolution 3D histology-MR atlas set, the NBM and the striatum could be located on the MR image of a rhesus subject through affine and nonrigid registrations. The atlas-based targeting of the NBM was compared with targeting conducted manually by an experienced neuroscientist. Based on the targeting, the trajectories and entry points for delivering the genetic probes to the targets could be established on the CT images of the subject after rigid registration. The accuracy of the targeting was assessed quantitatively by comparing NBM locations obtained automatically and manually, and finally demonstrated qualitatively via post mortem analysis of slices that had been labelled via Evans Blue infusion and immunohistochemistry.

  1. Deep learning classifier with optical coherence tomography images for early dental caries detection

    Science.gov (United States)

    Karimian, Nima; Salehi, Hassan S.; Mahdian, Mina; Alnajjar, Hisham; Tadinada, Aditya

    2018-02-01

    Dental caries is a microbial disease that results in localized dissolution of the mineral content of dental tissue. Despite a considerable decline in the incidence of dental caries, it remains a major health problem in many societies. Early detection of incipient lesions at the initial stages of demineralization can enable non-surgical preventive approaches to reverse the demineralization process. In this paper, we present a novel approach combining deep convolutional neural networks (CNNs) and the optical coherence tomography (OCT) imaging modality for classification of human oral tissues to detect early dental caries. OCT images of oral tissues with various densities were input to a CNN classifier to determine variations in tissue density resembling the demineralization process. The CNN automatically learns a hierarchy of increasingly complex features and a related classifier directly from training data sets. The initial CNN layer parameters were randomly initialized. The training set was split into minibatches of 10 OCT images each. Given a batch of training patches, the CNN employs two convolutional and pooling layers to extract features and then classifies each patch based on the probabilities from the softmax classification (output) layer. Afterward, the CNN calculates the error between the classification result and the reference label, and then uses backpropagation to fine-tune all the layer parameters to minimize this error with a batch gradient descent algorithm. We validated our proposed technique on ex vivo OCT images of human oral tissues (enamel, cortical bone, trabecular bone, muscular tissue, and fatty tissue), attesting to the effectiveness of our proposed method.
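
    The architecture outlined above (two convolution-and-pooling stages feeding a softmax output layer, trained on minibatches with backpropagation) corresponds to a PyTorch sketch like the following; the layer widths and kernel sizes are illustrative assumptions.

        import torch
        import torch.nn as nn

        net = nn.Sequential(
            nn.Conv2d(1, 8, 5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.LazyLinear(5),  # 5 tissue classes; softmax is inside the loss
        )
        criterion = nn.CrossEntropyLoss()  # log-softmax + negative log-likelihood
        optimizer = torch.optim.SGD(net.parameters(), lr=0.01)

        def train_minibatch(images, labels):  # e.g. batches of 10 OCT patches
            optimizer.zero_grad()
            loss = criterion(net(images), labels)
            loss.backward()   # backpropagation of the classification error
            optimizer.step()  # gradient descent update of all layer parameters
            return loss.item()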

  2. Combining Deep and Handcrafted Image Features for Presentation Attack Detection in Face Recognition Systems Using Visible-Light Camera Sensors

    Directory of Open Access Journals (Sweden)

    Dat Tien Nguyen

    2018-02-01

    Although face recognition systems have wide application, they are vulnerable to presentation attack samples (fake samples). Therefore, a presentation attack detection (PAD) method is required to enhance the security level of face recognition systems. Most of the previously proposed PAD methods for face recognition systems have focused on using handcrafted image features, which are designed by expert knowledge of designers, such as the Gabor filter, local binary pattern (LBP), local ternary pattern (LTP), and histogram of oriented gradients (HOG). As a result, the extracted features reflect limited aspects of the problem, yielding a detection accuracy that is low and varies with the characteristics of presentation attack face images. Deep learning methods, developed in the computer vision research community, have proven suitable for automatically training a feature extractor that can be used to enhance the ability of handcrafted features. To overcome the limitations of previously proposed PAD methods, we propose a new PAD method that uses a combination of deep and handcrafted features extracted from images by a visible-light camera sensor. Our proposed method uses a convolutional neural network (CNN) to extract deep image features and the multi-level local binary pattern (MLBP) method to extract skin detail features from face images to discriminate real and presentation attack face images. By combining the two types of image features, we form a new type of image feature, called hybrid features, which has stronger discrimination ability than single image features. Finally, we use a support vector machine (SVM) to classify the image features into the real or presentation attack class. Our experimental results indicate that our proposed method outperforms previous PAD methods by yielding the smallest error rates on the same image databases.
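
    A minimal sketch of the hybrid-feature idea, assuming PyTorch/torchvision for the deep features and scikit-image/scikit-learn for the LBP histograms and the SVM; the ResNet-18 backbone and two-radius LBP pooling stand in for the paper's CNN and multi-level LBP.

        import numpy as np
        import torch
        from torchvision import models
        from skimage.feature import local_binary_pattern
        from sklearn.svm import SVC

        cnn = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        cnn.fc = torch.nn.Identity()  # expose the penultimate deep features
        cnn.eval()

        def hybrid_features(face_tensor, face_gray):
            # face_tensor: normalized 3xHxW float tensor; face_gray: 2D array.
            with torch.no_grad():
                deep = cnn(face_tensor.unsqueeze(0)).numpy().ravel()
            handcrafted = np.concatenate([
                np.histogram(local_binary_pattern(face_gray, 8, r, "uniform"),
                             bins=10, density=True)[0]
                for r in (1, 2)])  # two radii approximate "multi-level" LBP
            return np.concatenate([deep, handcrafted])

        # X = np.stack([hybrid_features(t, g) for t, g in samples])
        # SVC(kernel="rbf").fit(X, y)  # real vs. presentation attack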

  3. An Analysis and Application of Fast Nonnegative Orthogonal Matching Pursuit for Image Categorization in Deep Networks

    Directory of Open Access Journals (Sweden)

    Bo Wang

    2015-01-01

    Nonnegative orthogonal matching pursuit (NOMP) has been proven to be a more stable encoder for unsupervised sparse representation learning. However, previous research has shown that NOMP is suboptimal in terms of computational cost, as coefficient selection and refinement using nonnegative least squares (NNLS) are divided into two separate steps, a problem that severely reduces encoding efficiency for large-scale image patches. In this work, we study fast nonnegative OMP (FNOMP) as an efficient encoder, which can be accelerated by QR factorization and iteration of coefficients in deep networks for full-size image categorization tasks. It is analyzed and demonstrated that, using relatively simple gain-shape vector quantization for dictionary training, FNOMP not only performs more efficiently than NOMP for encoding but also significantly improves classification accuracy compared to OMP-based algorithms. In addition, the FNOMP-based algorithm is superior to other state-of-the-art methods on several publicly available benchmarks, namely Oxford Flowers, UIUC-Sports, and Caltech101.
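
    For concreteness, a plain (unaccelerated) nonnegative OMP encoder looks like the NumPy/SciPy sketch below; FNOMP's contribution, omitted here, is to avoid re-solving the NNLS problem from scratch at every step by maintaining a QR factorization.

        import numpy as np
        from scipy.optimize import nnls

        def nomp_encode(D, x, n_nonzero):
            # Greedily pick the atom with the largest positive correlation,
            # then refit all selected coefficients with NNLS (the separate
            # refinement step the abstract refers to).
            residual, support, coef = x.copy(), [], np.zeros(0)
            for _ in range(n_nonzero):
                corr = D.T @ residual
                corr[support] = -np.inf  # never reselect an atom
                k = int(np.argmax(corr))
                if corr[k] <= 0:  # no remaining atom reduces the residual
                    break
                support.append(k)
                coef, _ = nnls(D[:, support], x)
                residual = x - D[:, support] @ coef
            code = np.zeros(D.shape[1])
            code[support] = coef
            return code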

  4. DEEP LEARNING AND IMAGE PROCESSING FOR AUTOMATED CRACK DETECTION AND DEFECT MEASUREMENT IN UNDERGROUND STRUCTURES

    Directory of Open Access Journals (Sweden)

    F. Panella

    2018-05-01

    This work presents the combination of deep learning (DL) and image processing to produce an automated crack recognition and defect measurement tool for civil structures. The authors focus on tunnel structures and their survey, and have developed an end-to-end tool for asset management of underground structures. In order to maintain the serviceability of tunnels, regular inspection is needed to assess their structural status. The traditional method of carrying out the survey is visual inspection: simple, but slow and relatively expensive, and the quality of the output depends on the ability and experience of the engineer as well as on the total workload (stress and tiredness may influence the ability to observe and record information). As a result of these issues, the last decade has seen a desire to automate monitoring using new inspection methods. The present paper has the goal of combining DL with traditional image processing to create a tool able to detect, locate, and measure structural defects.

  5. Magnetotelluric images of deep crustal structure of the Rehai geothermal field near Tengchong, southern China

    Science.gov (United States)

    Bai, Denghai; Meju, Maxwell A.; Liao, Zhijie

    2001-12-01

    Broadband (0.004-4096 s) magnetotelluric (MT) soundings have been applied to the determination of the deep structure across the Rehai geothermal field in a Quaternary volcanic area near the Indo-Eurasian collisional margin. Tensorial analysis of the data shows evidence of weak to strong 3-D effects, but for approximate 2-D imaging we obtained dual-mode MT responses for an assumed strike direction coincident with the trend of the regional-scale faults and with the principal impedance azimuth at long periods. The data were subsequently inverted using different approaches. The rapid relaxation inversion models are comparable to the sections constructed from depth-converted invariant impedance phase data. The results from full-domain 2-D conjugate-gradient inversion with different initial models are concordant and evoke a picture of a dome-like structure consisting of a conductive (50-1000 Ωm) cap, which is about 5-6 km thick in the central part of the known geothermal field and thickens outwards to about 15-20 km. The anomalous structure rests on a mid-crustal zone of 20-30 Ωm resistivity extending down to about 25 km depth, where there appears to be a moderately resistive (>30 Ωm) substratum. The MT images are shown to be in accord with published geological, isotopic, and geochemical results that suggested the presence of a magma body underneath the area of study.

  6. UVUDF: Ultraviolet Imaging of the Hubble Ultra Deep Field with Wide-Field Camera 3

    Science.gov (United States)

    Teplitz, Harry I.; Rafelski, Marc; Kurczynski, Peter; Bond, Nicholas A.; Grogin, Norman; Koekemoer, Anton M.; Atek, Hakim; Brown, Thomas M.; Coe, Dan; Colbert, James W.; Ferguson, Henry C.; Finkelstein, Steven L.; Gardner, Jonathan P.; Gawiser, Eric; Giavalisco, Mauro; Gronwall, Caryl; Hanish, Daniel J.; Lee, Kyoung-Soo; de Mello, Duilia F.; Ravindranath, Swara; Ryan, Russell E.; Siana, Brian D.; Scarlata, Claudia; Soto, Emmaris; Voyer, Elysse N.; Wolfe, Arthur M.

    2013-12-01

    We present an overview of a 90-orbit Hubble Space Telescope treasury program to obtain near-ultraviolet imaging of the Hubble Ultra Deep Field using the Wide Field Camera 3 UVIS detector with the F225W, F275W, and F336W filters. This survey is designed to (1) investigate the episode of peak star formation activity in galaxies at 1 < z < 2.5 [...]. The observed number of dropouts at redshifts 1.7, 2.1, and 2.7 is largely consistent with the number predicted by published luminosity functions. We also confirm that the image mosaics have sufficient sensitivity and resolution to support the analysis of the evolution of star-forming clumps, reaching 28-29th magnitude depth at 5σ in a 0.2 arcsec radius aperture, depending on filter and observing epoch. Based on observations made with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. These observations are associated with program #12534.

  7. SCUBA-2 Ultra Deep Imaging EAO Survey (STUDIES): Faint-end Counts at 450 μm

    NARCIS (Netherlands)

    Wang, Wei-Hao; Lin, Wei-Ching; Lim, Chen-Fatt; Smail, Ian; Chapman, Scott C.; Zheng, Xian Zhong; Shim, Hyunjin; Kodama, Tadayuki; Almaini, Omar; Ao, Yiping; Blain, Andrew W.; Bourne, Nathan; Bunker, Andrew J.; Chang, Yu-Yen; Chao, Dani C.-Y.; Chen, Chian-Chou; Clements, David L.; Conselice, Christopher J.; Cowley, William I.; Dannerbauer, Helmut; Dunlop, James S.; Geach, James E.; Goto, Tomotsugu; Jiang, Linhua; Ivison, Rob J.; Jeong, Woong-Seob; Kohno, Kotaro; Kong, Xu; Lee, Chien-Hsu; Lee, Hyung Mok; Lee, Minju; Michałowski, Michał J.; Oteo, Iván; Sawicki, Marcin; Scott, Douglas; Shu, Xin Wen; Simpson, James M.; Tee, Wei-Leong; Toba, Yoshiki; Valiante, Elisabetta; Wang, Jun-Xian; Wang, Ran; Wardlow, Julie L.

    2017-01-01

    The SCUBA-2 Ultra Deep Imaging EAO Survey (STUDIES) is a three-year JCMT Large Program aiming to reach the 450 μm confusion limit in the COSMOS-CANDELS region, to study a representative sample of the high-redshift far-infrared galaxy population that gives rise to the bulk of the far-infrared background.

  8. Deep and shallow structures in the Arctic region imaged by satellite magnetic and gravity data

    Science.gov (United States)

    Gaina, Carmen; Panet, Isabelle; Shephard, Grace

    2016-07-01

    [...] volcanic crust, but, as in the case of other oceanic Large Igneous Provinces, only deep-sea drilling will be able to reveal the true nature of the underlying crust at the core of the Arctic. The oldest continental crust, usually found in the cratonic areas and as Proterozoic accreted crust, generates the largest positive magnetic anomalies. This crust contains large and deep volcanic bodies in the North American shield, Greenland, the Baltic shield in Eurasia, and the Siberian platform in NE Asia, which are imaged by the satellite data. Furthermore, satellite data are not restricted to revealing crustal and lithospheric depths. Recent workflows have shown that subducted remnants of ocean basins, now located in the lower mantle, as well as large, antipodal features on the core-mantle boundary, can be imaged by satellite gravity. Seismic tomography provides evidence for an extinct Mesozoic Arctic ocean lying around 1400 km under present-day Greenland. However, the variable resolution of seismic tomography at high latitudes, as well as ambiguity in plate reconstructions, renders the existence of the slab open to interpretation. Critically, the current location of the slab also matches perturbations in long-wavelength gravity gradients, providing further support for a deep density anomaly and a slab origin. Gravity data therefore provide a complementary and independent means of linking surface events and deep mantle structure in frontier regions like the Arctic. By revealing the present-day structure, satellite-derived magnetic and gravity data offer a critical component in our understanding of Arctic history, over timescales of millions of years and scales of thousands of kilometers.

  9. ISTA-Net: Iterative Shrinkage-Thresholding Algorithm Inspired Deep Network for Image Compressive Sensing

    KAUST Repository

    Zhang, Jian

    2017-06-24

    Traditional methods for image compressive sensing (CS) reconstruction solve a well-defined inverse problem that is based on a predefined CS model, which defines the underlying structure of the problem and is generally solved by employing convergent iterative solvers. These optimization-based CS methods face the challenge of choosing optimal transforms and tuning parameters in their solvers, while also suffering from high computational complexity in most cases. Recently, some deep network based CS algorithms have been proposed to improve CS reconstruction performance, while dramatically reducing time complexity as compared to optimization-based methods. Despite their impressive results, the proposed networks (either with fully-connected or repetitive convolutional layers) lack any structural diversity and they are trained as a black box, void of any insights from the CS domain. In this paper, we combine the merits of both types of CS methods: the structural insights of optimization-based methods and the performance/speed of network-based ones. We propose a novel structured deep network, dubbed ISTA-Net, which is inspired by the Iterative Shrinkage-Thresholding Algorithm (ISTA) for optimizing a general $l_1$ norm CS reconstruction model. ISTA-Net essentially implements a truncated form of ISTA, where all ISTA-Net parameters are learned end-to-end to minimize a reconstruction error in training. Borrowing more insights from the optimization realm, we propose an accelerated version of ISTA-Net, dubbed FISTA-Net, which is inspired by the fast iterative shrinkage-thresholding algorithm (FISTA). Interestingly, this acceleration naturally leads to skip connections in the underlying network design. Extensive CS experiments demonstrate that the proposed ISTA-Net and FISTA-Net outperform existing optimization-based and network-based CS methods by large margins, while maintaining a fast runtime.
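
    The classical ISTA iteration that ISTA-Net unfolds can be written in a few lines of NumPy; in ISTA-Net, a truncated number of these iterations becomes network layers whose transforms, step sizes, and thresholds are learned end-to-end.

        import numpy as np

        def ista(A, y, lam, n_iter=200):
            """Solve min_x 0.5*||Ax - y||^2 + lam*||x||_1 by ISTA."""
            L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the gradient
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                g = x - (A.T @ (A @ x - y)) / L  # gradient step (data term)
                x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrink
            return x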

  10. Deep Learning of Post-Wildfire Vegetation Loss using Bitemporal Synthetic Aperture Radar Images

    Science.gov (United States)

    Chen, Z.; Glasscoe, M. T.; Parker, J. W.

    2017-12-01

    Wildfire events followed by heavy precipitation have been shown to be causally related to outbreaks of mudflow or debris flow, which can demand rapid evacuation and threaten residential communities and civil infrastructure. For example, the city of Glendora, California, was afflicted by a severe wildfire in 1968, and the mudslides and debris flows caused by flooding in 1969 killed 34 people. Mapping of burn area and vegetation loss due to wildfire is therefore critical for agencies preparing for secondary hazards, particularly flooding and flood-induced mudflow. However, rapid post-wildfire mapping of vegetation loss is not readily obtained by conventional remote sensing methods, e.g., various optical methods, due to the smoke, haze, and rainy or cloudy conditions that often follow a wildfire event. In this paper, we introduce and develop a deep learning-based framework that uses synthetic aperture radar images collected before and after a wildfire event. A convolutional neural network (CNN) approach replaces traditional principal component analysis (PCA)-based differencing for unsupervised change feature extraction. Using a small sample of human-labeled burned vegetation, normal vegetation, and urban built-up pixels, we compare the performance of deep learning and PCA-based feature extraction. The 2014 Colby Fire event, which affected the downstream city of Glendora, was used to evaluate the proposed framework. NASA's UAVSAR data (https://uavsar.jpl.nasa.gov/) are utilized for mapping the vegetation damage due to the Colby Fire event.
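
    The traditional PCA-based change feature extraction that the CNN replaces can be sketched with NumPy/scikit-learn as below; the log-ratio preprocessing and patch size are common choices in SAR change detection generally, not details taken from this work.

        import numpy as np
        from sklearn.decomposition import PCA

        def pca_change_features(img_before, img_after, patch=5, n_components=3):
            # Log-ratio suppresses multiplicative speckle common to both dates.
            ratio = np.log((img_after + 1e-6) / (img_before + 1e-6))
            h, w = ratio.shape
            r = patch // 2
            patches = np.stack([
                ratio[i - r:i + r + 1, j - r:j + r + 1].ravel()
                for i in range(r, h - r) for j in range(r, w - r)])
            feats = PCA(n_components=n_components).fit_transform(patches)
            return feats.reshape(h - 2 * r, w - 2 * r, n_components)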

  11. Automatic diagnosis of imbalanced ophthalmic images using a cost-sensitive deep convolutional neural network.

    Science.gov (United States)

    Jiang, Jiewei; Liu, Xiyang; Zhang, Kai; Long, Erping; Wang, Liming; Li, Wangting; Liu, Lin; Wang, Shuai; Zhu, Mingmin; Cui, Jiangtao; Liu, Zhenzhen; Lin, Zhuoling; Li, Xiaoyan; Chen, Jingjing; Cao, Qianzhong; Li, Jing; Wu, Xiaohang; Wang, Dongni; Wang, Jinghui; Lin, Haotian

    2017-11-21

    Ocular images play an essential role in ophthalmological diagnosis. An imbalanced dataset is an inevitable issue in automated ocular disease diagnosis; the scarcity of positive samples tends to result in misdiagnosis of severe patients during the classification task. Exploring an effective computer-aided diagnostic method to deal with imbalanced ophthalmological datasets is therefore crucial. In this paper, we develop an effective cost-sensitive deep residual convolutional neural network (CS-ResCNN) classifier to diagnose ophthalmic diseases using retro-illumination images. First, the regions of interest (crystalline lens) are automatically identified via twice-applied Canny detection and Hough transformation. The localized zones are then fed into the CS-ResCNN to extract high-level features for subsequent use in automatic diagnosis. Second, the impacts of cost factors on the CS-ResCNN are analyzed using a grid-search procedure to verify that our proposed system is robust and efficient. Qualitative analyses and quantitative experimental results demonstrate that our proposed method outperforms conventional approaches, offering exceptional mean accuracy (92.24%), specificity (93.19%), sensitivity (89.66%), and AUC (97.11%) results. Moreover, the sensitivity of the CS-ResCNN is enhanced by over 13.6% compared to the native CNN method. Our study provides a practical strategy for addressing imbalanced ophthalmological datasets and has the potential to be applied to other medical images. The developed and deployed CS-ResCNN could serve as computer-aided diagnosis software for ophthalmologists in clinical application.
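
    The core of cost-sensitive training can be illustrated with a class-weighted loss, as in the PyTorch fragment below; the specific cost factor is an arbitrary example of the kind of value the grid search described above would select.

        import torch
        import torch.nn as nn

        # Penalize errors on the scarce positive class more heavily; the
        # weight ratio is the cost factor tuned by grid search.
        cost_factor = 5.0
        class_weights = torch.tensor([1.0, cost_factor])  # [negative, positive]
        criterion = nn.CrossEntropyLoss(weight=class_weights)

        # logits = model(images)           # e.g. a ResNet-style backbone
        # loss = criterion(logits, labels) # misclassified positives cost 5x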

  12. Magnetic resonance direct thrombus imaging of the evolution of acute deep vein thrombosis of the leg.

    Science.gov (United States)

    Westerbeek, R E; Van Rooden, C J; Tan, M; Van Gils, A P G; Kok, S; De Bats, M J; De Roos, A; Huisman, M V

    2008-07-01

    Accurate diagnosis of acute recurrent deep vein thrombosis (DVT) is relevant to avoid improper diagnosis and unnecessary life-long anticoagulant treatment. Compression ultrasound has high accuracy for a first episode of DVT but is often unreliable in suspected recurrent disease. Magnetic resonance direct thrombus imaging (MR DTI) has been shown to accurately detect acute DVT. The purpose of this prospective study was to determine the MR signal change during 6 months of follow-up in patients with acute DVT. This was a prospective study of 43 consecutive patients with a first episode of acute DVT demonstrated by compression ultrasound. All patients underwent MR DTI. Follow-up was performed with MR DTI and compression ultrasound at 3 and 6 months, respectively. All data were coded, stored, and assessed by two blinded observers. MR direct thrombus imaging identified acute DVT in 41 of 43 patients (sensitivity 95%). There was no abnormal MR signal in controls, or in the contralateral extremity of patients with DVT (specificity 100%). In none of the 39 patients available at 6-month follow-up was the abnormal MR signal of the initial acute DVT still observed, whereas in 12 of these patients (30.8%) compression ultrasound remained abnormal. Magnetic resonance direct thrombus imaging normalizes over a period of 6 months in all patients with diagnosed DVT, while compression ultrasound remains abnormal in a third of these patients. MR DTI may potentially allow accurate detection in patients with suspected acute recurrent DVT, and this should be studied prospectively.

  13. Thinning Mechanism of the South China Sea Crust: New Insight from the Deep Crustal Images

    Science.gov (United States)

    Chang, S. P.; Pubellier, M. F.; Delescluse, M.; Qiu, Y.; Liang, Y.; Chamot-Rooke, N. R. A.; Nie, X.; Wang, J.

    2017-12-01

    The passive margin of the South China Sea (SCS) has experienced a long-lived extension period from the Paleocene to the late Miocene, as well as extreme stretching, which implies an unusual fault system to accommodate the whole amount of extension. Previous interpretations of the fault system need to be revised to explain the amount of strain. We study a long multichannel seismic profile crossing the whole rifted margin in the southwest of the SCS, using 6 km- and 8 km-long streamers. After de-multiple processing by SRME, Radon, and F-K filtering, an enhanced image of the crustal geometry, especially of the deep crust, allows us to illustrate two levels of detachment at depth. The deeper detachment lies at around 7-8 s TWT in the profile. The faults rooting at this detachment are characterized by large offsets and are responsible for thicker synrift sediment; a few of them appear to reach the Moho. The geometry of the acoustic basement between these boundary faults suggests gentle tilting with a long wavelength (∼200 km) and implies some internal deformation. The shallower detachment is located at around 4-5 s TWT. The faults rooting at this detachment show smaller offsets, a shorter wavelength of the basement, and thinner packages of synrift sediment. The two detachments separate the crust into upper, middle, and lower crust. While the lower crust shows ductile behavior, the upper and middle crust is mostly brittle and forms large-wavelength boudinage structures, and the internal deformation of the boudins might imply low-friction detachments at shallower levels. The faults rooting in the deep detachment were active during the whole rifting period until breakup. Within the upper and middle crust, the faults resulted in important tilting of the basement at shallow depth and connect to the deep detachment in some places. The crustal geometry illustrates how the two detachments are important for the thinning process, and also constitute a pathway for the following magmatic

  14. THE HST/ACS COMA CLUSTER SURVEY. II. DATA DESCRIPTION AND SOURCE CATALOGS

    International Nuclear Information System (INIS)

    Hammer, Derek; Verdoes Kleijn, Gijs; Den Brok, Mark; Peletier, Reynier F.; Hoyos, Carlos; Balcells, Marc; Aguerri, Alfonso L.; Ferguson, Henry C.; Goudfrooij, Paul; Carter, David; Guzman, Rafael; Smith, Russell J.; Lucey, John R.; Graham, Alister W.; Trentham, Neil; Peng, Eric; Puzia, Thomas H.; Jogee, Shardha; Batcheldor, Dan; Bridges, Terry J.

    2010-01-01

    The Coma cluster, Abell 1656, was the target of an HST-ACS Treasury program designed for deep imaging in the F475W and F814W passbands. Although our survey was interrupted by the ACS instrument failure in early 2007, the partially completed survey still covers ∼50% of the core high-density region in Coma. Observations were performed for 25 fields that extend over a wide range of cluster-centric radii (∼1.75 Mpc or 1°) with a total coverage area of 274 arcmin². The majority of the fields are located near the core region of Coma (19/25 pointings), with six additional fields in the southwest region of the cluster. In this paper, we present reprocessed images and SExtractor source catalogs for our survey fields, including a detailed description of the methodology used for object detection and photometry, the subtraction of bright galaxies to measure faint underlying objects, and the use of simulations to assess the photometric accuracy and completeness of our catalogs. We also use simulations to perform aperture corrections for the SExtractor Kron magnitudes based only on the measured source flux and its half-light radius. We have performed photometry for ∼73,000 unique objects; approximately one-half of our detections are brighter than the 10σ point-source detection limit at F814W = 25.8 mag (AB). The slight majority of objects (60%) are unresolved or only marginally resolved by ACS. We estimate that Coma members are 5%-10% of all source detections, which consist of a large population of unresolved compact sources (primarily globular clusters but also ultra-compact dwarf galaxies) and a wide variety of extended galaxies from a cD galaxy to dwarf low surface brightness galaxies. The red sequence of Coma member galaxies has a color-magnitude relation with a constant slope and dispersion over 9 mag (−21 < F814W < −13). The initial data release for the HST-ACS Coma Treasury program was made available to the public in 2008 August. The images and catalogs described

  15. The application of deep belief networks to the problem of image recognition

    Directory of Open Access Journals (Sweden)

    Chumachenko O.I.

    2016-12-01

    In order to study the concept of deep learning, in particular the replacement of a multilayer perceptron by a corresponding deep belief network, computer simulations of the learning process on a test sample were carried out. The multilayer perceptron was replaced by a deep belief network consisting of successive restricted Boltzmann machines. After training the deep belief network with a layer-wise training algorithm, it was found that the use of deep belief networks greatly improves the accuracy of multilayer perceptron training by the backpropagation-of-errors method.
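
    Each restricted Boltzmann machine in such a stack is typically trained with contrastive divergence; a schematic CD-1 weight update (biases omitted) might look like the NumPy sketch below, which is illustrative rather than the paper's implementation.

        import numpy as np

        rng = np.random.default_rng(0)

        def sigmoid(x):
            return 1.0 / (1.0 + np.exp(-x))

        def cd1_update(W, v0, lr=0.1):
            # W: (visible, hidden) weights; v0: (batch, visible) binary data.
            h0_prob = sigmoid(v0 @ W)            # hidden given data
            h0 = (rng.random(h0_prob.shape) < h0_prob) * 1.0
            v1_prob = sigmoid(h0 @ W.T)          # reconstruction
            h1_prob = sigmoid(v1_prob @ W)       # hidden given reconstruction
            # Positive-phase minus negative-phase statistics.
            W += lr * (v0.T @ h0_prob - v1_prob.T @ h1_prob) / v0.shape[0]
            return W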

  16. Deep inspiration breath-hold radiotherapy for lung cancer: impact on image quality and registration uncertainty in cone beam CT image guidance

    DEFF Research Database (Denmark)

    Josipovic, Mirjana; Persson, Gitte F; Bangsgaard, Jens Peter

    2016-01-01

    OBJECTIVE: We investigated the impact of deep inspiration breath-hold (DIBH) and tumour baseline shifts on image quality and registration uncertainty in image-guided DIBH radiotherapy (RT) for locally advanced lung cancer. METHODS: Patients treated with daily cone beam CT (CBCT)-guided free...... for the craniocaudal direction in FB, where it was >3 mm. On the 31st fraction, the intraobserver uncertainty increased compared with the second fraction. This increase was more pronounced in FB. Image quality scores improved in DIBH compared with FB for all parameters in all patients. Simulated tumour baseline shifts...... ≤2 mm did not affect the CBCT image quality considerably. CONCLUSION: DIBH CBCT improved image quality and reduced registration uncertainty in the craniocaudal direction in image-guided RT of locally advanced lung cancer. Baseline shifts ≤2 mm in DIBH during CBCT acquisition did not affect image...

  17. Technical Note: A deep learning-based autosegmentation of rectal tumors in MR images.

    Science.gov (United States)

    Wang, Jiazhou; Lu, Jiayu; Qin, Gan; Shen, Lijun; Sun, Yiqun; Ying, Hongmei; Zhang, Zhen; Hu, Weigang

    2018-04-16

    Manual contouring of gross tumor volumes (GTV) is a crucial and time-consuming process in rectal cancer radiotherapy. This study aims to develop a simple deep learning-based autosegmentation algorithm to segment rectal tumors on T2-weighted MR images. MRI scans (3T, T2-weighted) of 93 patients with locally advanced (cT3-4 and/or cN1-2) rectal cancer treated with neoadjuvant chemoradiotherapy followed by surgery were enrolled in this study. A 2D U-Net-like network was established as the training model. The model was trained in two phases, tumor recognition and tumor segmentation, to increase efficiency. An opening (erosion then dilation) process was applied to smooth contours after segmentation. Data were randomly separated into training (90%) and validation (10%) datasets for 10-fold cross-validation. Additionally, 20 patients were double contoured for performance evaluation. Four indices were calculated to evaluate the similarity of automated and manual segmentation: Hausdorff distance (HD), average surface distance (ASD), Dice index (DSC), and Jaccard index (JSC). The DSC, JSC, HD, and ASD (mean ± SD) were 0.74 ± 0.14, 0.60 ± 0.16, 20.44 ± 13.35 mm, and 3.25 ± 1.69 mm for the validation dataset; the corresponding indices were 0.71 ± 0.13, 0.57 ± 0.15, 14.91 ± 7.62 mm, and 2.67 ± 1.46 mm between the two human radiation oncologists. No significant difference was observed between automated and manual segmentation for DSC (P = 0.42), JSC (P = 0.35), HD (P = 0.079), or ASD (P = 0.16). However, a significant difference was found for HD (P = 0.0027) without the opening process. This study shows that a simple deep learning neural network can segment rectal tumors on T2-weighted MR images with results comparable to a human.
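
    The two overlap indices reported above are straightforward to compute from binary masks, for example in NumPy as below; the Hausdorff and average surface distances additionally require surface extraction and are omitted here.

        import numpy as np

        def dice_jaccard(pred, truth):
            # pred, truth: binary masks (0/1 arrays), assumed non-empty.
            pred, truth = pred.astype(bool), truth.astype(bool)
            inter = np.logical_and(pred, truth).sum()
            union = np.logical_or(pred, truth).sum()
            dsc = 2.0 * inter / (pred.sum() + truth.sum())  # Dice index
            jsc = inter / union                             # Jaccard index
            return dsc, jsc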

  18. Deep convolutional neural networks for automatic classification of gastric carcinoma using whole slide images in digital histopathology.

    Science.gov (United States)

    Sharma, Harshita; Zerbe, Norman; Klempert, Iris; Hellwich, Olaf; Hufnagl, Peter

    2017-11-01

    Deep learning using convolutional neural networks is an actively emerging field in histological image analysis. This study explores deep learning methods for computer-aided classification in H&E-stained histopathological whole slide images of gastric carcinoma. An introductory convolutional neural network architecture is proposed for two computerized applications: cancer classification based on immunohistochemical response, and necrosis detection based on the existence of tumor necrosis in the tissue. Classification performance of the developed deep learning approach is quantitatively compared with traditional image analysis methods in digital histopathology that require prior computation of handcrafted features, such as statistical measures from the gray-level co-occurrence matrix, Gabor filter-bank responses, LBP histograms, gray histograms, HSV histograms, and RGB histograms, followed by random forest machine learning. Additionally, the widely known AlexNet deep convolutional framework is comparatively analyzed for the corresponding classification problems. The proposed convolutional neural network architecture reports favorable results, with an overall classification accuracy of 0.6990 for cancer classification and 0.8144 for necrosis detection.
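
    The handcrafted baseline mentioned above can be sketched with scikit-image and scikit-learn; the chosen GLCM properties, offsets, and forest size are illustrative assumptions.

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops
        from sklearn.ensemble import RandomForestClassifier

        def glcm_features(gray_tile):
            # gray_tile: uint8 image patch from a whole slide image.
            glcm = graycomatrix(gray_tile, distances=[1],
                                angles=[0, np.pi / 2],
                                levels=256, symmetric=True, normed=True)
            return np.hstack([graycoprops(glcm, p).ravel()
                              for p in ("contrast", "homogeneity",
                                        "energy", "correlation")])

        # X = np.stack([glcm_features(t) for t in tiles])
        # RandomForestClassifier(n_estimators=200).fit(X, y)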

  1. THE TAIWAN ECDFS NEAR-INFRARED SURVEY: ULTRA-DEEP J AND K_S IMAGING IN THE EXTENDED CHANDRA DEEP FIELD-SOUTH

    Energy Technology Data Exchange (ETDEWEB)

    Hsieh, Bau-Ching; Wang, Wei-Hao; Hsieh, Chih-Chiang; Lin, Lihwai; Lim, Jeremy; Ho, Paul T. P. [Institute of Astrophysics and Astronomy, Academia Sinica, P.O. Box 23-141, Taipei 106, Taiwan (China); Yan Haojing [Department of Physics and Astronomy, University of Missouri, Columbia, MO 65211 (United States)

    2012-12-15

    We present ultra-deep J and K_S imaging observations covering a 30' × 30' area of the Extended Chandra Deep Field-South (ECDFS) carried out by our Taiwan ECDFS Near-Infrared Survey (TENIS). The median 5σ limiting magnitudes for all detected objects in the ECDFS reach 24.5 and 23.9 mag (AB) for J and K_S, respectively. In the inner 400 arcmin² region where the sensitivity is more uniform, objects as faint as 25.6 and 25.0 mag are detected at 5σ. These are thus by far the deepest J and K_S data sets available for the ECDFS. To combine TENIS with the Spitzer IRAC data for obtaining better spectral energy distributions of high-redshift objects, we developed a novel deconvolution technique (IRACLEAN) to accurately estimate the IRAC fluxes. IRACLEAN can minimize the effect of blending in the IRAC images caused by the large point-spread functions and reduce the confusion noise. We applied IRACLEAN to the images from the Spitzer IRAC/MUSYC Public Legacy in the ECDFS survey (SIMPLE) and generated a J+K_S-selected multi-wavelength catalog including the photometry of both the TENIS near-infrared and the SIMPLE IRAC data. We publicly release the data products derived from this work, including the J and K_S images and the J+K_S-selected multi-wavelength catalog.
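
    As an aside on the quoted depths, a 5σ limiting magnitude follows directly from the sky-noise flux in the photometric aperture via the AB zero point; the noise value below is hypothetical, purely to illustrate the conversion.

```python
import numpy as np

def ab_mag(flux_uJy):
    """AB magnitude from a flux density in microjanskys."""
    return 23.9 - 2.5 * np.log10(flux_uJy)

# Hypothetical 1-sigma sky noise of 0.02 uJy inside the photometric aperture:
print(ab_mag(5 * 0.02))   # 5-sigma limiting magnitude, about 26.4 AB
```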

  2. Application of radiographic images in diagnosis and treatment of deep neck infections with necrotizing fasciitis: a case report

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Young Joo; Kim, Ju Dong; Ryu, Hye In; Cho, Yeon Hee; Kong, Jun Ha; Ohe, Joo Young; Kwon, Yong Dae; Choi, Byung Joon; Kim, Gyu Tae [School of Dentistry, Kyung Hee University, Seoul (Korea, Republic of)

    2011-12-15

    The advent and wide use of antibiotics have decreased the incidence of deep neck infection. When a deep neck infection does occur, however, it can cause significant morbidity and death, resulting in airway obstruction, mediastinitis, pericarditis, epidural abscesses, and major vessel erosion. A patient presented to our clinic with diffuse chronic osteomyelitis of the mandible, a fascial space abscess, and necrotizing fasciitis due to odontogenic infection. We treated the patient successfully through early diagnosis using contrast-enhanced CT and follow-up dressing guided by the appropriate use of radiographic images.

  3. Deep learning for digital pathology image analysis: A comprehensive tutorial with selected use cases.

    Science.gov (United States)

    Janowczyk, Andrew; Madabhushi, Anant

    2016-01-01

    Deep learning (DL) is a representation learning approach ideally suited for image analysis challenges in digital pathology (DP). The variety of image analysis tasks in the context of DP includes detection and counting (e.g., mitotic events), segmentation (e.g., nuclei), and tissue classification (e.g., cancerous vs. non-cancerous). Unfortunately, issues with slide preparation, variations in staining and scanning across sites, and vendor platforms, as well as biological variance, such as the presentation of different grades of disease, make these image analysis tasks particularly challenging. Traditional approaches, wherein domain-specific cues are manually identified and developed into task-specific "handcrafted" features, can require extensive tuning to accommodate these variances. However, DL takes a more domain-agnostic approach combining both feature discovery and implementation to maximally discriminate between the classes of interest. While DL approaches have performed well in a few DP-related image analysis tasks, such as detection and tissue classification, the currently available open source tools and tutorials do not provide guidance on challenges such as (a) selecting appropriate magnification, (b) managing errors in annotations in the training (or learning) dataset, and (c) identifying a suitable training set containing information-rich exemplars. These foundational concepts, which are needed to successfully translate the DL paradigm to DP tasks, are non-trivial for (i) DL experts with minimal digital histology experience, and (ii) DP and image processing experts with minimal DL experience, to derive on their own, thus meriting a dedicated tutorial. This paper investigates these concepts through seven unique DP tasks as use cases to elucidate techniques needed to produce results comparable, and in many cases superior, to those from state-of-the-art hand-crafted feature-based classification approaches. Specifically, in this tutorial on DL for DP image

  4. Deep learning for digital pathology image analysis: A comprehensive tutorial with selected use cases

    Directory of Open Access Journals (Sweden)

    Andrew Janowczyk

    2016-01-01

    Full Text Available Background: Deep learning (DL) is a representation learning approach ideally suited for image analysis challenges in digital pathology (DP). The variety of image analysis tasks in the context of DP includes detection and counting (e.g., mitotic events), segmentation (e.g., nuclei), and tissue classification (e.g., cancerous vs. non-cancerous). Unfortunately, issues with slide preparation, variations in staining and scanning across sites, and vendor platforms, as well as biological variance, such as the presentation of different grades of disease, make these image analysis tasks particularly challenging. Traditional approaches, wherein domain-specific cues are manually identified and developed into task-specific "handcrafted" features, can require extensive tuning to accommodate these variances. However, DL takes a more domain-agnostic approach combining both feature discovery and implementation to maximally discriminate between the classes of interest. While DL approaches have performed well in a few DP-related image analysis tasks, such as detection and tissue classification, the currently available open source tools and tutorials do not provide guidance on challenges such as (a) selecting appropriate magnification, (b) managing errors in annotations in the training (or learning) dataset, and (c) identifying a suitable training set containing information-rich exemplars. These foundational concepts, which are needed to successfully translate the DL paradigm to DP tasks, are non-trivial for (i) DL experts with minimal digital histology experience, and (ii) DP and image processing experts with minimal DL experience, to derive on their own, thus meriting a dedicated tutorial. Aims: This paper investigates these concepts through seven unique DP tasks as use cases to elucidate techniques needed to produce results comparable, and in many cases superior, to those from state-of-the-art hand-crafted feature-based classification approaches. Results: Specifically, in

  5. Automatic and accurate segmentation of cerebral tissues in fMRI dataset with combination of image processing and deep learning

    Science.gov (United States)

    Kong, Zhenglun; Luo, Junyi; Xu, Shengpu; Li, Ting

    2018-02-01

    Image segmentation plays an important role in medical science. One application is multimodality imaging, especially the fusion of structural imaging (e.g., CT or MRI) with functional imaging obtained through newer technologies such as optical imaging. The fusion process requires precisely extracted structural information to which the functional image can be registered. Here we used image enhancement and morphometry methods to extract accurate contours of different tissues, such as skull, cerebrospinal fluid (CSF), grey matter (GM), and white matter (WM), on 5 fMRI head image datasets. We then utilized a convolutional neural network to perform automatic segmentation of the images in a deep learning manner. This approach greatly reduced processing time compared with manual and semi-automatic segmentation, and is of great importance in improving speed and accuracy as more and more samples are learned. The contours of the borders of the different tissues on all images were accurately extracted and visualized in 3D. This can be used in low-level light therapy and in optical simulation software such as MCVM. We obtained a precise three-dimensional distribution of the brain, which offers doctors and researchers quantitative volume data and detailed morphological characterization for personalized precision medicine of cerebral atrophy/expansion. We hope this technique can bring convenience to medical visualization and personalized medicine.

  6. Combining deep residual neural network features with supervised machine learning algorithms to classify diverse food image datasets.

    Science.gov (United States)

    McAllister, Patrick; Zheng, Huiru; Bond, Raymond; Moorhead, Anne

    2018-04-01

    Obesity is increasing worldwide and can cause many chronic conditions such as type-2 diabetes, heart disease, sleep apnea, and some cancers. Monitoring dietary intake through food logging is a key method for maintaining a healthy lifestyle to prevent and manage obesity. Computer vision methods have been applied to food logging to automate image classification for monitoring dietary intake. In this work we applied pretrained ResNet-152 and GoogleNet convolutional neural networks (CNNs), initially trained on the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) dataset with the MatConvNet package, to extract features from food image datasets: Food-5K, Food-11, RawFooT-DB, and Food-101. Deep features were extracted from the CNNs and used to train machine learning classifiers including an artificial neural network (ANN), support vector machine (SVM), Random Forest, and Naive Bayes. Results show that ResNet-152 deep features with an RBF-kernel SVM can accurately detect food items with 99.4% accuracy on the Food-5K validation dataset, and 98.8% on the Food-5K evaluation dataset using the ANN, SVM-RBF, and Random Forest classifiers. Trained with ResNet-152 features, the ANN achieves 91.34% and 99.28% when applied to the Food-11 and RawFooT-DB datasets respectively, and an SVM with RBF kernel achieves 64.98% on the Food-101 dataset. This research makes clear that deep CNN features can be used efficiently for diverse food item image classification. The work presented here shows that pretrained ResNet-152 features provide sufficient generalisation power when applied to a range of food image classification tasks. Copyright © 2018 Elsevier Ltd. All rights reserved.
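
    The pipeline described, pretrained CNN features feeding a classical classifier, can be sketched as follows. This sketch substitutes PyTorch/torchvision and scikit-learn for the authors' MatConvNet setup, and `train_imgs`/`train_labels` are placeholders.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.svm import SVC

# Pretrained ResNet-152 with the classification layer replaced by identity,
# so each forward pass yields a 2048-d deep feature vector per image.
# (Older torchvision versions use pretrained=True instead of the weights enum.)
resnet = models.resnet152(weights=models.ResNet152_Weights.IMAGENET1K_V1)
resnet.fc = torch.nn.Identity()
resnet.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def deep_features(pil_images):
    batch = torch.stack([preprocess(im) for im in pil_images])
    return resnet(batch).numpy()

# train_imgs / train_labels are placeholders for a labelled food image set.
svm = SVC(kernel="rbf").fit(deep_features(train_imgs), train_labels)
```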

  7. Deep gluteal syndrome: anatomy, imaging, and management of sciatic nerve entrapments in the subgluteal space

    International Nuclear Information System (INIS)

    Hernando, Moises Fernandez; Cerezal, Luis; Perez-Carro, Luis; Abascal, Faustino; Canga, Ana

    2015-01-01

    Deep gluteal syndrome (DGS) is an underdiagnosed entity characterized by pain and/or dysesthesias in the buttock area, hip or posterior thigh and/or radicular pain due to a non-discogenic sciatic nerve entrapment in the subgluteal space. Multiple pathologies have been incorporated in this all-included "piriformis syndrome," a term that has nothing to do with the presence of fibrous bands, obturator internus/gemellus syndrome, quadratus femoris/ischiofemoral pathology, hamstring conditions, gluteal disorders and orthopedic causes. The concept of fibrous bands playing a role in causing symptoms related to sciatic nerve mobility and entrapment represents a radical change in the current diagnosis of and therapeutic approach to DGS. The development of periarticular hip endoscopy has led to an understanding of the pathophysiological mechanisms underlying piriformis syndrome, which has supported its further classification. A broad spectrum of known pathologies may be located nonspecifically in the subgluteal space and can therefore also trigger DGS. These can be classified as traumatic, iatrogenic, inflammatory/infectious, vascular, gynecologic and tumors/pseudo-tumors. Because of the ever-increasing use of advanced magnetic resonance neurography (MRN) techniques and the excellent outcomes of the new endoscopic treatment, radiologists must be aware of the anatomy and pathologic conditions of this space. MR imaging is the diagnostic procedure of choice for assessing DGS and may substantially influence the management of these patients. The infiltration test not only has a high diagnostic but also a therapeutic value. This article describes the subgluteal space anatomy, reviews known and new etiologies of DGS, and assesses the role of the radiologist in the diagnosis, treatment and postoperative evaluation of sciatic nerve entrapments, with emphasis on MR imaging and endoscopic correlation. (orig.)

  8. UVUDF: Ultraviolet imaging of the Hubble ultra deep field with wide-field camera 3

    Energy Technology Data Exchange (ETDEWEB)

    Teplitz, Harry I.; Rafelski, Marc; Colbert, James W.; Hanish, Daniel J. [Infrared Processing and Analysis Center, MS 100-22, Caltech, Pasadena, CA 91125 (United States); Kurczynski, Peter; Gawiser, Eric [Department of Physics and Astronomy, Rutgers University, Piscataway, NJ 08854 (United States); Bond, Nicholas A.; Gardner, Jonathan P.; De Mello, Duilia F. [Laboratory for Observational Cosmology, Astrophysics Science Division, Code 665, Goddard Space Flight Center, Greenbelt, MD 20771 (United States); Grogin, Norman; Koekemoer, Anton M.; Brown, Thomas M.; Coe, Dan; Ferguson, Henry C. [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States); Atek, Hakim [Laboratoire d' Astrophysique, École Polytechnique Fédérale de Lausanne (EPFL), Observatoire, CH-1290 Sauverny (Switzerland); Finkelstein, Steven L. [Department of Astronomy, The University of Texas at Austin, Austin, TX 78712 (United States); Giavalisco, Mauro [Astronomy Department, University of Massachusetts, Amherst, MA 01003 (United States); Gronwall, Caryl [Department of Astronomy and Astrophysics, The Pennsylvania State University, University Park, PA 16802 (United States); Lee, Kyoung-Soo [Department of Physics, Purdue University, 525 Northwestern Avenue, West Lafayette, IN 47907 (United States); Ravindranath, Swara, E-mail: hit@ipac.caltech.edu [Inter-University Centre for Astronomy and Astrophysics, Pune (India); and others

    2013-12-01

    We present an overview of a 90-orbit Hubble Space Telescope treasury program to obtain near-ultraviolet imaging of the Hubble Ultra Deep Field using the Wide Field Camera 3 UVIS detector with the F225W, F275W, and F336W filters. This survey is designed to: (1) investigate the episode of peak star formation activity in galaxies at 1 < z < 2.5; (2) probe the evolution of massive galaxies by resolving sub-galactic units (clumps); (3) examine the escape fraction of ionizing radiation from galaxies at z ∼ 2-3; (4) greatly improve the reliability of photometric redshift estimates; and (5) measure the star formation rate efficiency of neutral atomic-dominated hydrogen gas at z ∼ 1-3. In this overview paper, we describe the survey details and data reduction challenges, including both the necessity of specialized calibrations and the effects of charge transfer inefficiency. We provide a stark demonstration of the effects of charge transfer inefficiency on resultant data products, which, when uncorrected, result in uncertain photometry, elongation of morphology in the readout direction, and loss of faint sources far from the readout. We agree with the STScI recommendation that future UVIS observations that require very sensitive measurements use the instrument's capability to add background light through a 'post-flash'. Preliminary results on number counts of UV-selected galaxies and morphology of galaxies at z ∼ 1 are presented. We find that the number density of UV dropouts at redshifts 1.7, 2.1, and 2.7 is largely consistent with the number predicted by published luminosity functions. We also confirm that the image mosaics have sufficient sensitivity and resolution to support the analysis of the evolution of star-forming clumps, reaching 28-29th magnitude depth at 5σ in a 0.2 arcsec radius aperture, depending on filter and observing epoch.

  9. Deep gluteal syndrome: anatomy, imaging, and management of sciatic nerve entrapments in the subgluteal space

    Energy Technology Data Exchange (ETDEWEB)

    Hernando, Moises Fernandez; Cerezal, Luis; Perez-Carro, Luis; Abascal, Faustino; Canga, Ana [Diagnostico Medico Cantabria (DMC), Department of Radiology, Santander, Cantabria (Spain); Valdecilla University Hospital, Orthopedic Surgery Department Clinica Mompia (L.P.C.), Santander, Cantabria (Spain); Valdecilla University Hospital, Department of Radiology, Santander, Cantabria (Spain)

    2015-03-05

    Deep gluteal syndrome (DGS) is an underdiagnosed entity characterized by pain and/or dysesthesias in the buttock area, hip or posterior thigh and/or radicular pain due to a non-discogenic sciatic nerve entrapment in the subgluteal space. Multiple pathologies have been incorporated in this all-included "piriformis syndrome," a term that has nothing to do with the presence of fibrous bands, obturator internus/gemellus syndrome, quadratus femoris/ischiofemoral pathology, hamstring conditions, gluteal disorders and orthopedic causes. The concept of fibrous bands playing a role in causing symptoms related to sciatic nerve mobility and entrapment represents a radical change in the current diagnosis of and therapeutic approach to DGS. The development of periarticular hip endoscopy has led to an understanding of the pathophysiological mechanisms underlying piriformis syndrome, which has supported its further classification. A broad spectrum of known pathologies may be located nonspecifically in the subgluteal space and can therefore also trigger DGS. These can be classified as traumatic, iatrogenic, inflammatory/infectious, vascular, gynecologic and tumors/pseudo-tumors. Because of the ever-increasing use of advanced magnetic resonance neurography (MRN) techniques and the excellent outcomes of the new endoscopic treatment, radiologists must be aware of the anatomy and pathologic conditions of this space. MR imaging is the diagnostic procedure of choice for assessing DGS and may substantially influence the management of these patients. The infiltration test not only has a high diagnostic but also a therapeutic value. This article describes the subgluteal space anatomy, reviews known and new etiologies of DGS, and assesses the role of the radiologist in the diagnosis, treatment and postoperative evaluation of sciatic nerve entrapments, with emphasis on MR imaging and endoscopic correlation. (orig.)

  10. DEEP IMAGING OF M51: A NEW VIEW OF THE WHIRLPOOL’S EXTENDED TIDAL DEBRIS

    International Nuclear Information System (INIS)

    Watkins, Aaron E.; Mihos, J. Christopher; Harding, Paul

    2015-01-01

    We present deep, wide-field imaging of the M51 system using CWRU's Burrell Schmidt Telescope at KPNO to study the faint tidal features that constrain its interaction history. Our images trace M51's tidal morphology down to a limiting surface brightness of μ_B,lim ∼ 30 mag arcsec^−2 and provide accurate colors (σ_(B−V) < 0.1) down to μ_B ∼ 28. We identify two new tidal streams in the system (the south and northeast plumes) with surface brightnesses of μ_B = 29 and luminosities of ∼10^6 L_⊙,B. While the northeast plume may be a faint outer extension of the tidal “crown” north of NGC 5195 (M51b), the south plume has no analog in any existing M51 simulation and may represent a distinct tidal stream or disrupted dwarf galaxy. We also trace the extremely diffuse northwest plume out to a total extent of 20′ (43 kpc) from NGC 5194 (M51a) and show it to be physically distinct from the overlapping bright tidal streams from M51b. The northwest plume’s morphology and red color (B−V = 0.8) instead argue that it originated from tidal stripping of M51a’s extreme outer disk. Finally, we confirm the strong segregation of gas and stars in the southeast tail and do not detect any diffuse stellar component in the H i portion of the tail. Extant simulations of M51 have difficulty matching both the wealth of tidal structure in the system and the lack of stars in the H i tail, motivating new modeling campaigns to study the dynamical evolution of this classic interacting system

  11. Image-guided modified deep anterior lamellar keratoplasty (DALK) corneal transplant using intraoperative optical coherence tomography

    Science.gov (United States)

    Tao, Yuankai K.; LaBarbera, Michael; Ehlers, Justis P.; Srivastava, Sunil K.; Dupps, William J.

    2015-03-01

    Deep anterior lamellar keratoplasty (DALK) is an alternative to full-thickness corneal transplant with several advantages: the absence of allograft rejection; a shortened duration of topical corticosteroid treatment, with a reduced associated risk of glaucoma, cataract, or infection; and the ability to use grafts with poor endothelial quality. DALK begins with a trephination of approximately 80% stromal thickness, as measured by pachymetry. After removal of the anterior stroma, a needle is inserted into the residual stroma to inject air or viscoelastic to dissect Descemet's membrane. These procedures are inherently difficult, and intraoperative rates of Descemet's membrane perforation between 4-39% have been reported. Optical coherence tomography (OCT) provides high-resolution images of tissue microstructures in the cornea, including Descemet's membrane, and allows quantitation of corneal layer thicknesses. Here, we use cross-sectional intraoperative OCT (iOCT) measurements of corneal thickness during surgery and a novel micrometer-adjustable biopsy punch to precision-cut the stroma down to Descemet's membrane. Our prototype cutting tool allows us to establish a dissection plane at the corneal endothelium interface, mitigates variability in cut depths resulting from tremor, reduces procedure complexity, and reduces complication rates. iOCT-guided modified DALK procedures were performed on 47 cadaveric porcine eyes by non-experts and achieved a perforation rate of ~5%.

  12. Diffusion tensor imaging and neuromodulation: DTI as key technology for deep brain stimulation.

    Science.gov (United States)

    Coenen, Volker Arnd; Schlaepfer, Thomas E; Allert, Niels; Mädler, Burkhard

    2012-01-01

    Diffusion tensor imaging (DTI) is more than just a useful adjunct to invasive techniques like optogenetics, which recently have tremendously influenced our understanding of the mechanisms of deep brain stimulation (DBS). In combination with other technologies, DTI helps us to understand which parts of the brain tissue are connected to others and which ones are truly influenced by neuromodulation. The complex interaction of DBS with the surrounding tissues, scrutinized with DTI, allows the creation of testable hypotheses that can explain network interactions. Those interactions are vital for our understanding of the net effects of neuromodulation. This work was naturally first done in the field of movement disorder surgery, where there is extensive experience regarding therapeutic effects and only a short latency between the initiation of neuromodulation and the alleviation of symptoms. This chapter traces the journey over the past 10 years from first applications in DBS toward current research on affect-regulating network balances and their therapeutic alteration with neuromodulation technology. Copyright © 2012 Elsevier Inc. All rights reserved.

  13. Image-based deep learning for classification of noise transients in gravitational wave detectors

    Science.gov (United States)

    Razzano, Massimiliano; Cuoco, Elena

    2018-05-01

    The detection of gravitational waves has inaugurated the era of gravitational astronomy and opened new avenues for the multimessenger study of cosmic sources. Thanks to their sensitivity, the Advanced LIGO and Advanced Virgo interferometers will probe a much larger volume of space and expand the capability of discovering new gravitational wave emitters. The characterization of these detectors is a primary task in order to recognize the main sources of noise and optimize the sensitivity of interferometers. Glitches are transient noise events that can impact the data quality of the interferometers and their classification is an important task for detector characterization. Deep learning techniques are a promising tool for the recognition and classification of glitches. We present a classification pipeline that exploits convolutional neural networks to classify glitches starting from their time-frequency evolution represented as images. We evaluated the classification accuracy on simulated glitches, showing that the proposed algorithm can automatically classify glitches on very fast timescales and with high accuracy, thus providing a promising tool for online detector characterization.
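
    A minimal sketch of this image-based approach, turning a strain segment into a time-frequency image and classifying it with a small CNN, is given below; the sampling rate, network shape, and class count are assumptions rather than the authors' configuration.

```python
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from scipy.signal import spectrogram

def tf_image(strain, fs=4096, size=64):
    """Log-power spectrogram of a strain segment, rescaled to size x size."""
    _, _, sxx = spectrogram(strain, fs=fs, nperseg=256, noverlap=192)
    img = np.log1p(sxx)
    img = (img - img.min()) / (img.max() - img.min() + 1e-12)
    t = torch.from_numpy(img).float()[None, None]           # (1, 1, F, T)
    return F.interpolate(t, size=(size, size), mode="bilinear",
                         align_corners=False)[0]            # (1, size, size)

class GlitchCNN(nn.Module):
    def __init__(self, n_classes=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.head = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x):                                    # x: (N, 1, 64, 64)
        return self.head(self.features(x).flatten(1))
```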

  14. Semantic knowledge for histopathological image analysis: from ontologies to processing portals and deep learning

    Science.gov (United States)

    Kergosien, Yannick L.; Racoceanu, Daniel

    2017-11-01

    … major changes are also to be expected for the relation of human diagnosis to machine-based procedures. Improving on a former imaging platform, which used a local knowledge base and a reasoning engine to combine image processing modules into higher-level tasks, we propose a framework where different actors of the histopathology imaging world can cooperate using web services, exchanging knowledge as well as imaging services, and where the results of such collaborations on diagnosis-related tasks can be evaluated in international challenges such as those recently organized for mitosis detection, nuclear atypia, or tissue architecture in the context of cancer grading. This framework is likely to offer effective context guidance and traceability to deep learning approaches, with a promising perspective given by the multi-task learning (MTL) paradigm, which is distinguished by its applicability to several different learning algorithms, its non-reliance on specialized architectures, and the promising results demonstrated, in particular toward the problem of weak supervision, an issue found when direct links from pathology terms in reports to corresponding regions within images are missing.

  15. Effective deep learning training for single-image super-resolution in endomicroscopy exploiting video-registration-based reconstruction.

    Science.gov (United States)

    Ravì, Daniele; Szczotka, Agnieszka Barbara; Shakir, Dzhoshkun Ismail; Pereira, Stephen P; Vercauteren, Tom

    2018-06-01

    Probe-based confocal laser endomicroscopy (pCLE) is a recent imaging modality that allows performing in vivo optical biopsies. The design of pCLE hardware, and its reliance on an optical fibre bundle, fundamentally limits the image quality: a few tens of thousands of fibres, each acting as the equivalent of a single-pixel detector, are assembled into a single fibre bundle. Video registration techniques can be used to estimate high-resolution (HR) images by exploiting the temporal information contained in a sequence of low-resolution (LR) images. However, the alignment of LR frames, required for the fusion, is computationally demanding and prone to artefacts. In this work, we propose a novel synthetic data generation approach to train exemplar-based Deep Neural Networks (DNNs). HR pCLE images with enhanced quality are recovered by models trained on pairs of estimated HR images (generated by the video registration algorithm) and realistic synthetic LR images. The performance of three state-of-the-art DNN techniques was analysed on a Smart Atlas database of 8806 images from 238 pCLE video sequences. The results were validated through an extensive image quality assessment that takes into account different quality scores, including a Mean Opinion Score (MOS). Results indicate that the proposed solution produces an effective improvement in the quality of the reconstructed images. The proposed training strategy and associated DNNs allow us to perform convincing super-resolution of pCLE images.

  16. ON THE PROGENITOR SYSTEM OF THE TYPE Iax SUPERNOVA 2014dt IN M61

    Energy Technology Data Exchange (ETDEWEB)

    Foley, Ryan J. [Astronomy Department, University of Illinois at Urbana-Champaign, 1002 West Green Street, Urbana, IL 61801 (United States); Van Dyk, Schuyler D. [IPAC/Caltech, Mail Code 100-22, Pasadena, CA 91125 (United States); Jha, Saurabh W. [Department of Physics and Astronomy, Rutgers, The State University of New Jersey, 136 Frelinghuysen Road, Piscataway, NJ 08854 (United States); Clubb, Kelsey I.; Filippenko, Alexei V.; Mauerhan, Jon C. [Department of Astronomy, University of California, Berkeley, CA 94720-3411 (United States); Miller, Adam A. [Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, MS 169-506, Pasadena, CA 91109 (United States); Smith, Nathan [Steward Observatory, University of Arizona, Tucson, AZ 85721 (United States)

    2015-01-10

    We present pre-explosion and post-explosion Hubble Space Telescope images of the Type Iax supernova (SN Iax) 2014dt in M61. After astrometrically aligning these images, we do not detect any stellar sources at the position of the SN in the pre-explosion images to relatively deep limits (3σ limits of M_F438W > −5.0 mag and M_F814W > −5.9 mag). These limits are similar to the luminosity of SN 2012Z's progenitor system (M_F435W = −5.43 ± 0.15 and M_F814W = −5.24 ± 0.16 mag), the only probable detected progenitor system in pre-explosion images of a SN Iax, and indeed, of any white-dwarf supernova. SN 2014dt is consistent with having a C/O white-dwarf primary/helium-star companion progenitor system, as was suggested for SN 2012Z, although perhaps with a slightly smaller or hotter donor. The data are also consistent with SN 2014dt having a low-mass red giant or main-sequence star companion. The data rule out main-sequence stars with M_init ≳ 16 M_☉ and most evolved stars with M_init ≳ 8 M_☉ as being the progenitor of SN 2014dt. Hot Wolf-Rayet stars are also allowed, but the lack of nearby bright sources makes this scenario unlikely. Because of its proximity (D = 12 Mpc), SN 2014dt is ideal for long-term monitoring, where images in ∼2 yr may detect the companion star or the luminous bound remnant of the progenitor white dwarf.

  17. JAMSTEC E-library of Deep-sea Images (J-EDI) Realizes a Virtual Journey to the Earth's Unexplored Deep Ocean

    Science.gov (United States)

    Sasaki, T.; Azuma, S.; Matsuda, S.; Nagayama, A.; Ogido, M.; Saito, H.; Hanafusa, Y.

    2016-12-01

    The Japan Agency for Marine-Earth Science and Technology (JAMSTEC) archives a large amount of deep-sea research video and photos obtained by JAMSTEC's research submersibles and camera-equipped vehicles. The web site "JAMSTEC E-library of Deep-sea Images: J-EDI" (http://www.godac.jamstec.go.jp/jedi/e/) has made these videos and photos available to the public via the Internet since 2011. Users can search for target videos and photos at J-EDI by keywords, easy-to-understand icons, and dive information, because operations staff classify videos and photos by content (e.g., living organisms and geological environment) and add comments to them. Dive survey data, including videos and photos, are not only valuable academically but also helpful for education and outreach activities. To improve visibility for broader communities, this year we added new functions for 3-dimensional display that synchronize various dive survey data with videos. New functions: Users can search for dive survey data on 3D maps with plotted dive points using the WebGL virtual map engine "Cesium". By selecting a dive point, users can watch deep-sea videos and photos and associated environmental data, e.g. water temperature, salinity, rock and biological sample photos, obtained by the dive survey. Users can browse a dive track visualized in 3D virtual space using the WebGL JavaScript library. By synchronizing this virtual dive track with videos, users can watch deep-sea videos recorded at any point on the track. Users can play an animation in which a submersible-shaped polygon automatically traces a 3D virtual dive track while the displays of dive survey data stay synchronized with the trace. Users can directly refer to additional information from other JAMSTEC data sites, such as the marine biodiversity database, marine biological sample database, rock sample database, and cruise and dive information database, on each page where a 3D virtual dive track is displayed. A 3D visualization of a dive

  18. Deep Deconvolutional Neural Network for Target Segmentation of Nasopharyngeal Cancer in Planning Computed Tomography Images

    Directory of Open Access Journals (Sweden)

    Kuo Men

    2017-12-01

    Full Text Available Background: Radiotherapy is one of the main treatment methods for nasopharyngeal carcinoma (NPC). It requires exact delineation of the nasopharynx gross tumor volume (GTVnx), the metastatic lymph node gross tumor volume (GTVnd), the clinical target volume (CTV), and organs at risk in the planning computed tomography images. However, this task is time-consuming and operator dependent. In the present study, we developed an end-to-end deep deconvolutional neural network (DDNN) for segmentation of these targets. Methods: The proposed DDNN is an end-to-end architecture enabling fast training and testing. It consists of two important components: an encoder network and a decoder network. The encoder network was used to extract the visual features of a medical image and the decoder network was used to recover the original resolution by deploying deconvolution. A total of 230 patients diagnosed with NPC stage I or stage II were included in this study. Data from 184 patients were chosen randomly as a training set to adjust the parameters of DDNN, and the remaining 46 patients were the test set to assess the performance of the model. The Dice similarity coefficient (DSC) was used to quantify the segmentation results of the GTVnx, GTVnd, and CTV. In addition, the performance of DDNN was compared with the VGG-16 model. Results: The proposed DDNN method outperformed the VGG-16 in all of the segmentation tasks. The mean DSC values of DDNN were 80.9% for GTVnx, 62.3% for the GTVnd, and 82.6% for CTV, whereas VGG-16 obtained 72.3, 33.7, and 73.7% for the DSC values, respectively. Conclusion: DDNN can be used to segment the GTVnx and CTV accurately. The accuracy for the GTVnd segmentation was relatively low due to the considerable differences in its shape, volume, and location among patients. The accuracy is expected to increase with more training data and combination of MR images. In conclusion, DDNN has the potential to improve the consistency of contouring and streamline radiotherapy

  19. Deep Deconvolutional Neural Network for Target Segmentation of Nasopharyngeal Cancer in Planning Computed Tomography Images.

    Science.gov (United States)

    Men, Kuo; Chen, Xinyuan; Zhang, Ye; Zhang, Tao; Dai, Jianrong; Yi, Junlin; Li, Yexiong

    2017-01-01

    Radiotherapy is one of the main treatment methods for nasopharyngeal carcinoma (NPC). It requires exact delineation of the nasopharynx gross tumor volume (GTVnx), the metastatic lymph node gross tumor volume (GTVnd), the clinical target volume (CTV), and organs at risk in the planning computed tomography images. However, this task is time-consuming and operator dependent. In the present study, we developed an end-to-end deep deconvolutional neural network (DDNN) for segmentation of these targets. The proposed DDNN is an end-to-end architecture enabling fast training and testing. It consists of two important components: an encoder network and a decoder network. The encoder network was used to extract the visual features of a medical image and the decoder network was used to recover the original resolution by deploying deconvolution. A total of 230 patients diagnosed with NPC stage I or stage II were included in this study. Data from 184 patients were chosen randomly as a training set to adjust the parameters of DDNN, and the remaining 46 patients were the test set to assess the performance of the model. The Dice similarity coefficient (DSC) was used to quantify the segmentation results of the GTVnx, GTVnd, and CTV. In addition, the performance of DDNN was compared with the VGG-16 model. The proposed DDNN method outperformed the VGG-16 in all of the segmentation tasks. The mean DSC values of DDNN were 80.9% for GTVnx, 62.3% for the GTVnd, and 82.6% for CTV, whereas VGG-16 obtained 72.3, 33.7, and 73.7% for the DSC values, respectively. DDNN can be used to segment the GTVnx and CTV accurately. The accuracy for the GTVnd segmentation was relatively low due to the considerable differences in its shape, volume, and location among patients. The accuracy is expected to increase with more training data and combination of MR images. In conclusion, DDNN has the potential to improve the consistency of contouring and streamline radiotherapy workflows, but careful human review and a
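
    A toy version of the encoder-decoder idea, strided convolutions to compress the slice and transposed convolutions (deconvolution) to restore resolution with one score map per target, might look like the following; the layer sizes are illustrative, not the published DDNN architecture.

```python
import torch
import torch.nn as nn

class EncoderDecoder(nn.Module):
    """Encoder compresses the CT slice with strided convolutions; the
    decoder recovers full resolution with transposed convolutions and
    emits one score map per class (background, GTVnx, GTVnd, CTV)."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, n_classes, 4, stride=2, padding=1))

    def forward(self, x):                     # x: (N, 1, H, W), H and W even
        return self.decoder(self.encoder(x))  # (N, n_classes, H, W)
```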

  20. Transform- and multi-domain deep learning for single-frame rapid autofocusing in whole slide imaging.

    Science.gov (United States)

    Jiang, Shaowei; Liao, Jun; Bian, Zichao; Guo, Kaikai; Zhang, Yongbing; Zheng, Guoan

    2018-04-01

    A whole slide imaging (WSI) system has recently been approved for primary diagnostic use in the US. The image quality and system throughput of WSI are largely determined by the autofocusing process. Traditional approaches acquire multiple images along the optical axis and maximize a figure of merit for autofocusing. Here we explore the use of deep convolutional neural networks (CNNs) to predict the focal position of the acquired image without axial scanning. We investigate the autofocusing performance with three illumination settings: incoherent Köhler illumination, partially coherent illumination with two plane waves, and one-plane-wave illumination. We acquire ~130,000 images with different defocus distances as the training data set. Different defocus distances lead to different spatial features of the captured images. However, relying solely on spatial information leads to relatively poor autofocusing performance. It is better to extract defocus features from transform domains of the acquired image. For incoherent illumination, the Fourier cutoff frequency is directly related to the defocus distance. Similarly, autocorrelation peaks are directly related to the defocus distance for two-plane-wave illumination. In our implementation, we use the spatial image, the Fourier spectrum, the autocorrelation of the spatial image, and combinations thereof as the inputs for the CNNs. We show that the information from the transform domains can improve the performance and robustness of the autofocusing process. The resulting focusing error is ~0.5 µm, which is within the 0.8-µm depth-of-field range. The reported approach requires little hardware modification for conventional WSI systems and the images can be captured on the fly without focus map surveying. It may find applications in WSI and time-lapse microscopy. The transform- and multi-domain approaches may also provide new insights for developing microscopy-related deep-learning networks. We have made
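
    The transform-domain inputs named above are cheap to compute with the FFT; the sketch below stacks the spatial image, log Fourier magnitude, and log autocorrelation (via the Wiener-Khinchin theorem) as CNN input channels. The normalisation choices are assumptions.

```python
import numpy as np

def multi_domain_input(img):
    """Stack spatial, Fourier-magnitude, and autocorrelation channels as
    the input of a defocus-prediction CNN. The autocorrelation is obtained
    from the FFT (Wiener-Khinchin); log scaling is an assumption."""
    img = img.astype(np.float64)
    img = (img - img.mean()) / (img.std() + 1e-12)
    F = np.fft.fft2(img)
    spec = np.fft.fftshift(np.abs(F))
    acorr = np.fft.fftshift(np.real(np.fft.ifft2(np.abs(F) ** 2)))
    return np.stack([img,
                     np.log1p(spec),
                     np.log1p(acorr - acorr.min())], axis=0)  # (3, H, W)
```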

  1. Effects of semantic context on access to words of low imageability in deep-phonological dysphasia: a treatment case study.

    Science.gov (United States)

    McCarthy, Laura Mary; Kalinyak-Fliszar, Michelene; Kohen, Francine; Martin, Nadine

    2017-01-01

    Deep dysphasia is a relatively rare subcategory of aphasia, characterised by a word repetition impairment and a profound auditory-verbal short-term memory (STM) limitation. Repetition of words is better than that of nonwords (lexicality effect) and better for high-imageability than low-imageability words (imageability effect). Another related language impairment profile is phonological dysphasia, which includes all of the characteristics of deep dysphasia except for the occurrence of semantic errors in single-word repetition. The overlap in symptoms of deep and phonological dysphasia has led to the hypothesis that they share the same root cause, impaired maintenance of activated representations of words, but differ in the severity of that impairment, with deep dysphasia being more severe. We report a single-subject multiple-baseline, multiple-probe treatment study of a person who presented with a pattern of repetition consistent with the continuum of deep-phonological dysphasia: imageability and lexicality effects in repetition of single and multiple words, and semantic errors in repetition of multiple-word utterances. The aim of this treatment study was to improve access to and repetition of low-imageability words by embedding them in modifier-noun phrases that enhanced their imageability. The treatment involved repetition of abstract noun pairs. We created modifier-abstract noun phrases that increased the semantic and syntactic cohesiveness of the words in the pair. For example, the phrases "long distance" and "social exclusion" were developed to improve repetition of the abstract pair "distance-exclusion". The goal of this manipulation was to increase the probability of accessing lexical and semantic representations of abstract words in repetition by enriching their semantic-syntactic context. We predicted that this increase in accessibility would be maintained when the words were repeated as pairs, but without the contextual phrase. Treatment outcomes indicated that

  2. Exploiting Deep Matching and SAR Data for the Geo-Localization Accuracy Improvement of Optical Satellite Images

    Directory of Open Access Journals (Sweden)

    Nina Merkle

    2017-06-01

    Full Text Available Improving the geo-localization of optical satellite images is an important pre-processing step for many remote sensing tasks such as monitoring by image time series or scene analysis after sudden events. These tasks require geo-referenced and precisely co-registered multi-sensor data. Images captured by the high-resolution synthetic aperture radar (SAR) satellite TerraSAR-X exhibit an absolute geo-location accuracy within a few decimeters. These images therefore represent a reliable source for improving the geo-location accuracy of optical images, which is on the order of tens of meters. In this paper, a deep learning-based approach for improving the geo-localization accuracy of optical satellite images through SAR reference data is investigated. Image registration between SAR and optical images requires few, but accurate and reliable, matching points. These are derived from a Siamese neural network. The network is trained using TerraSAR-X and PRISM image pairs covering greater urban areas spread over Europe, in order to learn the two-dimensional spatial shifts between optical and SAR image patches. Results confirm that accurate and reliable matching points can be generated with higher matching accuracy and precision than state-of-the-art approaches.
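
    The matching-point idea rests on two weight-sharing branches. The sketch below is a contrastive-style variant (the authors' network instead regresses two-dimensional shifts); layer sizes and the embedding dimension are assumptions.

```python
import torch
import torch.nn as nn

class SiameseMatcher(nn.Module):
    """Two weight-sharing branches embed an optical and a SAR patch
    (assumed the same size); the distance between embeddings can be
    trained, e.g. with a contrastive loss, to be small for co-located
    patches and large otherwise."""
    def __init__(self, dim=128):
        super().__init__()
        self.branch = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(), nn.LazyLinear(dim))

    def forward(self, opt_patch, sar_patch):
        e_opt, e_sar = self.branch(opt_patch), self.branch(sar_patch)
        return torch.norm(e_opt - e_sar, dim=1)   # matching distance
```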

  3. VizieR Online Data Catalog: HST/ACS Coma cluster survey. II. (Hammer+, 2010)

    NARCIS (Netherlands)

    Hammer, D.; Verdoes Kleijn, G.; Hoyos, C.; den Brok, M.; Balcells, M.; Ferguson, H. C.; Goudfrooij, P.; Carter, D.; Guzman, R.; Peletier, R. F.; Smith, R. J.; Graham, A. W.; Trentham, N.; Peng, E.; Puzia, T. H.; Lucey, J. R.; Jogee, S.; Aguerri, A. L.; Batcheldor, D.; Bridges, T. J.; Chiboucas, K.; Davies, J. I.; Del Burgo, C.; Erwin, P.; Hornschemeier, A.; Hudson, M. J.; Huxor, A.; Jenkins, L.; Karick, A.; Khosroshahi, H.; Kourkchi, E.; Komiyama, Y.; Lotz, J.; Marzke, R. O.; Marinova, I.; Matkovic, A.; Merritt, D.; Miller, B. W.; Miller, N. A.; Mobasher, B.; Mouhcine, M.; Okamura, S.; Percival, S.; Phillipps, S.; Poggianti, B. M.; Price, J.; Sharples, R. M.; Tully, R. B.; Valentijn, E.

    2010-01-01

    This data release contains catalogs for the ACS images in the F475W and F814W bands of 25 fields in the Coma cluster of galaxies. Each field is about 202×202 arcsec. Please see the release notes for further details. (25 data files).

  4. The HST/ACS Coma Cluster Survey - VII. Structure and assembly of massive galaxies in the centre of the Coma cluster

    NARCIS (Netherlands)

    Weinzirl, Tim; Jogee, Shardha; Neistein, Eyal; Khochfar, Sadegh; Kormendy, John; Marinova, Irina; Hoyos, Carlos; Balcells, Marc; den Brok, Mark; Hammer, Derek; Peletier, Reynier F.; Kleijn, Gijs Verdoes; Carter, David; Goudfrooij, Paul; Lucey, John R.; Mobasher, Bahram; Trentham, Neil; Erwin, Peter; Puzia, Thomas

    2014-01-01

    We constrain the assembly history of galaxies in the projected central 0.5 Mpc of the Coma cluster by performing structural decomposition on 69 massive (M⋆ ≥ 109 M⊙) galaxies using high-resolution F814W images from the Hubble Space Telescope (HST) Treasury Survey of Coma. Each galaxy is modelled

  5. THE ACS LCID PROJECT. I. SHORT-PERIOD VARIABLES IN THE ISOLATED DWARF SPHEROIDAL GALAXIES CETUS AND TUCANA

    NARCIS (Netherlands)

    Bernard, Edouard J.; Monelli, Matteo; Gallart, Carme; Drozdovsky, Igor; Stetson, Peter B.; Aparicio, Antonio; Cassisi, Santi; Mayer, Lucio; Cole, Andrew A.; Hidalgo, Sebastian L.; Skillman, Evan D.; Tolstoy, Eline

    2009-01-01

    We present the first study of the variable star populations in the isolated dwarf spheroidal galaxies (dSphs) Cetus and Tucana. Based on Hubble Space Telescope images obtained with the Advanced Camera for Surveys in the F475W and F814W bands, we identified 180 and 371 variables in Cetus and Tucana,

  6. Discovery of a Supernova in HST imaging of the MACSJ0717 Frontier Field

    Science.gov (United States)

    Rodney, Steven A.; Lotz, Jennifer; Strolger, Louis-Gregory

    2013-10-01

    We report the discovery of a supernova (SN) in Hubble Space Telescope (HST) observations centered on the galaxy cluster MACSJ0717. It was discovered in the F814W (i) band of the Advanced Camera for Surveys (ACS), in observations that were collected as part of the ongoing HST Frontier Fields (HFF) program (PI: J. Lotz, HST PID 13498). The FrontierSN ID for this object is SN HFF13Zar (nicknamed "SN Zara").

  7. Radiation dose reduction in digital breast tomosynthesis (DBT) by means of deep-learning-based supervised image processing

    Science.gov (United States)

    Liu, Junchi; Zarshenas, Amin; Qadir, Ammar; Wei, Zheng; Yang, Limin; Fajardo, Laurie; Suzuki, Kenji

    2018-03-01

    To reduce cumulative radiation exposure and lifetime risks for radiation-induced cancer from breast cancer screening, we developed a deep-learning-based supervised image-processing technique called neural network convolution (NNC) for radiation dose reduction in DBT. NNC employed patch-based neural network regression in a convolutional manner to convert lower-dose (LD) to higher-dose (HD) tomosynthesis images. We trained our NNC with quarter-dose (25% of the standard dose: 12 mAs at 32 kVp) raw projection images and corresponding "teaching" higher-dose (HD) images (200% of the standard dose: 99 mAs at 32 kVp) of a breast cadaver phantom acquired with a DBT system (Selenia Dimensions, Hologic, CA). Once trained, NNC no longer requires HD images. It converts new LD images to images that look like HD images; thus the term "virtual" HD (VHD) images. We reconstructed tomosynthesis slices on a research DBT system. To determine a dose reduction rate, we acquired 4 studies of another test phantom at 4 different radiation doses (1.35, 2.7, 4.04, and 5.39 mGy entrance dose). The structural similarity (SSIM) index was used to evaluate image quality. For testing, we collected half-dose (50% of the standard dose: 32±14 mAs at 33±5 kVp) and full-dose (standard dose: 68±23 mAs at 33±5 kVp) images of 10 clinical cases with the DBT system at University of Iowa Hospitals and Clinics. NNC converted half-dose DBT images of the 10 clinical cases to VHD DBT images that were equivalent to full-dose DBT images. Our cadaver phantom experiment demonstrated a 79% dose reduction.
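
    The core of NNC, patch-based regression from lower-dose to higher-dose images applied convolutionally, can be sketched as below; the three-layer network and MSE loss are illustrative assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class NNCRegressor(nn.Module):
    """Patch-based regression applied convolutionally: maps a lower-dose
    projection to a 'virtual high-dose' one of the same size."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 5, padding=2), nn.ReLU(),
            nn.Conv2d(32, 32, 5, padding=2), nn.ReLU(),
            nn.Conv2d(32, 1, 5, padding=2))

    def forward(self, ld):                # ld: (N, 1, H, W) low-dose image
        return self.net(ld)

# Training pairs ld_batch / hd_batch (quarter-dose and higher-dose
# projections) are placeholders:
# loss = nn.MSELoss()(NNCRegressor()(ld_batch), hd_batch)
```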

  8. Detection of tuberculosis patterns in digital photographs of chest X-ray images using Deep Learning: feasibility study.

    Science.gov (United States)

    Becker, A S; Blüthgen, C; Phi van, V D; Sekaggya-Wiltshire, C; Castelnuovo, B; Kambugu, A; Fehr, J; Frauenfelder, T

    2018-03-01

    To evaluate the feasibility of Deep Learning-based detection and classification of pathological patterns in a set of digital photographs of chest X-ray (CXR) images of tuberculosis (TB) patients. In this prospective, observational study, patients with previously diagnosed TB were enrolled. Photographs of their CXRs were taken using a consumer-grade digital still camera. The images were stratified by pathological patterns into classes: cavity, consolidation, effusion, interstitial changes, miliary pattern or normal examination. Image analysis was performed with commercially available Deep Learning software in two steps. Pathological areas were first localised; detected areas were then classified. Detection was assessed using receiver operating characteristic (ROC) analysis, and classification using a confusion matrix. The study cohort was 138 patients with human immunodeficiency virus (HIV) and TB co-infection (median age 34 years, IQR 28-40); 54 patients were female. Localisation of pathological areas was excellent (area under the ROC curve 0.82). The software could perfectly distinguish pleural effusions from intraparenchymal changes. The most frequent misclassifications were consolidations as cavitations, and miliary patterns as interstitial patterns (and vice versa). Deep Learning analysis of CXR photographs is a promising tool. Further efforts are needed to build larger, high-quality data sets to achieve better diagnostic performance.

  9. Single-shot T2 mapping using overlapping-echo detachment planar imaging and a deep convolutional neural network.

    Science.gov (United States)

    Cai, Congbo; Wang, Chao; Zeng, Yiqing; Cai, Shuhui; Liang, Dong; Wu, Yawen; Chen, Zhong; Ding, Xinghao; Zhong, Jianhui

    2018-04-24

    An end-to-end deep convolutional neural network (CNN) based on a deep residual network (ResNet) was proposed to efficiently reconstruct reliable T2 mapping from single-shot overlapping-echo detachment (OLED) planar imaging. The training dataset was obtained from simulations carried out on SPROM (Simulation with PRoduct Operator Matrix) software developed by our group. The relationship between the original OLED image containing two echo signals and the corresponding T2 mapping was learned by ResNet training. After the ResNet was trained, it was applied to reconstruct the T2 mapping from simulation and in vivo human brain data. Although the ResNet was trained entirely on simulated data, the trained network generalized well to real human brain data. The results from simulation and in vivo human brain experiments show that the proposed method significantly outperforms the echo-detachment-based method. Reliable T2 mapping with higher accuracy is achieved within 30 ms after the network has been trained, whereas the echo-detachment-based OLED reconstruction method takes approximately 2 min. The proposed method will facilitate real-time dynamic and quantitative MR imaging via the OLED sequence, and deep convolutional neural networks have the potential to reconstruct maps from complex MRI sequences efficiently. © 2018 International Society for Magnetic Resonance in Medicine.

  10. Automatic Semantic Segmentation of Brain Gliomas from MRI Images Using a Deep Cascaded Neural Network.

    Science.gov (United States)

    Cui, Shaoguo; Mao, Lei; Jiang, Jingfeng; Liu, Chang; Xiong, Shuyu

    2018-01-01

    Brain tumors can appear anywhere in the brain and have vastly different sizes and morphology. Additionally, these tumors are often diffused and poorly contrasted. Consequently, the segmentation of brain tumor and intratumor subregions using magnetic resonance imaging (MRI) data with minimal human intervention remains a challenging task. In this paper, we present a novel fully automatic segmentation method from MRI data containing in vivo brain gliomas. This approach can not only localize the entire tumor region but can also accurately segment the intratumor structure. The proposed work was based on a cascaded deep learning convolutional neural network consisting of two subnetworks: (1) a tumor localization network (TLN) and (2) an intratumor classification network (ITCN). The TLN, a fully convolutional network (FCN) in conjunction with transfer learning technology, was used to first process MRI data. The goal of the first subnetwork was to define the tumor region from an MRI slice. Then, the ITCN was used to label the defined tumor region into multiple subregions. In particular, the ITCN exploited a convolutional neural network (CNN) with a deeper architecture and smaller kernels. The proposed approach was validated on multimodal brain tumor segmentation (BRATS 2015) datasets, which contain 220 high-grade glioma (HGG) and 54 low-grade glioma (LGG) cases. Dice similarity coefficient (DSC), positive predictive value (PPV), and sensitivity were used as evaluation metrics. Our experimental results indicated that our method could obtain promising segmentation results and had a faster segmentation speed. More specifically, the proposed method obtained comparable and overall better DSC values (0.89, 0.77, and 0.80) on the combined (HGG + LGG) testing set, as compared to other methods reported in the literature. Additionally, the proposed approach was able to complete a segmentation task at a rate of 1.54 seconds per slice.

  11. Development of a deep convolutional neural network to predict grading of canine meningiomas from magnetic resonance images.

    Science.gov (United States)

    Banzato, T; Cherubini, G B; Atzori, M; Zotti, A

    2018-05-01

    An established deep neural network (DNN) based on transfer learning and a newly designed DNN were tested to predict the grade of meningiomas from magnetic resonance (MR) images in dogs, and to determine the accuracy of classification using pre- and post-contrast T1-weighted (T1W) and T2-weighted (T2W) MR images. The images were randomly assigned to a training set, a validation set and a test set, comprising 60%, 10% and 30% of the images, respectively. The combination of DNN and MR sequence displaying the highest discriminating accuracy was used to develop an image classifier to predict the grading of new cases. The algorithm based on transfer learning using the established DNN did not provide satisfactory results, whereas the newly designed DNN had high classification accuracy. On the basis of classification accuracy, an image classifier built on the newly designed DNN using post-contrast T1W images was developed. This image classifier correctly predicted the grading of 8 out of 10 images not included in the data set. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.

  12. Beyond Retinal Layers: A Deep Voting Model for Automated Geographic Atrophy Segmentation in SD-OCT Images.

    Science.gov (United States)

    Ji, Zexuan; Chen, Qiang; Niu, Sijie; Leng, Theodore; Rubin, Daniel L

    2018-01-01

    To automatically and accurately segment geographic atrophy (GA) in spectral-domain optical coherence tomography (SD-OCT) images, we constructed a voting system with deep neural networks without the use of retinal layer segmentation. An automatic GA segmentation method for SD-OCT images based on a deep network was constructed. The structure of the deep network was composed of five layers: one input layer, three hidden layers, and one output layer. During the training phase, the labeled A-scans with 1024 features were directly fed into the network as the input layer to obtain the deep representations. Then a soft-max classifier was trained to determine the label of each individual pixel. Finally, a voting decision strategy was used to refine the segmentation results among 10 trained models. Two image data sets with GA were used to evaluate the model. For the first dataset, our algorithm obtained a mean overlap ratio (OR) of 86.94% ± 8.75%, an absolute area difference (AAD) of 11.49% ± 11.50%, and a correlation coefficient (CC) of 0.9857; for the second dataset, the mean OR, AAD, and CC of the proposed method were 81.66% ± 10.93%, 8.30% ± 9.09%, and 0.9952, respectively. The proposed algorithm improved segmentation accuracy by over 5% and 10% on the two data sets, respectively, when compared with several state-of-the-art algorithms. Without retinal layer segmentation, the proposed algorithm produced higher segmentation accuracy and was more stable than state-of-the-art methods that rely on retinal layer segmentation results. Our model may provide reliable GA segmentations from SD-OCT images and be useful in the clinical diagnosis of advanced nonexudative AMD. Based on deep neural networks, this study presents an accurate GA segmentation method for SD-OCT images that does not use any retinal layer segmentation results, and may contribute to improved understanding of advanced nonexudative AMD.
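
    The final voting step among the 10 trained models reduces, per pixel, to a majority vote over binarised probability maps, for example:

```python
import numpy as np

def vote_ga_segmentation(prob_maps, threshold=0.5):
    """Majority vote across trained models: binarise each model's GA
    probability map and keep a pixel when most models agree.
    prob_maps: array of shape (n_models, H, W)."""
    votes = (np.asarray(prob_maps) > threshold).sum(axis=0)
    return votes > (len(prob_maps) // 2)
```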

  13. Differentiation between Superficial and Deep Lobe Parotid Tumors by Magnetic Resonance Imaging: Usefulness of the Parotid Duct Criterion

    International Nuclear Information System (INIS)

    Imaizumi, A.; Kuribayashi, A.; Okochi, K.; Yoshino, N.; Kurabayashi, T.; Ishii, J.; Sumi, Y.

    2009-01-01

    Background: The location of a parotid tumor affects the choice of surgery, and there is a risk of damaging the facial nerve during surgery. Thus, differentiation between superficial and deep lobe parotid tumors is important for appropriate surgical planning. Purpose: To evaluate the usefulness of using the parotid duct, in addition to the retromandibular vein, for differentiating between superficial and deep lobe parotid tumors on MR images. Material and Methods: Magnetic resonance images of 42 parotid tumors in 40 patients were reviewed to determine whether the tumor was located in the superficial or deep lobe. In each case, the retromandibular vein and the parotid duct were used to locate the tumor. The parotid duct was only used in cases where the tumor and the duct were visualized on the same image. Results: Using the retromandibular vein criterion, 71% of deep lobe and 86% of superficial lobe tumors were correctly diagnosed, providing an accuracy of 81%. However, the accuracy achieved when using the parotid duct criterion was 100%, although it could be applied to only 28 of the 42 cases. Based on these results, we defined the following diagnostic method: the parotid duct criterion is first applied, and for cases in which it cannot be applied, the retromandibular vein criterion is used. The accuracy of this method was 88%, which was better than that achieved using the retromandibular vein criterion alone. Conclusion: The parotid duct criterion is useful for determining the location of parotid tumors. Combining the parotid duct criterion with the retromandibular vein criterion might improve the diagnostic accuracy of parotid tumor location compared to using the latter criterion alone.

  14. Differentiation between Superficial and Deep Lobe Parotid Tumors by Magnetic Resonance Imaging: Usefulness of the Parotid Duct Criterion

    Energy Technology Data Exchange (ETDEWEB)

    Imaizumi, A.; Kuribayashi, A.; Okochi, K.; Yoshino, N.; Kurabayashi, T. (Oral and Maxillofacial Radiology, Graduate School, Tokyo Medical and Dental Univ., Tokyo (Japan)); Ishii, J. (Maxillofacial Surgery, Graduate School, Tokyo Medical and Dental Univ., Tokyo (Japan)); Sumi, Y. (Division of Oral and Dental Surgery, Dept. of Advanced Medicine, National Center for Geriatrics and Gerontology, Aichi (Japan))

    2009-08-15

    Background: The location of a parotid tumor affects the choice of surgery, and there is a risk of damaging the facial nerve during surgery. Thus, differentiation between superficial and deep lobe parotid tumors is important for appropriate surgical planning. Purpose: To evaluate the usefulness of using the parotid duct, in addition to the retromandibular vein, for differentiating between superficial and deep lobe parotid tumors on MR images. Material and Methods: Magnetic resonance images of 42 parotid tumors in 40 patients were reviewed to determine whether the tumor was located in the superficial or deep lobe. In each case, the retromandibular vein and the parotid duct were used to locate the tumor. The parotid duct was only used in cases where the tumor and the duct were visualized on the same image. Results: Using the retromandibular vein criterion, 71% of deep lobe and 86% of superficial lobe tumors were correctly diagnosed, providing an accuracy of 81%. However, the accuracy achieved when using the parotid duct criterion was 100%, although it could be applied to only 28 of the 42 cases. Based on these results, we defined the following diagnostic method: the parotid duct criterion is first applied, and for cases in which it cannot be applied, the retromandibular vein criterion is used. The accuracy of this method was 88%, which was better than that achieved using the retromandibular vein criterion alone. Conclusion: The parotid duct criterion is useful for determining the location of parotid tumors. Combining the parotid duct criterion with the retromandibular vein criterion might improve the diagnostic accuracy of parotid tumor location compared to using the latter criterion alone.

  15. Scattering Operator and Spectral Clustering for Ultrasound Images: Application on Deep Venous Thrombi

    OpenAIRE

    Thibaud Berthomier; Ali Mansour; Luc Bressollette; Frédéric Le Roy; Dominique Mottier; Léo Fréchier; Barthélémy Hermenault

    2017-01-01

    Deep Venous Thrombosis (DVT) occurs when a thrombus is formed within a deep vein (most often in the legs). This disease can be deadly if a part or the whole thrombus reaches the lung and causes a Pulmonary Embolism (PE). This disorder, often asymptomatic, has multifactorial causes: immobilization, surgery, pregnancy, age, cancers, and genetic variations. Our project aims to relate the thrombus epidemiology (origins, patient predispositions, PE) to its structure using ultr...

  16. Image inpainting and super-resolution using non-local recursive deep convolutional network with skip connections

    Science.gov (United States)

    Liu, Miaofeng

    2017-07-01

    In recent years, deep convolutional neural networks have come into use for image inpainting and super-resolution in many fields. Unlike most former methods, which require prior knowledge of the locations of corrupted pixels, we propose a 20-layer fully convolutional network that learns an end-to-end mapping from a dataset of damaged/ground-truth subimage pairs, realizing non-local blind inpainting and super-resolution. Because existing approaches perform poorly on images with large corruptions, or when inpainting must be performed on a low-resolution image, we also share parameters within local groups of layers to achieve spatial recursion and enlarge the receptive field. To avoid the difficulty of training this deep neural network, skip connections between symmetric convolutional layers are designed. Experimental results show that the proposed method outperforms state-of-the-art methods for diverse corruption and low-resolution conditions, and it works excellently when realizing super-resolution and image inpainting simultaneously.
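
    The skip connections between symmetric convolutional layers can be sketched as follows. Depth, channel counts, and the spatial recursion/parameter sharing are simplified away, so this illustrates only the symmetric-skip idea under assumed sizes, not the paper's 20-layer network.

        # Encoder-decoder with skip connections between symmetric conv layers:
        # each decoder layer adds back the activation of its mirrored encoder.
        import torch
        import torch.nn as nn

        class SkipNet(nn.Module):
            def __init__(self, ch=32, n_pairs=4):
                super().__init__()
                self.enc = nn.ModuleList(
                    [nn.Conv2d(3 if i == 0 else ch, ch, 3, padding=1)
                     for i in range(n_pairs)])
                self.dec = nn.ModuleList(
                    [nn.Conv2d(ch, ch, 3, padding=1) for _ in range(n_pairs)])
                self.out = nn.Conv2d(ch, 3, 3, padding=1)

            def forward(self, x):
                skips = []
                for conv in self.enc:
                    x = torch.relu(conv(x))
                    skips.append(x)
                for conv in self.dec:
                    x = torch.relu(conv(x) + skips.pop())  # symmetric skip
                return self.out(x)

        restored = SkipNet()(torch.randn(1, 3, 64, 64))  # damaged patch in, clean out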

  17. Effect of pathological heterogeneity on shear wave elasticity imaging in the staging of deep venous thrombosis.

    Directory of Open Access Journals (Sweden)

    Xiaona Liu

    We aimed to observe the relationship between the pathological components of a deep venous thrombus (DVT), which was divided into three parts, and the findings on quantitative ultrasonic shear wave elastography (SWE) to increase the accuracy of thrombus staging in a rabbit model. A flow stenosis-induced vein thrombosis model was used, and the thrombus was divided into three parts (head, body and tail), which were associated with corresponding observation points. Elasticity was quantified in vivo using SWE over a 2-week period. A quantitative pathologic image analysis (QPIA) was performed to obtain the relative percentages of the components of the main clots. DVT maturity occurred at 2 weeks, and the elasticity of the whole thrombus and the three parts (head, body and tail) showed an increasing trend, with the Young's modulus values varying from 2.36 ± 0.41 kPa to 13.24 ± 1.71 kPa; 2.01 ± 0.28 kPa to 13.29 ± 1.48 kPa; 3.27 ± 0.57 kPa to 15.91 ± 2.05 kPa; and 1.79 ± 0.36 kPa to 10.51 ± 1.61 kPa, respectively. Significant increases occurred on different days for the different parts: the head showed significant increases on days 4 and 6; the body showed significant increases on days 4 and 7; and the tail showed significant increases on days 3 and 6. The QPIA showed that the thrombus composition changed dynamically as the thrombus matured, with the fibrin and calcium salt deposition gradually increasing and the red blood cells (RBCs) and platelet trabecula gradually decreasing. Significant changes were observed on days 4 and 7, which may represent the transition points for acute, sub-acute and chronic thrombi. Significant heterogeneity was observed between and within the thrombi. Variations in the thrombus components were generally consistent between the SWE and QPIA. Days 4 and 7 after thrombus induction may represent the transition points for acute, sub-acute and chronic thrombi in rabbit models. A dynamic examination of the same part of the thrombus

  18. Effect of pathological heterogeneity on shear wave elasticity imaging in the staging of deep venous thrombosis.

    Science.gov (United States)

    Liu, Xiaona; Li, Na; Wen, Chaoyang

    2017-01-01

    We aimed to observe the relationship between the pathological components of a deep venous thrombus (DVT), which was divided into three parts, and the findings on quantitative ultrasonic shear wave elastography (SWE) to increase the accuracy of thrombus staging in a rabbit model. A flow stenosis-induced vein thrombosis model was used, and the thrombus was divided into three parts (head, body and tail), which were associated with corresponding observation points. Elasticity was quantified in vivo using SWE over a 2-week period. A quantitative pathologic image analysis (QPIA) was performed to obtain the relative percentages of the components of the main clots. DVT maturity occurred at 2 weeks, and the elasticity of the whole thrombus and the three parts (head, body and tail) showed an increasing trend, with the Young's modulus values varying from 2.36 ± 0.41 kPa to 13.24 ± 1.71 kPa; 2.01 ± 0.28 kPa to 13.29 ± 1.48 kPa; 3.27 ± 0.57 kPa to 15.91 ± 2.05 kPa; and 1.79 ± 0.36 kPa to 10.51 ± 1.61 kPa, respectively. Significant increases occurred on different days for the different parts: the head showed significant increases on days 4 and 6; the body showed significant increases on days 4 and 7; and the tail showed significant increases on days 3 and 6. The QPIA showed that the thrombus composition changed dynamically as the thrombus matured, with the fibrin and calcium salt deposition gradually increasing and the red blood cells (RBCs) and platelet trabecula gradually decreasing. Significant changes were observed on days 4 and 7, which may represent the transition points for acute, sub-acute and chronic thrombi. Significant heterogeneity was observed between and within the thrombi. Variations in the thrombus components were generally consistent between the SWE and QPIA. Days 4 and 7 after thrombus induction may represent the transition points for acute, sub-acute and chronic thrombi in rabbit models. A dynamic examination of the same part of the thrombus may be

  19. Imaging the Variscan suture at the KTB deep drilling site, Germany

    Science.gov (United States)

    Bianchi, Irene; Bokelmann, Götz

    2018-06-01

    The upper crust of the KTB (Kontinentales Tiefbohrprogramm) area in southeastern Germany is a focal point for the Earth Science community due to the huge amount of information collected there over the last 30 years. In this study, we explore the crustal structure of the KTB area through the application of the Receiver Function (RF) technique to a new data set recorded by nine temporary seismic stations and one permanent station. We aim to unravel the isotropic structure and compare our results with previous information from the reflection profiles collected during the initial site investigations. Given the large amount of information collected by previous studies, in terms of P-wave velocity, depth and location of major reflectors, and depth reconstruction of major fault zones, this area represents a unique opportunity to test the resolution capability of a passive seismological study based on the RF technique. We aim to verify what contribution the RF technique could make to future studies seeking clear images of the deep structure, and at what resolution. The RF technique has apparently not been applied in the area before, yet it may give useful additional insight into subsurface structure, particularly at depths greater than the maximum depth reached by drilling, but also into structures in the upper crust, around the area that has been studied in detail previously. In our results, vS-depth profiles for stations located on the same geological units display common features and show shallow S-wave velocities typical of the outcropping geological units (i.e. sedimentary basin, granites and metamorphic rocks). At around 10 km depth, we observe a strong velocity increase beneath all stations. For the stations located in the centre of the area, this variation is weaker, which we assume to be the signature of the main tectonic suture in the area (i.e. the Saxothuringian-Moldanubian suture), along a west-to-east extended

  20. A deep learning approach to estimate chemically-treated collagenous tissue nonlinear anisotropic stress-strain responses from microscopy images.

    Science.gov (United States)

    Liang, Liang; Liu, Minliang; Sun, Wei

    2017-11-01

    Biological collagenous tissues composed of networks of collagen fibers are suitable for a broad spectrum of medical applications owing to their attractive mechanical properties. In this study, we developed a noninvasive approach to estimate collagenous tissue elastic properties directly from microscopy images using Machine Learning (ML) techniques. Glutaraldehyde-treated bovine pericardium (GLBP) tissue, widely used in the fabrication of bioprosthetic heart valves and vascular patches, was chosen to develop a representative application. A Deep Learning model was designed and trained to process second harmonic generation (SHG) images of collagen networks in GLBP tissue samples and directly predict the tissue's elastic mechanical properties. The trained model is capable of identifying the overall tissue stiffness with a classification accuracy of 84%, and of predicting the nonlinear anisotropic stress-strain curves with average regression errors of 0.021 and 0.031. Thus, this study demonstrates the feasibility and great potential of using the Deep Learning approach for fast and noninvasive assessment of collagenous tissue elastic properties from microstructural images. In this study, we developed, to the best of our knowledge, the first Deep Learning-based approach to estimate the elastic properties of collagenous tissues directly from noninvasive second harmonic generation images. The success of this study holds promise for the use of Machine Learning techniques to noninvasively and efficiently estimate the mechanical properties of many structure-based biological materials, and it also enables many potential applications such as serving as a quality control tool to select tissue for the manufacturing of medical devices (e.g. bioprosthetic heart valves). Copyright © 2017 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
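
    One plausible reading of the model's two outputs (an overall stiffness class and a discretized stress-strain curve) is a shared convolutional trunk with a classification head and a regression head. The sketch below encodes that assumption with illustrative sizes; it is not the authors' architecture.

        # Multi-task CNN: SHG image -> (stiffness class logits, stress values
        # sampled at fixed strain points). All sizes are assumptions.
        import torch
        import torch.nn as nn

        class TissueNet(nn.Module):
            def __init__(self, n_classes=3, n_curve_pts=40):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 16, 5), nn.ReLU(), nn.MaxPool2d(4),
                    nn.Conv2d(16, 32, 5), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
                    nn.Flatten())
                self.stiffness = nn.Linear(32, n_classes)  # classification head
                self.curve = nn.Linear(32, n_curve_pts)    # regression head

            def forward(self, shg):
                f = self.features(shg)
                return self.stiffness(f), self.curve(f)

        logits, curve = TissueNet()(torch.randn(1, 1, 256, 256))  # one SHG image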

  1. Automated Grading of Gliomas using Deep Learning in Digital Pathology Images: A modular approach with ensemble of convolutional neural networks.

    Science.gov (United States)

    Ertosun, Mehmet Günhan; Rubin, Daniel L

    2015-01-01

    Brain gliomas are the most common primary malignant brain tumors in adults, with several pathologic subtypes: Lower Grade Glioma (LGG) Grade II, Lower Grade Glioma (LGG) Grade III, and Glioblastoma Multiforme (GBM) Grade IV. Survival and treatment options are highly dependent on the glioma grade. We propose a deep learning-based, modular classification pipeline for automated grading of gliomas using digital pathology images. Whole tissue digitized images of pathology slides obtained from The Cancer Genome Atlas (TCGA) were used to train our deep learning modules. Our modular pipeline provides diagnostic quality statistics, such as precision, sensitivity and specificity, for the individual deep learning modules, and (1) facilitates training given the limited data in this domain, (2) enables exploration of different deep learning structures for each module, (3) leads to developing less complex modules that are simpler to analyze, and (4) provides flexibility, permitting use of single modules within the framework or use of other modeling or machine learning applications, such as probabilistic graphical models or support vector machines. Our modular approach helps us meet the requirements of minimum accuracy levels that are demanded by the context of different decision points within a multi-class classification scheme. Convolutional neural networks were trained for each module and sub-task, with more than 90% classification accuracy on the validation data set, and achieved a classification accuracy of 96% for the task of GBM vs. LGG classification and 71% for further grading LGG into Grade II or Grade III on an independent data set from new patients from the multi-institutional repository.
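
    The modular structure amounts to a decision cascade, sketched below with stub objects in place of the trained CNN modules; only the control flow (GBM vs. LGG first, then LGG Grade II vs. III) is taken from the record, and the stubs are hypothetical.

        # Modular decision cascade: module 1 separates GBM from LGG, module 2
        # grades LGG cases. Stubs stand in for trained per-module classifiers.
        class StubModule:
            def __init__(self, answer):
                self.answer = answer
            def predict(self, image):
                return self.answer

        def grade_glioma(image, gbm_vs_lgg, lgg_grade):
            if gbm_vs_lgg.predict(image) == "GBM":
                return "Grade IV (GBM)"   # decision point 1 (96% reported)
            return f"Grade {lgg_grade.predict(image)}"  # decision point 2 (71%)

        print(grade_glioma(None, StubModule("LGG"), StubModule("III")))  # Grade III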

  2. Multi-categorical deep learning neural network to classify retinal images: A pilot study employing small database.

    Science.gov (United States)

    Choi, Joon Yul; Yoo, Tae Keun; Seo, Jeong Gi; Kwak, Jiyong; Um, Terry Taewoong; Rim, Tyler Hyungtaek

    2017-01-01

    Deep learning is emerging as a powerful tool for analyzing medical images. Retinal disease detection using computer-aided diagnosis from fundus images has emerged as a new method. We applied a deep convolutional neural network, implemented with MatConvNet, for automated detection of multiple retinal diseases using fundus photographs from the STructured Analysis of the REtina (STARE) database. The dataset was built by expanding data on 10 categories, including normal retina and nine retinal diseases. The optimal outcomes were acquired using transfer learning based on the VGG-19 architecture combined with a random forest classifier. The classification results depended greatly on the number of categories. As the number of categories increased, the performance of the deep learning models diminished. When all 10 categories were included, we obtained results with an accuracy of 30.5%, relative classifier information (RCI) of 0.052, and Cohen's kappa of 0.224. Considering three integrated categories (normal, background diabetic retinopathy, and dry age-related macular degeneration), the multi-categorical classifier showed an accuracy of 72.8%, 0.283 RCI, and 0.577 kappa. In addition, several ensemble classifiers enhanced the multi-categorical classification performance. Transfer learning incorporated with an ensemble classifier using a clustering and voting approach presented the best performance, with an accuracy of 36.7%, 0.053 RCI, and 0.225 kappa on the 10 retinal disease classification problem. First, due to the small size of the datasets, the deep learning techniques in this study were not effective enough to be applied in clinics, where numerous patients suffering from various types of retinal disorders visit for diagnosis and treatment. Second, we found that transfer learning incorporated with ensemble classifiers can improve the classification performance for detecting multi-categorical retinal diseases. Further studies should confirm the effectiveness of the algorithms with large datasets obtained from hospitals.
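
    The "transfer learning plus ensemble classifier" recipe can be sketched as a pretrained VGG-19 used as a fixed feature extractor feeding a random forest. The paper used MatConvNet; PyTorch and scikit-learn stand in for it here, and the images and labels below are toy placeholders.

        # Pretrained VGG-19 as fixed feature extractor + random forest classifier.
        # Note: loading IMAGENET1K_V1 weights downloads them on first use.
        import torch
        import torchvision.models as models
        from sklearn.ensemble import RandomForestClassifier

        vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)
        vgg.classifier = torch.nn.Identity()   # keep flattened conv features
        vgg.eval()

        with torch.no_grad():
            imgs = torch.randn(32, 3, 224, 224)    # fundus photographs (toy)
            feats = vgg(imgs).numpy()              # deep features per image

        labels = [i % 10 for i in range(32)]       # 10 retinal categories (toy)
        clf = RandomForestClassifier(n_estimators=200).fit(feats, labels)
        pred = clf.predict(feats[:4])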

  3. Setup error and motion during deep inspiration breath-hold breast radiotherapy measured with continuous portal imaging

    DEFF Research Database (Denmark)

    Lutz, Christina Maria; Poulsen, Per Rugaard; Fledelius, Walther

    2016-01-01

    BACKGROUND: The position and residual motion of the chest wall of breast cancer patients during treatment in deep inspiration breath-hold (DIBH) were investigated. MATERIAL AND METHODS: The study included 58 left-sided breast cancer patients treated with DIBH three-dimensional (3D) conformal (...). At every third treatment fraction, continuous portal images were acquired. The time-resolved chest wall position during treatment was compared with the planned position to determine the inter-fraction setup errors and the intra-fraction motion of the chest wall. RESULTS: The DIBH compliance was 95% during (...)

  4. The role of molecular imaging in diagnosis of deep vein thrombosis

    DEFF Research Database (Denmark)

    Houshmand, Sina; Salavati, Ali; Hess, Søren

    2014-01-01

    Venous thromboembolism (VTE) mostly presenting as deep venous thrombosis (DVT) and pulmonary embolism (PE) affects up to 600,000 individuals in United States each year. Clinical symptoms of VTE are nonspecific and sometimes misleading. Additionally, side effects of available treatment plans for D...

  5. Multimodality imaging in the diagnosis of deep vein thrombosis and popliteal pseudoaneurysm complicating a sessile osteochondroma

    Energy Technology Data Exchange (ETDEWEB)

    Christensen, Jared D.; Monu, Johnny U.V. [University of Rochester School of Medicine and Dentistry, Department of Imaging Sciences, 601 Elmwood Ave., Box 648, Rochester, NY (United States)

    2008-08-15

    Synergistic use of ultrasonography, radiography, multidetector CT (MDCT) and MRI enabled a prompt and accurate diagnosis of a nonocclusive popliteal vein thrombus (deep venous thrombosis, DVT) and a pseudoaneurysm complicating a sessile osteochondroma in an 11-year-old boy who presented in the emergency department with sudden-onset nontraumatic pain in the posterior aspect of the knee. (orig.)

  6. Reflection imaging of the Moon's interior using deep-moonquake seismic interferometry

    NARCIS (Netherlands)

    Nishitsuji, Y.; Rowe, CA; Wapenaar, C.P.A.; Draganov, D.S.

    2016-01-01

    The internal structure of the Moon has been investigated over many years using a variety of seismic methods, such as travel time analysis, receiver functions, and tomography. Here we propose to apply body-wave seismic interferometry to deep moonquakes in order to retrieve zero-offset reflection

  7. ISTA-Net: Iterative Shrinkage-Thresholding Algorithm Inspired Deep Network for Image Compressive Sensing

    KAUST Repository

    Zhang, Jian; Ghanem, Bernard

    2017-01-01

    and the performance/speed of network-based ones. We propose a novel structured deep network, dubbed ISTA-Net, which is inspired by the Iterative Shrinkage-Thresholding Algorithm (ISTA) for optimizing a general $l_1$ norm CS reconstruction model. ISTA-Net essentially
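
    For reference, classical ISTA for the l1-regularized reconstruction problem min_x 0.5*||y - A x||^2 + lam*||x||_1, which ISTA-Net unrolls into a learned network, looks as follows. The problem sizes and parameters below are toy stand-ins.

        # Iterative Shrinkage-Thresholding Algorithm (ISTA) for l1-regularized
        # compressive sensing reconstruction, on a synthetic sparse signal.
        import numpy as np

        def soft_threshold(v, t):
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        def ista(A, y, lam=0.1, n_iter=200):
            L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                grad = A.T @ (A @ x - y)               # gradient step (data term)
                x = soft_threshold(x - grad / L, lam / L)  # shrinkage step
            return x

        rng = np.random.default_rng(0)
        A = rng.normal(size=(64, 256))         # sensing matrix, m << n
        x_true = np.zeros(256)
        x_true[rng.choice(256, 8, replace=False)] = 1.0   # sparse ground truth
        x_hat = ista(A, A @ x_true, lam=0.05)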

  8. Compression-induced deep tissue injury examined with magnetic resonance imaging and histology

    NARCIS (Netherlands)

    Stekelenburg, A.; Oomens, C. W. J.; Strijkers, G. J.; Nicolay, K.; Bader, D. L.

    2006-01-01

    The underlying mechanisms leading to deep tissue injury after sustained compressive loading are not well understood. It is hypothesized that initial damage to muscle fibers is induced mechanically by local excessive deformation. Therefore, in this study, an animal model was used to study early

  9. Global imaging of the Earth's deep interior: seismic constraints on (an)isotropy, density and attenuation

    NARCIS (Netherlands)

    Trampert, J.; Fichtner, A.

    2013-01-01

    Seismic tomography is the principal tool to probe the deep interior of the Earth. Models of seismic anisotropy induced by crystal alignment provide insight into the underlying convective motion, and variations of density allow us to discriminate between thermal and compositional heterogeneities.

  10. SU-C-207B-07: Deep Convolutional Neural Network Image Matching for Ultrasound Guidance in Radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, N; Najafi, M; Hancock, S; Hristov, D [Stanford University Cancer Center, Palo Alto, CA (United States)

    2016-06-15

    Purpose: Robust matching of ultrasound images is a challenging problem, as images of the same anatomy often present non-trivial differences. This poses an obstacle for ultrasound guidance in radiotherapy. Our objective is therefore to overcome this obstacle by designing and evaluating an image-block matching framework based on a two-channel deep convolutional neural network. Methods: We extend to 3D an algorithmic structure previously introduced for 2D image feature learning [1]. To obtain the similarity between two 3D image blocks A and B, the 3D image blocks are divided into 2D patches Ai and Bi. The similarity is then calculated as the average similarity score of Ai and Bi. The neural network was trained with public non-medical image pairs, and subsequently evaluated on ultrasound image blocks for the following scenarios: (S1) same image blocks with/without shifts (A and A-shift-x); (S2) non-related random block pairs; (S3) ground truth registration matched pairs of different ultrasound images with/without shifts (A-i and A-reg-i-shift-x). Results: For S1 the similarity scores of A and A-shift-x were 32.63, 18.38, 12.95, 9.23, 2.15 and 0.43 for x ranging from 0 mm to 10 mm in 2 mm increments. For S2 the average similarity score for non-related block pairs was −1.15. For S3 the average similarity score of ground truth registration matched blocks A-i and A-reg-i-shift-0 (1≤i≤5) was 12.37. After translating A-reg-i-shift-0 by 2 mm, 4 mm, 6 mm, 8 mm, and 10 mm, the average similarity scores of A-i and A-reg-i-shift-x were 11.04, 8.42, 4.56, 2.27, and 0.29, respectively. Conclusion: The proposed method correctly assigns the highest similarity to corresponding 3D ultrasound image blocks despite differences in image content, and can thus form the basis for ultrasound image registration and tracking. [1] Zagoruyko, Komodakis, "Learning to compare image patches via convolutional neural networks", IEEE CVPR 2015, pp. 4353–4361.
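
    The block-scoring rule (the similarity of two 3D blocks is the average two-channel network score over corresponding 2D patches) can be sketched as follows. PatchNet is a simplified stand-in for the two-channel architecture of [1], with assumed layer sizes.

        # 3D block similarity as the mean two-channel CNN score over 2D slices.
        import torch
        import torch.nn as nn

        class PatchNet(nn.Module):             # two-channel 2D patch comparator
            def __init__(self):
                super().__init__()
                self.conv = nn.Sequential(
                    nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1))
                self.fc = nn.Linear(16, 1)     # scalar similarity score

            def forward(self, a, b):           # a, b: (n, 1, h, w) patch stacks
                return self.fc(self.conv(torch.cat([a, b], dim=1)).flatten(1))

        def block_similarity(net, A, B):
            """A, B: (d, h, w) 3D ultrasound blocks; score = mean over 2D slices."""
            return net(A.unsqueeze(1), B.unsqueeze(1)).mean()

        net = PatchNet()
        score = block_similarity(net, torch.randn(16, 32, 32),
                                 torch.randn(16, 32, 32))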

  11. SU-C-207B-07: Deep Convolutional Neural Network Image Matching for Ultrasound Guidance in Radiotherapy

    International Nuclear Information System (INIS)

    Zhu, N; Najafi, M; Hancock, S; Hristov, D

    2016-01-01

    Purpose: Robust matching of ultrasound images is a challenging problem, as images of the same anatomy often present non-trivial differences. This poses an obstacle for ultrasound guidance in radiotherapy. Our objective is therefore to overcome this obstacle by designing and evaluating an image-block matching framework based on a two-channel deep convolutional neural network. Methods: We extend to 3D an algorithmic structure previously introduced for 2D image feature learning [1]. To obtain the similarity between two 3D image blocks A and B, the 3D image blocks are divided into 2D patches Ai and Bi. The similarity is then calculated as the average similarity score of Ai and Bi. The neural network was trained with public non-medical image pairs, and subsequently evaluated on ultrasound image blocks for the following scenarios: (S1) same image blocks with/without shifts (A and A-shift-x); (S2) non-related random block pairs; (S3) ground truth registration matched pairs of different ultrasound images with/without shifts (A-i and A-reg-i-shift-x). Results: For S1 the similarity scores of A and A-shift-x were 32.63, 18.38, 12.95, 9.23, 2.15 and 0.43 for x ranging from 0 mm to 10 mm in 2 mm increments. For S2 the average similarity score for non-related block pairs was −1.15. For S3 the average similarity score of ground truth registration matched blocks A-i and A-reg-i-shift-0 (1≤i≤5) was 12.37. After translating A-reg-i-shift-0 by 2 mm, 4 mm, 6 mm, 8 mm, and 10 mm, the average similarity scores of A-i and A-reg-i-shift-x were 11.04, 8.42, 4.56, 2.27, and 0.29, respectively. Conclusion: The proposed method correctly assigns the highest similarity to corresponding 3D ultrasound image blocks despite differences in image content, and can thus form the basis for ultrasound image registration and tracking. [1] Zagoruyko, Komodakis, "Learning to compare image patches via convolutional neural networks", IEEE CVPR 2015, pp. 4353–4361.

  12. Satellite Image Classification of Building Damages Using Airborne and Satellite Image Samples in a Deep Learning Approach

    Science.gov (United States)

    Duarte, D.; Nex, F.; Kerle, N.; Vosselman, G.

    2018-05-01

    The localization and detailed assessment of damaged buildings after a disastrous event is of utmost importance to guide response operations, recovery tasks or insurance activities. Several remote sensing platforms and sensors are currently used for the manual detection of building damages. However, there is an overall interest in the use of automated methods to perform this task, regardless of the platform used. Owing to its synoptic coverage and predictable availability, satellite imagery is currently used as input for the identification of building damages by the International Charter, as well as by the Copernicus Emergency Management Service for the production of damage grading and reference maps. Recently proposed methods to perform image classification of building damages rely on convolutional neural networks (CNN). These are usually trained with only satellite image samples in a binary classification problem; however, the number of samples derived from these images is often limited, affecting the quality of the classification results. The use of up/down-sampled image samples during the training of a CNN has been demonstrated to improve several image recognition tasks in remote sensing. However, it is currently unclear whether this multi-resolution information can also be captured from images with different spatial resolutions, such as satellite and airborne imagery (from both manned and unmanned platforms). In this paper, a CNN framework using residual connections and dilated convolutions is used, considering both manned and unmanned aerial image samples, to perform the satellite image classification of building damages. Three network configurations trained with multi-resolution image samples are compared against two benchmark networks where only satellite image samples are used. Combining feature maps generated from airborne and satellite image samples, and refining these using only the satellite image samples, improved by nearly 4% the overall satellite image

  13. Motor and Nonmotor Circuitry Activation Induced by Subthalamic Nucleus Deep Brain Stimulation in Patients With Parkinson Disease: Intraoperative Functional Magnetic Resonance Imaging for Deep Brain Stimulation.

    Science.gov (United States)

    Knight, Emily J; Testini, Paola; Min, Hoon-Ki; Gibson, William S; Gorny, Krzysztof R; Favazza, Christopher P; Felmlee, Joel P; Kim, Inyong; Welker, Kirk M; Clayton, Daniel A; Klassen, Bryan T; Chang, Su-youne; Lee, Kendall H

    2015-06-01

    To test the hypothesis suggested by previous studies that subthalamic nucleus (STN) deep brain stimulation (DBS) in patients with Parkinson disease would affect the activity of motor and nonmotor networks, we applied intraoperative functional magnetic resonance imaging (fMRI) to patients receiving DBS. Ten patients receiving STN DBS for Parkinson disease underwent intraoperative 1.5-T fMRI during high-frequency stimulation delivered via an external pulse generator. The study was conducted between January 1, 2013, and September 30, 2014. We observed blood oxygen level-dependent (BOLD) signal changes (false discovery rate <0.001) in the motor circuitry (including the primary motor, premotor, and supplementary motor cortices; thalamus; pedunculopontine nucleus; and cerebellum) and in the limbic circuitry (including the cingulate and insular cortices). Activation of the motor network was observed also after applying a Bonferroni correction (P<.001) to the data set, suggesting that across patients, BOLD changes in the motor circuitry are more consistent compared with those occurring in the nonmotor network. These findings support the modulatory role of STN DBS on the activity of motor and nonmotor networks and suggest complex mechanisms as the basis of the efficacy of this treatment modality. Furthermore, these results suggest that across patients, BOLD changes in the motor circuitry are more consistent than those in the nonmotor network. With further studies combining the use of real-time intraoperative fMRI with clinical outcomes in patients treated with DBS, functional imaging techniques have the potential not only to elucidate the mechanisms of DBS functioning but also to guide and assist in the surgical treatment of patients affected by movement and neuropsychiatric disorders. clinicaltrials.gov Identifier: NCT01809613. Copyright © 2015 Mayo Foundation for Medical Education and Research. Published by Elsevier Inc. All rights reserved.

  14. Learning scale-variant and scale-invariant features for deep image classification

    NARCIS (Netherlands)

    van Noord, Nanne; Postma, Eric

    Convolutional Neural Networks (CNNs) require large image corpora to be trained on classification tasks. The variation in image resolutions, sizes of objects and patterns depicted, and image scales, hampers CNN training and performance, because the task-relevant information varies over spatial

  15. Improved contrast deep optoacoustic imaging using displacement-compensated averaging: breast tumour phantom studies

    Energy Technology Data Exchange (ETDEWEB)

    Jaeger, M; Preisser, S; Kitz, M; Frenz, M [Institute of Applied Physics, University of Bern, Sidlerstrasse 5, CH-3012 Bern (Switzerland); Ferrara, D; Senegas, S; Schweizer, D, E-mail: frenz@iap.unibe.ch [Fukuda Denshi Switzerland AG, Reinacherstrasse 131, CH-4002 Basel (Switzerland)

    2011-09-21

    For real-time optoacoustic (OA) imaging of the human body, a linear array transducer and reflection-mode optical irradiation are usually preferred. Such a setup, however, results in significant image background, which prevents imaging structures at the ultimate depth determined by the light distribution and the signal noise level. Therefore, we previously proposed a method for image background reduction based on displacement-compensated averaging (DCA) of image series obtained when the tissue sample under investigation is gradually deformed. OA signals and background signals are differently affected by the deformation and can thus be distinguished. The proposed method is now experimentally applied to image artificial tumours embedded inside breast phantoms. OA images are acquired alternately with pulse-echo images using a combined OA/echo-ultrasound device. Tissue deformation is assessed via speckle tracking in pulse-echo images and used to compensate the OA images for the local tissue displacement. In that way, OA sources are highly correlated between subsequent images, while background is decorrelated and can therefore be reduced by averaging. We show that image contrast in breast phantoms is strongly improved and the detectability of embedded tumours significantly increased using the DCA method.
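
    Displacement-compensated averaging reduces to three steps per frame: track the tissue motion in the pulse-echo image, warp the co-acquired OA frame back, and accumulate. The sketch below uses a crude global cross-correlation search as a stand-in for speckle tracking; the shapes and the toy call are assumptions.

        # DCA sketch: estimate displacement from echo frames, compensate the
        # OA frames, then average so OA sources add coherently while the
        # deformation-decorrelated background averages out.
        import numpy as np
        from scipy.ndimage import shift as subpixel_shift

        def estimate_shift(ref, cur, max_lag=5):
            """Global displacement by cross-correlation (speckle-tracking stand-in)."""
            best, arg = -np.inf, (0, 0)
            for dy in range(-max_lag, max_lag + 1):
                for dx in range(-max_lag, max_lag + 1):
                    score = np.sum(ref * np.roll(cur, (dy, dx), axis=(0, 1)))
                    if score > best:
                        best, arg = score, (dy, dx)
            return arg

        def dca(oa_frames, echo_frames):
            ref = echo_frames[0]
            acc = np.zeros_like(oa_frames[0])
            for oa, echo in zip(oa_frames, echo_frames):
                dy, dx = estimate_shift(ref, echo)   # track tissue from echo data
                acc += subpixel_shift(oa, (dy, dx))  # compensate OA frame, then sum
            return acc / len(oa_frames)

        rng = np.random.default_rng(0)
        frames = [rng.normal(size=(64, 64)) for _ in range(5)]
        averaged = dca(frames, frames)  # toy call; real data pairs OA with echo frames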

  16. A deep learning approach for detecting and correcting highlights in endoscopic images

    NARCIS (Netherlands)

    Rodriguez-Sanchez, Antonio; Chea, Daly; Azzopardi, George; Stabinger, Sebastian

    2017-01-01

    The image of an object changes dramatically depending on the lighting conditions surrounding that object. Shadows, reflections and highlights can make the object very difficult to recognize for an automatic system. Additionally, images used in medical applications, such as endoscopic images and

  17. High-Resolution Ultrasound-Switchable Fluorescence Imaging in Centimeter-Deep Tissue Phantoms with High Signal-To-Noise Ratio and High Sensitivity via Novel Contrast Agents.

    Science.gov (United States)

    Cheng, Bingbing; Bandi, Venugopal; Wei, Ming-Yuan; Pei, Yanbo; D'Souza, Francis; Nguyen, Kytai T; Hong, Yi; Yuan, Baohong

    2016-01-01

    For many years, investigators have sought high-resolution fluorescence imaging in centimeter-deep tissue, because many interesting in vivo phenomena, such as the presence of immune system cells, tumor angiogenesis, and metastasis, may be located deep in tissue. Previously, we developed a new imaging technique to achieve high spatial resolution in sub-centimeter-deep tissue phantoms, named continuous-wave ultrasound-switchable fluorescence (CW-USF). The principle is to use a focused ultrasound wave to externally and locally switch on and off the fluorophore emission from a small volume (close to the ultrasound focal volume). By making improvements in three aspects of this technique (excellent near-infrared USF contrast agents, a sensitive frequency-domain USF imaging system, and an effective signal processing algorithm), this study has for the first time achieved high spatial resolution (~900 μm) in 3-centimeter-deep tissue phantoms with high signal-to-noise ratio (SNR) and high sensitivity (3.4 picomoles of fluorophore in a volume of 68 nanoliters can be detected). We have achieved these results in both tissue-mimicking phantoms and porcine muscle tissues. We have also demonstrated multi-color USF to image and distinguish two fluorophores with different wavelengths, which might be very useful for simultaneously imaging multiple targets and observing their interactions in the future. This work has opened the door for future studies of high-resolution centimeter-deep tissue fluorescence imaging.

  18. Predicting perceptual quality of images in realistic scenario using deep filter banks

    Science.gov (United States)

    Zhang, Weixia; Yan, Jia; Hu, Shiyong; Ma, Yang; Deng, Dexiang

    2018-03-01

    Classical image perceptual quality assessment models usually resort to natural scene statistics methods, which are based on the assumption that certain reliable statistical regularities hold for undistorted images and are corrupted by introduced distortions. However, these models usually fail to accurately predict the degradation severity of images in realistic scenarios, since complex, multiple, and interacting authentic distortions usually appear in them. We propose a quality prediction model based on a convolutional neural network. Quality-aware features extracted from the filter banks of multiple convolutional layers are aggregated into the image representation. Furthermore, an easy-to-implement and effective feature selection strategy is used to further refine the image representation, and finally a linear support vector regression model is trained to map image representations to subjective perceptual quality scores. The experimental results on benchmark databases demonstrate the effectiveness and generalizability of the proposed model.
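
    The pipeline (pool "deep filter bank" activations from several convolutional layers, concatenate them into an image representation, and regress onto subjective scores with a linear SVR) can be sketched as follows. The backbone, tapped layers, and toy data are assumptions, and the feature selection step is omitted.

        # Deep filter bank features from multiple conv layers + linear SVR.
        import torch
        import torchvision.models as models
        from sklearn.svm import LinearSVR

        backbone = models.vgg16(weights=None).features.eval()
        taps = {4, 9, 16, 23}                  # layers whose activations we pool

        def deep_filter_bank(img):             # img: (1, 3, H, W)
            feats, x = [], img
            with torch.no_grad():
                for i, layer in enumerate(backbone):
                    x = layer(x)
                    if i in taps:              # global-average-pool each tapped map
                        feats.append(x.mean(dim=(2, 3)).squeeze(0))
            return torch.cat(feats).numpy()

        imgs = [torch.randn(1, 3, 224, 224) for _ in range(20)]  # toy images
        mos = [float(i) for i in range(20)]                      # toy quality scores
        X = [deep_filter_bank(im) for im in imgs]
        svr = LinearSVR(max_iter=5000).fit(X, mos)
        quality = svr.predict([deep_filter_bank(imgs[0])])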

  19. Accurate and reproducible invasive breast cancer detection in whole-slide images: A Deep Learning approach for quantifying tumor extent

    Science.gov (United States)

    Cruz-Roa, Angel; Gilmore, Hannah; Basavanhally, Ajay; Feldman, Michael; Ganesan, Shridar; Shih, Natalie N. C.; Tomaszewski, John; González, Fabio A.; Madabhushi, Anant

    2017-04-01

    With the increasing ability to routinely and rapidly digitize whole slide images with slide scanners, there has been interest in developing computerized image analysis algorithms for automated detection of disease extent from digital pathology images. The manual identification of presence and extent of breast cancer by a pathologist is critical for patient management for tumor staging and assessing treatment response. However, this process is tedious and subject to inter- and intra-reader variability. For computerized methods to be useful as decision support tools, they need to be resilient to data acquired from different sources, different staining and cutting protocols and different scanners. The objective of this study was to evaluate the accuracy and robustness of a deep learning-based method to automatically identify the extent of invasive tumor on digitized images. Here, we present a new method that employs a convolutional neural network for detecting presence of invasive tumor on whole slide images. Our approach involves training the classifier on nearly 400 exemplars from multiple different sites, and scanners, and then independently validating on almost 200 cases from The Cancer Genome Atlas. Our approach yielded a Dice coefficient of 75.86%, a positive predictive value of 71.62% and a negative predictive value of 96.77% in terms of pixel-by-pixel evaluation compared to manually annotated regions of invasive ductal carcinoma.

  20. Cross-modal priming facilitates production of low imageability word strings in a case of deep-phonological dysphasia

    Directory of Open Access Journals (Sweden)

    Laura Mary Mccarthy

    2014-04-01

    Introduction. Characteristics of repetition in deep-phonological dysphasia include an inability to repeat nonwords, semantic errors in single word repetition (deep dysphasia) and in multiple word repetition (phonological dysphasia), and better repetition of highly imageable words (Wilshire & Fisher, 2004; Ablinger et al., 2008). Additionally, visual processing of words is often more accurate than auditory processing of words (Howard & Franklin, 1988). We report a case study of LT, who incurred a LCVA on 10/3/2009. She initially presented with deep dysphasia and near-normal word reading. When enrolled in this study, approximately 24 months post-onset, she presented with phonological dysphasia. We investigated the hypotheses that (1) reproduction of a word string would be more accurate when preceded by a visual presentation of the word string compared to two auditory presentations of the word string, and (2) that this facilitative boost would be observed only for strings of low-imageability words, consistent with the imageability effect in repetition. Method. Three-word strings were created in four conditions which varied the frequency (F) and imageability (I) of words within a string: HiF-HiI, LoF-HiI, HiF-LoI, LoF-LoI. All strings were balanced for total syllable length and were unrelated semantically and phonologically. The dependent variable was the accuracy of repetition of each word within a string. We created six modality prime conditions, each with 24 strings drawn equally from the four frequency-imageability types, randomized within modality condition: Auditory Once (AudOnce) – string presented auditorily one time; Auditory Twice (AudAud) – string presented auditorily two consecutive times; Visual Once (VisOnce) – string presented visually one time; Visual Twice (VisVis) – string presented visually two consecutive times; Auditory then Visual (AudVis) – string presented once auditorily, then a second time visually; Visual then Auditory (VisAud

  1. In-111 platelet scintigraphy for detection of lower-extremity deep venous thrombophlebitis: Are 4-hour delayed images sufficient?

    International Nuclear Information System (INIS)

    Seabold, J.E.; Conrad, G.R.; Ponto, J.A.; Kimball, D.A.; Frey, E.E.; Coughlan, J.D.; Ahmed, F.; Jensen, K.C.

    1986-01-01

    Twenty-one nonheparinized patients suspected of having lower-extremity deep venous thrombosis underwent 4- and 24-hour In-111-labeled platelet scintigraphy (PS) and lower-extremity contrast venography (CV). Eleven of the 21 patients (52%) had one or more intraluminal filling defects on CV, indicating active thrombophlebitis. In seven of these 11 patients (64%) In-PS was abnormal at 4 hours, and in ten (91%) at 24 hours. All patients with abnormal studies at 4 hours showed greater uptake or more abnormal sites at 24 hours. Of the ten patients with CV-negative studies, two had abnormal bilateral lower pelvis/upper thigh uptake on In-PS at 24 hours. These two In-PS studies were considered to be false positives. Twenty-four-hour In-PS images are necessary if 4-hour images show faint focal uptake or asymmetric blood pool activity, or are normal.

  2. Successful deep brain stimulation surgery with intraoperative magnetic resonance imaging on a difficult neuroacanthocytosis case: case report.

    Science.gov (United States)

    Lim, Thien Thien; Fernandez, Hubert H; Cooper, Scott; Wilson, Kathryn Mary K; Machado, Andre G

    2013-07-01

    Chorea acanthocytosis is a progressive hereditary neurodegenerative disorder characterized by hyperkinetic movements, seizures, and acanthocytosis in the absence of any lipid abnormality. Medical treatment is typically limited and disappointing. We report on a 32-year-old patient with chorea acanthocytosis with a failed attempt at awake deep brain stimulation (DBS) surgery due to intraoperative seizures and postoperative intracranial hematoma. He then underwent a second DBS operation, but under general anesthesia and with intraoperative magnetic resonance imaging guidance. Marked improvement in his dystonia, chorea, and overall quality of life was noted 2 and 8 months postoperatively. DBS surgery of the bilateral globus pallidus pars interna may be useful in controlling the hyperkinetic movements in neuroacanthocytosis. Because of the high propensity for seizures in this disorder, DBS performed under general anesthesia, with intraoperative magnetic resonance imaging guidance, may allow successful implantation while maintaining accurate target localization.

  3. Mesoporous composite nanoparticles for dual-modality ultrasound/magnetic resonance imaging and synergistic chemo-/thermotherapy against deep tumors

    Directory of Open Access Journals (Sweden)

    Zhang N

    2017-10-01

    High-intensity focused ultrasound (HIFU) is a promising and noninvasive treatment for solid tumors, which has been explored for potential clinical applications. However, the clinical applications of HIFU for large and deep tumors such as hepatocellular carcinoma (HCC) are severely limited by unsatisfactory imaging guidance, long therapeutic times, and damage to normal tissue around the tumor due to the high power applied. In this study, we developed doxorubicin/perfluorohexane-encapsulated hollow mesoporous Prussian blue nanoparticles (HMPBs-DOX/PFH) as theranostic agents, which can effectively guide HIFU therapy and enhance its therapeutic effects in combination with chemotherapy, by decreasing the cavitation threshold. We investigated the effects of this agent on ultrasound and magnetic resonance imaging in vitro and in vivo. In addition, we showed a highly efficient HIFU therapeutic effect against HCC tumors, as well as controlled drug release, owing to the phase-transitional performance of the PFH. We therefore conclude that HMPB-DOX/PFH is a safe and efficient nanoplatform, which holds significant promise for cancer theranostics against deep tumors in clinical settings. Keywords: high-intensity focused ultrasound, HIFU, hollow mesoporous Prussian blue nanoplatforms, hepatocellular carcinoma, dual-modality imaging, synergistic chemo-/thermotherapy, theranostics

  4. Duplex imaging of residual venous obstruction to guide duration of therapy for lower extremity deep venous thrombosis.

    Science.gov (United States)

    Stephenson, Elliot J P; Liem, Timothy K

    2015-07-01

    Clinical trials have shown that the presence of ultrasound-identified residual venous obstruction (RVO) on follow-up scanning may be associated with an elevated risk for recurrence, thus providing a potential tool to help determine the optimal duration of anticoagulant therapy. We performed a systematic review to evaluate the clinical utility of post-treatment duplex imaging in predicting venous thromboembolism (VTE) recurrence and in adjusting duration of anticoagulation. The Ovid MEDLINE Database, Cochrane Central Register of Controlled Trials, Cochrane Database of Systematic Reviews, and Database of Abstracts of Reviews of Effects were queried for the terms residual thrombus or obstruction, duration of therapy, deep vein thrombosis, deep venous thrombosis, DVT, venous thromboembolism, VTE, antithrombotic therapy, and anticoagulation, and 228 studies were selected for review. Six studies determined the rate of VTE recurrence on the basis of the presence or absence of RVO. Findings on venous ultrasound scans frequently remained abnormal in 38% to 80% of patients, despite at least 3 months of therapeutic anticoagulation. In evaluating for VTE recurrence, the definition of RVO varied widely in the literature. Some studies have shown an association between RVO and VTE recurrence, whereas other studies have not. Overall, the presence of RVO is a mild risk factor for recurrence (odds ratio, 1.3-2.0), but only when surveillance imaging is performed soon after the index deep venous thrombosis (3 months). RVO is a mild risk factor for VTE recurrence. The presence or absence of ultrasound-identified RVO has a limited role in guiding the duration of therapeutic anticoagulation. Further research is needed to evaluate its utility relative to other known risk factors for VTE recurrence. Copyright © 2015 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.

  5. DeepNet: An Ultrafast Neural Learning Code for Seismic Imaging

    International Nuclear Information System (INIS)

    Barhen, J.; Protopopescu, V.; Reister, D.

    1999-01-01

    A feed-forward multilayer neural net is trained to learn the correspondence between seismic data and well logs. The introduction of a virtual input layer, connected to the nominal input layer through a special nonlinear transfer function, enables ultrafast (single-iteration), near-optimal training of the net using numerical algebraic techniques. A unique computer code, named DeepNet, has been developed that has achieved, in actual field demonstrations, results unattainable to date with industry-standard tools.
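
    The single-iteration, near-optimal training via "numerical algebraic techniques" is reminiscent of fixing a nonlinear hidden mapping and solving the output weights in closed form by least squares. The sketch below shows that general idea on toy data; it is an interpretation for illustration, not the DeepNet code, which is not reproduced here.

        # One-shot "training": random fixed hidden layer, output weights solved
        # by linear least squares instead of iterative gradient descent.
        import numpy as np

        rng = np.random.default_rng(0)
        seismic = rng.normal(size=(500, 40))   # seismic attributes per depth point
        well_log = rng.normal(size=500)        # target well-log values (toy)

        W = rng.normal(size=(40, 200))         # fixed "virtual layer" weights
        H = np.tanh(seismic @ W)               # nonlinear hidden representation
        beta, *_ = np.linalg.lstsq(H, well_log, rcond=None)  # one-shot solve

        prediction = np.tanh(seismic @ W) @ beta  # map seismic data to well logs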

  6. Defining probabilities of bowel resection in deep endometriosis of the rectum: Prediction with preoperative magnetic resonance imaging.

    Science.gov (United States)

    Perandini, Alessio; Perandini, Simone; Montemezzi, Stefania; Bonin, Cecilia; Bellini, Gaia; Bergamini, Valentino

    2018-02-01

    Deep endometriosis of the rectum is a highly challenging disease, and a surgical approach is often needed to restore anatomy and function. Two kinds of surgery may be performed: radical, with segmental bowel resection, or conservative, without resection. Most patients undergo magnetic resonance imaging (MRI) before surgery, but there is currently no method to predict whether conservative surgery is feasible or bowel resection is required. The aim of this study was to create an algorithm that could predict bowel resection using MRI images, that was easy to apply, and that could be useful in a clinical setting in order to adequately discuss informed consent with the patient and plan an appropriate and efficient surgical session. We collected medical records from 2010 to 2016 and reviewed the MRI results of 52 patients to detect any parameters that could predict bowel resection. Parameters that were reproducible and significantly correlated with radical surgery were investigated by statistical regression and combined in an algorithm to give the best prediction of resection. The calculation of two parameters on MRI, impact angle and lesion size, and their use in a mathematical algorithm permit us to predict bowel resection with a positive predictive value of 87% and a negative predictive value of 83%. MRI could be of value in predicting the need for bowel resection in deep endometriosis of the rectum. Further research is required to assess the possibility of a wider application of this algorithm outside our single-center study. © 2017 Japan Society of Obstetrics and Gynecology.
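
    Although the paper's exact algorithm is not reproduced here, combining the two MRI parameters (impact angle and lesion size) into a resection probability by statistical regression could look like the following sketch, with entirely synthetic data and an assumed logistic model:

        # Logistic regression combining two MRI-derived predictors into a
        # resection probability. All data and coefficients are synthetic.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        angle = rng.uniform(30, 180, size=52)  # impact angle (degrees, toy)
        size = rng.uniform(5, 50, size=52)     # lesion size (mm, toy)
        score = 0.03 * size - 0.01 * angle     # toy ground-truth tendency
        resected = (score > np.median(score)).astype(int)

        X = np.column_stack([angle, size])
        model = LogisticRegression().fit(X, resected)
        p_resection = model.predict_proba([[90.0, 25.0]])[0, 1]  # one new patient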

  7. Diagnosing upper extremity deep vein thrombosis with non-contrast-enhanced Magnetic Resonance Direct Thrombus Imaging: A pilot study.

    Science.gov (United States)

    Dronkers, C E A; Klok, F A; van Haren, G R; Gleditsch, J; Westerlund, E; Huisman, M V; Kroft, L J M

    2018-03-01

    Diagnosing upper extremity deep vein thrombosis (UEDVT) can be challenging. Compression ultrasonography is often inconclusive because of overlying anatomic structures that hamper compression of the veins. Contrast venography is invasive and carries a risk of contrast allergy. Magnetic Resonance Direct Thrombus Imaging (MRDTI) and Three Dimensional Turbo Spin-echo Spectral Attenuated Inversion Recovery (3D TSE-SPAIR) are both non-contrast-enhanced Magnetic Resonance Imaging (MRI) sequences that can visualize a thrombus directly through the visualization of methemoglobin, which is formed in a fresh blood clot. MRDTI has been proven to be accurate in diagnosing deep venous thrombosis (DVT) of the leg. The primary aim of this pilot study was to test the feasibility of diagnosing UEDVT with these MRI techniques. MRDTI and 3D TSE-SPAIR were performed in 3 pilot patients who had already been diagnosed with UEDVT by ultrasonography or contrast venography. In all patients, the UEDVT diagnosis could be confirmed by MRDTI and 3D TSE-SPAIR in all vein segments. In conclusion, this study showed that non-contrast MRDTI and 3D TSE-SPAIR sequences may be feasible tests to diagnose UEDVT. However, diagnostic accuracy and management studies have to be performed before these techniques can be routinely used in clinical practice. Copyright © 2018 Elsevier Ltd. All rights reserved.

  8. Residual Shuffling Convolutional Neural Networks for Deep Semantic Image Segmentation Using Multi-Modal Data

    Science.gov (United States)

    Chen, K.; Weinmann, M.; Gao, X.; Yan, M.; Hinz, S.; Jutzi, B.; Weinmann, M.

    2018-05-01

    In this paper, we address the deep semantic segmentation of aerial imagery based on multi-modal data. Given multi-modal data composed of true orthophotos and the corresponding Digital Surface Models (DSMs), we extract a variety of hand-crafted radiometric and geometric features which are provided separately and in different combinations as input to a modern deep learning framework. The latter is represented by a Residual Shuffling Convolutional Neural Network (RSCNN) combining the characteristics of a Residual Network with the advantages of atrous convolution and a shuffling operator to achieve a dense semantic labeling. Via performance evaluation on a benchmark dataset, we analyze the value of different feature sets for the semantic segmentation task. The derived results reveal that the use of radiometric features yields better classification results than the use of geometric features for the considered dataset. Furthermore, the consideration of data on both modalities leads to an improvement of the classification results. However, the derived results also indicate that the use of all defined features is less favorable than the use of selected features. Consequently, data representations derived via feature extraction and feature selection techniques still provide a gain if used as the basis for deep semantic segmentation.
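
    One building block combining the three named ingredients (a residual connection, atrous convolution, and a shuffling operator for dense labeling) might look like the sketch below; the channel counts, dilation rate, and number of classes are assumptions, not the RSCNN specification.

        # Residual block with atrous (dilated) convolutions, followed by a
        # sub-pixel shuffle that produces a dense, upsampled label map.
        import torch
        import torch.nn as nn

        class ResidualAtrousBlock(nn.Module):
            def __init__(self, ch=64, dilation=2):
                super().__init__()
                self.body = nn.Sequential(
                    nn.Conv2d(ch, ch, 3, padding=dilation, dilation=dilation),
                    nn.ReLU(),
                    nn.Conv2d(ch, ch, 3, padding=dilation, dilation=dilation))

            def forward(self, x):
                return torch.relu(x + self.body(x))   # residual connection

        head = nn.Conv2d(4, 64, 3, padding=1)          # orthophoto + DSM channels
        block = ResidualAtrousBlock()
        upsample = nn.Sequential(nn.Conv2d(64, 6 * 4, 3, padding=1),
                                 nn.PixelShuffle(2))   # shuffle to 2x, 6 classes

        x = torch.randn(1, 4, 128, 128)                # true orthophoto + DSM
        logits = upsample(block(head(x)))              # (1, 6, 256, 256) labels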

  9. Multiscale deep neural network based analysis of FDG-PET images for the early diagnosis of Alzheimer's disease.

    Science.gov (United States)

    Lu, Donghuan; Popuri, Karteek; Ding, Gavin Weiguang; Balachandar, Rakesh; Beg, Mirza Faisal

    2018-05-01

    Alzheimer's disease (AD) is one of the most common neurodegenerative diseases with a commonly seen prodromal mild cognitive impairment (MCI) phase where memory loss is the main complaint progressively worsening with behavior issues and poor self-care. However, not all individuals clinically diagnosed with MCI progress to AD. A fraction of subjects with MCI either progress to non-AD dementia or remain stable at the MCI stage without progressing to dementia. Although a curative treatment of AD is currently unavailable, it is extremely important to correctly identify the individuals in the MCI phase that will go on to develop AD so that they may benefit from a curative treatment when one becomes available in the near future. At the same time, it would be highly desirable to also correctly identify those in the MCI phase that do not have AD pathology so they may be spared from unnecessary pharmocologic interventions that, at best, may provide them no benefit, and at worse, could further harm them with adverse side-effects. Additionally, it may be easier and simpler to identify the cause of the cognitive impairment in these non-AD cases, and hence proper identification of prodromal AD will be of benefit to these individuals as well. Fluorodeoxy glucose positron emission tomography (FDG-PET) captures the metabolic activity of the brain, and this imaging modality has been reported to identify changes related to AD prior to the onset of structural changes. Prior work on designing classifier using FDG-PET imaging has been promising. Since deep-learning has recently emerged as a powerful tool to mine features and use them for accurate labeling of the group membership of given images, we propose a novel deep-learning framework using FDG-PET metabolism imaging to identify subjects at the MCI stage with presymptomatic AD and discriminate them from other subjects with MCI (non-AD / non-progressive). Our multiscale deep neural network obtained 82.51% accuracy of classification

  10. Batch Image Encryption Using Generated Deep Features Based on Stacked Autoencoder Network

    Directory of Open Access Journals (Sweden)

    Fei Hu

    2017-01-01

    Full Text Available Chaos-based algorithms have been widely adopted to encrypt images. But previous chaos-based encryption schemes are not secure enough for batch image encryption, because the images are usually encrypted using a single sequence: once one encrypted image is cracked, all the others become vulnerable. In this paper, we propose a batch image encryption scheme in which a stacked autoencoder (SAE) network is introduced to generate two chaotic matrices; one matrix is used to produce a total shuffling matrix that shuffles the pixel positions of each plain image, and the other produces a series of independent sequences, each of which is used to confuse the relationship between the permutated image and the encrypted image. The scheme is efficient because of the parallel-computing advantages of the SAE, which lead to a significant reduction in run-time complexity; in addition, the combined application of shuffling and confusion enhances the encryption effect. To evaluate the efficiency of our scheme, we compared it with the prevalent "logistic map" and found that it outperformed the latter in running time. The experimental results and analysis show that our scheme has a good encryption effect and is able to resist brute-force, statistical, and differential attacks.
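
    As a minimal sketch of the shuffle-then-confuse pipeline described above, the fragment below substitutes a plain logistic map for the SAE-generated chaotic matrices (the SAE itself is out of scope here): one chaotic sequence permutes the pixel positions, and a second is XORed with the permuted pixels. The keys and image are stand-ins.

        import numpy as np

        def logistic_sequence(x0, n, r=3.99):
            # Iterate the logistic map x <- r*x*(1-x) to get a chaotic sequence.
            seq, x = np.empty(n), x0
            for i in range(n):
                x = r * x * (1.0 - x)
                seq[i] = x
            return seq

        def encrypt(img, key=(0.41, 0.73)):
            flat = img.flatten()
            n = flat.size
            perm = np.argsort(logistic_sequence(key[0], n))        # shuffling
            mask = (logistic_sequence(key[1], n) * 256).astype(np.uint8)
            return (flat[perm] ^ mask).reshape(img.shape), perm, mask

        def decrypt(enc, perm, mask):
            flat = enc.flatten() ^ mask    # undo confusion
            out = np.empty_like(flat)
            out[perm] = flat               # invert the permutation
            return out.reshape(enc.shape)

        img = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
        enc, perm, mask = encrypt(img)
        assert np.array_equal(decrypt(enc, perm, mask), img)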

  11. Evaluation of deep neural networks for single image super-resolution in a maritime context

    NARCIS (Netherlands)

    Nieuwenhuizen, R.P.J.; Kruithof, M.; Schutte, K.

    2017-01-01

    High resolution imagery is of crucial importance for the performance on visual recognition tasks. Super-resolution (SR) reconstruction algorithms aim to enhance the image resolution beyond the capability of the image sensor being used. Traditional SR algorithms approach this inverse problem using

  12. Road Segmentation of Remotely-Sensed Images Using Deep Convolutional Neural Networks with Landscape Metrics and Conditional Random Fields

    Directory of Open Access Journals (Sweden)

    Teerapong Panboonyuen

    2017-07-01

    Full Text Available Object segmentation of remotely-sensed aerial (very-high-resolution, VHR) and satellite (high-resolution, HR) images has been applied to many application domains, especially road extraction, in which the segmented objects serve as a mandatory layer in geospatial databases. Several attempts at applying the deep convolutional neural network (DCNN) to extract roads from remote sensing images have been made; however, the accuracy is still limited. In this paper, we present an enhanced DCNN framework specifically tailored for road extraction from remote sensing images by applying landscape metrics (LMs) and conditional random fields (CRFs). To improve the DCNN, a modern activation function called the exponential linear unit (ELU) is employed in our network, resulting in a higher number of, and yet more accurate, extracted roads. To further reduce falsely classified road objects, a solution based on the adoption of LMs is proposed. Finally, to sharpen the extracted roads, a CRF method is added to our framework. The experiments were conducted on Massachusetts road aerial imagery as well as Thailand Earth Observation System (THEOS) satellite imagery data sets. The results showed that our proposed framework outperformed SegNet, a state-of-the-art object segmentation technique, on both kinds of remote sensing imagery in most cases, in terms of precision, recall, and F1.
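
    The exponential linear unit mentioned above has a simple closed form. A minimal numpy sketch, with the customary alpha = 1.0 assumed:

        import numpy as np

        def elu(x, alpha=1.0):
            # ELU(x) = x for x > 0, alpha * (exp(x) - 1) otherwise; the negative
            # saturation pushes mean activations toward zero, which speeds training.
            return np.where(x > 0, x, alpha * (np.exp(np.minimum(x, 0.0)) - 1.0))

        print(elu(np.array([-2.0, -0.5, 0.0, 1.5])))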

  13. An ensemble deep learning based approach for red lesion detection in fundus images.

    Science.gov (United States)

    Orlando, José Ignacio; Prokofyeva, Elena; Del Fresno, Mariana; Blaschko, Matthew B

    2018-01-01

    Diabetic retinopathy (DR) is one of the leading causes of preventable blindness in the world. Its earliest signs are red lesions, a general term that groups both microaneurysms (MAs) and hemorrhages (HEs). In daily clinical practice, these lesions are manually detected by physicians using fundus photographs. However, this task is tedious and time consuming, and requires an intensive effort due to the small size of the lesions and their lack of contrast. Computer-assisted diagnosis of DR based on red lesion detection is being actively explored due to the improvements it offers in both clinician consistency and accuracy. Moreover, it provides comprehensive feedback that is easy for physicians to assess. Several methods for detecting red lesions have been proposed in the literature, most of them based on characterizing lesion candidates using hand-crafted features and classifying them into true or false positive detections. Deep learning based approaches, by contrast, are scarce in this domain due to the high expense of annotating the lesions manually. In this paper we propose a novel method for red lesion detection based on combining deep learned features and domain knowledge. Features learned by a convolutional neural network (CNN) are augmented by incorporating hand-crafted features. This ensemble vector of descriptors is used afterwards to identify true lesion candidates using a Random Forest classifier. We empirically observed that combining both sources of information significantly improves results with respect to using each approach separately. Furthermore, our method reported the highest performance on a per-lesion basis on DIARETDB1 and e-ophtha, and for screening and need for referral on MESSIDOR compared to a second human expert. Results highlight the fact that integrating manually engineered approaches with deep learned features is relevant to improve results when the networks are trained from lesion-level annotated data. An open source implementation of our
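
    A minimal sketch (scikit-learn assumed) of the fusion step described above: per-candidate CNN descriptors and hand-crafted descriptors are concatenated into one ensemble vector and fed to a Random Forest. All arrays are random stand-ins, and the feature dimensions are invented for illustration.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)
        cnn_feats = rng.normal(size=(500, 128))   # learned descriptors per candidate
        hand_feats = rng.normal(size=(500, 20))   # hand-crafted descriptors
        labels = rng.integers(0, 2, size=500)     # true lesion vs. false positive

        X = np.hstack([cnn_feats, hand_feats])    # ensemble vector of descriptors
        clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
        print(clf.predict_proba(X[:3]))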

  14. Deep Fully Convolutional Networks for the Detection of Informal Settlements in VHR Images

    NARCIS (Netherlands)

    Persello, Claudio; Stein, Alfred

    2017-01-01

    This letter investigates fully convolutional networks (FCNs) for the detection of informal settlements in very high resolution (VHR) satellite images. Informal settlements or slums are proliferating in developing countries and their detection and classification provides vital information for

  15. From image captioning to video summary using deep recurrent networks and unsupervised segmentation

    Science.gov (United States)

    Morosanu, Bogdan-Andrei; Lemnaru, Camelia

    2018-04-01

    Automatic captioning systems based on recurrent neural networks have been tremendously successful at providing realistic natural language captions for complex and varied image data. We explore methods for adapting existing models trained on large image caption data sets to a similar problem, that of summarising videos using natural language descriptions and frame selection. These architectures create internal high-level representations of the input image that can be used to define probability distributions and distance metrics on these distributions. Specifically, we interpret each hidden unit inside a layer of the caption model as representing the un-normalised log probability of some unknown image feature of interest for the caption generation process. We can then apply well-understood statistical divergence measures to express the difference between images and create an unsupervised segmentation of video frames, classifying consecutive images of low divergence as belonging to the same context, and those of high divergence as belonging to different contexts. The final summary of the video consists of a group of selected frames and an accompanying text description, allowing a user to perform a quick exploration of large unlabeled video databases.
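
    A minimal numpy sketch of the divergence test described above: per-frame hidden activations are treated as un-normalised log probabilities, softmax-normalised, and consecutive frames compared with a symmetric KL divergence; high-divergence transitions mark context boundaries. The activations and the threshold rule are illustrative assumptions.

        import numpy as np

        def softmax(z):
            e = np.exp(z - z.max())
            return e / e.sum()

        def sym_kl(p, q, eps=1e-12):
            # Symmetrised Kullback-Leibler divergence between two distributions.
            p, q = p + eps, q + eps
            return 0.5 * (np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

        hidden = np.random.randn(100, 512)    # per-frame hidden-layer activations
        probs = np.array([softmax(h) for h in hidden])
        div = np.array([sym_kl(probs[i], probs[i + 1])
                        for i in range(len(probs) - 1)])
        # Frames whose divergence spikes are treated as new-context boundaries.
        boundaries = np.where(div > div.mean() + 2 * div.std())[0]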

  16. In vivo rat deep brain imaging using photoacoustic computed tomography (Conference Presentation)

    Science.gov (United States)

    Lin, Li; Li, Lei; Zhu, Liren; Hu, Peng; Wang, Lihong V.

    2017-03-01

    The brain has been likened to a great stretch of unknown territory consisting of a number of unexplored continents, and small animal brain imaging plays an important role in charting that territory. By using 1064 nm illumination from the side, we imaged the full coronal depth of rat brains in vivo. The experiment was performed using a real-time full-ring-array photoacoustic computed tomography (PACT) imaging system, which achieved an imaging depth of 11 mm and a 100 μm radial resolution. Because of the fast imaging speed of the full-ring-array PACT system, no animal motion artifacts were induced. The frame rate of the system was limited by the laser repetition rate (50 Hz). In addition to anatomical imaging of the blood vessels in the brain, we continuously monitored correlations between the two brain hemispheres in one of the coronal planes. The resting states in the coronal plane were measured before and after stroke-inducing ligation surgery on a neck artery.

  17. Deep Learning-Based Banknote Fitness Classification Using the Reflection Images by a Visible-Light One-Dimensional Line Image Sensor

    Directory of Open Access Journals (Sweden)

    Tuyen Danh Pham

    2018-02-01

    Full Text Available In automatic paper currency sorting, fitness classification is a technique that assesses the quality of banknotes to determine whether a banknote is suitable for recirculation or should be replaced. Studies on using visible-light reflection images of banknotes for evaluating their usability have been reported. However, most of them were conducted under the assumption that the denomination and input direction of the banknote are predetermined; in other words, a pre-classification of the type of input banknote is required. To address this problem, we propose a deep learning-based fitness-classification method that recognizes the fitness level of a banknote regardless of the denomination and input direction of the banknote to the system, using reflection images of banknotes captured by a visible-light one-dimensional line image sensor and a convolutional neural network (CNN). Experimental results on the banknote image databases of the Korean won (KRW) and the Indian rupee (INR) with three fitness levels, and the United States dollar (USD) with two fitness levels, showed that our method gives better classification accuracy than other methods.

  18. Deep Learning-Based Banknote Fitness Classification Using the Reflection Images by a Visible-Light One-Dimensional Line Image Sensor.

    Science.gov (United States)

    Pham, Tuyen Danh; Nguyen, Dat Tien; Kim, Wan; Park, Sung Ho; Park, Kang Ryoung

    2018-02-06

    In automatic paper currency sorting, fitness classification is a technique that assesses the quality of banknotes to determine whether a banknote is suitable for recirculation or should be replaced. Studies on using visible-light reflection images of banknotes for evaluating their usability have been reported. However, most of them were conducted under the assumption that the denomination and input direction of the banknote are predetermined; in other words, a pre-classification of the type of input banknote is required. To address this problem, we propose a deep learning-based fitness-classification method that recognizes the fitness level of a banknote regardless of the denomination and input direction of the banknote to the system, using reflection images of banknotes captured by a visible-light one-dimensional line image sensor and a convolutional neural network (CNN). Experimental results on the banknote image databases of the Korean won (KRW) and the Indian rupee (INR) with three fitness levels, and the United States dollar (USD) with two fitness levels, showed that our method gives better classification accuracy than other methods.
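
    A minimal sketch (PyTorch assumed) of a CNN that maps a banknote reflection image directly to a fitness level, with no separate denomination or direction pre-classifier; the layer sizes, input shape, and three-level output are illustrative assumptions, not the authors' architecture.

        import torch
        import torch.nn as nn

        n_fitness_levels = 3  # e.g. fit / normal / unfit, as for KRW and INR
        model = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, n_fitness_levels),
        )
        logits = model(torch.randn(4, 1, 64, 256))  # batch of line-sensor images
        print(logits.shape)  # torch.Size([4, 3])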

  19. Deep tissue optical imaging of upconverting nanoparticles enabled by exploiting higher intrinsic quantum yield through use of millisecond single pulse excitation with high peak power

    DEFF Research Database (Denmark)

    Liu, Haichun; Xu, Can T.; Dumlupinar, Gökhan

    2013-01-01

    We have accomplished deep tissue optical imaging of upconverting nanoparticles at 800 nm, using millisecond single pulse excitation with high peak power. This is achieved by carefully choosing the pulse parameters, derived from time-resolved rate-equation analysis, which result in a higher intrinsic… quantum yield that is utilized by upconverting nanoparticles for generating this near infrared upconversion emission. The pulsed excitation approach thus promises previously unreachable imaging depths and shorter data acquisition times compared with continuous wave excitation, while simultaneously keeping… therapy and remote activation of biomolecules in deep tissues…

  20. Deep convolutional neural network and 3D deformable approach for tissue segmentation in musculoskeletal magnetic resonance imaging.

    Science.gov (United States)

    Liu, Fang; Zhou, Zhaoye; Jang, Hyungseok; Samsonov, Alexey; Zhao, Gengyan; Kijowski, Richard

    2018-04-01

    To describe and evaluate a new fully automated musculoskeletal tissue segmentation method using a deep convolutional neural network (CNN) and three-dimensional (3D) simplex deformable modeling to improve the accuracy and efficiency of cartilage and bone segmentation within the knee joint. A fully automated segmentation pipeline was built by combining a semantic segmentation CNN and 3D simplex deformable modeling. A CNN technique called SegNet was applied as the core of the segmentation method to perform high-resolution, pixel-wise multi-class tissue classification. The 3D simplex deformable modeling refined the output from SegNet to preserve the overall shape and maintain a desirably smooth surface for musculoskeletal structures. The fully automated segmentation method was tested using a publicly available knee image data set to compare with currently used state-of-the-art segmentation methods. The fully automated method was also evaluated on two different data sets, which include morphological and quantitative MR images with different tissue contrasts. The proposed fully automated segmentation method provided good segmentation performance, with segmentation accuracy superior to that of most state-of-the-art methods on the publicly available knee image data set. The method also demonstrated versatile segmentation performance on both morphological and quantitative musculoskeletal MR images with different tissue contrasts and spatial resolutions. The study demonstrates that the combined CNN and 3D deformable modeling approach is useful for performing rapid and accurate cartilage and bone segmentation within the knee joint. The CNN has promising potential applications in musculoskeletal imaging. Magn Reson Med 79:2379-2391, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  1. Bioluminescence resonance energy transfer (BRET) imaging of protein–protein interactions within deep tissues of living subjects

    Science.gov (United States)

    Dragulescu-Andrasi, Anca; Chan, Carmel T.; Massoud, Tarik F.; Gambhir, Sanjiv S.

    2011-01-01

    Identifying protein–protein interactions (PPIs) is essential for understanding various disease mechanisms and developing new therapeutic approaches. Current methods for assaying cellular intermolecular interactions are mainly used for cells in culture and have limited use for the noninvasive assessment of small animal disease models. Here, we describe red light-emitting reporter systems based on bioluminescence resonance energy transfer (BRET) that allow for assaying PPIs both in cell culture and deep tissues of small animals. These BRET systems consist of the recently developed Renilla reniformis luciferase (RLuc) variants RLuc8 and RLuc8.6, used as BRET donors, combined with two red fluorescent proteins, TagRFP and TurboFP635, as BRET acceptors. In addition to the native coelenterazine luciferase substrate, we used the synthetic derivative coelenterazine-v, which further red-shifts the emission maxima of Renilla luciferases by 35 nm. We show the use of these BRET systems for ratiometric imaging of both cells in culture and deep-tissue small animal tumor models and validate their applicability for studying PPIs in mice in the context of rapamycin-induced FK506 binding protein 12 (FKBP12)-FKBP12 rapamycin binding domain (FRB) association. These red light-emitting BRET systems have great potential for investigating PPIs in the context of drug screening and target validation applications. PMID:21730157

  2. Electroporation-based treatment planning for deep-seated tumors based on automatic liver segmentation of MRI images.

    Science.gov (United States)

    Pavliha, Denis; Mušič, Maja M; Serša, Gregor; Miklavčič, Damijan

    2013-01-01

    Electroporation is the phenomenon that occurs when a cell is exposed to a high electric field, which causes transient cell membrane permeabilization. A paramount electroporation-based application is electrochemotherapy, which is performed by delivering high-voltage electric pulses that enable the chemotherapeutic drug to more effectively destroy the tumor cells. Electrochemotherapy can be used for treating deep-seated metastases (e.g. in the liver, bone, brain, soft tissue) using variable-geometry long-needle electrodes. To treat deep-seated tumors, patient-specific treatment planning of the electroporation-based treatment is required. Treatment planning is based on generating a 3D model of the organ and the target tissue subject to electroporation (i.e. tumor nodules). The generation of the 3D model is done by segmentation algorithms. We implemented and evaluated three automatic liver segmentation algorithms: region growing, adaptive threshold, and active contours (snakes). The algorithms were optimized using a seven-case dataset manually segmented by a radiologist as a training set, and finally validated using an additional four-case dataset that was not included in the optimization dataset. The presented results demonstrate that patients' medical images that were not included in the training set can be successfully segmented using our three algorithms. Besides electroporation-based treatments, these algorithms can be used in applications where automatic liver segmentation is required.
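
    Of the three algorithms named above, region growing is the simplest to illustrate. A minimal 2D numpy sketch, assuming a single seed and a fixed intensity tolerance (a clinical implementation would be 3D and parameter-optimized):

        import numpy as np
        from collections import deque

        def region_grow(img, seed, tol=10.0):
            # Grow a region from `seed`, adding 4-connected neighbours whose
            # intensity stays within `tol` of the current region mean.
            mask = np.zeros(img.shape, dtype=bool)
            queue = deque([seed])
            mask[seed] = True
            total, count = float(img[seed]), 1
            while queue:
                y, x = queue.popleft()
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]
                            and not mask[ny, nx]
                            and abs(img[ny, nx] - total / count) <= tol):
                        mask[ny, nx] = True
                        total += float(img[ny, nx])
                        count += 1
                        queue.append((ny, nx))
            return mask

        img = np.random.randint(0, 256, (64, 64))
        liver_mask = region_grow(img, seed=(32, 32), tol=40.0)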

  3. The Sloan Digital Sky Survey COADD: 275 deg² of deep Sloan Digital Sky Survey imaging on stripe 82

    International Nuclear Information System (INIS)

    Annis, James; Soares-Santos, Marcelle; Dodelson, Scott; Hao, Jiangang; Jester, Sebastian; Johnston, David E.; Kubo, Jeffrey M.; Lampeitl, Hubert; Lin, Huan; Miknaitis, Gajus; Yanny, Brian; Strauss, Michael A.; Gunn, James E.; Lupton, Robert H.; Becker, Andrew C.; Ivezić, Željko; Fan, Xiaohui; Jiang, Linhua; Seo, Hee-Jong; Simet, Melanie

    2014-01-01

    We present details of the construction and characterization of the coaddition of the Sloan Digital Sky Survey (SDSS) Stripe 82 ugriz imaging data. This survey consists of 275 deg² of repeated scanning by the SDSS camera over −50° ≤ α ≤ 60° and −1°.25 ≤ δ ≤ +1°.25, centered on the Celestial Equator. Each piece of sky has ∼20 contributing runs and thus reaches ∼2 mag fainter than the SDSS single-pass data, i.e., to r ∼ 23.5 for galaxies. We discuss the image processing of the coaddition, the modeling of the point-spread function (PSF), the calibration, and the production of standard SDSS catalogs. The data have an r-band median seeing of 1.″1 and are calibrated to ≤1%. Star color-color, number-count, and PSF size versus modeled size plots show that the modeling of the PSF is good enough for precision five-band photometry. Structure in the PSF model versus magnitude plot indicates minor PSF modeling errors, leading to misclassification of stars as galaxies, as verified using VVDS spectroscopy. There are a variety of uses for this wide-angle deep imaging data, including galactic structure, photometric redshift computation, cluster finding, cross-wavelength measurements, weak-lensing cluster mass calibrations, and cosmic shear measurements.

  4. Classification of C2C12 cells at differentiation by convolutional neural network of deep learning using phase contrast images.

    Science.gov (United States)

    Niioka, Hirohiko; Asatani, Satoshi; Yoshimura, Aina; Ohigashi, Hironori; Tagawa, Seiichi; Miyake, Jun

    2018-01-01

    In the field of regenerative medicine, tremendous numbers of cells are necessary for tissue/organ regeneration. Automated cell-culturing systems have now been developed, and the next step is to construct a non-invasive method to monitor the condition of cells automatically. As an image analysis method, the convolutional neural network (CNN), a deep learning method, is approaching human recognition levels. We constructed and applied a CNN algorithm for automatic recognition of cellular differentiation in the myogenic C2C12 cell line. Phase-contrast images of cultured C2C12 cells were prepared as the input dataset. In the differentiation process from myoblasts to myotubes, cellular morphology changes from a round shape to an elongated tubular shape due to fusion of the cells. The CNN abstracts the features of cell shape and classifies the cells according to the number of culturing days after differentiation is induced. Changes in cellular shape depending on the number of days of culture (Day 0, Day 3, Day 6) are classified with 91.3% accuracy. Image analysis with CNNs has the potential to help realize a regenerative medicine industry.

  5. Learning Traffic as Images: A Deep Convolutional Neural Network for Large-Scale Transportation Network Speed Prediction.

    Science.gov (United States)

    Ma, Xiaolei; Dai, Zhuang; He, Zhengbing; Ma, Jihui; Wang, Yong; Wang, Yunpeng

    2017-04-10

    This paper proposes a convolutional neural network (CNN)-based method that learns traffic as images and predicts large-scale, network-wide traffic speed with high accuracy. Spatiotemporal traffic dynamics are converted to images describing the time and space relations of traffic flow via a two-dimensional time-space matrix. A CNN is applied to the image following two consecutive steps: abstract traffic feature extraction and network-wide traffic speed prediction. The effectiveness of the proposed method is evaluated by taking two real-world transportation networks, the second ring road and the north-east transportation network in Beijing, as examples, and comparing the method with four prevailing algorithms, namely, ordinary least squares, k-nearest neighbors, artificial neural network, and random forest, and three deep learning architectures, namely, stacked autoencoder, recurrent neural network, and long short-term memory network. The results show that the proposed method outperforms other algorithms by an average accuracy improvement of 42.91% within an acceptable execution time. The CNN can train the model in a reasonable time and, thus, is suitable for large-scale transportation networks.
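
    A minimal numpy sketch of the conversion described above: speed records of the form (segment, time slot, speed) are rasterised into a two-dimensional time-space matrix that a CNN can consume like a grayscale image. The record list and dimensions are random stand-ins.

        import numpy as np

        n_segments, n_slots = 50, 288   # e.g. 288 five-minute slots per day
        records = [(np.random.randint(n_segments),
                    np.random.randint(n_slots),
                    np.random.uniform(5, 80)) for _ in range(10000)]

        # Accumulate speeds per (segment, slot) cell, then average.
        totals = np.zeros((n_segments, n_slots))
        counts = np.zeros((n_segments, n_slots))
        for seg, slot, speed in records:
            totals[seg, slot] += speed
            counts[seg, slot] += 1
        image = np.divide(totals, counts,
                          out=np.zeros_like(totals), where=counts > 0)
        # `image` (space on one axis, time on the other) is the CNN input.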

  6. Imaging-based enrichment criteria using deep learning algorithms for efficient clinical trials in mild cognitive impairment.

    Science.gov (United States)

    Ithapu, Vamsi K; Singh, Vikas; Okonkwo, Ozioma C; Chappell, Richard J; Dowling, N Maritza; Johnson, Sterling C

    2015-12-01

    The mild cognitive impairment (MCI) stage of Alzheimer's disease (AD) may be optimal for clinical trials to test potential treatments for preventing or delaying decline to dementia. However, MCI is heterogeneous in that not all cases progress to dementia within the time frame of a trial, and some may not have underlying AD pathology. Identifying those individuals with MCI who are most likely to decline during a trial, and thus most likely to benefit from treatment, will improve trial efficiency and power to detect treatment effects. To this end, using multimodal, imaging-derived inclusion criteria may be especially beneficial. Here, we present a novel multimodal imaging marker that predicts future cognitive and neural decline from [F-18]fluorodeoxyglucose positron emission tomography (PET), amyloid florbetapir PET, and structural magnetic resonance imaging, based on a new deep learning algorithm (randomized denoising autoencoder marker, rDAm). Using ADNI2 MCI data, we show that using rDAm as a trial enrichment criterion reduces the required sample estimates by at least five times compared with the no-enrichment regime and leads to smaller trials with high statistical power, compared with existing methods. Copyright © 2015 The Alzheimer's Association. Published by Elsevier Inc. All rights reserved.

  7. Nonlinear analysis and synthesis of video images using deep dynamic bottleneck neural networks for face recognition.

    Science.gov (United States)

    Moghadam, Saeed Montazeri; Seyyedsalehi, Seyyed Ali

    2018-05-31

    Nonlinear components extracted from deep structures of bottleneck neural networks exhibit a great ability to express the input space in a low-dimensional manifold. Sharing and combining the components boosts the capability of the neural networks to synthesize and interpolate new and imaginary data. This synthesis is possibly a simple model of imagination in the human brain, where the components are expressed in a nonlinear low-dimensional manifold. The current paper introduces a novel Dynamic Deep Bottleneck Neural Network to analyze and extract three main features of videos regarding the expression of emotions on the face. These main features are identity, emotion, and expression intensity, which lie in three different sub-manifolds of one nonlinear general manifold. The proposed model, enjoying the advantages of recurrent networks, was used to analyze the sequence and dynamics of information in videos. It is noteworthy that this model also has the potential to synthesize new videos showing variations of one specific emotion on the face of unknown subjects. Experiments on the discrimination and recognition ability of the extracted components showed that the proposed model has an average accuracy of 97.77% in the recognition of six prominent emotions (Fear, Surprise, Sadness, Anger, Disgust, and Happiness), and 78.17% accuracy in the recognition of intensity. The produced videos revealed variations from neutral to the apex of an emotion on the face of the unfamiliar test subject, with an average similarity of 0.8 to the reference videos on the SSIM scale. Copyright © 2018 Elsevier Ltd. All rights reserved.

  8. Comparison of different deep learning approaches for parotid gland segmentation from CT images

    Science.gov (United States)

    Hänsch, Annika; Schwier, Michael; Gass, Tobias; Morgas, Tomasz; Haas, Benjamin; Klein, Jan; Hahn, Horst K.

    2018-02-01

    The segmentation of target structures and organs at risk is a crucial and very time-consuming step in radiotherapy planning. Good automatic methods can significantly reduce the time clinicians have to spend on this task. Due to its variability in shape and often low contrast to surrounding structures, segmentation of the parotid gland is especially challenging. Motivated by the recent success of deep learning, we study different deep learning approaches for parotid gland segmentation. In particular, we compare 2D, 2D ensemble, and 3D U-Net approaches and find that the 2D U-Net ensemble yields the best results, with a mean Dice score of 0.817 on our test data. The ensemble approach reduces false positives without the need for an automatic region-of-interest detection. We also apply our trained 2D U-Net ensemble to segment the test data of the 2015 MICCAI head and neck auto-segmentation challenge. With a mean Dice score of 0.861, our classifier exceeds the highest mean score in the challenge. This shows that the method generalizes well to data from independent sites. Since appropriate reference annotations are essential for training but often difficult and expensive to obtain, it is important to know how many samples are needed to properly train a neural network. We evaluate the classifier performance after training with differently sized training sets (50-450) and find that 250 cases (without using extensive data augmentation) are sufficient to obtain good results with the 2D ensemble. Adding more samples does not significantly improve the Dice score of the segmentations.
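
    A minimal numpy sketch of the ensembling step reported above: softmax probability maps from several independently trained 2D U-Nets are averaged before the argmax, which is what suppresses the false positives of any single model. The shapes and five-member ensemble are illustrative.

        import numpy as np

        def ensemble_segment(prob_maps):
            # prob_maps: list of (n_classes, H, W) softmax outputs, one per U-Net.
            mean_probs = np.mean(np.stack(prob_maps), axis=0)
            return np.argmax(mean_probs, axis=0)   # (H, W) label map

        maps = [np.random.dirichlet(np.ones(2), size=(64, 64)).transpose(2, 0, 1)
                for _ in range(5)]                 # five toy ensemble members
        labels = ensemble_segment(maps)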

  9. Deep and optically resolved imaging through scattering media by space-reversed propagation.

    Science.gov (United States)

    Glastre, W; Jacquin, O; Hugon, O; Guillet de Chatellus, H; Lacot, E

    2012-12-01

    We propose a novel microscopy technique to overcome the effects of both scattering and the limitation of the accessible depth imposed by the objective working distance. By combining laser optical feedback imaging with acoustic photon tagging and synthetic aperture refocusing, we demonstrate ultimate shot-noise sensitivity at low power (required to preserve the tissues) and high resolution beyond the microscope working distance. More precisely, with a laser power of 10 mW, we obtain images with micrometric resolution over approximately eight transport mean free paths, corresponding to 1.3 times the microscope working distance. Various applications, such as biomedical diagnosis and the research and development of new drugs and therapies, can benefit from our imaging setup.

  10. AUTOMATED DETECTION OF MITOTIC FIGURES IN BREAST CANCER HISTOPATHOLOGY IMAGES USING GABOR FEATURES AND DEEP NEURAL NETWORKS

    Directory of Open Access Journals (Sweden)

    Maqlin Paramanandam

    2016-11-01

    Full Text Available The count of mitotic figures in breast cancer histopathology slides is the most significant independent prognostic factor, enabling determination of the proliferative activity of the tumor. In spite of the strict protocols followed, mitotic counting is a laborious task that suffers from subjectivity and a considerable amount of observer variability. Interest in automated detection of mitotic figures has been rekindled with the advent of whole slide scanners; mitotic detection grand challenge contests have been held in recent years, and several research methodologies have been developed by their participants. This paper proposes an efficient mitotic detection methodology for Hematoxylin and Eosin stained breast cancer histopathology images using Gabor features and a Deep Belief Network-Deep Neural Network (DBN-DNN) architecture. The proposed method has been evaluated on breast histopathology images from the publicly available dataset of the MITOS contest held at the ICPR 2012 conference, which contains 226 mitoses annotated on 35 HPFs by several pathologists, plus 15 testing HPFs; it yielded an F-measure of 0.74. In addition, the methodology was tested on 3 slides from the MITOS-ATYPIA grand challenge held at the ICPR 2014 conference, an extension of MITOS containing 749 mitoses annotated on 1200 HPFs by pathologists worldwide. This study employed 3 slides (294 HPFs) from the MITOS-ATYPIA training dataset in its evaluation, and the results showed F-measures of 0.65, 0.72, and 0.74 for the three slides. The proposed method is fast and computationally simple, yet its accuracy and specificity are comparable to those of the best winning methods of the aforementioned grand challenges.
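
    A minimal sketch (OpenCV assumed) of the Gabor feature-extraction stage: a small bank of oriented Gabor kernels is convolved with a grayscale patch, and simple response statistics form the per-patch descriptor handed to the classifier. The kernel parameters and patch are illustrative stand-ins, not the paper's settings.

        import cv2
        import numpy as np

        patch = np.random.rand(64, 64).astype(np.float32)  # stand-in image patch
        features = []
        for theta in np.arange(0, np.pi, np.pi / 4):       # 4 orientations
            kernel = cv2.getGaborKernel(ksize=(15, 15), sigma=3.0, theta=theta,
                                        lambd=8.0, gamma=0.5)
            response = cv2.filter2D(patch, cv2.CV_32F, kernel)
            # Keep mean and variance of each oriented response as features.
            features += [float(response.mean()), float(response.var())]
        print(features)  # 8-dimensional Gabor descriptor for this patch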

  11. The stellar content of the halo of NGC 5907 from deep Hubble Space Telescope NICMOS imaging

    NARCIS (Netherlands)

    Zepf, SE; Liu, MC; Marleau, FR; Sackett, PD; Graham

    We present H-band images obtained with the Near Infrared Camera and Multi-Object Spectrometer (NICMOS) of a field 75 " (5 kpc) above the plane of the disk of the edge-on spiral galaxy NGC 5907. Ground-based observations have shown that NGC 5907 has a luminous halo with a shallow radial profile

  12. Image analysis of seafloor photographs for estimation of deep-sea minerals

    Digital Repository Service at National Institute of Oceanography (India)

    Sharma, R.; Jaisankar, S.; Samanta, S.; Sardar, A.A.; Gracias, D.G.

    of these minerals, necessitating the involvement of a user input. A method has been developed whereby spectral signatures of different features are identified using a software ‘trained’ by a user, and the images are digitized for coverage estimation of nodules...

  13. A Noninvasive Imaging Approach to Understanding Speech Changes following Deep Brain Stimulation in Parkinson's Disease

    Science.gov (United States)

    Narayana, Shalini; Jacks, Adam; Robin, Donald A.; Poizner, Howard; Zhang, Wei; Franklin, Crystal; Liotti, Mario; Vogel, Deanie; Fox, Peter T.

    2009-01-01

    Purpose: To explore the use of noninvasive functional imaging and "virtual" lesion techniques to study the neural mechanisms underlying motor speech disorders in Parkinson's disease. Here, we report the use of positron emission tomography (PET) and transcranial magnetic stimulation (TMS) to explain exacerbated speech impairment following…

  14. Zero-Echo-Time and Dixon Deep Pseudo-CT (ZeDD CT): Direct Generation of Pseudo-CT Images for Pelvic PET/MRI Attenuation Correction Using Deep Convolutional Neural Networks with Multiparametric MRI.

    Science.gov (United States)

    Leynes, Andrew P; Yang, Jaewon; Wiesinger, Florian; Kaushik, Sandeep S; Shanbhag, Dattesh D; Seo, Youngho; Hope, Thomas A; Larson, Peder E Z

    2018-05-01

    Accurate quantification of uptake on PET images depends on accurate attenuation correction in reconstruction. Current MR-based attenuation correction methods for body PET use a fat and water map derived from a 2-echo Dixon MRI sequence in which bone is neglected. Ultrashort-echo-time or zero-echo-time (ZTE) pulse sequences can capture bone information. We propose the use of patient-specific multiparametric MRI consisting of Dixon MRI and proton-density-weighted ZTE MRI to directly synthesize pseudo-CT images with a deep learning model: we call this method ZTE and Dixon deep pseudo-CT (ZeDD CT). Methods: Twenty-six patients were scanned using an integrated 3-T time-of-flight PET/MRI system. Helical CT images of the patients were acquired separately. A deep convolutional neural network was trained to transform ZTE and Dixon MR images into pseudo-CT images. Ten patients were used for model training, and 16 patients were used for evaluation. Bone and soft-tissue lesions were identified, and the SUVmax was measured. The root-mean-squared error (RMSE) was used to compare the MR-based attenuation correction with the ground-truth CT attenuation correction. Results: In total, 30 bone lesions and 60 soft-tissue lesions were evaluated. The RMSE in PET quantification was reduced by a factor of 4 for bone lesions (10.24% for Dixon PET and 2.68% for ZeDD PET) and by a factor of 1.5 for soft-tissue lesions (6.24% for Dixon PET and 4.07% for ZeDD PET). Conclusion: ZeDD CT produces natural-looking and quantitatively accurate pseudo-CT images and reduces error in pelvic PET/MRI attenuation correction compared with standard methods. © 2018 by the Society of Nuclear Medicine and Molecular Imaging.

  15. Research into the effects of seawater velocity variation on migration imaging in deep-water geology

    Directory of Open Access Journals (Sweden)

    Hui Sun

    2016-07-01

    Full Text Available This paper addresses the poor quality of migration imaging in deep water. Starting from the influence that velocity-model accuracy has on migration, it studies the effect that a variable seawater velocity has on migration results. First, a variable seawater velocity governed by temperature, pressure, and salinity is defined to replace a constant seawater velocity. Then the influence of variable seawater velocity on the imaged interface location, layer thickness, and migration energy focusing is analyzed theoretically. Finally, three models are constructed: a deep-water layered medium containing a variable seawater velocity, a syncline wedge-shape model, and a complex-seafloor velocity model. By changing the seawater velocity of each model and comparing the migration results of the constant and variable seawater-velocity models, we draw the following conclusion: in deep water, the impact of variable seawater velocity on the quality of seismic migration is significant; it can shift the imaged position of geologic bodies, degrade the resolution of geologic interfaces in the migration section, and may even cause migration artifacts.

  16. Image-guided preoperative prediction of pyramidal tract side effect in deep brain stimulation

    Science.gov (United States)

    Baumgarten, C.; Zhao, Y.; Sauleau, P.; Malrain, C.; Jannin, P.; Haegelen, C.

    2016-03-01

    Deep brain stimulation of the medial globus pallidus is a surgical procedure for treating patients suffering from Parkinson's disease. Its therapeutic effect may be limited by the presence of the pyramidal tract side effect (PTSE), a contraction time-locked to the stimulation that occurs when the spreading current reaches the motor fibers of the pyramidal tract within the internal capsule. The lack of a predictive model for this side effect forces the neurologist to secure an optimal electrode placement by iterating clinical testing on an awake patient during the surgical procedure. The objective of the study was to propose a preoperative predictive model of PTSE. A machine learning based method called PyMAN (Pyramidal tract side effect Model based on Artificial Neural network), which accounts for the stimulation current, the 3D electrode coordinates, and the angle of the trajectory, was designed to predict the occurrence of PTSE. Ten patients implanted in the medial globus pallidus were tested by a clinician to create a labeled dataset of the stimulation parameters that trigger PTSE. The kappa index value between the data predicted by PyMAN and the labeled data was 0.78. Further evaluation studies are desirable to confirm whether PyMAN could be a reliable tool for assisting the surgeon in preventing PTSE during preoperative planning.
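
    In the spirit of PyMAN, a minimal sketch (scikit-learn assumed) of a small neural network mapping stimulation current, 3D electrode coordinates, and trajectory angle to a binary PTSE label; the data are random stand-ins, not patient data, and the layer sizes are arbitrary.

        import numpy as np
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(1)
        X = np.column_stack([rng.uniform(0.5, 5.0, 200),   # stimulation current (mA)
                             rng.normal(size=(200, 3)),    # x, y, z coordinates
                             rng.uniform(0, 45, 200)])     # trajectory angle (deg)
        y = rng.integers(0, 2, 200)                        # PTSE observed or not

        clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000,
                            random_state=1).fit(X, y)
        print(clf.predict_proba(X[:3]))  # PTSE probability per parameter setting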

  17. Star formation history of the Galactic bulge from deep HST imaging of low reddening windows

    Science.gov (United States)

    Bernard, Edouard J.; Schultheis, Mathias; Di Matteo, Paola; Hill, Vanessa; Haywood, Misha; Calamida, Annalisa

    2018-04-01

    Despite the huge amount of photometric and spectroscopic effort targeting the Galactic bulge over the past few years, its age distribution remains controversial, owing to both the complexity of determining the ages of individual stars and the difficult observing conditions. Taking advantage of the recent release of very deep, proper-motion-cleaned colour-magnitude diagrams (CMDs) of four low-reddening windows obtained with the Hubble Space Telescope (HST), we used the CMD-fitting technique to calculate the star formation history (SFH) of the bulge at -2° > b > -4° along the minor axis. We find that over 80 percent of the stars formed before 8 Gyr ago, but that a significant fraction of the super-solar-metallicity stars are younger than this age. Considering only the stars that are within reach of the current generation of spectrographs (i.e. V ≲ 21), we find that 10 percent of the bulge stars are younger than 5 Gyr, while this fraction rises to 20-25 percent in the metal-rich peak. The age-metallicity relation is well parametrized by a linear fit implying an enrichment rate of dZ/dt ∼ 0.005 Gyr⁻¹. Our metallicity distribution function accurately reproduces that observed by several spectroscopic surveys of Baade's window, with the bulk of stars having metal content in the range [Fe/H] ∼ -0.7 to ∼ +0.6, along with a sparse tail to much lower metallicities.

  18. Beauty is only photoshop deep: legislating models' BMIs and photoshopping images.

    Science.gov (United States)

    Krawitz, Marilyn

    2014-06-01

    Many women struggle with poor body image and eating disorders due, in part, to images of very thin women and photoshopped bodies in the media and advertisements. In 2013, Israel's Act Limiting Weight in the Modelling Industry, 5772-2012, came into effect. Known as the Photoshop Law, it requires all models in Israel who are over 18 years old to have a body mass index of 18.5 or higher. The Israeli government was the first government in the world to legislate on this issue. Australia has a voluntary Code of Conduct that is similar to the Photoshop Law. This article argues that the Australian government should follow Israel's lead and pass a law similar to the Photoshop Law because the Code is not sufficiently binding.

  19. Deep pelvic endometriosis: Limited additional diagnostic value of postcontrast in comparison with conventional MR images

    International Nuclear Information System (INIS)

    Bazot, Marc; Gasner, Adeline; Lafont, Clarisse; Ballester, Marcos; Daraï, Emile

    2011-01-01

    Objectives: To determine the value of postcontrast MR imaging (MRI) in cases of suspected pelvic endometriosis by assessing the interobserver variability of MR imaging according to the endometriotic location. Methods: This retrospective study included 158 patients with clinical suspicion of endometriosis who had undergone surgery after MRI between January 2004 and April 2009. Three radiologists with different degrees of experience were independently asked to determine the presence of rectosigmoid colon, vaginal, and bladder endometriosis using both conventional MRI and a combination of conventional and postcontrast MRI. Descriptive analysis, ROC analysis and interobserver agreements (kappa values) were calculated. Results: Rectosigmoid colon, vaginal, and bladder endometriosis was present in 65, 39 and eight patients, respectively. The accuracy of conventional assessment for readers 1, 2, and 3 for rectosigmoid colon, vaginal and bladder endometriosis was 77.2%, 74.1% and 96.8%, and 73.4%, 76.6% and 98.7%, and 86.1%, 88.6% and 99.4%, respectively. The accuracy of combined conventional and postcontrast MR images for readers 1, 2, and 3 for rectosigmoid colon, vaginal and bladder endometriosis was 77.8%, 78.5% and 98.1%, and 83.5%, 83.5% and 99.4%, and 87.3%, 89.2% and 99.4%, respectively. Conclusions: Interobserver agreement using conventional MRI alone is excellent for the diagnosis of deep pelvic endometriosis. No significant benefit of intravenous gadolinium, rectal or vaginal administration has been demonstrated.

  20. Photometric redshift estimation via deep learning. Generalized and pre-classification-less, image based, fully probabilistic redshifts

    Science.gov (United States)

    D'Isanto, A.; Polsterer, K. L.

    2018-01-01

    Context. The need to analyze the available large synoptic multi-band surveys drives the development of new data-analysis methods. Photometric redshift estimation is one field of application where such new methods have improved the results substantially. Up to now, the vast majority of applied redshift estimation methods have utilized photometric features. Aims: We aim to develop a method to derive probabilistic photometric redshifts directly from multi-band imaging data, rendering pre-classification of objects and feature extraction obsolete. Methods: A modified version of a deep convolutional network was combined with a mixture density network. The estimates are expressed as Gaussian mixture models representing the probability density functions (PDFs) in redshift space. In addition to the traditional scores, the continuous ranked probability score (CRPS) and the probability integral transform (PIT) were applied as performance criteria. We adopted a feature-based random forest and a plain mixture density network to compare performances in experiments with data from SDSS (DR9). Results: We show that the proposed method is able to predict redshift PDFs independently of the type of source, for example galaxies, quasars or stars. Thereby the prediction performance is better than both presented reference methods and is comparable to results from the literature. Conclusions: The presented method is extremely general and allows us to solve any kind of probabilistic regression problem based on imaging data, for example estimating the metallicity or star formation rate of galaxies. This kind of methodology is tremendously important for the next generation of surveys.
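
    A minimal numpy sketch of the output representation described above: the network emits weights, means, and sigmas of a Gaussian mixture, and the redshift PDF is evaluated on a grid. The mixture parameters here are invented for illustration.

        import numpy as np

        def gmm_pdf(z, weights, means, sigmas):
            # Weighted sum of Gaussian components evaluated on the grid z.
            comps = (weights / (np.sqrt(2 * np.pi) * sigmas)
                     * np.exp(-0.5 * ((z[:, None] - means) / sigmas) ** 2))
            return comps.sum(axis=1)

        z_grid = np.linspace(0.0, 2.0, 500)
        pdf = gmm_pdf(z_grid, weights=np.array([0.7, 0.3]),
                      means=np.array([0.45, 0.60]), sigmas=np.array([0.05, 0.15]))
        z_mode = z_grid[np.argmax(pdf)]   # point estimate, if one is needed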

  1. CANDELS : THE COSMIC ASSEMBLY NEAR-INFRARED DEEP EXTRAGALACTIC LEGACY SURVEY-THE HUBBLE SPACE TELESCOPE OBSERVATIONS, IMAGING DATA PRODUCTS, AND MOSAICS

    NARCIS (Netherlands)

    Koekemoer, Anton M.; Faber, S. M.; Ferguson, Henry C.; Grogin, Norman A.; Kocevski, Dale D.; Koo, David C.; Lai, Kamson; Lotz, Jennifer M.; Lucas, Ray A.; McGrath, Elizabeth J.; Ogaz, Sara; Rajan, Abhijith; Riess, Adam G.; Rodney, Steve A.; Strolger, Louis; Casertano, Stefano; Castellano, Marco; Dahlen, Tomas; Dickinson, Mark; Dolch, Timothy; Fontana, Adriano; Giavalisco, Mauro; Grazian, Andrea; Guo, Yicheng; Hathi, Nimish P.; Huang, Kuang-Han; van der Wel, Arjen; Yan, Hao-Jing; Acquaviva, Viviana; Alexander, David M.; Almaini, Omar; Ashby, Matthew L. N.; Barden, Marco; Bell, Eric F.; Bournaud, Frederic; Brown, Thomas M.; Caputi, Karina I.; Cassata, Paolo; Challis, Peter J.; Chary, Ranga-Ram; Cheung, Edmond; Cirasuolo, Michele; Conselice, Christopher J.; Cooray, Asantha Roshan; Croton, Darren J.; Daddi, Emanuele; Dave, Romeel; de Mello, Duilia F.; de Ravel, Loic; Dekel, Avishai; Donley, Jennifer L.; Dunlop, James S.; Dutton, Aaron A.; Elbaz, David; Fazio, Giovanni G.; Filippenko, Alexei V.; Finkelstein, Steven L.; Frazer, Chris; Gardner, Jonathan P.; Garnavich, Peter M.; Gawiser, Eric; Gruetzbauch, Ruth; Hartley, Will G.; Haeussler, Boris; Herrington, Jessica; Hopkins, Philip F.; Huang, Jia-Sheng; Jha, Saurabh W.; Johnson, Andrew; Kartaltepe, Jeyhan S.; Khostovan, Ali A.; Kirshner, Robert P.; Lani, Caterina; Lee, Kyoung-Soo; Li, Weidong; Madau, Piero; McCarthy, Patrick J.; McIntosh, Daniel H.; McLure, Ross J.; McPartland, Conor; Mobasher, Bahram; Moreira, Heidi; Mortlock, Alice; Moustakas, Leonidas A.; Mozena, Mark; Nandra, Kirpal; Newman, Jeffrey A.; Nielsen, Jennifer L.; Niemi, Sami; Noeske, Kai G.; Papovich, Casey J.; Pentericci, Laura; Pope, Alexandra; Primack, Joel R.; Ravindranath, Swara; Reddy, Naveen A.; Renzini, Alvio; Rix, Hans-Walter; Robaina, Aday R.; Rosario, David J.; Rosati, Piero; Salimbeni, Sara; Scarlata, Claudia; Siana, Brian; Simard, Luc; Smidt, Joseph; Snyder, Diana; Somerville, Rachel S.; Spinrad, Hyron; Straughn, Amber N.; Telford, Olivia; Teplitz, Harry I.; Trump, Jonathan R.; Vargas, Carlos; Villforth, Carolin; Wagner, Cory R.; Wandro, Pat; Wechsler, Risa H.; Weiner, Benjamin J.; Wiklind, Tommy; Wild, Vivienne; Wilson, Grant; Wuyts, Stijn; Yun, Min S.

    2011-01-01

    This paper describes the Hubble Space Telescope imaging data products and data reduction procedures for the Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey (CANDELS). This survey is designed to document the evolution of galaxies and black holes at z approximate to 1.5-8, and to study

  2. Deep feature representation with stacked sparse auto-encoder and convolutional neural network for hyperspectral imaging-based detection of cucumber defects

    Science.gov (United States)

    It is challenging to achieve rapid and accurate processing of large amounts of hyperspectral image data. This research aimed to develop a novel classification method employing deep feature representation with a stacked sparse auto-encoder (SSAE) and with the SSAE combined with a convolutional neural network (CNN).

  3. High-resolution and Deep Crustal Imaging Across The North Sicily Continental Margin (southern Tyrrhenian Sea)

    Science.gov (United States)

    Agate, M.; Bertotti, G.; Catalano, R.; Pepe, F.; Sulli, A.

    Three multichannel seismic reflection profiles across the North Sicily continental margin have been reprocessed and interpreted. Data consist of an unpublished high-penetration seismic profile (deep crust Italian CROP Project) and a high-resolution seismic line. These lines run in the NNE-SSW direction, from the Sicilian continental shelf to the Tyrrhenian abyssal plain (Marsili area), and are tied by a third, high-penetration seismic line, MS104, crossing the Sisifo High. The North Sicily continental margin represents the inner sector of the Sicilian-Maghrebian chain that has collapsed as a consequence of extensional tectonics. The chain is formed by a tectonic wedge (12-15 km thick) that includes basinal Meso-Cenozoic carbonate units overthrusting carbonate platform rock units (Catalano et al., 2000). Presently, a main culmination (e.g. Monte Solunto) and a number of tectonic depressions (e.g. Cefalù basin), filled by a >1000 m thick Plio-Pleistocene sedimentary wedge, are observed along the investigated transect. Seismic attributes and reflector patterns depict a complex crustal structure. Between the coast and the M. Solunto high, a transparent to diffractive band (assigned to the upper crust) is recognised above low-frequency reflective layers (occurring between 9 and 11 s/TWT) that dip towards the north. Their bottom can be correlated to the seismological (African?) Moho discontinuity, which is about 26 km deep in the Sicilian shelf (Scarascia et al., 1994). Beneath the Monte Solunto ridge, strongly deformed reflectors occurring between 8 and 9.5 s/TWT (European lower crust?) overlie the African (?) lower crust. The resulting geometry suggests underplating of the African crust with respect to the European crust (?). The already deformed crustal edifice is dissected by a number of N-dipping normal faults that open extensional basins and are associated with crustal thinning. The Plio-Pleistocene fill of the Cefalù basin can be subdivided into three subunits by

  4. Evaluation of agreement between transvaginal ultrasonography and magnetic resonance imaging of the pelvis in deep endometriosis with emphasis on intestinal involvement

    International Nuclear Information System (INIS)

    Cardoso, Maene Marcondes; Coutinho Junior, Antonio Carlos; Domingues, Marisa Nassar Aidar; Werner Junior, Heron

    2009-01-01

    Objective: To compare sonographic and magnetic resonance imaging findings in deep endometriosis, with emphasis on intestinal involvement. Materials and methods: Eighteen women aged between 23 and 49 years, with clinical suspicion and gynecological signs suggestive of deep endometriosis, underwent ultrasonography and magnetic resonance imaging for correlation between findings. Results: Ultrasonography detected 40 lesions, while magnetic resonance imaging detected 53 lesions in the pelvis. A comparative study did not show any statistically significant intermethod difference in the detection of lesions (p > 0.19 and p > 0.14, respectively). In the rectosigmoid junction, magnetic resonance imaging detected one lesion (5.6%), while ultrasonography detected four lesions (22.2%). In the rectum, ultrasonography detected eight lesions (44.4%), and magnetic resonance imaging detected seven lesions (38.9%). Conclusion: The intermethod agreement was not good for lesions in the rectosigmoid junction, considering that ultrasonography detected a higher number of lesions in this region, but a lower number of lesions in the pelvis as compared with magnetic resonance imaging. The global comparative analysis demonstrated no statistically significant intermethod difference in the detection of lesions. Low cost, good tolerability and high availability make ultrasonography a valuable diagnostic tool in cases of deep endometriosis. (author)

  5. Tomographic imaging of 12 fracture samples selected from Olkiluoto deep drillholes

    International Nuclear Information System (INIS)

    Kuva, J.; Voutilainen, M.; Timonen, J.; Aaltonen, I.

    2010-06-01

    Rock samples from Olkiluoto were imaged with X-ray tomography to analyze the distribution of mineral components and the alteration of rock around different fracture types. Twelve samples containing three types of fractures were analyzed, and each sample was scanned at two different resolutions. Three-dimensional reconstructions of the samples, with four or five distinct mineral components, displayed changes in the mineral distribution around previously water-conducting fractures, extending to a depth of several millimeters from the fracture surfaces. In addition, the structure of fracture-filling minerals is depicted. (orig.)

  6. Deep imaging: how much of the proteome does current top-down technology already resolve?

    Directory of Open Access Journals (Sweden)

    Elise P Wright

    Full Text Available Effective proteome analyses are based on the interplay between resolution and detection. It had been claimed that resolution was the main factor limiting the use of two-dimensional gel electrophoresis; improved protein detection now indicates that this is unlikely to be the case. Using a highly refined protocol, the rat brain proteome was extracted, resolved, and detected. In order to overcome the stain saturation threshold, high-abundance protein species were excised from the gel following standard imaging. Gels were then imaged again using longer exposure times, enabling detection of lower-abundance, less intensely stained protein species. This resulted in a significant enhancement in the detection of resolved proteins, and a slightly modified digestion protocol enabled effective identification by standard mass spectrometric methods. The data indicate that the resolution required for comprehensive proteome analyses is already available; the approach can assess multiple samples in parallel and preserves critical information concerning post-translational modifications. Further optimization of staining and detection methods promises additional improvements to this economical, widely accessible and effective top-down approach to proteome analysis.

  7. A deep optical imaging study of the nebular remnants of classical novae

    Science.gov (United States)

    Slavin, A. J.; O'Brien, T. J.; Dunlop, J. S.

    1995-09-01

    An optical imaging study of old nova remnants has revealed previously unobserved features in the shells of 13 classical novae - DQ Her, FH Ser, HR Del, GK Per, V1500 Cyg, T Aur, V533 Her, NQ Vul, V476 Cyg, DK Lac, LV Vul, RW UMi and V450 Cyg. These data indicate a possible correlation between nova speed class and the ellipticity of the resulting remnants - those of faster novae tend to comprise randomly distributed clumps of ejecta superposed on spherically symmetric diffuse material, whilst slower novae produce more structured ellipsoidal remnants with at least one and sometimes several rings of enhanced emission. By measuring the extent of the resolved shells and combining this information with previously published ejection speeds, we use expansion parallax to estimate distances for the 13 novae. Whilst we are able to deduce new information about every nova, it is notable that these observations include the first detections of shells around the old novae V450 Cyg and NQ Vul, and that velocity-resolved images of FH Ser and DQ Her have enabled us to estimate their orbital inclinations. Our observations of DQ Her also show that the main ellipsoidal shell is constricted by three rings and surrounded by a faint halo; this halo contains long tails extending outwards from bright knots, perhaps indicating that during or after outburst a fast inner wind has broken through the fractured principal shell.

  8. Comparison of Laser Doppler Imaging (LDI) and clinical assessment in differentiating between superficial and deep partial thickness burn wounds.

    Science.gov (United States)

    Jan, Saadia Nosheen; Khan, Farid Ahmed; Bashir, Muhammad Mustehsan; Nasir, Muneeb; Ansari, Hamid Hussain; Shami, Hussan Birkhez; Nazir, Umer; Hanif, Asif; Sohail, Muhammad

    2018-03-01

    To compare the accuracy of Laser Doppler Imaging (LDI) and clinical assessment in differentiating between superficial and deep partial thickness burns, in order to decide whether early tangential excision and grafting or conservative management should be employed to optimize burn and patient management. March 2015 to November 2016. Ninety-two wounds in 34 patients reporting within 5 days of burns covering less than 40% of the body surface area were included. Unstable patients, pregnant females and those who expired were excluded. The wounds were clinically assessed and LDI performed concomitantly by plastic surgeons blinded to each other's findings. Wound appearance, color, blanching, pain, and hair follicle dislodgement were the clinical parameters that distinguished between superficial and deep partial thickness burns. On day 21, the wounds were again assessed for the presence of healing by the same plastic surgeons. The findings were correlated with the initial findings on LDI and clinical assessment, and the results statistically analyzed. The data of 92 burn wounds were analyzed using SPSS (ver. 17). Clinical assessment correctly identified the depth of 75 wounds and LDI of 83, giving diagnostic accuracies of 81.52% and 90.21%, respectively. The sensitivity of clinical assessment was 81% and of LDI 92.75%, whereas the specificity was 82% for both. The positive predictive value was 93% for clinical assessment and 94% for LDI, while the negative predictive value was 59% and 79%, respectively. The predictive accuracy of LDI was found to be better than that of clinical assessment in the prediction of wound healing, with healing by day 21 as the gold standard. As such, LDI can prove to be a reliable, viable and cost-effective alternative to clinical assessment. Copyright © 2017 Elsevier Ltd and ISBI. All rights reserved.
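
    The figures reported above all derive from a 2x2 confusion table. A minimal plain-Python sketch of the relationships, with counts chosen so the outputs approximately reproduce the LDI figures quoted above:

        def diagnostic_metrics(tp, fp, tn, fn):
            # Standard diagnostic-test statistics from a 2x2 confusion table.
            return {
                "accuracy":    (tp + tn) / (tp + fp + tn + fn),
                "sensitivity": tp / (tp + fn),
                "specificity": tn / (tn + fp),
                "ppv":         tp / (tp + fp),
                "npv":         tn / (tn + fn),
            }

        # 92 wounds total; 83 depths correctly identified by LDI.
        print(diagnostic_metrics(tp=64, fp=4, tn=19, fn=5))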

  9. LYMAN BREAK GALAXIES AT z ∼ 1.8-2.8: GALEX/NUV IMAGING OF THE SUBARU DEEP FIELD

    International Nuclear Information System (INIS)

    Ly, Chun; Malkan, Matthew A.; Woo, Jong-Hak; Treu, Tommaso; Currie, Thayne; Hayashi, Masao; Shimasaku, Kazuhiro; Yoshida, Makiko; Kashikawa, Nobunari; Motohara, Kentaro

    2009-01-01

    A photometric sample of ∼8000 Lyman break galaxy (LBG) candidates was constructed by combining BVR_C i'z' optical data with deep GALEX/NUV imaging of the Subaru Deep Field. Follow-up spectroscopy confirmed 24 LBGs at 1.5 ≲ z ≲ 2.7. Among the optical spectra, 12 have Lyα emission with rest-frame equivalent widths of ∼5-60 Å. The success rate for identifying LBGs as NUV-dropouts at 1.5 < z < 2.7 is 86%. The rest-frame UV (1700 Å) luminosity function (LF) is constructed from the photometric sample with corrections for stellar contamination and z < 1.5 interlopers (lower limits). The LF is 1.7 ± 0.1 (1.4 ± 0.1 with a hard upper limit on stellar contamination) times higher than those of z ∼ 2 BXs and z ∼ 3 LBGs. Three explanations were considered, and it is argued that significantly underestimating low-z contamination or the effective comoving volume is unlikely: the former would be inconsistent with the spectroscopic sample at 93% confidence, and the second explanation would not resolve the discrepancy. The third scenario is that the different photometric selection of the samples yields nonidentical galaxy populations, such that some BX galaxies are LBGs and vice versa. This argument is supported by a higher surface density of LBGs at all magnitudes, while the redshift distribution of the two populations is nearly identical. This study, when combined with other star formation rate (SFR) density measurements from UV-selected LBG surveys, indicates that there is a rise in the SFR density: a factor of 3-6 (3-10) increase from z ∼ 5 (z ∼ 6) to z ∼ 2, followed by a decrease to z ∼ 0. This result, along with past sub-mm studies that find a peak at z ∼ 2 in their redshift distribution, suggests that z ∼ 2 is the epoch of peak star formation.

  10. Deep Convolutional Neural Networks for Endotracheal Tube Position and X-ray Image Classification: Challenges and Opportunities.

    Science.gov (United States)

    Lakhani, Paras

    2017-08-01

    The goal of this study is to evaluate the efficacy of deep convolutional neural networks (DCNNs) in differentiating subtle, intermediate, and more obvious image differences in radiography. Three different datasets were created, which included presence/absence of the endotracheal (ET) tube (n = 300), low/normal position of the ET tube (n = 300), and chest/abdominal radiographs (n = 120). The datasets were split into training, validation, and test sets. Both untrained and pre-trained deep neural networks were employed, including AlexNet and GoogLeNet classifiers, using the Caffe framework. Data augmentation was performed for the presence/absence and low/normal ET tube datasets. Receiver operating characteristic (ROC) curves, the area under the curve (AUC), and 95% confidence intervals were calculated. Statistical differences of the AUCs were determined using a non-parametric approach. The pre-trained AlexNet and GoogLeNet classifiers had perfect accuracy (AUC 1.00) in differentiating chest vs. abdominal radiographs, using only 45 training cases. For the more difficult datasets, including the presence/absence and low/normal position endotracheal tube datasets, more training cases, pre-trained networks, and data-augmentation approaches were helpful for increasing accuracy. The best-performing network for classifying presence vs. absence of an ET tube was still very accurate, with an AUC of 0.99. However, for the most difficult dataset, low vs. normal position of the endotracheal tube, DCNNs did not perform as well, but achieved a reasonable AUC of 0.81.
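    The pre-trained-network result is ordinary transfer learning: keep an ImageNet-trained backbone frozen and retrain a small classification head on the radiographs. The study used Caffe with AlexNet/GoogLeNet; the Keras sketch below, with its input size and single sigmoid output, is an assumed modern equivalent, not the authors' code:

    ```python
    import tensorflow as tf

    # Binary task, e.g. ET tube present vs. absent.
    base = tf.keras.applications.InceptionV3(      # GoogLeNet's descendant
        weights="imagenet", include_top=False,
        input_shape=(299, 299, 3), pooling="avg")
    base.trainable = False                         # train only the new head

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC()])
    # model.fit(train_images, train_labels, validation_data=...)
    ```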

  11. Design and implementation of optical imaging and sensor systems for characterization of deep-sea biological camouflage

    Science.gov (United States)

    Haag, Justin Mathew

    The visual ecology of deep-sea animals has long been of scientific interest. In the open ocean, where there is no physical structure to hide within or behind, diverse strategies have evolved to solve the problem of camouflage from a potential predator. Simulations of specific predator-prey scenarios have yielded estimates of the range of possible appearances that an animal may exhibit. However, there is a limited amount of quantitative information available related to both animal appearance and the light field at mesopelagic depths (200 m to 1000 m). To mitigate this problem, novel optical instrumentation, taking advantage of recent technological advances, was developed and is described in this dissertation. In the first half of this dissertation, the appearance of mirrored marine animals is quantitatively evaluated. A portable optical imaging scatterometer was developed to measure angular reflectance, described by the bidirectional reflectance distribution function (BRDF), of biological specimens. The instrument allows for BRDF capture from samples of arbitrary size, over a significant fraction of the reflectance hemisphere. Multiple specimens representing two species of marine animals, collected at mesopelagic depths, were characterized using the scatterometer. Low-dimensional parametric models were developed to simplify use of the data sets, and to validate the BRDF method. Results from principal component analysis confirm that BRDF measurements can be used to study intra- and interspecific variability of mirrored marine animal appearance. Collaborative efforts utilizing the BRDF data sets to develop physically-based scattering models are underway. In the second half of this dissertation, another key part of the deep-sea biological camouflage problem is examined. Two underwater radiometers, capable of low-light measurements, were developed to address the lack of available information related to the deep-sea light field. Quantitative comparison of spectral

  12. Imaging microbial metal metabolism in situ under conditions of the deep-sea hydrothermal vents

    Science.gov (United States)

    Oger, P. M.; Daniel, I.; Simionovici, A.; Picard, A.

    2006-12-01

    High-pressure biotopes are the most widely spread biotopes on Earth. They represent one possible location for the origin of life. They also share striking similarities with extraterrestrial biotopes such as those postulated for Europa or Mars. In the absence of light, dissimilatory reduction of metals (DMR) fuels the ecosystem. Monitoring the metabolism of the deep-sea hydrothermal vent microbial fauna under P, T and chemical conditions relevant to their isolation environment can be difficult because of the confinement and because most spectroscopic probes do not sense metallic ions in solution. We demonstrated the possibility of using X-ray spectroscopy to monitor the speciation of metallic species in solution. Experiments were performed at the ESRF using selenium (Se) detoxification by Agrobacterium tumefaciens as an analog of DMR. The reduction of Se from selenite to the metal was monitored by a combination of two X-ray spectroscopic techniques (XANES and μXRF). Cells were incubated in the low-pressure DAC in growth medium supplemented with 5 mM selenite and incubated under pressures up to 60 MPa at 30°C for 24 h. The evolution of the speciation can be easily monitored, and the concentration of each Se species determined from the X-ray spectra by linear combinations of standard spectra. Selenite is transformed by the bacterium into a mixture of metallic Se and methylated Se after 24 hours. Se detoxification is observed in situ up to at least 25 MPa. The technique, developed for Se, can be adapted to monitor other elements more relevant to DMR such as As, Fe or S, which should make it possible to monitor in situ, under controlled pressure and temperature, the metabolism of vent organisms. It is also amenable to the monitoring of toxic metals. X-ray spectroscopy and the lpDAC are compatible with other spectroscopic techniques, such as Raman, UV or IR spectroscopies, allowing other metabolic activities to be probed and hence enlarging the range of metabolic information that can be obtained in

  13. Cosmic Infrared Background Fluctuations in Deep Spitzer Infrared Array Camera Images: Data Processing and Analysis

    Science.gov (United States)

    Arendt, Richard; Kashlinsky, A.; Moseley, S.; Mather, J.

    2010-01-01

    This paper provides a detailed description of the data reduction and analysis procedures that have been employed in our previous studies of spatial fluctuation of the cosmic infrared background (CIB) using deep Spitzer Infrared Array Camera observations. The self-calibration we apply removes a strong instrumental signal from the fluctuations that would otherwise corrupt the results. The procedures and results for masking bright sources and modeling faint sources down to levels set by the instrumental noise are presented. Various tests are performed to demonstrate that the resulting power spectra of these fields are not dominated by instrumental or procedural effects. These tests indicate that the large-scale (≳30') fluctuations that remain in the deepest fields are not directly related to the galaxies that are bright enough to be individually detected. We provide the parameterization of these power spectra in terms of separate instrument noise, shot noise, and power-law components. We discuss the relationship between fluctuations measured at different wavelengths and depths, and the relations between constraints on the mean intensity of the CIB and its fluctuation spectrum. Consistent with growing evidence that the ∼1-5 μm mean intensity of the CIB may not be as far above the integrated emission of resolved galaxies as has been reported in some analyses of DIRBE and IRTS observations, our measurements of spatial fluctuations of the CIB intensity indicate the mean emission from the objects producing the fluctuations is quite low (≳1 nW m⁻² sr⁻¹ at 3-5 μm), and thus consistent with current γ-ray absorption constraints. The source of the fluctuations may be high-z Population III objects, or a more local component of very low luminosity objects with clustering properties that differ from the resolved galaxies. Finally, we discuss the prospects of the upcoming space-based surveys to directly measure the epochs

  14. The Robustness of Tomographically Imaged Broad Plumes in the Deep Mantle: Constraints on Mantle Dynamics

    Science.gov (United States)

    Romanowicz, B. A.; Jiménez-Pérez, H.; Adourian, S.; Karaoglu, H.; French, S.

    2016-12-01

    Existing global 3D shear wave velocity models of the Earth's mantle generally rely on simple ray-theoretical assumptions regarding seismic wave propagation through a heterogeneous medium, and/or consider a limited number of seismic observables, such as surface wave dispersion and/or travel times of body waves (such as P or S) that are well separated on seismograms. While these assumptions are appropriate for resolving long-wavelength structure, as evidenced by the good agreement at low degrees between models published in the last 10 years, it is well established that the assumption of ray theory limits the resolution of smaller-scale low-velocity structures. We recently developed a global radially anisotropic shear wave velocity model (SEMUCB_WM1; French and Romanowicz, 2014, 2015) based on time-domain full waveform inversion of 3-component seismograms, including surface waves and overtones down to 60 s period, as well as body waveforms down to 30 s. At each iteration, the forward wavefield is calculated using the Spectral Element Method (SEM), which ensures the accurate computation of the misfit function. Inversion is performed using a fast-converging Gauss-Newton formalism. The use of information from the entire seismogram, weighted according to energy arrivals, provides a unique illumination of the deep mantle, compensating for the uneven distribution of sources and stations. The most striking features of this model are the broad, vertically oriented plume-like conduits that extend from the core-mantle boundary to at least 1000 km depth in the vicinity of some 20 major hotspots located over the large low-shear-velocity provinces under the Pacific and Africa. We here present the results of various tests aimed at evaluating the robustness of these features. These include starting from a different initial model, to evaluate the effects of non-linearity in the inversion, as well as synthetic tests aimed at evaluating the recovery of plumes located in the middle of

  15. COSMIC INFRARED BACKGROUND FLUCTUATIONS IN DEEP SPITZER INFRARED ARRAY CAMERA IMAGES: DATA PROCESSING AND ANALYSIS

    International Nuclear Information System (INIS)

    Arendt, Richard G.; Kashlinsky, A.; Moseley, S. H.; Mather, J.

    2010-01-01

    This paper provides a detailed description of the data reduction and analysis procedures that have been employed in our previous studies of spatial fluctuation of the cosmic infrared background (CIB) using deep Spitzer Infrared Array Camera observations. The self-calibration we apply removes a strong instrumental signal from the fluctuations that would otherwise corrupt the results. The procedures and results for masking bright sources and modeling faint sources down to levels set by the instrumental noise are presented. Various tests are performed to demonstrate that the resulting power spectra of these fields are not dominated by instrumental or procedural effects. These tests indicate that the large-scale (≳30') fluctuations that remain in the deepest fields are not directly related to the galaxies that are bright enough to be individually detected. We provide the parameterization of these power spectra in terms of separate instrument noise, shot noise, and power-law components. We discuss the relationship between fluctuations measured at different wavelengths and depths, and the relations between constraints on the mean intensity of the CIB and its fluctuation spectrum. Consistent with growing evidence that the ∼1-5 μm mean intensity of the CIB may not be as far above the integrated emission of resolved galaxies as has been reported in some analyses of DIRBE and IRTS observations, our measurements of spatial fluctuations of the CIB intensity indicate the mean emission from the objects producing the fluctuations is quite low (≳1 nW m⁻² sr⁻¹ at 3-5 μm), and thus consistent with current γ-ray absorption constraints. The source of the fluctuations may be high-z Population III objects, or a more local component of very low luminosity objects with clustering properties that differ from the resolved galaxies. Finally, we discuss the prospects of the upcoming space-based surveys to directly measure the epochs inhabited by the populations producing these
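    The power-spectrum parameterization described here separates a flat (instrument plus shot) noise floor from a clustering power law. A toy fit, with made-up spatial frequencies and amplitudes standing in for the real IRAC measurements:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def model(q, white, amp, index):
        """Constant noise floor plus a clustering power-law component."""
        return white + amp * q ** (-index)

    q = np.logspace(-2, 0, 30)        # spatial frequency (arbitrary units)
    p_obs = model(q, 1.0, 0.05, 2.0) * np.random.lognormal(0.0, 0.05, q.size)
    (white, amp, index), _ = curve_fit(model, q, p_obs, p0=[1.0, 0.1, 1.5])
    ```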

  16. Deep two-photon microscopic imaging through brain tissue using the second singlet state from fluorescent agent chlorophyll α in spinach leaf.

    Science.gov (United States)

    Shi, Lingyan; Rodríguez-Contreras, Adrián; Budansky, Yury; Pu, Yang; Nguyen, Thien An; Alfano, Robert R

    2014-06-01

    Two-photon (2P) excitation of the second singlet (S₂) state was studied to achieve deep optical microscopic imaging in brain tissue, where both the excitation (800 nm) and emission (685 nm) wavelengths lie in the "tissue optical window" (650 to 950 nm). The S₂ state technique was used to investigate chlorophyll α (Chl α) fluorescence inside a spinach leaf under a thick layer of freshly sliced rat brain tissue, in combination with 2P microscopic imaging. Strong emission at the peak wavelength of 685 nm under 2P excitation of the S₂ state of Chl α enabled imaging at depths of up to 450 μm through rat brain tissue.

  17. Simulating deep surveys of the Galactic Plane with the Advanced Gamma-ray Imaging System (AGIS)

    Science.gov (United States)

    Funk, Stefan; Digel, Seth

    2009-05-01

    The pioneering survey of the Galactic plane by H.E.S.S., together with the northern complement now underway with VERITAS, has shown the inner Milky Way to be rich in TeV-emitting sources; new source classes have been found among the H.E.S.S. detections and unidentified sources remain. In order to explore optimizations of the design of an Advanced Gamma-ray Imaging System (AGIS)-like instrument for survey science, we constructed a model of the flux and size distributions of Galactic TeV sources, normalized to the H.E.S.S. sources but extrapolated to lower flux levels. We investigated potential outcomes from a survey with the order-of-magnitude improvement in sensitivity and attendant improvement in angular resolution planned for AGIS. Studies of individual sources and populations found with such a sensitive survey will advance understanding of astrophysical particle acceleration, source populations, and even high-energy cosmic rays via detection of the low-level TeV diffuse emission in regions of high cosmic-ray density.
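    The population model described is, at its core, a draw from a source-count distribution normalized to the known catalog and extrapolated to fainter fluxes. A sketch of the sampling step, with a purely illustrative power-law slope and flux floor rather than the authors' fitted values:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def sample_fluxes(n, s_min, alpha):
        """Inverse-transform sample from N(>S) proportional to S**-alpha
        for S >= s_min (slope and floor are placeholders here)."""
        u = rng.random(n)
        return s_min * u ** (-1.0 / alpha)

    # e.g. 1000 sources from a logN-logS of slope 1.3 above 0.01 Crab
    fluxes = sample_fluxes(1000, s_min=0.01, alpha=1.3)
    ```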

  18. Deep lateral notch sign and double notch sign in complete tears of the anterior cruciate ligament: MR imaging evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Grimberg, Alexandre [University of California, San Diego School of Medicine, Division of Musculoskeletal Radiology, Department of Radiology, San Diego, CA (United States); Universidade Federal de Sao Paulo, Department of Diagnostic Imaging, Sao Paulo, SP (Brazil); Shirazian, Hoda; Torshizy, Hamid; Smitaman, Edward; Resnick, Donald L. [University of California, San Diego School of Medicine, Division of Musculoskeletal Radiology, Department of Radiology, San Diego, CA (United States); Chang, Eric Y. [Veterans Administrations San Diego Healthcare Systems, Osteoradiology Section, Department of Radiology, San Diego, CA (United States); University of California, San Diego School of Medicine, Division of Musculoskeletal Radiology, Department of Radiology, San Diego, CA (United States)

    2014-11-20

    To systematically compare the notches of the lateral femoral condyle (LFC) in patients with and without complete tears of the anterior cruciate ligament (ACL) in MR studies by (1) evaluating the dimensions of the lateral condylopatellar sulcus; (2) evaluating the presence and appearance of an extra or a double notch and its association with such tears. This retrospective study was approved by our institutional review board, and informed written patient consent was waived. In 58 cases of complete ACL tears and 37 control cases with intact ACL, the number of notches on the LFC was determined, and the depth and anteroposterior (AP) length of each notch were measured in each third of the LFC. The chi-square test, t-test, and logistic regression model were used to analyze demographic data and image findings, as appropriate. Presence of more than one notch demonstrated a sensitivity of 17.2 %, specificity of 100 %, positive predictive value of 100 %, and negative predictive value of 43.5 % for detecting a complete ACL tear. Lateral third depth measurement (p = 0.028) was a significant associated finding with a complete ACL tear. A deep notch in the lateral third of the LFC is a significant associated finding with a complete ACL tear when compared with an ACL-intact control group, and the presence of more than one notch is a specific but insensitive sign of such a tear. (orig.)

  19. Two-phase deep convolutional neural network for reducing class skewness in histopathological images based breast cancer detection.

    Science.gov (United States)

    Wahab, Noorul; Khan, Asifullah; Lee, Yeon Soo

    2017-06-01

    Different types of breast cancer affect the lives of women across the world. Common types include ductal carcinoma in situ (DCIS), invasive ductal carcinoma (IDC), tubular carcinoma, medullary carcinoma, and invasive lobular carcinoma (ILC). When detecting cancer, one important factor is the mitotic count, which shows how rapidly the cells are dividing. But the class imbalance problem, due to the small number of mitotic nuclei in comparison to the overwhelming number of non-mitotic nuclei, affects the performance of classification models. This work presents a two-phase model to mitigate the class bias issue while classifying mitotic and non-mitotic nuclei in breast cancer histopathology images through a deep convolutional neural network (CNN). First, nuclei are segmented out using blue ratio and global binary thresholding. In Phase-1 a CNN is then trained on the segmented-out 80×80 pixel patches from a standard dataset. Hard non-mitotic examples are identified and augmented; mitotic examples are oversampled by rotation and flipping; whereas non-mitotic examples are undersampled by blue-ratio-histogram-based k-means clustering. Based on this information from Phase-1, the dataset is modified for Phase-2 in order to reduce the effects of class imbalance. The proposed CNN architecture and data balancing technique yielded an F-measure of 0.79, and outperformed all the methods relying on specific handcrafted features, as well as those using a combination of handcrafted and CNN-generated features. Copyright © 2017 Elsevier Ltd. All rights reserved.
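    The balancing recipe, rotate and flip the scarce mitotic patches while thinning the abundant non-mitotic patches with k-means on their blue-ratio histograms, can be sketched as below. Function names and the one-patch-per-cluster rule are illustrative assumptions, not the paper's exact procedure:

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def augment_minority(patches):
        """Oversample with the 8 rotation/flip (dihedral) transforms."""
        out = []
        for p in patches:
            for k in range(4):
                r = np.rot90(p, k)
                out.extend([r, np.fliplr(r)])
        return out

    def undersample_majority(histograms, n_keep):
        """Keep one patch per k-means cluster of blue-ratio histograms,
        so the retained non-mitotic examples stay diverse."""
        km = KMeans(n_clusters=n_keep, n_init=10).fit(histograms)
        keep = []
        for c in range(n_keep):
            members = np.where(km.labels_ == c)[0]
            dist = np.linalg.norm(histograms[members] - km.cluster_centers_[c], axis=1)
            keep.append(members[np.argmin(dist)])
        return keep
    ```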

  20. The Anisotropy of the Microwave Background to l = 3500: Deep Field Observations with the Cosmic Background Imager

    Science.gov (United States)

    Mason, B. S.; Pearson, T. J.; Readhead, A. C. S.; Shepherd, M. C.; Sievers, J.; Udomprasert, P. S.; Cartwright, J. K.; Farmer, A. J.; Padin, S.; Myers, S. T.; hide

    2002-01-01

    We report measurements of anisotropy in the cosmic microwave background radiation over the multipole range l ∼ 200-3500 with the Cosmic Background Imager, based on deep observations of three fields. These results confirm the drop in power with increasing l first reported in earlier measurements with this instrument, and extend the observations of this decline in power out to l ∼ 2000. The decline in power is consistent with the predicted damping of primary anisotropies. At larger multipoles, l = 2000-3500, the power is 3.1σ greater than standard models for intrinsic microwave background anisotropy in this multipole range, and 3.5σ greater than zero. This excess power is not consistent with expected levels of residual radio source contamination but, for σ8 ≳ 1, is consistent with predicted levels due to a secondary Sunyaev-Zeldovich anisotropy. Further observations are necessary to confirm the level of this excess and, if confirmed, determine its origin.

  1. 3-D Deep Penetration Neutron Imaging of Thick Absorbing and Diffusive Objects Using Transport Theory

    Energy Technology Data Exchange (ETDEWEB)

    Ragusa, Jean; Bangerth, Wolfgang

    2011-08-01

    locations where measurements were collected, the optical thickness of the domain, the amount of signal noise and signal bias applied to the measurements, and the initial guess for the cross-section distribution. All of these factors were explored for problems with and without scattering. Increasing the number of source and measurement locations and experiments was generally more successful at reconstructing optically thicker domains while producing less error in the image. The maximum optical thickness that could be reconstructed with this method was ten mean free paths for a pure absorber and two mean free paths for scattering problems. Applying signal noise and signal bias to the measured fluxes produced more error in the reconstructed image. Generally, Newton's method was more successful at reconstructing domains from an initial guess for the cross sections that was greater in magnitude than the true values than from an initial guess that was lower in magnitude.
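    The reconstruction loop behind these experiments is a Newton-type least-squares update of the cross-section image against the measured detector fluxes. A Gauss-Newton-flavored sketch; `forward` and `jacobian` are stand-ins for the transport solver and its sensitivities, which the report obtains far less trivially:

    ```python
    import numpy as np

    def gauss_newton(sigma0, forward, jacobian, measured, n_iter=20):
        """Update the cross-section vector sigma so predicted fluxes
        match the measurements in the least-squares sense."""
        sigma = sigma0.copy()
        for _ in range(n_iter):
            residual = forward(sigma) - measured   # predicted minus measured flux
            J = jacobian(sigma)                    # d(flux)/d(sigma)
            step, *_ = np.linalg.lstsq(J, -residual, rcond=None)
            sigma = sigma + step
        return sigma
    ```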

  2. THE MULTIWAVELENGTH SURVEY BY YALE-CHILE (MUSYC): DEEP MEDIUM-BAND OPTICAL IMAGING AND HIGH-QUALITY 32-BAND PHOTOMETRIC REDSHIFTS IN THE ECDF-S

    International Nuclear Information System (INIS)

    Cardamone, Carolin N.; Van Dokkum, Pieter G.; Urry, C. Megan; Brammer, Gabriel; Taniguchi, Yoshi; Gawiser, Eric; Bond, Nicholas; Taylor, Edward; Damen, Maaike; Treister, Ezequiel; Cobb, Bethany E.; Schawinski, Kevin; Lira, Paulina; Murayama, Takashi; Saito, Tomoki; Sumikawa, Kentaro

    2010-01-01

    We present deep optical 18-medium-band photometry from the Subaru telescope over the ∼30' x 30' Extended Chandra Deep Field-South, as part of the Multiwavelength Survey by Yale-Chile (MUSYC). This field has a wealth of ground- and space-based ancillary data, and contains the GOODS-South field and the Hubble Ultra Deep Field. We combine the Subaru imaging with existing UBVRIzJHK and Spitzer IRAC images to create a uniform catalog. Detecting sources in the MUSYC 'BVR' image we find ∼40,000 galaxies with R_AB ≤ 25.3. For 0.1 < z < 1.2, we find a 1σ scatter in Δz/(1 + z) of 0.007, similar to results obtained with a similar filter set in the COSMOS field. As a demonstration of the data quality, we show that the red sequence and blue cloud can be cleanly identified in rest-frame color-magnitude diagrams at 0.1 < z < 1.2. We find that ∼20% of the red sequence galaxies show evidence of dust emission at longer rest-frame wavelengths. The reduced images, photometric catalog, and photometric redshifts are provided through the public MUSYC Web site.
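    The quoted 1σ scatter in Δz/(1 + z) is conventionally estimated with the outlier-resistant normalized median absolute deviation. Whether this paper used exactly that statistic is not stated in the excerpt, so treat the sketch as the generic recipe:

    ```python
    import numpy as np

    def photoz_scatter(z_phot, z_spec):
        """1-sigma scatter of dz/(1+z) via the normalized MAD."""
        dz = (np.asarray(z_phot) - np.asarray(z_spec)) / (1.0 + np.asarray(z_spec))
        return 1.4826 * np.median(np.abs(dz - np.median(dz)))
    ```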

  3. Thalamo–cortical network underlying deep brain stimulation of centromedian thalamic nuclei in intractable epilepsy: a multimodal imaging analysis

    Directory of Open Access Journals (Sweden)

    Kim SH

    2017-10-01

    Full Text Available Seong Hoon Kim,1 Sung Chul Lim,1 Dong Won Yang,1 Jeong Hee Cho,1 Byung-Chul Son,2 Jiyeon Kim,3 Seung Bong Hong,4 Young-Min Shon4 1Department of Neurology, 2Department of Neurosurgery, Catholic Neuroscience Institute, College of Medicine, The Catholic University of Korea, Seoul, 3Department of Neurology, Korea University Ansan Hospital, College of Medicine, Korea University, Ansan, 4Department of Neurology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea Objective: Deep brain stimulation (DBS) of the centromedian thalamic nucleus (CM) can be an alternative treatment option for intractable epilepsy patients. Since CM may be involved in widespread cortico-subcortical networks, identification of the cortical sub-networks specific to the target stimuli may provide further understanding on the underlying mechanisms of CM DBS. Several brain structures have distinguishing brain connections that may be related to the pivotal propagation and subsequent clinical effect of DBS. Methods: To explore core structures and their connections relevant to CM DBS, we applied electroencephalogram (EEG) and diffusion tensor imaging (DTI) to 10 medically intractable patients – three generalized epilepsy (GE) and seven multifocal epilepsy (MFE) patients unsuitable for resective surgery. Spatiotemporal activation pattern was mapped from scalp EEG by delivering low-frequency stimuli (5 Hz). Structural connections between the CM and the cortical activation spots were assessed using DTI. Results: We confirmed an average 72% seizure reduction after CM DBS, and its clinical efficiency remained consistent during the observation period (mean 21 months). EEG data revealed sequential source propagation from the anterior cingulate, followed by the frontotemporal regions bilaterally. In addition, maximal activation was found in the left cingulate gyrus and the right medial frontal cortex during the right and left CM stimulation, respectively.

  4. A deep-learning classifier identifies patients with clinical heart failure using whole-slide images of H&E tissue.

    Directory of Open Access Journals (Sweden)

    Jeffrey J Nirschl

    Full Text Available Over 26 million people worldwide suffer from heart failure annually. When the cause of heart failure cannot be identified, endomyocardial biopsy (EMB) represents the gold standard for the evaluation of disease. However, manual EMB interpretation has high inter-rater variability. Deep convolutional neural networks (CNNs) have been successfully applied to detect cancer, diabetic retinopathy, and dermatologic lesions from images. In this study, we develop a CNN classifier to detect clinical heart failure from H&E stained whole-slide images from a total of 209 patients; 104 patients were used for training and the remaining 105 patients for independent testing. The CNN was able to identify patients with heart failure or severe pathology with a 99% sensitivity and 94% specificity on the test set, outperforming conventional feature-engineering approaches. Importantly, the CNN outperformed two expert pathologists by nearly 20%. Our results suggest that deep learning analytics of EMB can be used to predict cardiac outcome.

  5. THE ACS NEARBY GALAXY SURVEY TREASURY

    International Nuclear Information System (INIS)

    Dalcanton, Julianne J.; Williams, Benjamin F.; Rosema, Keith; Gogarten, Stephanie M.; Christensen, Charlotte; Gilbert, Karoline; Hodge, Paul; Seth, Anil C.; Dolphin, Andrew; Holtzman, Jon; Skillman, Evan D.; Weisz, Daniel; Cole, Andrew; Girardi, Leo; Karachentsev, Igor D.; Olsen, Knut; Freeman, Ken; Gallart, Carme; Harris, Jason; De Jong, Roelof S.

    2009-01-01

    The ACS Nearby Galaxy Survey Treasury (ANGST) is a systematic survey to establish a legacy of uniform multi-color photometry of resolved stars for a volume-limited sample of nearby galaxies (D < 4 Mpc) spanning a factor of ∼10⁴ in luminosity and star formation rate. The survey data consist of images taken with the Advanced Camera for Surveys (ACS) on the Hubble Space Telescope (HST), supplemented with archival data and new Wide Field Planetary Camera 2 (WFPC2) imaging taken after the failure of ACS. Survey images include wide-field tilings covering the full radial extent of each galaxy, and single deep pointings in uncrowded regions of the most massive galaxies in the volume. The new wide-field imaging in ANGST reaches median 50% completenesses of m_F475W = 28.0 mag, m_F606W = 27.3 mag, and m_F814W = 27.3 mag, several magnitudes below the tip of the red giant branch (TRGB). The deep fields reach magnitudes sufficient to fully resolve the structure in the red clump. The resulting photometric catalogs are publicly accessible and contain over 34 million photometric measurements of >14 million stars. In this paper we present the details of the sample selection, imaging, data reduction, and the resulting photometric catalogs, along with an analysis of the photometric uncertainties (systematic and random), for both ACS and WFPC2 imaging. We also present uniformly derived relative distances measured from the apparent magnitude of the TRGB.

  6. Pre-operative CT angiography and three-dimensional image post processing for deep inferior epigastric perforator flap breast reconstructive surgery.

    Science.gov (United States)

    Lam, D L; Mitsumori, L M; Neligan, P C; Warren, B H; Shuman, W P; Dubinsky, T J

    2012-12-01

    Autologous reconstruction with deep inferior epigastric artery (DIEA) perforator flaps has become the mainstay of breast reconstructive surgery. CT angiography and three-dimensional image post-processing can depict the number, size, course and location of the DIEA perforating arteries for pre-operative selection of the best artery to use for the tissue flap. Knowledge of the location and selection of the optimal perforating artery shortens operative times and decreases patient morbidity.

  7. Deep learning with Python

    CERN Document Server

    Chollet, Francois

    2018-01-01

    DESCRIPTION Deep learning is applicable to a widening range of artificial intelligence problems, such as image classification, speech recognition, text classification, question answering, text-to-speech, and optical character recognition. Deep Learning with Python is structured around a series of practical code examples that illustrate each new concept introduced and demonstrate best practices. By the time you reach the end of this book, you will have become a Keras expert and will be able to apply deep learning in your own projects. KEY FEATURES • Practical code examples • In-depth introduction to Keras • Teaches the difference between Deep Learning and AI ABOUT THE TECHNOLOGY Deep learning is the technology behind photo tagging systems at Facebook and Google, self-driving cars, speech recognition systems on your smartphone, and much more. AUTHOR BIO Francois Chollet is the author of Keras, one of the most widely used libraries for deep learning in Python. He has been working with deep neural ...
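    For flavor, the compact style of example the book is built around looks roughly like this (a generic Keras MNIST classifier in the spirit of the opening chapters, not a verbatim excerpt):

    ```python
    from tensorflow import keras
    from tensorflow.keras import layers

    (x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
    x_train = x_train.reshape(60000, 784).astype("float32") / 255
    x_test = x_test.reshape(10000, 784).astype("float32") / 255

    model = keras.Sequential([
        layers.Dense(512, activation="relu", input_shape=(784,)),
        layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="rmsprop",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=5, batch_size=128)
    print(model.evaluate(x_test, y_test))
    ```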

  8. Differentiation of deep subcortical infarction using high-resolution vessel wall MR imaging of middle cerebral artery

    Energy Technology Data Exchange (ETDEWEB)

    Bae, Yun Jung; Choi, Byung Se; Jung, Cheol Kyu; Yoon, Yeon Hong; Sunwoo, Leonard; Kim, Jae Hyoung; Bae, Hee Joon [Seoul National University College of Medicine, Seoul National University Bundang Hospital, Seongnam (Korea, Republic of)

    2017-11-15

    To evaluate the utility of high-resolution vessel wall imaging (HR-VWI) of the middle cerebral artery (MCA), and to compare HR-VWI findings between striatocapsular infarction (SC-I) and lenticulostriate infarction (LS-I). This retrospective study was approved by the Institutional Review Board, and informed consent was waived. From July 2009 to February 2012, 145 consecutive patients with deep subcortical infarctions (SC-I, n = 81; LS-I, n = 64) who underwent HR-VWI were included in this study. The degree of MCA stenosis and the characteristics of MCA plaque (presence, eccentricity, location, extent, T2-high signal intensity [T2-HSI], and plaque enhancement) were analyzed and compared between SC-I and LS-I using Fisher's exact test. Stenosis was more severe in SC-I than in LS-I (p = 0.040). MCA plaque was more frequent in SC-I than in LS-I (p = 0.028), with larger plaque extent (p = 0.001), more T2-HSI (p = 0.001), and more plaque enhancement (p = 0.002). The eccentricity and location of the plaque showed no significant difference between the two groups. Both SC-I and LS-I have similar HR-VWI findings of the MCA plaque, but SC-I had more frequent, larger plaques with greater T2-HSI and enhancement. This suggests that HR-VWI may have a promising role in assisting the differentiation of the underlying pathophysiological mechanisms of SC-I and LS-I.

  9. Deep thermal infrared imaging of HR 8799 bcde: new atmospheric constraints and limits on a fifth planet

    Energy Technology Data Exchange (ETDEWEB)

    Currie, Thayne; Cloutier, Ryan; Jayawardhana, Ray [Department of Astronomy and Astrophysics, University of Toronto, 50 St. George Street, Toronto, ON M5S 3H4 (Canada); Burrows, Adam [Department of Astrophysical Science, Princeton University, 4 Ivy Lane, Princeton, NJ 08544 (United States); Girard, Julien H. [European Southern Observatory, Alonso de Córdova 3107, Vitacura, Casilla 19001, Santiago (Chile); Fukagawa, Misato [Graduate School of Science, Osaka University, 1-1 Machikaneyama, Toyonaka, Osaka 560-0043 (Japan); Sorahana, Satoko [Department of Astronomy, Graduate School of Science, The University of Tokyo, Bunkyo-ku, Tokyo 113-0033 (Japan); Kuchner, Marc [Exoplanets and Stellar Astrophysics Laboratory, NASA Goddard Space Flight Center, Code 667, Greenbelt, MD 20771 (United States); Kenyon, Scott J. [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Madhusudhan, Nikku [Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA (United Kingdom); Itoh, Yoichi [Nishi-Harima Astronomical Observatory, Center for Astronomy, University of Hyago, 407-2 Nishigaichi, Sayo, Hyogo 679-5313 (Japan); Matsumura, Soko [School of Engineering, Physics, and Mathematics, University of Dundee, Dundee DD1 4HN (United Kingdom); Pyo, Tae-Soo [National Astronomical Observatory of Japan, 650 N. Aohoku Place, Hilo, HI 96720 (United States)

    2014-11-10

    We present new L' (3.8 μm) and Brα (4.05 μm) data and reprocessed archival L' data for the young, planet-hosting star HR 8799 obtained with Keck/NIRC2, VLT/NaCo, and Subaru/IRCS. We detect all four HR 8799 planets in each data set at a moderate to high signal-to-noise ratio (S/N ≳ 6-15). We fail to identify a fifth planet, 'HR 8799 f', at r < 15 AU at a 5σ confidence level: one suggestive, marginally significant residual at 0.''2 is most likely a point-spread function artifact. Assuming companion ages of 30 Myr and the Baraffe planet cooling models, we rule out an HR 8799 f with a mass of 5 M_J (7 M_J), 7 M_J (10 M_J), or 12 M_J (13 M_J) at r_proj ∼ 12 AU, 9 AU, and 5 AU, respectively. All four HR 8799 planets have red early T dwarf-like L' – [4.05] colors, suggesting that their spectral energy distributions peak in between the L' and M' broadband filters. We find no statistically significant difference in HR 8799 cde's color. Atmosphere models assuming thick, patchy clouds appear to better match HR 8799 bcde's photometry than models assuming a uniform cloud layer. While non-equilibrium carbon chemistry is required to explain HR 8799 b and c's photometry/spectra, evidence for it from HR 8799 d and e's photometry is weaker. Future, deep-IR spectroscopy/spectrophotometry with the Gemini Planet Imager, SCExAO/CHARIS, and other facilities may clarify whether the planets are chemically similar or heterogeneous.

  10. Deep thermal infrared imaging of HR 8799 bcde: new atmospheric constraints and limits on a fifth planet

    International Nuclear Information System (INIS)

    Currie, Thayne; Cloutier, Ryan; Jayawardhana, Ray; Burrows, Adam; Girard, Julien H.; Fukagawa, Misato; Sorahana, Satoko; Kuchner, Marc; Kenyon, Scott J.; Madhusudhan, Nikku; Itoh, Yoichi; Matsumura, Soko; Pyo, Tae-Soo

    2014-01-01

    We present new L' (3.8 μm) and Brα (4.05 μm) data and reprocessed archival L' data for the young, planet-hosting star HR 8799 obtained with Keck/NIRC2, VLT/NaCo, and Subaru/IRCS. We detect all four HR 8799 planets in each data set at a moderate to high signal-to-noise ratio (S/N ≳ 6-15). We fail to identify a fifth planet, 'HR 8799 f', at r < 15 AU at a 5σ confidence level: one suggestive, marginally significant residual at 0.''2 is most likely a point-spread function artifact. Assuming companion ages of 30 Myr and the Baraffe planet cooling models, we rule out an HR 8799 f with a mass of 5 M_J (7 M_J), 7 M_J (10 M_J), or 12 M_J (13 M_J) at r_proj ∼ 12 AU, 9 AU, and 5 AU, respectively. All four HR 8799 planets have red early T dwarf-like L' – [4.05] colors, suggesting that their spectral energy distributions peak in between the L' and M' broadband filters. We find no statistically significant difference in HR 8799 cde's color. Atmosphere models assuming thick, patchy clouds appear to better match HR 8799 bcde's photometry than models assuming a uniform cloud layer. While non-equilibrium carbon chemistry is required to explain HR 8799 b and c's photometry/spectra, evidence for it from HR 8799 d and e's photometry is weaker. Future, deep-IR spectroscopy/spectrophotometry with the Gemini Planet Imager, SCExAO/CHARIS, and other facilities may clarify whether the planets are chemically similar or heterogeneous.

  11. Impact of deep learning on the normalization of reconstruction kernel effects in imaging biomarker quantification: a pilot study in CT emphysema

    Science.gov (United States)

    Jin, Hyeongmin; Heo, Changyong; Kim, Jong Hyo

    2018-02-01

    Differing reconstruction kernels are known to strongly affect the variability of imaging biomarkers and thus remain a barrier to translating computer-aided quantification techniques into clinical practice. This study presents a deep learning application to CT kernel conversion, which converts a CT image reconstructed with a sharp kernel to one with a standard kernel, and evaluates its impact on reducing the variability of a pulmonary imaging biomarker, the emphysema index (EI). Forty cases of low-dose chest CT exams obtained with 120 kVp, 40 mAs, 1 mm thickness, and 2 reconstruction kernels (B30f, B50f) were selected from the low-dose lung cancer screening database of our institution. A fully convolutional network was implemented with the Keras deep learning library. The model consisted of symmetric layers to capture the context and fine-structure characteristics of CT images from the standard and sharp reconstruction kernels. Pairs of the full-resolution CT data set were fed to input and output nodes to train the convolutional network to learn the appropriate filter kernels for converting the CT images of sharp kernel to standard kernel, with the criterion of minimizing the mean squared error between the network output and the target images. EIs (RA950 and Perc15) were measured with a software package (ImagePrism Pulmo, Seoul, South Korea) and compared for the data sets of B50f, B30f, and the converted B50f. The effect of kernel conversion was evaluated with the mean and standard deviation of pair-wise differences in EI. The population mean of RA950 was 27.65 ± 7.28% for the B50f data set, 10.82 ± 6.71% for the B30f data set, and 8.87 ± 6.20% for the converted B50f data set. The mean of pair-wise absolute differences in RA950 between B30f and B50f is reduced from 16.83% to 1.95% using kernel conversion. Our study demonstrates the feasibility of applying the deep learning technique for CT kernel conversion and reducing the kernel-induced variability of EI quantification. The deep learning model has a
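    The kernel-conversion network is an image-to-image fully convolutional model trained with a mean-squared-error criterion on paired reconstructions of the same scan. A minimal Keras sketch; the depth, widths, and filter sizes are placeholders, not the paper's architecture:

    ```python
    from tensorflow import keras
    from tensorflow.keras import layers

    # Map a sharp-kernel (B50f) slice to its standard-kernel (B30f) twin.
    inp = keras.Input(shape=(None, None, 1))       # full-resolution CT slice
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(inp)
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    out = layers.Conv2D(1, 3, padding="same")(x)   # converted slice
    model = keras.Model(inp, out)
    model.compile(optimizer="adam", loss="mse")    # MSE against the B30f target
    # model.fit(b50f_slices, b30f_slices, ...)     # paired kernels, same scans
    ```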

  12. Direct Imaging Confirmation and Characterization of a Dust-Enshrouded Candidate Exoplanet Orbiting Fomalhaut

    OpenAIRE

    Currie, Thayne; Debes, John; Rodigas, Timothy J.; Burrows, Adam; Itoh, Yoichi; Fukagawa, Misato; Kenyon, Scott; Kuchner, Marc; Matsumura, Soko

    2012-01-01

    We present Subaru/IRCS J band data for Fomalhaut and a (re)reduction of archival 2004--2006 HST/ACS data first presented by Kalas et al. (2008). We confirm the existence of a candidate exoplanet, Fomalhaut b, in both the 2004 and 2006 F606W data sets at a high signal-to-noise. Additionally, we confirm the detection at F814W and present a new detection in F435W. Fomalhaut b's space motion may be consistent with it being in an apsidally-aligned, non debris ring-crossing orbit, although new astr...

  13. Mixed deep learning and natural language processing method for fake-food image recognition and standardization to help automated dietary assessment.

    Science.gov (United States)

    Mezgec, Simon; Eftimov, Tome; Bucher, Tamara; Koroušić Seljak, Barbara

    2018-04-06

    The present study tested the combination of an established and a validated food-choice research method (the 'fake food buffet') with a new food-matching technology to automate the data collection and analysis. The methodology combines fake-food image recognition using deep learning and food matching and standardization based on natural language processing. The former is specific because it uses a single deep learning network to perform both the segmentation and the classification at the pixel level of the image. To assess its performance, measures based on the standard pixel accuracy and Intersection over Union were applied. Food matching firstly describes each of the recognized food items in the image and then matches the food items with their compositional data, considering both their food names and their descriptors. The final accuracy of the deep learning model trained on fake-food images acquired by 124 study participants and providing fifty-five food classes was 92.18%, while the food matching was performed with a classification accuracy of 93%. The present findings are a step towards automating dietary assessment and food-choice research. The methodology outperforms other approaches in pixel accuracy, and since it is the first automatic solution for recognizing the images of fake foods, the results could be used as a baseline for possible future studies. As the approach enables a semi-automatic description of recognized food items (e.g. with respect to FoodEx2), these can be linked to any food composition database that applies the same classification and description system.
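    Pixel accuracy and Intersection over Union, the two segmentation measures cited, are easy to state precisely. A generic implementation; class-averaged IoU is one common convention, and the paper's exact averaging is not specified in the abstract:

    ```python
    import numpy as np

    def pixel_accuracy(pred, truth):
        """Fraction of pixels whose predicted class matches the truth."""
        return float(np.mean(pred == truth))

    def mean_iou(pred, truth, n_classes):
        """Intersection over Union averaged over the classes present."""
        ious = []
        for c in range(n_classes):
            inter = np.logical_and(pred == c, truth == c).sum()
            union = np.logical_or(pred == c, truth == c).sum()
            if union:
                ious.append(inter / union)
        return float(np.mean(ious))
    ```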

  14. NOAA TIFF Image - 3m Bathymetry Slope, Florida Deep Coral Areas - Lost Coast Explorer - (2010), UTM 17N NAD83

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset contains a unified GeoTiff with 3x3 meter cell size representing the slope (in degrees) of several deep coral priority areas off the Atlantic Coast of...

  15. NOAA TIFF Image - 3m Backscatter Mosaic, Florida Deep Coral Areas - Lost Coast Explorer - (2010), UTM 17N NAD83

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset contains a unified GeoTiff with 3x3 meter cell size representing the backscatter (intensity) of several deep coral priority areas off the Atlantic Coast...

  16. NOAA TIFF Image - 3m Miami Slope, Florida Deep Coral Areas - Lost Coast Explorer - (2010), UTM 17N NAD83

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset contains a unified GeoTiff with 3x3 meter cell size representing the slope (in degrees) of several deep coral priority areas off the Atlantic Coast of...

  17. NOAA TIFF Image - 3m Bathymetry Slope, Florida Deep Coral Areas - Lost Coast Explorer - (2010), UTM 17N NAD83

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset contains a unified GeoTiff with 3x3 meter cell size representing bathymetry of several deep coral priority areas off the Atlantic Coast of Florida,...

  18. NOAA TIFF Image - 3m Bathymetry Mosaic, Florida Deep Coral Areas - Lost Coast Explorer - (2010), UTM 17N NAD83

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset contains a unified GeoTiff with 3x3 meter cell size representing bathymetry of several deep coral priority areas off the Atlantic Coast of Florida,...

  19. NOAA TIFF Image - 3m Bathymetry, Florida Deep Coral Areas (Jacksonville) - Lost Coast Explorer - (2010), UTM 17N NAD83

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset contains a unified GeoTiff with 3x3 meter cell size representing bathymetry of several deep coral priority areas off the Atlantic Coast of Florida,...

  20. NOAA TIFF Image - 3m Bathymetric Rugosity, Florida Deep Coral Areas - Lost Coast Explorer - (2011), UTM 17N NAD83

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset contains a unified GeoTiff with 3x3 meter cell size representing the rugosity of several deep coral priority areas off the Atlantic Coast of Florida,...

  1. Automated Whole-Body Bone Lesion Detection for Multiple Myeloma on 68Ga-Pentixafor PET/CT Imaging Using Deep Learning Methods.

    Science.gov (United States)

    Xu, Lina; Tetteh, Giles; Lipkova, Jana; Zhao, Yu; Li, Hongwei; Christ, Patrick; Piraud, Marie; Buck, Andreas; Shi, Kuangyu; Menze, Bjoern H

    2018-01-01

    The identification of bone lesions is crucial in the diagnostic assessment of multiple myeloma (MM). 68Ga-Pentixafor PET/CT can capture the abnormal molecular expression of CXCR-4 in addition to anatomical changes. However, whole-body detection of dozens of lesions on hybrid imaging is tedious and error prone. It is even more difficult to identify lesions with a large heterogeneity. This study employed deep learning methods to automatically combine characteristics of PET and CT for whole-body MM bone lesion detection in a 3D manner. Two convolutional neural networks (CNNs), V-Net and W-Net, were adopted to segment and detect the lesions. The feasibility of deep learning for lesion detection on 68Ga-Pentixafor PET/CT was first verified on digital phantoms generated using realistic PET simulation methods. Then the proposed methods were evaluated on real 68Ga-Pentixafor PET/CT scans of MM patients. The preliminary results showed that the deep learning method can leverage multimodal information for spatial feature representation, and W-Net obtained the best result for segmentation and lesion detection. It also outperformed traditional machine learning methods such as the random forest classifier (RF), k-Nearest Neighbors (k-NN), and the support vector machine (SVM). This proof-of-concept study encourages further development of deep learning approaches for MM lesion detection in population studies.
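    V-Net-style volumetric segmentation is commonly trained with a soft Dice objective, which tolerates the extreme foreground/background imbalance of sparse lesion masks. A sketch of that loss; whether this study used Dice or another objective is not stated in the abstract:

    ```python
    import tensorflow as tf

    def soft_dice_loss(y_true, y_pred, eps=1e-6):
        """Soft Dice loss for masks shaped (batch, depth, height, width)."""
        axes = (1, 2, 3)                           # sum over the volume
        inter = tf.reduce_sum(y_true * y_pred, axis=axes)
        denom = tf.reduce_sum(y_true, axis=axes) + tf.reduce_sum(y_pred, axis=axes)
        return 1.0 - tf.reduce_mean((2.0 * inter + eps) / (denom + eps))
    ```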

  2. Automated Whole-Body Bone Lesion Detection for Multiple Myeloma on 68Ga-Pentixafor PET/CT Imaging Using Deep Learning Methods

    Directory of Open Access Journals (Sweden)

    Lina Xu

    2018-01-01

    Full Text Available The identification of bone lesions is crucial in the diagnostic assessment of multiple myeloma (MM). 68Ga-Pentixafor PET/CT can capture the abnormal molecular expression of CXCR-4 in addition to anatomical changes. However, whole-body detection of dozens of lesions on hybrid imaging is tedious and error prone. It is even more difficult to identify lesions with a large heterogeneity. This study employed deep learning methods to automatically combine characteristics of PET and CT for whole-body MM bone lesion detection in a 3D manner. Two convolutional neural networks (CNNs), V-Net and W-Net, were adopted to segment and detect the lesions. The feasibility of deep learning for lesion detection on 68Ga-Pentixafor PET/CT was first verified on digital phantoms generated using realistic PET simulation methods. Then the proposed methods were evaluated on real 68Ga-Pentixafor PET/CT scans of MM patients. The preliminary results showed that the deep learning method can leverage multimodal information for spatial feature representation, and W-Net obtained the best result for segmentation and lesion detection. It also outperformed traditional machine learning methods such as the random forest classifier (RF), k-Nearest Neighbors (k-NN), and the support vector machine (SVM). This proof-of-concept study encourages further development of deep learning approaches for MM lesion detection in population studies.

  3. Real-Time Ultrasound-Guided Catheter Navigation for Approaching Deep-Seated Brain Lesions: Role of Intraoperative Neurosonography with and without Fusion with Magnetic Resonance Imaging.

    Science.gov (United States)

    Manjila, Sunil; Karhade, Aditya; Phi, Ji Hoon; Scott, R Michael; Smith, Edward R

    2017-01-01

    Brain shift during the exposure of cranial lesions may reduce the accuracy of frameless stereotaxy. We describe a rapid, safe, and effective method to approach deep-seated brain lesions using real-time intraoperative ultrasound placement of a catheter to mark the dissection trajectory to the lesion. With Institutional Review Board approval, we retrospectively reviewed the radiographic, pathologic, and intraoperative data of 11 pediatric patients who underwent excision of 12 lesions by means of this technique. Full data sets were available for 12 lesions in 11 patients. Ten lesions were tumors and 2 were cavernous malformations. Lesion locations included the thalamus (n = 4), trigone (n = 3), mesial temporal lobe (n = 3), and deep white matter (n = 2). Catheter placement was successful in all patients, and the median time required for the procedure was 3 min (range 2-5 min). There were no complications related to catheter placement. The median diameter of surgical corridors on postresection magnetic resonance imaging was 6.6 mm (range 3.0-12.1 mm). Use of real-time ultrasound guidance to place a catheter to aid in the dissection to reach a deep-seated brain lesion provides advantages complementary to existing techniques, such as frameless stereotaxy. The catheter insertion technique described here provides a quick, accurate, and safe method for reaching deep-seated lesions. © 2017 S. Karger AG, Basel.

  4. Localization and diagnosis framework for pediatric cataracts based on slit-lamp images using deep features of a convolutional neural network

    Science.gov (United States)

    Zhang, Kai; Long, Erping; Cui, Jiangtao; Zhu, Mingmin; An, Yingying; Zhang, Jia; Liu, Zhenzhen; Lin, Zhuoling; Li, Xiaoyan; Chen, Jingjing; Cao, Qianzhong; Li, Jing; Wu, Xiaohang; Wang, Dongni

    2017-01-01

    Slit-lamp images play an essential role in the diagnosis of pediatric cataracts. We present a computer vision-based framework for the automatic localization and diagnosis of slit-lamp images by identifying the lens region of interest (ROI) and employing a deep learning convolutional neural network (CNN). First, three grading degrees for slit-lamp images are proposed in conjunction with three leading ophthalmologists. The lens ROI is located in an automated manner in the original image using two successive applications of Canny edge detection and the Hough transform; the ROIs are cropped, resized to a fixed size and used to form the pediatric cataract datasets. These datasets are fed into the CNN to extract high-level features and implement automatic classification and grading. To demonstrate the performance and effectiveness of the deep features extracted by the CNN, we investigate the features combined with a support vector machine (SVM) and a softmax classifier and compare these with traditional representative methods. The qualitative and quantitative experimental results demonstrate that our proposed method offers exceptional mean accuracy, sensitivity and specificity: classification (97.07%, 97.28%, and 96.83%) and three-degree grading for area (89.02%, 86.63%, and 90.75%), density (92.68%, 91.05%, and 93.94%) and location (89.28%, 82.70%, and 93.08%). Finally, we developed and deployed a potential automatic diagnostic software for ophthalmologists and patients in clinical applications to implement the validated model. PMID:28306716

  5. Localization and diagnosis framework for pediatric cataracts based on slit-lamp images using deep features of a convolutional neural network.

    Directory of Open Access Journals (Sweden)

    Xiyang Liu

    Full Text Available Slit-lamp images play an essential role in the diagnosis of pediatric cataracts. We present a computer vision-based framework for the automatic localization and diagnosis of slit-lamp images by identifying the lens region of interest (ROI) and employing a deep learning convolutional neural network (CNN). First, three grading degrees for slit-lamp images are proposed in conjunction with three leading ophthalmologists. The lens ROI is located in an automated manner in the original image using two successive applications of Canny edge detection and the Hough transform; the ROIs are cropped, resized to a fixed size and used to form the pediatric cataract datasets. These datasets are fed into the CNN to extract high-level features and implement automatic classification and grading. To demonstrate the performance and effectiveness of the deep features extracted by the CNN, we investigate the features combined with a support vector machine (SVM) and a softmax classifier and compare these with traditional representative methods. The qualitative and quantitative experimental results demonstrate that our proposed method offers exceptional mean accuracy, sensitivity and specificity: classification (97.07%, 97.28%, and 96.83%) and three-degree grading for area (89.02%, 86.63%, and 90.75%), density (92.68%, 91.05%, and 93.94%) and location (89.28%, 82.70%, and 93.08%). Finally, we developed and deployed a potential automatic diagnostic software for ophthalmologists and patients in clinical applications to implement the validated model.

  6. Application of Deep Learning of Multi-Temporal SENTINEL-1 Images for the Classification of Coastal Vegetation Zone of the Danube Delta

    Science.gov (United States)

    Niculescu, S.; Ienco, D.; Hanganu, J.

    2018-04-01

    Land cover is a fundamental variable for regional planning, as well as for the study and understanding of the environment. This work proposes a multi-temporal approach relying on a fusion of radar multi-sensor data and information collected by the latest sensor (Sentinel-1), with a view to obtaining better results than traditional image processing techniques. The Danube Delta is the site for this work. The spatial approach relies on new spatial analysis technologies and methodologies: deep learning of multi-temporal Sentinel-1 data. We propose a deep learning network for image classification which exploits the multi-temporal characteristic of Sentinel-1 data. The model we employ is a Gated Recurrent Unit (GRU) network, a recurrent neural network that explicitly takes the time dimension into account via a gated mechanism to perform the final prediction. The main quality of the GRU network is its ability to consider only the important part of the information coming from the temporal data, discarding the irrelevant information via a forgetting mechanism. We use this network structure to classify a series of Sentinel-1 images (20 images acquired between 09.10.2014 and 01.04.2016). The results are compared with those of a Random Forest classification.
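    Concretely, a GRU classifier over a 20-date Sentinel-1 pixel series can be set up as below; the per-date feature count (e.g. VV/VH backscatter), hidden size, and class count are illustrative assumptions:

    ```python
    from tensorflow import keras
    from tensorflow.keras import layers

    n_dates, n_features, n_classes = 20, 2, 8      # 2 features: VV, VH (assumed)

    model = keras.Sequential([
        layers.GRU(64, input_shape=(n_dates, n_features)),  # gated temporal encoder
        layers.Dropout(0.3),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(pixel_series, class_labels, ...)   # one series per labeled pixel
    ```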

  7. Deep Learning in Neuroradiology.

    Science.gov (United States)

    Zaharchuk, G; Gong, E; Wintermark, M; Rubin, D; Langlotz, C P

    2018-02-01

    Deep learning is a form of machine learning using a convolutional neural network architecture that shows tremendous promise for imaging applications. It is increasingly being adapted from its original demonstration in computer vision applications to medical imaging. Because of the high volume and wealth of multimodal imaging information acquired in typical studies, neuroradiology is poised to be an early adopter of deep learning. Compelling deep learning research applications have been demonstrated, and their use is likely to grow rapidly. This review article describes the reasons for this trend, outlines the basic methods used to train and test deep learning models, and presents a brief overview of current and potential clinical applications, with an emphasis on how they are likely to change future neuroradiology practice. Facility with these methods among neuroimaging researchers and clinicians will be important to channel and harness the vast potential of this new method. © 2018 by American Journal of Neuroradiology.

  8. Computer-Aided Diagnosis with Deep Learning Architecture: Applications to Breast Lesions in US Images and Pulmonary Nodules in CT Scans.

    Science.gov (United States)

    Cheng, Jie-Zhi; Ni, Dong; Chou, Yi-Hong; Qin, Jing; Tiu, Chui-Mei; Chang, Yeun-Chung; Huang, Chiun-Sheng; Shen, Dinggang; Chen, Chung-Ming

    2016-04-15

    This paper performs a comprehensive study on deep-learning-based computer-aided diagnosis (CADx) for the differential diagnosis of benign and malignant nodules/lesions, avoiding both the potential errors caused by inaccurate image processing results (e.g., boundary segmentation) and the classification bias resulting from a less robust feature set, as involved in most conventional CADx algorithms. Specifically, the stacked denoising auto-encoder (SDAE) is exploited in two CADx applications: the differentiation of breast ultrasound lesions and of lung CT nodules. The SDAE architecture is well equipped with an automatic feature exploration mechanism and noise tolerance, and hence may be suitable for dealing with the intrinsically noisy nature of medical image data from various imaging modalities. To demonstrate that SDAE-based CADx outperforms conventional schemes, two recent conventional CADx algorithms are implemented for comparison. Ten repetitions of 10-fold cross-validation are conducted to illustrate the efficacy of the SDAE-based CADx algorithm. The experimental results show a significant performance boost by the SDAE-based CADx algorithm over the two conventional methods, suggesting that deep learning techniques can potentially change the design paradigm of CADx systems without the need for explicit design and selection of problem-oriented features.
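
    A hedged sketch of one denoising auto-encoder layer of the kind stacked in an SDAE, in PyTorch; layer sizes and the Gaussian corruption level are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    """One denoising auto-encoder (DAE) layer."""
    def __init__(self, n_in=256, n_hidden=64, noise_std=0.1):
        super().__init__()
        self.noise_std = noise_std
        self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Sigmoid())
        self.decoder = nn.Linear(n_hidden, n_in)

    def forward(self, x):
        corrupted = x + self.noise_std * torch.randn_like(x)  # corrupt input
        code = self.encoder(corrupted)       # learned feature representation
        return self.decoder(code), code     # reconstruction + feature code

# Layer-wise pretraining: train each DAE to reconstruct its clean input from
# the corrupted version, feed its code into the next DAE, then stack the
# encoders and fine-tune with a classifier (e.g., softmax) on top.
```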

  9. Predictive validity of granulation tissue color measured by digital image analysis for deep pressure ulcer healing: a multicenter prospective cohort study.

    Science.gov (United States)

    Iizaka, Shinji; Kaitani, Toshiko; Sugama, Junko; Nakagami, Gojiro; Naito, Ayumi; Koyanagi, Hiroe; Konya, Chizuko; Sanada, Hiromi

    2013-01-01

    This multicenter prospective cohort study examined the predictive validity of granulation tissue color evaluated by digital image analysis for deep pressure ulcer healing. Ninety-one patients with deep pressure ulcers were followed for 3 weeks. From a wound photograph taken at baseline, an image representing the granulation red index (GRI) was processed, in which a redder color represented higher values. We calculated the average GRI over granulation tissue and the proportion of pixels exceeding the threshold intensity of 80 for the granulation tissue surface (%GRI80) and wound surface (%wound red index 80). In the receiver operating characteristic (ROC) curve analysis, most GRI parameters had adequate discriminative values for both improvement of the DESIGN-R total score and wound closure. Ulcers were categorized by the obtained cutoff points of the average GRI (≤80, >80), %GRI80 (≤55, >55-80, >80%), and %wound red index 80 (≤25, >25-50, >50%). In the linear mixed model, higher classes for all GRI parameters showed significantly greater relative improvement in overall wound severity during the 3 weeks after adjustment for patient characteristics and wound locations. Assessment of granulation tissue color by digital image analysis will be useful as an objective monitoring tool for granulation tissue quality or surrogate outcomes of pressure ulcer healing. © 2012 by the Wound Healing Society.
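
    For illustration, the GRI summary statistics described above might be computed as follows, assuming the index is a per-pixel value on a 0-255 scale and boolean masks mark the granulation tissue and wound surfaces; the paper's exact GRI formula is not reproduced here.

```python
import numpy as np

def gri_statistics(gri_image, granulation_mask, wound_mask, threshold=80):
    """Summary statistics of a granulation red index (GRI) image.

    gri_image: 2D array of per-pixel redness values (assumed 0-255 scale);
    the masks are boolean arrays marking granulation tissue / wound surface.
    """
    gran = gri_image[granulation_mask]
    wound = gri_image[wound_mask]
    return {
        "average_GRI": gran.mean(),
        "pct_GRI80": 100.0 * (gran > threshold).mean(),        # %GRI80
        "pct_wound_RI80": 100.0 * (wound > threshold).mean(),  # %wound red index 80
    }
```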

  10. Swept-source optical coherence tomography powered by a 1.3-μm vertical cavity surface emitting laser enables 2.3-mm-deep brain imaging in mice in vivo

    Science.gov (United States)

    Choi, Woo June; Wang, Ruikang K.

    2015-10-01

    We report noninvasive, in vivo optical imaging deep within a mouse brain by swept-source optical coherence tomography (SS-OCT), enabled by a 1.3-μm vertical cavity surface emitting laser (VCSEL). VCSEL SS-OCT offers a constant signal sensitivity of 105 dB throughout an entire depth of 4.25 mm in air, ensuring an extended usable imaging depth range of more than 2 mm in turbid biological tissue. Using this approach, we show deep brain imaging in mice with an open-skull cranial window preparation, revealing intact mouse brain anatomy from the superficial cerebral cortex to the deep hippocampus. VCSEL SS-OCT would be applicable to small animal studies for the investigation of deep tissue compartments in living brains where diseases such as dementia and tumor can take their toll.

  11. Exploring the complementarity of THz pulse imaging and DCE-MRIs: Toward a unified multi-channel classification and a deep learning framework.

    Science.gov (United States)

    Yin, X-X; Zhang, Y; Cao, J; Wu, J-L; Hadjiloucas, S

    2016-12-01

    We provide a comprehensive account of recent advances in biomedical image analysis and classification from two complementary imaging modalities: terahertz (THz) pulse imaging and dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). The work aims to highlight underlining commonalities in both data structures so that a common multi-channel data fusion framework can be developed. Signal pre-processing in both datasets is discussed briefly taking into consideration advances in multi-resolution analysis and model based fractional order calculus system identification. Developments in statistical signal processing using principal component and independent component analysis are also considered. These algorithms have been developed independently by the THz-pulse imaging and DCE-MRI communities, and there is scope to place them in a common multi-channel framework to provide better software standardization at the pre-processing de-noising stage. A comprehensive discussion of feature selection strategies is also provided and the importance of preserving textural information is highlighted. Feature extraction and classification methods taking into consideration recent advances in support vector machine (SVM) and extreme learning machine (ELM) classifiers and their complex extensions are presented. An outlook on Clifford algebra classifiers and deep learning techniques suitable to both types of datasets is also provided. The work points toward the direction of developing a new unified multi-channel signal processing framework for biomedical image analysis that will explore synergies from both sensing modalities for inferring disease proliferation. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
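
    As a concrete instance of the pre-processing-plus-classification chain the review surveys, a minimal scikit-learn pipeline combining PCA feature reduction with an SVM is sketched below on placeholder data; it stands in generically for either modality's feature vectors and is not the reviewed authors' code.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 512))   # 200 samples of 512-point feature vectors
y = rng.integers(0, 2, size=200)  # binary tissue labels (placeholder)

clf = make_pipeline(StandardScaler(), PCA(n_components=20), SVC(kernel="rbf"))
clf.fit(X, y)
print(clf.score(X, y))            # training accuracy on the toy data
```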

  12. CANDELS: THE COSMIC ASSEMBLY NEAR-INFRARED DEEP EXTRAGALACTIC LEGACY SURVEY—THE HUBBLE SPACE TELESCOPE OBSERVATIONS, IMAGING DATA PRODUCTS, AND MOSAICS

    International Nuclear Information System (INIS)

    Koekemoer, Anton M.; Ferguson, Henry C.; Grogin, Norman A.; Lotz, Jennifer M.; Lucas, Ray A.; Ogaz, Sara; Rajan, Abhijith; Casertano, Stefano; Dahlen, Tomas; Faber, S. M.; Kocevski, Dale D.; Koo, David C.; Lai, Kamson; McGrath, Elizabeth J.; Riess, Adam G.; Rodney, Steve A.; Dolch, Timothy; Strolger, Louis; Castellano, Marco; Dickinson, Mark

    2011-01-01

    This paper describes the Hubble Space Telescope imaging data products and data reduction procedures for the Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey (CANDELS). This survey is designed to document the evolution of galaxies and black holes at z ≈ 1.5-8, and to study Type Ia supernovae at z > 1.5. Five premier multi-wavelength sky regions are selected, each with extensive multi-wavelength observations. The primary CANDELS data consist of imaging obtained in the Wide Field Camera 3 infrared channel (WFC3/IR) and the WFC3 ultraviolet/optical channel, along with the Advanced Camera for Surveys (ACS). The CANDELS/Deep survey covers ∼125 arcmin² within GOODS-N and GOODS-S, while the remainder consists of the CANDELS/Wide survey, achieving a total of ∼800 arcmin² across GOODS and three additional fields (Extended Groth Strip, COSMOS, and Ultra-Deep Survey). We summarize the observational aspects of the survey as motivated by the scientific goals and present a detailed description of the data reduction procedures and products from the survey. Our data reduction methods utilize the most up-to-date calibration files and image combination procedures. We have paid special attention to correcting a range of instrumental effects, including charge transfer efficiency degradation for ACS, removal of electronic bias-striping present in ACS data after Servicing Mission 4, and persistence effects and other artifacts in WFC3/IR. For each field, we release mosaics for individual epochs and eventual mosaics containing data from all epochs combined, to facilitate photometric variability studies and the deepest possible photometry. A more detailed overview of the science goals and observational design of the survey are presented in a companion paper.

  13. The HST/ACS Coma Cluster Survey. II. Data Description and Source Catalogs

    Science.gov (United States)

    Hammer, Derek; Kleijn, Gijs Verdoes; Hoyos, Carlos; Den Brok, Mark; Balcells, Marc; Ferguson, Henry C.; Goudfrooij, Paul; Carter, David; Guzman, Rafael; Peletier, Reynier F.; and others

    2010-01-01

    The Coma cluster, Abell 1656, was the target of an HST-ACS Treasury program designed for deep imaging in the F475W and F814W passbands. Although our survey was interrupted by the ACS instrument failure in early 2007, the partially completed survey still covers approximately 50% of the core high-density region in Coma. Observations were performed for twenty-five fields with a total coverage area of 274 arcmin², and extend over a wide range of cluster-centric radii (approximately 1.75 Mpc or 1 deg). The majority of the fields are located near the core region of Coma (19/25 pointings), with six additional fields in the south-west region of the cluster. In this paper we present SExtractor source catalogs generated from the processed images, including a detailed description of the methodology used for object detection and photometry, the subtraction of bright galaxies to measure faint underlying objects, and the use of simulations to assess the photometric accuracy and completeness of our catalogs. We also use simulations to perform aperture corrections for the SExtractor Kron magnitudes based only on the measured source flux and its half-light radius. We have performed photometry for 76,000 objects that consist of roughly equal numbers of extended galaxies and unresolved objects. Approximately two-thirds of all detections are brighter than F814W=26.5 mag (AB), which corresponds to the 10σ point-source detection limit. We estimate that Coma members make up 5-10% of the source detections, including a large population of compact objects (primarily GCs, but also cEs and UCDs), and a wide variety of extended galaxies from cD galaxies to dwarf low surface brightness galaxies. The initial data release for the HST-ACS Coma Treasury program was made available to the public in August 2008. The images and catalogs described in this study relate to our second data release.

  14. Deep Learning Microscopy

    KAUST Repository

    Rivenson, Yair; Gorocs, Zoltan; Gunaydin, Harun; Zhang, Yibo; Wang, Hongda; Ozcan, Aydogan

    2017-01-01

    regular optical microscope, without any changes to its design. We blindly tested this deep learning approach using various tissue samples that are imaged with low-resolution and wide-field systems, where the network rapidly outputs an image with remarkably

  15. POX 186: A Dwarf Galaxy Under Construction?

    Science.gov (United States)

    Corbin, M. R.; Vacca, W. D.

    2000-12-01

    We have obtained deep images of the ultracompact (~3'') blue compact dwarf galaxy POX 186 in the F336W, F555W, and F814W filters of the Planetary Camera of the Hubble Space Telescope. We have additionally obtained a low-resolution near-ultraviolet spectrum of the object with STIS and combine this with a ground-based spectrum covering the visible continuum and emission lines. Our images confirm this object to be highly compact, with a maximum projected size of only ~240 pc, making it one of the smallest galaxies known. We also confirm that the outer regions of the galaxy consist of an evolved stellar population, ruling out earlier speculations that POX 186 is a protogalaxy. However, the PC images reveal the galaxy to have a highly irregular morphology, with a pronounced tidal arm on its western side. This morphology is strongly suggestive of a recent collision between two smaller components which has in turn triggered the central starburst. The F336W image also shows that the material in this tidal stream is actively star forming. Given the very small (~100 pc) sizes of the colliding components, POX 186 may be a dwarf galaxy in the early stages of formation, which would be consistent with current "downsizing" models of galaxy formation in which the least massive objects are the last to form. This work is supported by NASA and the Space Telescope Science Institute.

  16. Deep learning of the sectional appearances of 3D CT images for anatomical structure segmentation based on an FCN voting method.

    Science.gov (United States)

    Zhou, Xiangrong; Takayama, Ryosuke; Wang, Song; Hara, Takeshi; Fujita, Hiroshi

    2017-10-01

    We propose a single network trained by pixel-to-label deep learning to address the general issue of automatic multiple-organ segmentation in three-dimensional (3D) computed tomography (CT) images. Our method can be described as a voxel-wise multiple-class classification scheme for automatically assigning labels to each pixel/voxel in a 2D/3D CT image. We simplify the segmentation of anatomical structures (including multiple organs) in a CT image (generally in 3D) to a majority-voting scheme over the semantic segmentation of multiple 2D slices drawn from different viewpoints with redundancy. The proposed method inherits the spirit of fully convolutional networks (FCNs) that consist of "convolution" and "deconvolution" layers for 2D semantic image segmentation, and expands the core structure with 3D-2D-3D transformations to adapt to 3D CT image segmentation. All parameters in the proposed network are trained pixel-to-label from a small number of CT cases with human annotations as the ground truth. The proposed network naturally fulfills the requirements of multiple-organ segmentation in CT cases of different sizes that cover arbitrary scan regions, without any adjustment. The proposed network was trained and validated using the simultaneous segmentation of 19 anatomical structures in the human torso, including 17 major organs and two special regions (the lumen and contents of the stomach). Some of these structures have never been reported in previous research on CT segmentation. A database consisting of 240 3D CT scans (95% for training and 5% for testing), together with their manually annotated ground-truth segmentations, was used in our experiments. The results show that the 19 structures of interest were segmented with acceptable accuracy (88.1% and 87.9% of voxels in the training and testing datasets, respectively, were labeled correctly) against the ground truth. In summary, we propose a single network based on pixel-to-label deep learning to address the challenging problem of automatic multiple-organ segmentation in 3D CT images.
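
    The majority-voting fusion step can be sketched as follows, assuming the per-viewpoint FCN label volumes have already been resampled onto a common 3D grid; this is a generic implementation, not the authors' code (SciPy ≥ 1.9 is assumed for the keepdims argument).

```python
import numpy as np
from scipy import stats

def vote_labels(axial, coronal, sagittal):
    """Fuse per-voxel labels from three viewpoints into one 3D label map.

    Each argument is a (Z, Y, X) integer label volume predicted by a 2D FCN
    and resampled back onto the common 3D grid.
    """
    stacked = np.stack([axial, coronal, sagittal])        # (3, Z, Y, X)
    mode, _ = stats.mode(stacked, axis=0, keepdims=False)
    return mode                                           # per-voxel majority label
```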

  17. A Coarse-to-Fine Model for Airplane Detection from Large Remote Sensing Images Using a Saliency Model and Deep Learning

    Science.gov (United States)

    Song, Z. N.; Sui, H. G.

    2018-04-01

    High-resolution remote sensing images carry important strategic information, especially for finding time-sensitive targets quickly, such as airplanes, ships, and cars. Often the first problem we face is how to rapidly judge whether a particular target is present in a large, arbitrary remote sensing image, rather than detecting it on a given image. Finding time-sensitive targets in a huge image is a great challenge: 1) complex backgrounds lead to high miss and false-alarm rates when detecting tiny objects in large-scale images; 2) unlike traditional image retrieval, the task is not merely to compare the similarity of image blocks, but to quickly find specific targets in a huge image. In this paper, taking airplanes as an example, we present an effective method for searching for aircraft targets in large-scale optical remote sensing images. First, an improved visual attention model that utilizes saliency detection and a line segment detector is used to quickly locate suspected regions in a large and complicated remote sensing image. Then, for each region, a single neural network that predicts bounding boxes and class probabilities directly from the full image in one evaluation, without a region proposal step, is adopted to search for small airplane objects. Unlike sliding-window and region-proposal-based techniques, the network sees the entire image (region) during training and test time, so it implicitly encodes contextual information about classes as well as their appearance. Experimental results show that the proposed method quickly identifies airplanes in large-scale images.
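
    As a stand-in for the coarse stage, the classic spectral-residual saliency map (Hou & Zhang 2007) is sketched below; the authors use an improved attention model combined with a line segment detector, so this is only a generic illustration, with smoothing parameters chosen arbitrarily.

```python
import numpy as np
import cv2

def spectral_residual_saliency(gray):
    """Spectral-residual saliency map (Hou & Zhang 2007) for a grayscale image."""
    f = np.fft.fft2(gray.astype(np.float64))
    log_amp = np.log1p(np.abs(f))                    # log amplitude spectrum
    phase = np.angle(f)
    residual = log_amp - cv2.blur(log_amp, (3, 3))   # subtract local average
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    sal = cv2.GaussianBlur(sal, (11, 11), 2.5)       # smooth the saliency map
    return sal / sal.max()                           # normalized to [0, 1]
```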

  18. A COARSE-TO-FINE MODEL FOR AIRPLANE DETECTION FROM LARGE REMOTE SENSING IMAGES USING A SALIENCY MODEL AND DEEP LEARNING

    Directory of Open Access Journals (Sweden)

    Z. N. Song

    2018-04-01

    Full Text Available High-resolution remote sensing images carry important strategic information, especially for finding time-sensitive targets quickly, such as airplanes, ships, and cars. Often the first problem we face is how to rapidly judge whether a particular target is present in a large, arbitrary remote sensing image, rather than detecting it on a given image. Finding time-sensitive targets in a huge image is a great challenge: 1) complex backgrounds lead to high miss and false-alarm rates when detecting tiny objects in large-scale images; 2) unlike traditional image retrieval, the task is not merely to compare the similarity of image blocks, but to quickly find specific targets in a huge image. In this paper, taking airplanes as an example, we present an effective method for searching for aircraft targets in large-scale optical remote sensing images. First, an improved visual attention model that utilizes saliency detection and a line segment detector is used to quickly locate suspected regions in a large and complicated remote sensing image. Then, for each region, a single neural network that predicts bounding boxes and class probabilities directly from the full image in one evaluation, without a region proposal step, is adopted to search for small airplane objects. Unlike sliding-window and region-proposal-based techniques, the network sees the entire image (region) during training and test time, so it implicitly encodes contextual information about classes as well as their appearance. Experimental results show that the proposed method quickly identifies airplanes in large-scale images.

  19. Automatic detection and segmentation of brain metastases on multimodal MR images with a deep convolutional neural network.

    Science.gov (United States)

    Charron, Odelin; Lallement, Alex; Jarnet, Delphine; Noblet, Vincent; Clavier, Jean-Baptiste; Meyer, Philippe

    2018-04-01

    Stereotactic treatments are today the reference techniques for the irradiation of brain metastases in radiotherapy. The dose per fraction is very high and is delivered in small volumes. In this work, we used a deep convolutional neural network (DeepMedic) to detect and segment brain metastases on MRI. At first, we sought to adapt the network parameters to brain metastases. We then explored the single or combined use of different MRI modalities, evaluating network performance in terms of detection and segmentation. We also studied the interest of augmenting the database with virtual patients, and of using an additional database in which the active parts of the metastases are separated from the necrotic parts. Our results indicated that a deep network approach is promising for the detection and segmentation of brain metastases on multimodal MRI. Copyright © 2018 Elsevier Ltd. All rights reserved.

  20. In vivo optical microprobe imaging for intracellular Ca2+ dynamics in response to dopaminergic signaling in deep brain evoked by cocaine

    Science.gov (United States)

    Luo, Zhongchi; Pan, Yingtian; Du, Congwu

    2012-02-01

    Ca2+ plays a vital role as a second messenger in signal transduction, and the intracellular Ca2+ ([Ca2+]i) change is an important indicator of neuronal activity in the brain, in both cortical and subcortical regions. Due to the strong scattering and absorption of brain tissue, it is challenging to optically access deep brain regions (e.g., the striatum, at >3 mm below the brain surface) and image [Ca2+]i changes with cellular resolution. Here, we present two micro-probe approaches (i.e., a microlens and a micro-prism) integrated with a fluorescence microscope modified to permit imaging of neuronal [Ca2+]i signaling in the striatum using the calcium indicator Rhod2(AM). While a micro-prism probe provides a larger field of view to image the neuronal network from cortex to striatum, a microlens probe enables us to track dynamic [Ca2+]i changes in individual neurons within the brain. Both techniques are validated by imaging neuronal [Ca2+]i changes in transgenic mice with dopamine receptors (D1R, D2R) expressing EGFP. Our results show that micro-prism images can map the distribution of D1R- and D2R-expressing neurons in various brain regions and characterize their different mean [Ca2+]i changes induced by an intervention (e.g., cocaine administration, 8 mg/kg, i.p.). In addition, microlens images can characterize the different [Ca2+]i dynamics of D1 and D2 neurons in response to cocaine, including new mechanisms of these two types of neurons in the striatum. These findings highlight the power of optical micro-probe imaging for dissecting the complex cellular and molecular effects of cocaine in vivo.

  1. VizieR Online Data Catalog: Galaxy candidates in the Hubble Frontier Fields (Laporte+, 2016)

    Science.gov (United States)

    Laporte, N.; Infante, L.; Troncoso Iribarren, P.; Zheng, W.; Molino, A.; Bauer, F. E.; Bina, D.; Broadhurst, T.; Chilingarian, I.; Garcia, S.; Kim, S.; Marques-Chaves, R.; Moustakas, J.; Pello, R.; Perez-Fournon, I.; Shu, X.; Streblyanska, A.; Zitrin, A.

    2018-02-01

    The Frontier Field (FF) project is carried out using HST Director's Discretionary Time and will use 840 orbits during Cycles 21, 22, and 23 with six strong-lensing galaxy clusters as the main targets. For each cluster, the final data set is composed of three images from ACS/HST (F435W, F606W, and F814W) and four images from WFC3/HST (F105W, F125W, F140W, and F160W) reaching depths of ~29 mag at 5σ in a 0.4" diameter aperture. In this study, we used the final data release on MACS J0717.5+3745 (z=0.551, Ebeling et al. 2004ApJ...609L..49E; Medezinski et al. 2013ApJ...777...43M) made public on 2015 April 1. This third cluster in the FF list has been observed by HST through several observing programs, mainly those related to CLASH (ID: 12103, PI: M. Postman) and the FFs (ID: 13498, PI: J. Lotz). We matched the HST data with deep Spitzer/IRAC images obtained from observations (ID: 90259) carried out from 2013 August to 2015 January combined with archival data from 2007 November to 2013 June. (6 data files).

  2. Transvaginal ultrasound vs magnetic resonance imaging for diagnosing deep infiltrating endometriosis: systematic review and meta-analysis.

    Science.gov (United States)

    Guerriero, S; Saba, L; Pascual, M A; Ajossa, S; Rodriguez, I; Mais, V; Alcazar, J L

    2018-05-01

    To perform a systematic review of studies comparing the accuracy of transvaginal ultrasound (TVS) and magnetic resonance imaging (MRI) in diagnosing deep infiltrating endometriosis (DIE) including only studies in which patients underwent both techniques. An extensive search was carried out in PubMed/MEDLINE and Web of Science for papers from January 1989 to October 2016 comparing TVS and MRI in DIE. Studies were considered eligible for inclusion if they reported on the use of TVS and MRI in the same set of patients for the preoperative detection of endometriosis in pelvic locations in women with clinical suspicion of DIE and using surgical data as a reference standard. Quality was assessed using the QUADAS-2 tool. A random-effects model was used to determine pooled sensitivity, specificity, positive and negative likelihood ratios (LR+ and LR-) and diagnostic odds ratio (DOR). Of 375 citations identified, six studies (n = 424) were considered eligible. For MRI in the detection of DIE in the rectosigmoid, pooled sensitivity was 0.85 (95% CI, 0.78-0.90), specificity was 0.95 (95% CI, 0.83-0.99), LR+ was 18.4 (95% CI, 4.7-72.4), LR- was 0.16 (95% CI, 0.11-0.24) and DOR was 116 (95% CI, 23-585). For TVS in the detection of DIE in the rectosigmoid, pooled sensitivity was 0.85 (95% CI, 0.68-0.94), specificity was 0.96 (95% CI, 0.85-0.99), LR+ was 20.4 (95% CI, 4.7-88.5), LR- was 0.16 (95% CI, 0.07-0.38) and DOR was 127 (95% CI, 14-1126). For MRI in the detection of DIE in the rectovaginal septum, pooled sensitivity was 0.66 (95% CI, 0.51-0.79), specificity was 0.97 (95% CI, 0.89-0.99), LR+ was 22.5 (95% CI, 6.7-76.2), LR- was 0.38 (95% CI, 0.23-0.52) and DOR was 65 (95% CI, 21-204). For TVS in the detection of DIE in the rectovaginal septum, pooled sensitivity was 0.59 (95% CI, 0.26-0.86), specificity was 0.97 (95% CI, 0.94-0.99), LR+ was 23.5 (95% CI, 9.1-60.5), LR- was 0.42 (95% CI, 0.18-0.97) and DOR was 56 (95% CI, 11-275). For MRI in the detection of DIE in the
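
    The pooled statistics above are linked by standard identities: LR+ = sensitivity/(1 - specificity), LR- = (1 - sensitivity)/specificity, and DOR = LR+/LR-. A quick check with the rectosigmoid MRI values follows; exact agreement is not expected, since the pooled LRs and DOR are estimated from study-level data rather than from the pooled sensitivity and specificity.

```python
# Standard identities relating the pooled statistics (values from the MRI
# rectosigmoid results above; small differences from the pooled LR+/LR-/DOR
# are expected because those were estimated from study-level data).
sens, spec = 0.85, 0.95
lr_pos = sens / (1 - spec)     # ~17.0 (pooled estimate above: 18.4)
lr_neg = (1 - sens) / spec     # ~0.16 (pooled estimate above: 0.16)
dor = lr_pos / lr_neg          # ~108  (pooled estimate above: 116)
print(f"LR+={lr_pos:.1f}  LR-={lr_neg:.2f}  DOR={dor:.0f}")
```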

  3. Deep structure of Pyrenees range (SW Europe) imaged by joint inversion of gravity and teleseismic delay time

    Science.gov (United States)

    Dufréchou, G.; Tiberi, C.; Martin, R.; Bonvalot, S.; Chevrot, S.; Seoane, L.

    2018-04-01

    We present a new model of the lithosphere and asthenosphere structure down to 300 km depth beneath the Pyrenees from the joint inversion of recent gravity and teleseismic data. Unlike previous studies, crustal corrections were not applied to the teleseismic data, in order (i) to preserve the consistency between the gravity data, which are mainly sensitive to the density structure of the crust and lithosphere, and the travel-time data, and (ii) to avoid introducing biases resulting from crustal reductions. The density model down to 100 km depth is preferentially used here to discuss the lithospheric structure of the Pyrenees, whereas the asthenospheric structure from 100 km to 300 km depth is discussed from our velocity model. The absence of a high-density anomaly in our model between 30 and 100 km depth (except the Labourd density anomaly) in the northern part of the Pyrenees seems to preclude eclogitization of the subducted Iberian crust at the scale of the entire Pyrenean range. Local eclogitization of the deep Pyrenean crust beneath the western part of the Axial Zone (west of Andorra), associated with the positive Central density anomaly, is proposed. The Pyrenean lithosphere in the density and velocity models appears segmented from east to west. No clear relation between the along-strike segmentation and mapped major faults is visible in our models. The lithospheric segments are associated with different seismicity patterns in the Pyrenees, suggesting a possible relation between the deep structure of the Pyrenees and its seismicity in the upper crust. The concentration of earthquakes located directly above the Central density anomaly may result from the subsidence and/or delamination of an eclogitized Pyrenean deep root. The velocity model in the asthenosphere is similar to previous studies. The absence of a high-velocity anomaly in the upper mantle and transition zone (i.e., 125 to 225 km depth) seems to preclude the presence of a detached oceanic lithosphere beneath the Pyrenees.

  4. Magnetic resonance direct thrombus imaging at 3 T field strength in patients with lower limb deep vein thrombosis: a feasibility study

    Energy Technology Data Exchange (ETDEWEB)

    Schmitz, S.A. [Imaging Sciences Department, Imperial College, Hammersmith Hospital, London (United Kingdom); O' Regan, D.P. [Imaging Sciences Department, Imperial College, Hammersmith Hospital, London (United Kingdom)]. E-mail: declan.oregan@imperial.ac.uk; Gibson, D. [Imaging Department, Hammersmith Hospitals NHS Trust, London (United Kingdom); Cunningham, C. [Imaging Department, Hammersmith Hospitals NHS Trust, London (United Kingdom); Fitzpatrick, J. [Imaging Sciences Department, Imperial College, Hammersmith Hospital, London (United Kingdom); Allsop, J. [Imaging Sciences Department, Imperial College, Hammersmith Hospital, London (United Kingdom); Larkman, D.J. [Imaging Sciences Department, Imperial College, Hammersmith Hospital, London (United Kingdom); Hajnal, J.V. [Imaging Sciences Department, Imperial College, Hammersmith Hospital, London (United Kingdom)

    2006-03-15

    AIM: To investigate the feasibility of imaging lower limb deep vein thrombosis using magnetic resonance imaging (MRI) at 3.0 T magnetic field strength with an optimized T1 magnetization-prepared rapid gradient-echo (MP-RAGE) technique in patients, with normal volunteers as controls. MATERIALS AND METHODS: Patients with deep vein thrombosis (n=4) or thrombophlebitis (n=2) and healthy volunteers (n=9) were studied. MRI of the distal thigh and upper calf was performed at 3.0 T with MP-RAGE using two pre-pulses to suppress blood and fat (flip angle 15°, echo time 5 ms, and repetition time 10 ms). A qualitative analysis was performed for detection of thrombi and image quality. Contrast-to-noise ratios were determined in thrombosed and patent veins. RESULTS: Thrombi were clearly visible as high-signal-intensity structures with good suppression of the anatomical background. A blinded reader accurately diagnosed 15 out of 16 cases. The contrast-to-noise ratio measurements showed a positive contrast of thrombus over background muscle of 16.9 (SD 4.3, 95% CI: 12.5-21.3) and a negative contrast of the lumen to muscle in patent veins of normal volunteers of -7.8 (SD 4.3, 95% CI: -11.1 to -4.5), with p=0.0015. CONCLUSION: Thrombi generate high signal intensity at 3.0 T, allowing their direct visualization if flowing blood, stationary blood and fat are sufficiently suppressed. These preliminary data support the development of these techniques for other vascular applications.
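
    The contrast-to-noise ratios reported above follow the usual definition: the mean-signal difference between two regions divided by the background noise. A minimal sketch, assuming the regions of interest have already been segmented; all arrays and values here are toy placeholders.

```python
import numpy as np

def contrast_to_noise(roi_signal, roi_background, roi_noise):
    """CNR between two regions, normalized by the noise region's std."""
    return (roi_signal.mean() - roi_background.mean()) / roi_noise.std()

# Example: positive contrast for thrombus vs. muscle, negative for patent lumen.
rng = np.random.default_rng(1)
thrombus = rng.normal(300, 20, 500)   # bright thrombus voxels (toy values)
muscle = rng.normal(100, 20, 500)
noise = rng.normal(0, 12, 500)        # background noise-only region
print(contrast_to_noise(thrombus, muscle, noise))
```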

  5. Development of an imaging system for in vivo real-time monitoring of neuronal activity in deep brain of free-moving rats.

    Science.gov (United States)

    Iijima, Norio; Miyamoto, Shinji; Matsumoto, Keisuke; Takumi, Ken; Ueta, Yoichi; Ozawa, Hitoshi

    2017-09-01

    We have newly developed a system that allows monitoring of the intensity of fluorescent signals from deep brains of rats transgenically modified to express enhanced green fluorescent protein (eGFP) via an optical fiber. One terminal of the optical fiber was connected to a blue semiconductor laser oscillator/green fluorescence detector. The other terminal was inserted into the vicinity of the eGFP-expressing neurons. Since the optical fiber was vulnerable to twisting stresses caused by animal movement, we also developed a cage in which the floor automatically turns, in response to the turning of the rat's head. This relieved the twisting stress on the optical fiber. The system then enabled real-time monitoring of fluorescence in awake and unrestrained rats over many hours. Using this system, we could continuously monitor eGFP-expression in arginine vasopressin-eGFP transgenic rats. Moreover, we observed an increase of eGFP-expression in the paraventricular nucleus under salt-loading conditions. We then performed in vivo imaging of eGFP-expressing GnRH neurons in the hypothalamus, via a bundle consisting of 3000 thin optical fibers. With the combination of the optical fiber bundle connection to the fluorescence microscope, and the special cage system, we were able to capture and retain images of eGFP-expressing neurons from free-moving rats. We believe that our newly developed method for monitoring and imaging eGFP-expression in deep brain neurons will be useful for analysis of neuronal functions in awake and unrestrained animals for long durations.

  6. Optical clearing and fluorescence deep-tissue imaging for 3D quantitative analysis of the brain tumor microenvironment

    NARCIS (Netherlands)

    Lagerweij, Tonny; Dusoswa, Sophie A.; Negrean, Adrian; Hendrikx, Esther M.L.; de Vries, Helga E.; Kole, Jeroen; Garcia-Vallejo, Juan J.; Mansvelder, Huibert D; Vandertop, W. Peter; Noske, David P.; Tannous, Bakhos A.; Musters, René J P; van Kooyk, Yvette; Wesseling, Pieter; Zhao, Xi Wen; Wurdinger, Thomas

    2017-01-01

    Background: Three-dimensional visualization of the brain vasculature and its interactions with surrounding cells may shed light on diseases where aberrant microvascular organization is involved, including glioblastoma (GBM). Intravital confocal imaging allows 3D visualization of microvascular

  7. Optical clearing and fluorescence deep-tissue imaging for 3D quantitative analysis of the brain tumor microenvironment

    NARCIS (Netherlands)

    Lagerweij, Tonny; Dusoswa, Sophie A.; Negrean, Adrian; Hendrikx, Esther M. L.; de Vries, Helga E.; Kole, Jeroen; Garcia-Vallejo, Juan J.; Mansvelder, Huibert D.; Vandertop, W. Peter; Noske, David P.; Tannous, Bakhos A.; Musters, René J. P.; van Kooyk, Yvette; Wesseling, Pieter; Zhao, Xi Wen; Wurdinger, Thomas

    2017-01-01

    Three-dimensional visualization of the brain vasculature and its interactions with surrounding cells may shed light on diseases where aberrant microvascular organization is involved, including glioblastoma (GBM). Intravital confocal imaging allows 3D visualization of microvascular structures and

  8. Deep Space Thermal Cycle Testing of Advanced X-Ray Astrophysics Facility - Imaging (AXAF-I) Solar Array Panels Test

    National Research Council Canada - National Science Library

    Sisco, Jimmy

    1997-01-01

    The NASA Advanced X-ray Astrophysics Facility - Imaging (AXAF-I) satellite will be exposed to thermal conditions beyond normal flight-experience temperatures owing to the satellite's highly elliptical orbit...

  9. Development and Validation of a Deep Learning System for Diabetic Retinopathy and Related Eye Diseases Using Retinal Images From Multiethnic Populations With Diabetes.

    Science.gov (United States)

    Ting, Daniel Shu Wei; Cheung, Carol Yim-Lui; Lim, Gilbert; Tan, Gavin Siew Wei; Quang, Nguyen D; Gan, Alfred; Hamzah, Haslina; Garcia-Franco, Renata; San Yeo, Ian Yew; Lee, Shu Yen; Wong, Edmund Yick Mun; Sabanayagam, Charumathi; Baskaran, Mani; Ibrahim, Farah; Tan, Ngiap Chuan; Finkelstein, Eric A; Lamoureux, Ecosse L; Wong, Ian Y; Bressler, Neil M; Sivaprasad, Sobha; Varma, Rohit; Jonas, Jost B; He, Ming Guang; Cheng, Ching-Yu; Cheung, Gemmy Chui Ming; Aung, Tin; Hsu, Wynne; Lee, Mong Li; Wong, Tien Yin

    2017-12-12

    A deep learning system (DLS) is a machine learning technology with potential for screening diabetic retinopathy and related eye diseases. To evaluate the performance of a DLS in detecting referable diabetic retinopathy, vision-threatening diabetic retinopathy, possible glaucoma, and age-related macular degeneration (AMD) in community and clinic-based multiethnic populations with diabetes. Diagnostic performance of a DLS for diabetic retinopathy and related eye diseases was evaluated using 494 661 retinal images. A DLS was trained for detecting diabetic retinopathy (using 76 370 images), possible glaucoma (125 189 images), and AMD (72 610 images), and performance of DLS was evaluated for detecting diabetic retinopathy (using 112 648 images), possible glaucoma (71 896 images), and AMD (35 948 images). Training of the DLS was completed in May 2016, and validation of the DLS was completed in May 2017 for detection of referable diabetic retinopathy (moderate nonproliferative diabetic retinopathy or worse) and vision-threatening diabetic retinopathy (severe nonproliferative diabetic retinopathy or worse) using a primary validation data set in the Singapore National Diabetic Retinopathy Screening Program and 10 multiethnic cohorts with diabetes. Use of a deep learning system. Area under the receiver operating characteristic curve (AUC) and sensitivity and specificity of the DLS with professional graders (retinal specialists, general ophthalmologists, trained graders, or optometrists) as the reference standard. In the primary validation dataset (n = 14 880 patients; 71 896 images; mean [SD] age, 60.2 [2.2] years; 54.6% men), the prevalence of referable diabetic retinopathy was 3.0%; vision-threatening diabetic retinopathy, 0.6%; possible glaucoma, 0.1%; and AMD, 2.5%. The AUC of the DLS for referable diabetic retinopathy was 0.936 (95% CI, 0.925-0.943), sensitivity was 90.5% (95% CI, 87.3%-93.0%), and specificity was 91.6% (95% CI, 91.0%-92.2%). For

  10. A Novel Morphometry-Based Protocol of Automated Video-Image Analysis for Species Recognition and Activity Rhythms Monitoring in Deep-Sea Fauna

    Directory of Open Access Journals (Sweden)

    Paolo Menesatti

    2009-10-01

    Full Text Available The understanding of ecosystem dynamics in deep-sea areas is to date limited by technical constraints on sampling repetition. We have elaborated a morphometry-based protocol for automated video-image analysis in which animal movement tracking (by frame subtraction) is accompanied by species identification from the animals' outlines using Fourier descriptors and standard K-nearest neighbours (KNN) methods. One week of footage from a permanent video station located at 1,100 m depth in Sagami Bay (Central Japan) was analysed. Out of 150,000 frames (1 per 4 s), a subset of 10,000 was analysed by a trained operator to increase the efficiency of the automated procedure. Error estimation for the automated and trained-operator procedures was computed as a measure of protocol performance. Three displacing species were identified as the most recurrent: Zoarcid fishes (eelpouts), red crabs (Paralomis multispina), and snails (Buccinum soyomaruae). Species identification with KNN thresholding produced better results in automated motion detection. Results are discussed in light of the technological bottleneck that still strongly constrains the exploration of the deep sea.
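
    The two stages of the protocol, frame subtraction for movement detection and Fourier descriptors of the animal's outline for species identification, can be sketched with OpenCV and NumPy as below; the threshold and descriptor count are illustrative, and the KNN classification step is omitted.

```python
import cv2
import numpy as np

def moving_outline_descriptors(prev_gray, curr_gray, n_desc=16, thresh=25):
    """Frame subtraction, then Fourier descriptors of the largest moving outline."""
    diff = cv2.absdiff(curr_gray, prev_gray)             # frame subtraction
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    if not contours:
        return None                                      # no movement detected
    # Assumes a non-trivial contour with more than a couple of boundary points.
    outline = max(contours, key=cv2.contourArea).squeeze()   # (N, 2) boundary
    z = outline[:, 0] + 1j * outline[:, 1]               # complex boundary signal
    spectrum = np.abs(np.fft.fft(z))                     # rotation-tolerant magnitudes
    return spectrum[1:n_desc + 1] / spectrum[1]          # scale-normalized descriptors
```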

  11. CHEERS Results from NGC 3393. II. Investigating the Extended Narrow-line Region Using Deep Chandra Observations and Hubble Space Telescope Narrow-line Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Maksym, W. Peter; Fabbiano, Giuseppina; Elvis, Martin; Karovska, Margarita; Paggi, Alessandro; Raymond, John [Harvard-Smithsonian Center for Astrophysics, 60 Garden St., Cambridge, MA 02138 (United States); Wang, Junfeng [Department of Astronomy, Physics Building, Xiamen University Xiamen, Fujian, 361005 (China); Storchi-Bergmann, Thaisa, E-mail: walter.maksym@cfa.harvard.edu [Departamento de Astronomia, Universidade Federal do Rio Grande do Sul, IF, CP 15051, 91501-970 Porto Alegre, RS (Brazil)

    2017-07-20

    The CHandra Extended Emission Line Region Survey (CHEERS) is an X-ray study of nearby active galactic nuclei (AGNs) designed to take full advantage of Chandra's unique angular resolution by spatially resolving feedback signatures and effects. In the second paper of a series on CHEERS target NGC 3393, we examine deep high-resolution Chandra images and compare them with Hubble Space Telescope narrow-line images of [O III], [S II], and Hα, as well as previously unpublished mid-ultraviolet (MUV) images. The X-rays provide unprecedented evidence that the S-shaped arms that envelope the nuclear radio outflows extend only ≲0.″2 (≲50 pc) across. The high-resolution multiwavelength data suggest that the extended narrow-line region is a complex multiphase structure in the circumnuclear interstellar medium (ISM). Its ionization structure is highly stratified with respect to outflow-driven bubbles in the bicone and varies dramatically on scales of ∼10 pc. Multiple findings show likely contributions from shocks to the feedback in regions where radio outflows from the AGN most directly influence the ISM. These findings include Hα evidence for gas compression and extended MUV emission and are in agreement with existing STIS kinematics. Extended filamentary structure in the X-rays and optical suggests the presence of an undetected plasma component, whose existence could be tested with deeper radio observations.

  12. Combined MR direct thrombus imaging and non-contrast magnetic resonance venography reveal the evolution of deep vein thrombosis: a feasibility study

    Energy Technology Data Exchange (ETDEWEB)

    Mendichovszky, I.A.; Lomas, D.J. [Addenbrooke' s Hospital, Department of Radiology, Cambridge (United Kingdom); University of Cambridge, Department of Radiology, Cambridge (United Kingdom); Priest, A.N.; Bowden, D.J.; Hunter, S.; Joubert, I.; Hilborne, S.; Graves, M.J. [Addenbrooke' s Hospital, Department of Radiology, Cambridge (United Kingdom); Baglin, T. [Addenbrooke' s Hospital, Department of Haematology, Cambridge (United Kingdom)

    2017-06-15

    Lower limb deep venous thrombosis (DVT) is a common condition with high morbidity and mortality. The aim of the study was to investigate the temporal evolution of the acute thrombus by magnetic resonance imaging (MRI) and its relationship to venous recanalization in patients with recurrent DVTs. Thirteen patients with newly diagnosed lower limb DVTs underwent MRI with non-contrast MR venography (NC-MRV) and MR direct thrombus imaging (MR-DTI), an inversion-recovery water-selective fast gradient-echo acquisition. Imaging was performed within 7 days of the acute thrombotic event, then at 3 and 6 months. By 3 months from the thrombotic event a third of the thrombi had resolved and by 6 months about half of the cases had resolved on the basis of vein recanalisation using NC-MRV. On the initial MR-DTI acute thrombus was clearly depicted by hyperintense signal, while the remaining thrombi were predominantly low signal at 3 and 6 months. Some residual thrombi contained small and fragmented persisting hyperintense areas at 3 months, clearing almost completely by 6 months. Our study suggests that synergistic venous assessment with combined NC-MRV and MR-DTI is able to distinguish acute venous thrombosis from the established (old) or evolving DVT detected by ultrasound. (orig.)

  13. CHEERS Results from NGC 3393. II. Investigating the Extended Narrow-line Region Using Deep Chandra Observations and Hubble Space Telescope Narrow-line Imaging

    Science.gov (United States)

    Maksym, W. Peter; Fabbiano, Giuseppina; Elvis, Martin; Karovska, Margarita; Paggi, Alessandro; Raymond, John; Wang, Junfeng; Storchi-Bergmann, Thaisa

    2017-07-01

    The CHandra Extended Emission Line Region Survey (CHEERS) is an X-ray study of nearby active galactic nuclei (AGNs) designed to take full advantage of Chandra's unique angular resolution by spatially resolving feedback signatures and effects. In the second paper of a series on CHEERS target NGC 3393, we examine deep high-resolution Chandra images and compare them with Hubble Space Telescope narrow-line images of [O III], [S II], and Hα, as well as previously unpublished mid-ultraviolet (MUV) images. The X-rays provide unprecedented evidence that the S-shaped arms that envelope the nuclear radio outflows extend only ≲0.″2 (≲50 pc) across. The high-resolution multiwavelength data suggest that the extended narrow-line region is a complex multiphase structure in the circumnuclear interstellar medium (ISM). Its ionization structure is highly stratified with respect to outflow-driven bubbles in the bicone and varies dramatically on scales of ˜10 pc. Multiple findings show likely contributions from shocks to the feedback in regions where radio outflows from the AGN most directly influence the ISM. These findings include Hα evidence for gas compression and extended MUV emission and are in agreement with existing STIS kinematics. Extended filamentary structure in the X-rays and optical suggests the presence of an undetected plasma component, whose existence could be tested with deeper radio observations.

  14. CHEERS Results from NGC 3393. II. Investigating the Extended Narrow-line Region Using Deep Chandra Observations and Hubble Space Telescope Narrow-line Imaging

    International Nuclear Information System (INIS)

    Maksym, W. Peter; Fabbiano, Giuseppina; Elvis, Martin; Karovska, Margarita; Paggi, Alessandro; Raymond, John; Wang, Junfeng; Storchi-Bergmann, Thaisa

    2017-01-01

    The CHandra Extended Emission Line Region Survey (CHEERS) is an X-ray study of nearby active galactic nuclei (AGNs) designed to take full advantage of Chandra's unique angular resolution by spatially resolving feedback signatures and effects. In the second paper of a series on CHEERS target NGC 3393, we examine deep high-resolution Chandra images and compare them with Hubble Space Telescope narrow-line images of [O III], [S II], and Hα, as well as previously unpublished mid-ultraviolet (MUV) images. The X-rays provide unprecedented evidence that the S-shaped arms that envelope the nuclear radio outflows extend only ≲0.″2 (≲50 pc) across. The high-resolution multiwavelength data suggest that the extended narrow-line region is a complex multiphase structure in the circumnuclear interstellar medium (ISM). Its ionization structure is highly stratified with respect to outflow-driven bubbles in the bicone and varies dramatically on scales of ∼10 pc. Multiple findings show likely contributions from shocks to the feedback in regions where radio outflows from the AGN most directly influence the ISM. These findings include Hα evidence for gas compression and extended MUV emission and are in agreement with existing STIS kinematics. Extended filamentary structure in the X-rays and optical suggests the presence of an undetected plasma component, whose existence could be tested with deeper radio observations.

  15. Combined MR direct thrombus imaging and non-contrast magnetic resonance venography reveal the evolution of deep vein thrombosis: a feasibility study

    International Nuclear Information System (INIS)

    Mendichovszky, I.A.; Lomas, D.J.; Priest, A.N.; Bowden, D.J.; Hunter, S.; Joubert, I.; Hilborne, S.; Graves, M.J.; Baglin, T.

    2017-01-01

    Lower limb deep venous thrombosis (DVT) is a common condition with high morbidity and mortality. The aim of the study was to investigate the temporal evolution of the acute thrombus by magnetic resonance imaging (MRI) and its relationship to venous recanalization in patients with recurrent DVTs. Thirteen patients with newly diagnosed lower limb DVTs underwent MRI with non-contrast MR venography (NC-MRV) and MR direct thrombus imaging (MR-DTI), an inversion-recovery water-selective fast gradient-echo acquisition. Imaging was performed within 7 days of the acute thrombotic event, then at 3 and 6 months. By 3 months from the thrombotic event a third of the thrombi had resolved and by 6 months about half of the cases had resolved on the basis of vein recanalisation using NC-MRV. On the initial MR-DTI acute thrombus was clearly depicted by hyperintense signal, while the remaining thrombi were predominantly low signal at 3 and 6 months. Some residual thrombi contained small and fragmented persisting hyperintense areas at 3 months, clearing almost completely by 6 months. Our study suggests that synergistic venous assessment with combined NC-MRV and MR-DTI is able to distinguish acute venous thrombosis from the established (old) or evolving DVT detected by ultrasound. (orig.)

  16. A study of the stellar population in the Lynds 1641 dark cloud - deep near-infrared imaging

    International Nuclear Information System (INIS)

    Strom, K.M.; Margulis, M.; Strom, S.E.

    1989-01-01

    Deep H and K photometry of a selection of IRAS point sources in the L1641 cloud is presented. Using these data in combination with IRAS data and previously published near-infrared photometry for sources in this region, it is found that the L1641 cloud contains newly born stars embedded within cores of unusually large visual extinction. A comparison of the properties of cores in L1641 with those in the Taurus-Auriga star-forming complex reveals that L1641 contains cores with higher visual extinctions, larger ammonia (J, K) = (1, 1) line widths, greater kinetic temperatures, and probably higher optical depths at 100 microns than any cores in Taurus-Auriga. These results are qualitatively consistent with recent suggestions that the process of protostellar collapse in cores in the L1641 cloud is dominated by gravity while this process is dominated by magnetic fields in Taurus-Auriga. 20 refs

  17. Identification of old tidal dwarfs near early-type galaxies from deep imaging and H I observations

    Science.gov (United States)

    Duc, Pierre-Alain; Paudel, Sanjaya; McDermid, Richard M.; Cuillandre, Jean-Charles; Serra, Paolo; Bournaud, Frédéric; Cappellari, Michele; Emsellem, Eric

    2014-05-01

    It has recently been proposed that the dwarf spheroidal galaxies located in the Local Group discs of satellites (DoSs) may be tidal dwarf galaxies (TDGs) born in a major merger at least 5 Gyr ago. Whether TDGs can live that long is still poorly constrained by observations. As part of deep optical and H I surveys with the Canada-France-Hawaii Telescope (CFHT) MegaCam camera and Westerbork Synthesis Radio Telescope made within the ATLAS3D project, and follow-up spectroscopic observations with the Gemini-North telescope, we have discovered old TDG candidates around several early-type galaxies. At least one of them has an oxygen abundance close to solar, as expected for a tidal origin. This confirmed pre-enriched object is located within the gigantic, but very low surface brightness, tidal tail that emanates from the elliptical galaxy, NGC 5557. An age of 4 Gyr estimated from its SED fitting makes it the oldest securely identified TDG ever found so far. We investigated the structural and gaseous properties of the TDG and of a companion located in the same collisional debris, and thus most likely of tidal origin as well. Despite several Gyr of evolution close to their parent galaxies, they kept a large gas reservoir. Their central surface brightness is low and their effective radius much larger than that of typical dwarf galaxies of the same mass. This possibly provides us with criteria to identify tidal objects which can be more easily checked than the traditional ones requiring deep spectroscopic observations. In view of the above, we discuss the survival time of TDGs and question the tidal origin of the DoSs.

  18. THE DEEP BLUE COLOR OF HD 189733b: ALBEDO MEASUREMENTS WITH HUBBLE SPACE TELESCOPE/SPACE TELESCOPE IMAGING SPECTROGRAPH AT VISIBLE WAVELENGTHS

    Energy Technology Data Exchange (ETDEWEB)

    Evans, Thomas M.; Aigrain, Suzanne; Barstow, Joanna K. [Department of Physics, University of Oxford, Denys Wilkinson Building, Keble Road, Oxford OX1 3RH (United Kingdom); Pont, Frederic; Sing, David K. [School of Physics, University of Exeter, EX4 4QL Exeter (United Kingdom); Desert, Jean-Michel; Knutson, Heather A. [Division of Geological and Planetary Sciences, California Institute of Technology, Pasadena, CA 91125 (United States); Gibson, Neale [European Southern Observatory, Karl-Schwarzschild-Strasse 2, D-85748 Garching (Germany); Heng, Kevin [University of Bern, Center for Space and Habitability, Sidlerstrasse 5, CH-3012 Bern (Switzerland); Lecavelier des Etangs, Alain, E-mail: tom.evans@astro.ox.ac.uk [Institut d' Astrophysique de Paris, UMR7095 CNRS, Universite Pierre et Marie Curie, 98 bis Boulevard Arago, F-75014 Paris (France)

    2013-08-01

    We present a secondary eclipse observation for the hot Jupiter HD 189733b across the wavelength range 290-570 nm made using the Space Telescope Imaging Spectrograph on the Hubble Space Telescope. We measure geometric albedos of A_g = 0.40 ± 0.12 across 290-450 nm and A_g < 0.12 across 450-570 nm at 1σ confidence. The albedo decrease toward longer wavelengths is also apparent when using six wavelength bins over the same wavelength range. This can be interpreted as evidence for optically thick reflective clouds on the dayside hemisphere with sodium absorption suppressing the scattered-light signal beyond ~450 nm. Our best-fit albedo values imply that HD 189733b would appear a deep blue color at visible wavelengths.

  19. SELECTION OF BURST-LIKE TRANSIENTS AND STOCHASTIC VARIABLES USING MULTI-BAND IMAGE DIFFERENCING IN THE PAN-STARRS1 MEDIUM-DEEP SURVEY

    International Nuclear Information System (INIS)

    Kumar, S.; Gezari, S.; Heinis, S.; Chornock, R.; Berger, E.; Soderberg, A.; Stubbs, C. W.; Kirshner, R. P.; Rest, A.; Huber, M. E.; Narayan, G.; Marion, G. H.; Burgett, W. S.; Foley, R. J.; Scolnic, D.; Riess, A. G.; Lawrence, A.; Smartt, S. J.; Smith, K.; Wood-Vasey, W. M.

    2015-01-01

    We present a novel method for the light-curve characterization of Pan-STARRS1 Medium Deep Survey (PS1 MDS) extragalactic sources into stochastic variables (SVs) and burst-like (BL) transients, using multi-band image-differencing time-series data. We select detections in difference images associated with galaxy hosts using a star/galaxy catalog extracted from the deep PS1 MDS stacked images, and adopt a maximum a posteriori formulation to model their difference-flux time-series in four Pan-STARRS1 photometric bands g_P1, r_P1, i_P1, and z_P1. We use three deterministic light-curve models to fit BL transients: a Gaussian, a Gamma distribution, and an analytic supernova (SN) model, and one stochastic light-curve model, the Ornstein-Uhlenbeck process, in order to fit variability that is characteristic of active galactic nuclei (AGNs). We assess the quality of fit of the models band-wise and source-wise, using their estimated leave-one-out cross-validation likelihoods and corrected Akaike information criteria. We then apply a K-means clustering algorithm on these statistics, to determine the source classification in each band. The final source classification is derived as a combination of the individual filter classifications, resulting in two measures of classification quality, from the averages across the photometric filters of (1) the classifications determined from the closest K-means cluster centers, and (2) the square distances from the clustering centers in the K-means clustering spaces. For a verification set of AGNs and SNe, we show that SV and BL occupy distinct regions in the plane constituted by these measures. We use our clustering method to characterize 4361 extragalactic image-difference detected sources, in the first 2.5 yr of the PS1 MDS, into 1529 BL and 2262 SV, with a purity of 95.00% for AGNs and 90.97% for SNe based on our verification sets. We combine our light-curve classifications with their nuclear or off-nuclear host galaxy offsets, to
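
    One ingredient of this pipeline can be illustrated as follows: fitting the Gaussian burst model to a difference-flux series and computing a corrected Akaike information criterion of the sort that is then clustered band-wise with K-means. The helper names and the AICc form (chi-squared plus penalty, up to an additive constant) are assumptions for this sketch, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_burst(t, amp, t0, width):
    return amp * np.exp(-0.5 * ((t - t0) / width) ** 2)

def gaussian_fit_aicc(t, flux, err):
    """Fit the Gaussian burst model and return a corrected AIC statistic."""
    p0 = [flux.max(), t[np.argmax(flux)], (t[-1] - t[0]) / 10]  # crude start
    popt, _ = curve_fit(gaussian_burst, t, flux, p0=p0, sigma=err)
    chi2 = np.sum(((flux - gaussian_burst(t, *popt)) / err) ** 2)
    k, n = 3, len(t)
    aic = chi2 + 2 * k                           # AIC up to an additive constant
    return aic + 2 * k * (k + 1) / (n - k - 1)   # small-sample correction (AICc)
```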

  20. SELECTION OF BURST-LIKE TRANSIENTS AND STOCHASTIC VARIABLES USING MULTI-BAND IMAGE DIFFERENCING IN THE PAN-STARRS1 MEDIUM-DEEP SURVEY

    Energy Technology Data Exchange (ETDEWEB)

    Kumar, S.; Gezari, S.; Heinis, S. [Department of Astronomy, University of Maryland, Stadium Drive, College Park, MD 21224 (United States); Chornock, R.; Berger, E.; Soderberg, A.; Stubbs, C. W.; Kirshner, R. P. [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Rest, A. [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States); Huber, M. E.; Narayan, G.; Marion, G. H.; Burgett, W. S. [Institute for Astronomy, University of Hawaii, 2680 Woodlawn Drive, Honolulu, HI 96822 (United States); Foley, R. J. [Astronomy Department, University of Illinois at Urbana-Champaign, 1002 West Green Street, Urbana, IL 61801 (United States); Scolnic, D.; Riess, A. G. [Department of Physics and Astronomy, Johns Hopkins University, 3400 North Charles Street, Baltimore, MD 21218 (United States); Lawrence, A. [Institute for Astronomy, University of Edinburgh Scottish Universities Physics Alliance, Royal Observatory, Blackford Hill, Edinburgh EH9 3HJ (United Kingdom); Smartt, S. J.; Smith, K. [Astrophysics Research Centre, School of Mathematics and Physics, Queen' s University Belfast, Belfast BT7 1NN (United Kingdom); Wood-Vasey, W. M. [Pittsburgh Particle Physics, Astrophysics, and Cosmology Center, Department of Physics and Astronomy, University of Pittsburgh, 3941 O' Hara Street, Pittsburgh, PA 15260 (United States); and others

    2015-03-20

    We present a novel method for the light-curve characterization of Pan-STARRS1 Medium Deep Survey (PS1 MDS) extragalactic sources into stochastic variables (SVs) and burst-like (BL) transients, using multi-band image-differencing time-series data. We select detections in difference images associated with galaxy hosts using a star/galaxy catalog extracted from the deep PS1 MDS stacked images, and adopt a maximum a posteriori formulation to model their difference-flux time series in the four Pan-STARRS1 photometric bands g_P1, r_P1, i_P1, and z_P1. We use three deterministic light-curve models to fit BL transients: a Gaussian, a Gamma distribution, and an analytic supernova (SN) model; and one stochastic light-curve model, the Ornstein-Uhlenbeck process, to fit the variability characteristic of active galactic nuclei (AGNs). We assess the quality of fit of the models band-wise and source-wise, using their estimated leave-one-out cross-validation likelihoods and corrected Akaike information criteria. We then apply a K-means clustering algorithm to these statistics to determine the source classification in each band. The final source classification is derived as a combination of the individual filter classifications, resulting in two measures of classification quality, from the averages across the photometric filters of (1) the classifications determined from the closest K-means cluster centers, and (2) the squared distances from the cluster centers in the K-means clustering spaces. For a verification set of AGNs and SNe, we show that SVs and BL transients occupy distinct regions in the plane constituted by these measures. We use our clustering method to characterize 4361 extragalactic image-difference detected sources, in the first 2.5 yr of the PS1 MDS, into 1529 BL and 2262 SV, with a purity of 95.00% for AGNs and 90.97% for SNe based on our verification sets. We combine our light-curve classifications with their nuclear or off-nuclear host
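
    A minimal sketch of the per-band model-comparison step described above: fit one deterministic burst template by least squares, score it with the corrected Akaike information criterion (AICc), and cluster the resulting fit statistics with K-means. The function names, the constant-flux stand-in for the Ornstein-Uhlenbeck fit, and the synthetic population are illustrative assumptions, not the authors' pipeline.

      # Illustrative sketch (not the authors' code): fit a BL template, score it
      # with AICc, and cluster per-source statistics with K-means.
      import numpy as np
      from scipy.optimize import curve_fit
      from sklearn.cluster import KMeans

      def gaussian_burst(t, amp, t0, sigma):
          # Deterministic burst-like (BL) template: a Gaussian flare.
          return amp * np.exp(-0.5 * ((t - t0) / sigma) ** 2)

      def aicc(n, k, rss):
          # Corrected Akaike information criterion for a least-squares fit
          # with n points and k parameters.
          aic = n * np.log(rss / n) + 2 * k
          return aic + 2 * k * (k + 1) / (n - k - 1)

      rng = np.random.default_rng(0)
      t = np.sort(rng.uniform(0, 100, 60))                  # observation epochs (days)
      flux = gaussian_burst(t, 5.0, 50.0, 8.0) + rng.normal(0, 0.3, t.size)

      popt, _ = curve_fit(gaussian_burst, t, flux,
                          p0=[flux.max(), t[np.argmax(flux)], 10.0])
      rss_bl = np.sum((flux - gaussian_burst(t, *popt)) ** 2)
      rss_sv = np.sum((flux - flux.mean()) ** 2)            # stand-in "stochastic" fit

      stats = np.array([aicc(t.size, 3, rss_bl), aicc(t.size, 1, rss_sv)])
      # In the paper, such per-band statistics for thousands of sources are
      # clustered; here we fake a small population around our single source.
      population = rng.normal(loc=stats, scale=5.0, size=(200, 2))
      labels = KMeans(n_clusters=2, n_init=10).fit_predict(population)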

  1. Deep learning in bioinformatics.

    Science.gov (United States)

    Min, Seonwoo; Lee, Byunghan; Yoon, Sungroh

    2017-09-01

    In the era of big data, transformation of biomedical big data into valuable knowledge has been one of the most important challenges in bioinformatics. Deep learning has advanced rapidly since the early 2000s and now demonstrates state-of-the-art performance in various fields. Accordingly, application of deep learning in bioinformatics to gain insight from data has been emphasized in both academia and industry. Here, we review deep learning in bioinformatics, presenting examples of current research. To provide a useful and comprehensive perspective, we categorize research both by the bioinformatics domain (i.e. omics, biomedical imaging, biomedical signal processing) and deep learning architecture (i.e. deep neural networks, convolutional neural networks, recurrent neural networks, emergent architectures) and present brief descriptions of each study. Additionally, we discuss theoretical and practical issues of deep learning in bioinformatics and suggest future research directions. We believe that this review will provide valuable insights and serve as a starting point for researchers to apply deep learning approaches in their bioinformatics studies.

  2. Exploring the Hidden Structure of Astronomical Images: A "Pixelated" View of Solar System and Deep Space Features!

    Science.gov (United States)

    Ward, R. Bruce; Sienkiewicz, Frank; Sadler, Philip; Antonucci, Paul; Miller, Jaimie

    2013-01-01

    We describe activities created to help student participants in Project ITEAMS (Innovative Technology-Enabled Astronomy for Middle Schools) develop a deeper understanding of picture elements (pixels), image creation, and analysis of the recorded data. ITEAMS is an out-of-school time (OST) program funded by the National Science Foundation (NSF) with…

  3. Luciola Hypertelescope Space Observatory: Versatile, Upgradable High-Resolution Imaging, from Stars to Deep-Field Cosmology

    Science.gov (United States)

    Labeyrie, Antoine; Le Coroller, Herve; Dejonghe, Julien; Lardiere, Olivier; Aime, Claude; Dohlen, Kjetil; Mourard, Denis; Lyon, Richard; Carpenter, Kenneth G.

    2008-01-01

    Luciola is a large (one kilometer) "multi-aperture densified-pupil imaging interferometer", or "hypertelescope", employing many small apertures rather than a few large ones to obtain direct snapshot images with a high information content. A diluted collector mirror, deployed in space as a flotilla of small mirrors, focuses a sky image which is exploited by several beam-combiner spaceships. Each contains a pupil-densifier micro-lens array to avoid the diffractive spread and image attenuation caused by the small sub-apertures. The elucidation of hypertelescope imaging properties during the last decade has shown that many small apertures tend to be far more efficient, regarding the science yield, than a few large ones providing a comparable collecting area. For similar underlying physical reasons, radio astronomy has also evolved in the direction of many-antenna systems, such as the proposed Low Frequency Array with its hundreds of thousands of individual receivers. With its high limiting magnitude, reaching the m_v = 30 limit of HST once 100 collectors of 25 cm match its collecting area, high-resolution direct imaging in multiple channels, broad spectral coverage from the 1200 Angstrom ultraviolet to the 20 micron infrared, and apodization, coronagraphic, and spectroscopic capabilities, the proposed hypertelescope observatory addresses very broad and innovative science covering different areas of ESA's Cosmic Vision program. In the initial phase, a focal spacecraft covering the UV to near-IR spectral range of EMCCD photon-counting cameras (currently 200 to 1000 nm) will image details on the surface of many stars, as well as their environment, including multiple stars and clusters. Spectra will be obtained for each resel. It will also image neutron star, black-hole and micro-quasar candidates, as well as active galactic nuclei, quasars, gravitational lenses, and other Cosmic Vision targets observable with the initial modest crowding limit. With subsequent upgrade
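
    The collecting-area claim is easy to verify; the quick check below treats each mirror as an unobstructed disk and ignores HST's secondary-mirror obstruction.

      # Sanity check: 100 collectors of 25 cm vs. HST's 2.4 m primary.
      import math

      area_hst = math.pi * (2.4 / 2) ** 2                  # ~4.52 m^2
      area_flotilla = 100 * math.pi * (0.25 / 2) ** 2      # ~4.91 m^2
      print(f"HST: {area_hst:.2f} m^2, flotilla: {area_flotilla:.2f} m^2")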

  4. Disaster damage detection through synergistic use of deep learning and 3D point cloud features derived from very high resolution oblique aerial images, and multiple-kernel-learning

    Science.gov (United States)

    Vetrivel, Anand; Gerke, Markus; Kerle, Norman; Nex, Francesco; Vosselman, George

    2018-06-01

    Oblique aerial images offer views of both building roofs and façades, and thus have been recognized as a potential source to detect severe building damage caused by destructive disaster events such as earthquakes. Therefore, they represent an important source of information for first responders or other stakeholders involved in the post-disaster response process. Several automated methods based on supervised learning have already been demonstrated for damage detection using oblique airborne images. However, they often do not generalize well when data from new unseen sites need to be processed, hampering their practical use. Reasons for this limitation include image and scene characteristics, though the most prominent one relates to the image features being used for training the classifier. Recently, features based on deep learning approaches, such as convolutional neural networks (CNNs), have been shown to be more effective than conventional hand-crafted features, and have become the state-of-the-art in many domains, including remote sensing. Moreover, oblique images are often captured with high block overlap, facilitating the generation of dense 3D point clouds - an ideal source to derive geometric characteristics. We hypothesized that the use of CNN features, either independently or in combination with 3D point cloud features, would yield improved performance in damage detection. To this end we used CNN and 3D features, both independently and in combination, using images from manned and unmanned aerial platforms over several geographic locations that vary significantly in terms of image and scene characteristics. A multiple-kernel-learning framework, an effective way for integrating features from different modalities, was used for combining the two sets of features for classification. The results are encouraging: while CNN features produced an average classification accuracy of about 91%, the integration of 3D point cloud features led to an additional
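
    A toy stand-in for the kernel-combination idea: one kernel per modality (CNN features, 3D point-cloud features), summed with a fixed weight in place of a full multiple-kernel-learning optimization. The arrays X_cnn and X_3d, the labels, and the weight w are placeholders, not the paper's data or framework.

      # Weighted sum of per-modality kernels fed to a precomputed-kernel SVM;
      # a real MKL framework would learn the kernel weights as well.
      import numpy as np
      from sklearn.metrics.pairwise import rbf_kernel
      from sklearn.model_selection import train_test_split
      from sklearn.svm import SVC

      rng = np.random.default_rng(1)
      X_cnn = rng.normal(size=(300, 128))        # e.g. CNN activations per patch
      X_3d = rng.normal(size=(300, 32))          # e.g. 3D point-cloud descriptors
      y = (X_cnn[:, 0] + X_3d[:, 0] > 0).astype(int)   # synthetic damage labels

      train, test = train_test_split(np.arange(300), random_state=0)

      def combined_kernel(a, b, w=0.6):
          # K = w * K_cnn + (1 - w) * K_3d
          return (w * rbf_kernel(X_cnn[a], X_cnn[b])
                  + (1 - w) * rbf_kernel(X_3d[a], X_3d[b]))

      clf = SVC(kernel="precomputed").fit(combined_kernel(train, train), y[train])
      print(clf.score(combined_kernel(test, train), y[test]))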

  5. Fully automatic detection and segmentation of abdominal aortic thrombus in post-operative CTA images using Deep Convolutional Neural Networks.

    Science.gov (United States)

    López-Linares, Karen; Aranjuelo, Nerea; Kabongo, Luis; Maclair, Gregory; Lete, Nerea; Ceresa, Mario; García-Familiar, Ainhoa; Macía, Iván; González Ballester, Miguel A

    2018-05-01

    Computerized Tomography Angiography (CTA) based follow-up of Abdominal Aortic Aneurysms (AAA) treated with Endovascular Aneurysm Repair (EVAR) is essential to evaluate the progress of the patient and detect complications. In this context, accurate quantification of post-operative thrombus volume is required. However, a proper evaluation is hindered by the lack of automatic, robust and reproducible thrombus segmentation algorithms. We propose a new fully automatic approach based on Deep Convolutional Neural Networks (DCNN) for robust and reproducible thrombus region of interest detection and subsequent fine thrombus segmentation. The DetectNet detection network is adapted to perform region of interest extraction from a complete CTA, and a new segmentation network architecture, based on Fully Convolutional Networks and a Holistically-Nested Edge Detection Network, is presented. These networks are trained, validated and tested on 13 post-operative CTA volumes of different patients using a 4-fold cross-validation approach to make the results more robust. Our pipeline achieves a Dice score of more than 82% for post-operative thrombus segmentation and provides a mean relative volume difference between ground truth and automatic segmentation that lies within the experienced human observer variance, without the need for human intervention in most common cases.
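
    For reference, the Dice score quoted above, computed for binary masks; a minimal NumPy version, not the authors' implementation.

      # Dice = 2 |P & T| / (|P| + |T|) for boolean segmentation masks.
      import numpy as np

      def dice_score(pred, truth, eps=1e-8):
          pred, truth = pred.astype(bool), truth.astype(bool)
          inter = np.logical_and(pred, truth).sum()
          return 2.0 * inter / (pred.sum() + truth.sum() + eps)

      # Example: two overlapping 5x5 squares in a 10x10 grid.
      a = np.zeros((10, 10), dtype=bool); a[2:7, 2:7] = True
      b = np.zeros((10, 10), dtype=bool); b[4:9, 4:9] = True
      print(dice_score(a, b))   # 0.36 = 2*9 / (25 + 25)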

  6. Imaging the deep structures of the convergent plates along the Ecuadorian subduction zone through receiver function analysis

    Science.gov (United States)

    Galve, A.; Charvis, P.; Regnier, M. M.; Font, Y.; Nocquet, J. M.; Segovia, M.

    2017-12-01

    The Ecuadorian subduction zone has been affected by several large M > 7.5 earthquakes. While the extents of the 1942 and 1958 rupture zones are poorly resolved, the 2016 Pedernales earthquake, which occurred at the same location as the 1942 event, places strong constraints on the deep limit of the seismogenic zone. This downdip limit is caused by the onset of plasticity at a critical temperature (> 350-450 °C for crustal materials or a serpentinized mantle wedge, and eventually > 700 °C for dry mantle). However, we still do not know exactly where the upper-plate Moho is, and therefore what controls the downdip limit of the seismogenic zone of large Ecuadorian earthquakes. For several years, Géoazur and IG-EPN have maintained permanent and temporary networks (ADN and JUAN projects) along the margin to record the seismological activity of the subduction zone. Although Ecuador is poorly placed for receiver-function analysis, given its position with respect to worldwide teleseismic sources, the very long deployment compensates for this. We performed a frequency-dependent receiver function analysis to derive (1) the thickness of the downgoing plate, (2) the interplate depth and (3) the upper-plate Moho. These constraints provide the framework for interpreting the seismogenic zone of the 2016 Pedernales earthquake.
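
    The abstract does not describe the authors' processing, but the core of most receiver-function codes is a source-normalizing deconvolution; the sketch below is a generic frequency-domain (water-level) version with a Gaussian low-pass whose width stands in for the frequency dependence mentioned above. All inputs are synthetic and the method choice is an assumption for illustration.

      # Deconvolve the vertical component from the radial one to isolate
      # P-to-S conversions (the receiver function).
      import numpy as np

      def receiver_function(radial, vertical, dt, water=0.01, a=1.0):
          n = len(radial)
          R, V = np.fft.rfft(radial), np.fft.rfft(vertical)
          denom = (V * np.conj(V)).real
          denom = np.maximum(denom, water * denom.max())        # water level
          f = np.fft.rfftfreq(n, dt)
          gauss = np.exp(-(2 * np.pi * f) ** 2 / (4 * a ** 2))  # low-pass of width a
          return np.fft.irfft(R * np.conj(V) / denom * gauss, n)

      dt = 0.05
      t = np.arange(0, 60, dt)
      vert = np.exp(-((t - 10) / 0.5) ** 2)                     # direct P pulse
      rad = 0.3 * np.exp(-((t - 14) / 0.5) ** 2) + 0.1 * vert   # Ps arrival + leakage
      rf = receiver_function(rad, vert, dt)                     # peak near 4 s delay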

  7. ULTRA-COMPACT DWARFS IN THE CORE OF THE COMA CLUSTER

    International Nuclear Information System (INIS)

    Madrid, Juan P.; Graham, Alister W.; Forbes, Duncan A.; Spitler, Lee R.; Harris, William E.; Goudfrooij, Paul; Ferguson, Henry C.; Carter, David; Blakeslee, John P.

    2010-01-01

    We have discovered both a red and a blue subpopulation of ultra-compact dwarf (UCD) galaxy candidates in the Coma galaxy cluster. We analyzed deep F475W (Sloan g) and F814W (I) Hubble Space Telescope images obtained with the Advanced Camera for Surveys Wide Field Channel as part of the Coma Cluster Treasury Survey and have fitted the light profiles of ∼5000 point-like sources in the vicinity of NGC 4874, one of the two central dominant galaxies of the Coma Cluster. Although almost all of these sources are globular clusters that remain unresolved, we found that 52 objects have effective radii between ∼10 and 66 pc, in the range spanned by dwarf globular transition objects (DGTOs) and UCDs. Of these 52 compact objects, 25 are brighter than M_V ∼ -11 mag, a magnitude conventionally thought to separate UCDs and globular clusters. The UCD/DGTO candidates have the same color and luminosity distribution as the most luminous globular clusters within the red and blue subpopulations of the immensely rich NGC 4874 globular cluster system. Unlike standard globular clusters, blue and red UCD/DGTO subpopulations have the same median effective radius. The spatial distribution of UCD/DGTO candidates reveals that they congregate toward NGC 4874 and are not uniformly distributed. We find a relative deficit of UCD/DGTOs compared with globular clusters in the inner 15 kpc around NGC 4874; however, at larger radii UCD/DGTO and globular clusters follow the same spatial distribution.
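
    To see what these effective radii mean observationally: at the distance of Coma (taken here as a round 100 Mpc, an assumption for illustration only), the quoted 10-66 pc range corresponds to angular radii of a few hundredths of an arcsecond, which is why HST resolution is needed to resolve these sources.

      # Angular-to-physical size conversion at an assumed Coma distance.
      import math

      D_PC = 100.0e6                         # assumed distance in parsecs
      ARCSEC = math.pi / (180 * 3600)        # radians per arcsecond

      def r_eff_pc(theta_arcsec):
          return theta_arcsec * ARCSEC * D_PC

      print(r_eff_pc(0.02))   # ~9.7 pc, near the lower end of the UCD range
      print(r_eff_pc(0.14))   # ~68 pc, near the quoted 66 pc upper end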

  8. The Far-Field Hubble Constant

    Science.gov (United States)

    Lauer, Tod

    1995-07-01

    We request deep, near-IR (F814W) WFPC2 images of five nearby Brightest Cluster Galaxies (BCGs) to calibrate the BCG Hubble diagram by the Surface Brightness Fluctuation (SBF) method. Lauer & Postman (1992) show that the BCG Hubble diagram measured out to 15,000 km s^-1 is highly linear. Calibration of the Hubble diagram zeropoint by SBF will thus yield an accurate far-field measure of H_0 based on the entire volume within 15,000 km s^-1, circumventing any strong biases caused by local peculiar velocity fields. This method of reaching the far field is contrasted with those using distance ratios between Virgo and Coma, or any other limited sample of clusters. HST is required, as the ground-based SBF method cannot reach these distances. The team developed the SBF method and the first BCG Hubble diagram based on a full-sky, volume-limited BCG sample, played major roles in the calibration of WFPC and WFPC2, and is conducting observations of local galaxies that will validate the SBF zeropoint (through GTO programs). This work uses the SBF method to tie both the Cepheid and Local Group giant-branch distances generated by HST to the large-scale Hubble flow, which is most accurately traced by BCGs.
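
    Once the SBF zeropoint fixes the distance scale, the far-field measurement reduces to the Hubble law; both numbers in the toy example below are made-up placeholders.

      # H0 = v / d for a BCG in the smooth Hubble flow.
      v_km_s = 15000.0            # recession velocity at the sample's outer edge
      d_mpc = 200.0               # hypothetical SBF-calibrated distance
      print(v_km_s / d_mpc, "km/s/Mpc")   # 75.0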

  9. Ship Detection in Gaofen-3 SAR Images Based on Sea Clutter Distribution Analysis and Deep Convolutional Neural Network.

    Science.gov (United States)

    An, Quanzhi; Pan, Zongxu; You, Hongjian

    2018-01-24

    Target detection is one of the important applications in the field of remote sensing. The Gaofen-3 (GF-3) Synthetic Aperture Radar (SAR) satellite launched by China is a powerful tool for maritime monitoring. This work aims at detecting ships in GF-3 SAR images using a new land-masking strategy, an appropriate model for sea clutter, and a neural network as the discrimination scheme. Firstly, the fully convolutional network (FCN) is applied to separate the sea from the land. Then, by analyzing the sea clutter distribution in GF-3 SAR images, we choose the probability distribution model for the Constant False Alarm Rate (CFAR) detector from among the K, Gamma, and Rayleigh distributions, based on a tradeoff between sea-clutter modeling accuracy and computational complexity. Furthermore, in order to better implement CFAR detection, we also use a truncated statistic (TS) as a preprocessing scheme and an iterative censoring scheme (ICS) to boost the detector's performance. Finally, we employ a neural network to re-examine the results as the discrimination stage. Experimental results on three GF-3 SAR images verify the effectiveness and efficiency of this approach.
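
    A minimal 1D cell-averaging CFAR sketch under the Rayleigh clutter model (one of the three candidates above); the window sizes, false-alarm rate, and synthetic data are illustrative placeholders, not the paper's configuration.

      # For Rayleigh amplitudes, P_fa = exp(-T^2 / (2 sigma^2)), so the
      # detection threshold is T = sigma * sqrt(-2 ln P_fa).
      import numpy as np

      def ca_cfar(x, n_train=16, n_guard=2, pfa=1e-4):
          alpha = np.sqrt(-2.0 * np.log(pfa))
          half = n_train // 2 + n_guard
          hits = np.zeros(len(x), dtype=bool)
          for i in range(half, len(x) - half):
              # Training cells on both sides, excluding guard cells and the CUT.
              train = np.r_[x[i - half:i - n_guard], x[i + n_guard + 1:i + half + 1]]
              sigma = np.sqrt(np.mean(train ** 2) / 2.0)   # Rayleigh scale estimate
              hits[i] = x[i] > alpha * sigma
          return hits

      rng = np.random.default_rng(2)
      clutter = rng.rayleigh(scale=1.0, size=2000)
      clutter[700] += 12.0                            # inject a bright "ship"
      print(np.flatnonzero(ca_cfar(clutter)))         # should include index 700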

  10. A DEEP NARROWBAND IMAGING SEARCH FOR C IV AND He II EMISSION FROM Lyα BLOBS

    Energy Technology Data Exchange (ETDEWEB)

    Battaia, Fabrizio Arrigoni; Yang, Yujin; Hennawi, Joseph F. [Max-Planck-Institut für Astronomie, Königstuhl 17, D-69117 Heidelberg (Germany); Prochaska, J. Xavier [Department of Astronomy and Astrophysics, University of California, 1156 High Street, Santa Cruz, California 95064 (United States); Matsuda, Yuichi [National Astronomical Observatory of Japan, 2-21-1 Osawa, Mitaka, Tokyo 181-8588 (Japan); Yamada, Toru [Astronomical Institute, Tohoku University, Aramaki, Aoba-ku, Sendai, Miyagi 980-8578 (Japan); Hayashino, Tomoki, E-mail: arrigoni@mpia.de [Research Center for Neutrino Science, Graduate School of Science, Tohoku University, Sendai 980-8578 (Japan)

    2015-05-01

    We conduct a deep narrowband imaging survey of 13 Lyα blobs (LABs) located in the SSA22 proto-cluster at z ∼ 3.1 in the C IV and He II emission lines in an effort to constrain the physical process powering the Lyα emission in LABs. Our observations probe down to unprecedented surface brightness (SB) limits of (2.1–3.4) × 10^-18 erg s^-1 cm^-2 arcsec^-2 per 1 arcsec^2 aperture (5σ) for the He II λ1640 and C IV λ1549 lines, respectively. We do not detect extended He II and C IV emission in any of the LABs, placing strong upper limits on the He II/Lyα and C IV/Lyα line ratios, of 0.11 and 0.16, for the brightest two LABs in the field. We conduct detailed photoionization modeling of the expected line ratios and find that, although our data constitute the deepest ever observations of these lines, they are still not deep enough to rule out a scenario where the Lyα emission is powered by the ionizing radiation from an obscured active galactic nucleus. Our models can accommodate He II/Lyα and C IV/Lyα ratios as low as ≃0.05 and ≃0.07, respectively, implying that one needs to reach SB as low as (1–1.5) × 10^-18 erg s^-1 cm^-2 arcsec^-2 (at 5σ) in order to rule out a photoionization scenario. These depths will be achievable with the new generation of image-slicing integral field units such as the Multi Unit Spectroscopic Explorer (MUSE) on VLT and the Keck Cosmic Web Imager (KCWI). We also model the expected He II/Lyα and C IV/Lyα in a different scenario, where Lyα emission is powered by shocks generated in a large-scale superwind, but find that our observational constraints can only be met for shock velocities v_s ≳ 250 km s^-1, which appear to be in conflict with recent observations of quiescent kinematics in LABs.
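
    The quoted ratio limits are simply the 5σ line SB limits divided by each blob's Lyα surface brightness; the Lyα value below is a made-up placeholder to show the arithmetic.

      # Line-ratio upper limit from surface-brightness numbers.
      sb_heii_limit = 2.1e-18     # erg/s/cm^2/arcsec^2, 5-sigma He II limit
      sb_lya = 1.9e-17            # hypothetical Lya SB of a bright blob
      print(f"He II / Lya < {sb_heii_limit / sb_lya:.2f}")   # ~0.11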