WorldWideScience

Sample records for network-error correcting convolutional

  1. An upper bound on the number of errors corrected by a convolutional code

    DEFF Research Database (Denmark)

    Justesen, Jørn

    2000-01-01

    The number of errors that a convolutional code can correct in a segment of the encoded sequence is upper bounded by the number of distinct syndrome sequences of the relevant length.
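
    A one-line statement of the counting argument may help; the notation below (a rate k/n code observed over a segment of L blocks) is an assumption for illustration, not taken from the record:

```latex
\[
  \#\{\text{correctable error patterns in a segment of } L \text{ blocks}\}
  \;\le\;
  \#\{\text{distinct syndrome sequences}\}
  \;=\; 2^{(n-k)L}.
\]
```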

  2. Upper bounds on the number of errors corrected by a convolutional code

    DEFF Research Database (Denmark)

    Justesen, Jørn

    2004-01-01

    We derive upper bounds on the weights of error patterns that can be corrected by a convolutional code with given parameters, or equivalently we give bounds on the code rate for a given set of error patterns. The bounds parallel the Hamming bound for block codes by relating the number of error...
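
    For orientation, the block-code Hamming bound that these convolutional bounds are said to parallel can be sketched as follows, again with assumed notation (rate k/n, a segment of L blocks, all patterns of weight at most t correctable):

```latex
\[
  \sum_{i=0}^{t} \binom{nL}{i} \;\le\; 2^{(n-k)L},
\]
```
    since every correctable error pattern must produce a distinct syndrome sequence.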

  3. Linear network error correction coding

    CERN Document Server

    Guang, Xuan

    2014-01-01

    There are two main approaches in the theory of network error correction coding. In this SpringerBrief, the authors summarize some of the most important contributions following the classic approach, which represents messages by sequences, similar to algebraic coding, and also briefly discuss the main results following the other approach, which uses the theory of rank metric codes for network error correction, representing messages by subspaces. This book starts by establishing the basic linear network error correction (LNEC) model and then characterizes two equivalent descriptions. Distances an...

  4. Convolutional Codes with Maximum Column Sum Rank for Network Streaming

    OpenAIRE

    Mahmood, Rafid; Badr, Ahmed; Khisti, Ashish

    2015-01-01

    The column Hamming distance of a convolutional code determines the error correction capability when streaming over a class of packet erasure channels. We introduce a metric known as the column sum rank, which parallels the column Hamming distance when streaming over a network with link failures. We prove rank analogues of several known column Hamming distance properties and introduce a new family of convolutional codes that maximize the column sum rank up to the code memory. Our construction invol...
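
    The column sum rank itself cannot be reconstructed from this truncated abstract, but the column Hamming distance it parallels is easy to illustrate. The sketch below uses a toy rate-1/2, memory-2 binary convolutional code with the common (7,5) octal generators (an assumption for illustration) and enumerates truncated codewords to compute the column distance profile:

```python
from itertools import product

# Toy rate-1/2, memory-2 binary convolutional code with generators (7, 5) in octal.
G = [0b111, 0b101]   # generator taps g0..g2 (bit i of g multiplies u_{t-i})
M = 2                # encoder memory

def encode(u):
    """Encode a binary input sequence u; return a list of (v1, v2) output blocks."""
    state = [0] * M
    out = []
    for bit in u:
        regs = [bit] + state                                  # [u_t, u_{t-1}, u_{t-2}]
        block = []
        for g in G:
            taps = [(g >> i) & 1 for i in range(M + 1)]
            block.append(sum(t * r for t, r in zip(taps, regs)) % 2)
        out.append(tuple(block))
        state = regs[:-1]                                     # shift-register update
    return out

def column_distance(j):
    """d_j: minimum weight of the first j+1 output blocks over inputs with u_0 = 1."""
    best = None
    for tail in product([0, 1], repeat=j):
        u = (1,) + tail
        w = sum(sum(block) for block in encode(u)[: j + 1])
        best = w if best is None else min(best, w)
    return best

print([column_distance(j) for j in range(6)])   # column distance profile d_0 .. d_5
```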

  5. On the Design of Error-Correcting Ciphers

    Directory of Open Access Journals (Sweden)

    Mathur Chetan Nanjunda

    2006-01-01

    Full Text Available Securing transmission over a wireless network is especially challenging, not only because of the inherently insecure nature of the medium, but also because of the highly error-prone nature of the wireless environment. In this paper, we take a joint encryption-error correction approach to ensure secure and robust communication over the wireless link. In particular, we design an error-correcting cipher (called the high diffusion cipher) and prove bounds on its error-correcting capacity as well as its security. Towards this end, we propose a new class of error-correcting codes (HD-codes) with built-in security features that we use in the diffusion layer of the proposed cipher. We construct an example 128-bit cipher using the HD-codes, and compare it experimentally with two traditional concatenated systems: (a) AES (Rijndael) followed by Reed-Solomon codes, (b) Rijndael followed by convolutional codes. We show that the HD-cipher is as resistant to linear and differential cryptanalysis as the Rijndael. We also show that any chosen plaintext attack that can be performed on the HD cipher can be transformed into a chosen plaintext attack on the Rijndael cipher. In terms of error correction capacity, the traditional systems using Reed-Solomon codes are comparable to the proposed joint error-correcting cipher, and those that use convolutional codes require more data expansion in order to achieve similar error correction as the HD-cipher. The original contributions of this work are (1) design of a new joint error-correction-encryption system, (2) design of a new class of algebraic codes with built-in security criteria, called the high diffusion codes (HD-codes), for use in the HD-cipher, (3) mathematical properties of these codes, (4) methods for construction of the codes, (5) bounds on the error-correcting capacity of the HD-cipher, (6) mathematical derivation of the bound on resistance of the HD cipher to linear and differential cryptanalysis, (7) experimental comparison

  6. Error estimation of deformable image registration of pulmonary CT scans using convolutional neural networks.

    Science.gov (United States)

    Eppenhof, Koen A J; Pluim, Josien P W

    2018-04-01

    Error estimation in nonlinear medical image registration is a nontrivial problem that is important for validation of registration methods. We propose a supervised method for estimation of registration errors in nonlinear registration of three-dimensional (3-D) images. The method is based on a 3-D convolutional neural network that learns to estimate registration errors from a pair of image patches. By applying the network to patches centered around every voxel, we construct registration error maps. The network is trained using a set of representative images that have been synthetically transformed to construct a set of image pairs with known deformations. The method is evaluated on deformable registrations of inhale-exhale pairs of thoracic CT scans. Using ground truth target registration errors on manually annotated landmarks, we evaluate the method's ability to estimate local registration errors. Estimation of full domain error maps is evaluated using a gold standard approach. The two evaluation approaches show that we can train the network to robustly estimate registration errors in a predetermined range, with subvoxel accuracy. We achieved a root-mean-square deviation of 0.51 mm from gold standard registration errors and of 0.66 mm from ground truth landmark registration errors.
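
    As a rough illustration of the patch-pair regression idea only (this is not the authors' architecture; PyTorch, the 17^3 patch size and the layer widths are assumptions):

```python
import torch
import torch.nn as nn

# Sketch: a small 3-D CNN that regresses a scalar registration error from a pair of
# fixed/moving image patches stacked as two input channels.
class PatchErrorNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(2, 16, kernel_size=3), nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=3), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(32, 64, kernel_size=3), nn.ReLU(),
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 4 * 4 * 4, 128), nn.ReLU(),
            nn.Linear(128, 1),                      # predicted error, e.g. in mm
        )

    def forward(self, fixed_patch, moving_patch):
        x = torch.cat([fixed_patch, moving_patch], dim=1)   # (N, 2, D, H, W)
        return self.regressor(self.features(x))

# Training would minimise an L2 loss against known synthetic deformation errors;
# applying the network around every voxel then yields a registration error map.
net = PatchErrorNet()
dummy = torch.randn(8, 1, 17, 17, 17)
print(net(dummy, dummy).shape)                      # torch.Size([8, 1])
```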

  7. Enhanced online convolutional neural networks for object tracking

    Science.gov (United States)

    Zhang, Dengzhuo; Gao, Yun; Zhou, Hao; Li, Tianwen

    2018-04-01

    In recent years, object tracking based on convolutional neural networks has gained more and more attention. The initialization and updating of the convolution filters directly affect the precision of object tracking. In this paper, a novel object tracking method based on an enhanced online convolutional neural network without offline training is proposed, which initializes the convolution filters by a k-means++ algorithm and updates the filters by error back-propagation. Comparative experiments with 7 trackers on 15 challenging sequences showed that our tracker performs better than the other trackers in terms of AUC and precision.

  8. Passive quantum error correction of linear optics networks through error averaging

    Science.gov (United States)

    Marshman, Ryan J.; Lund, Austin P.; Rohde, Peter P.; Ralph, Timothy C.

    2018-02-01

    We propose and investigate a method of error detection and noise correction for bosonic linear networks using a method of unitary averaging. The proposed error averaging does not rely on ancillary photons or control and feedforward correction circuits, remaining entirely passive in its operation. We construct a general mathematical framework for this technique and then give a series of proof of principle examples including numerical analysis. Two methods for the construction of averaging are then compared to determine the most effective manner of implementation and probe the related error thresholds. Finally we discuss some of the potential uses of this scheme.

  9. Biometrics encryption combining palmprint with two-layer error correction codes

    Science.gov (United States)

    Li, Hengjian; Qiu, Jian; Dong, Jiwen; Feng, Guang

    2017-07-01

    To bridge the gap between the fuzziness of biometrics and the exactitude of cryptography, a novel biometrics encryption method based on combining palmprint features with two-layer error correction codes is proposed. Firstly, randomly generated original keys are encoded by convolutional and cyclic two-layer coding. The first layer uses a convolutional code to correct burst errors; the second layer uses a cyclic code to correct random errors. Then, palmprint features are extracted from the palmprint images. Next, the encoded keys and the features are fused together by an XOR operation, and the result is stored in a smart card. Finally, to extract the original keys, the information in the smart card is XORed with the user's palmprint features and then decoded with the convolutional and cyclic two-layer code. The experimental results and security analysis show that the method can recover the original keys completely. The proposed method is more secure than a single password factor, and has higher accuracy than a single biometric factor.
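
    A minimal sketch of the XOR lock/unlock step described above, with a 3x repetition code standing in for the convolutional plus cyclic two-layer code and randomly simulated palmprint bits (all of these stand-ins are assumptions for illustration):

```python
import random

def rep_encode(bits, r=3):
    """Toy stand-in for the two-layer encoder: repeat each key bit r times."""
    return [b for b in bits for _ in range(r)]

def rep_decode(bits, r=3):
    """Majority-vote decoder for the repetition code."""
    return [1 if 2 * sum(bits[i:i + r]) > r else 0 for i in range(0, len(bits), r)]

def xor(a, b):
    return [x ^ y for x, y in zip(a, b)]

key = [random.randint(0, 1) for _ in range(32)]
enrol_features = [random.randint(0, 1) for _ in range(96)]   # enrolment palmprint bits
locked = xor(rep_encode(key), enrol_features)                # stored on the smart card

# At verification the fresh palmprint differs from enrolment in a few positions;
# flip one bit in five distinct 3-bit groups so the toy code can still correct them.
query_features = enrol_features[:]
for g in random.sample(range(32), 5):
    query_features[3 * g] ^= 1

recovered = rep_decode(xor(locked, query_features))
print(recovered == key)   # True: key recovered despite the noisy biometric
```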

  10. Neural network decoder for quantum error correcting codes

    Science.gov (United States)

    Krastanov, Stefan; Jiang, Liang

    Artificial neural networks form a family of extremely powerful - albeit still poorly understood - tools used in anything from image and sound recognition through text generation to, in our case, decoding. We present a straightforward Recurrent Neural Network architecture capable of deducing the correcting procedure for a quantum error-correcting code from a set of repeated stabilizer measurements. We discuss the fault-tolerance of our scheme and the cost of training the neural network for a system of a realistic size. Such decoders are especially interesting when applied to codes, like the quantum LDPC codes, that lack known efficient decoding schemes.

  11. The Application of Social Characteristic and L1 Optimization in the Error Correction for Network Coding in Wireless Sensor Networks.

    Science.gov (United States)

    Zhang, Guangzhi; Cai, Shaobin; Xiong, Naixue

    2018-02-03

    One of the remarkable challenges in Wireless Sensor Networks (WSN) is how to transfer the collected data efficiently, given the energy limitation of sensor nodes. Network coding increases the network throughput of a WSN dramatically due to the broadcast nature of WSN. However, network coding usually propagates a single original error over the whole network. Due to this error propagation, most error correction methods cannot correct more than C/2 corrupted errors, where C is the max-flow min-cut of the network. To maximize the effectiveness of network coding applied in WSN, a new error-correcting mechanism to confront the propagated errors is urgently needed. Based on the social network characteristics inherent in WSN and L1 optimization, we propose a novel scheme that successfully corrects more than C/2 corrupted errors. Moreover, even if errors occur on all links of the network, our scheme can still correct them successfully. By introducing a secret channel and a specially designed matrix that can trap some errors, we improve John and Yi's model so that it can correct the propagated errors in network coding, which usually pollute exactly 100% of the received messages. Taking advantage of the social characteristics inherent in WSN, we propose a new distributed approach that establishes reputation-based trust among sensor nodes in order to identify the informative upstream sensor nodes. Using social network theory, the informative relay nodes are selected and marked with a high trust value. The two methods, L1 optimization and the use of social characteristics, complement each other and can correct propagated errors whose fraction is even exactly 100% in a WSN where network coding is performed. The effectiveness of the error correction scheme is validated through simulation experiments.
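
    The secret channel and the specially designed trap matrix cannot be reproduced from this abstract, but the L1-optimization step can be sketched as standard basis pursuit: recover a sparse error vector from an underdetermined linear observation by linear programming (the random matrix, dimensions and sparsity below are assumptions for illustration):

```python
import numpy as np
from scipy.optimize import linprog

# Basis pursuit: min ||e||_1 subject to A e = s, as an LP with e = u - v, u, v >= 0.
rng = np.random.default_rng(0)
m, n, k = 30, 60, 4                          # observation length, error length, sparsity
A = rng.standard_normal((m, n))
e_true = np.zeros(n)
e_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
s = A @ e_true

c = np.ones(2 * n)                           # minimise 1^T (u + v) = ||e||_1
A_eq = np.hstack([A, -A])                    # enforce A (u - v) = s
res = linprog(c, A_eq=A_eq, b_eq=s, bounds=[(0, None)] * (2 * n), method="highs")
e_hat = res.x[:n] - res.x[n:]
print(np.allclose(e_hat, e_true, atol=1e-6))  # sparse error recovered exactly
```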

  12. Multi-Scale Residual Convolutional Neural Network for Haze Removal of Remote Sensing Images

    Directory of Open Access Journals (Sweden)

    Hou Jiang

    2018-06-01

    Full Text Available Haze removal is a pre-processing step that operates on at-sensor radiance data prior to the physically based image correction step to enhance hazy imagery visually. Most current haze removal methods focus on point-to-point operations and utilize information in the spectral domain, without taking into consideration the multi-scale spatial information of haze. In this paper, we propose a multi-scale residual convolutional neural network (MRCNN) for haze removal of remote sensing images. MRCNN utilizes 3D convolutional kernels to extract spatial–spectral correlation information and abstract features from surrounding neighborhoods for haze transmission estimation. It takes advantage of dilated convolution to aggregate multi-scale contextual information for the purpose of improving its prediction accuracy. Meanwhile, residual learning is utilized to avoid the loss of weak information while deepening the network. Our experiments indicate that MRCNN performs accurately, achieving an extremely low validation error and testing error. The haze removal results of several scenes of Landsat 8 Operational Land Imager (OLI) data show that the visibility of the dehazed images is significantly improved, and the color of the recovered surface is consistent with the actual scene. Quantitative analysis proves that the dehazed results of MRCNN are superior to those of traditional methods and other networks. Additionally, a comparison to haze-free data illustrates the spectral consistency after haze removal and reveals the changes in the vegetation index.

  13. An investigation of error correcting techniques for OMV and AXAF

    Science.gov (United States)

    Ingels, Frank; Fryer, John

    1991-01-01

    The original objectives of this project were to build a test system for the NASA 255/223 Reed-Solomon encoding/decoding chip set and circuit board. This test system was then to be interfaced with a convolutional system at MSFC to examine the performance of the concatenated codes. After considerable work, it was discovered that the convolutional system could not function as needed. This report documents the design, construction, and testing of the test apparatus for the R/S chip set. The approach taken was to verify the error correcting behavior of the chip set by injecting known error patterns onto data and observing the results. Error sequences were generated using pseudo-random number generator programs, with a Poisson time distribution between errors and Gaussian burst lengths. Sample means, variances, and numbers of un-correctable errors were calculated for each data set before testing.
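
    A small sketch of the kind of error-sequence generator the report describes, with Poisson-distributed gaps between error events and Gaussian-distributed burst lengths (the numerical parameters are assumptions, not values from the report):

```python
import numpy as np

rng = np.random.default_rng(1)
n_bits = 10_000
mean_gap, burst_mean, burst_std = 500, 8.0, 3.0   # assumed generator parameters

errors = np.zeros(n_bits, dtype=np.uint8)
pos = 0
while True:
    pos += rng.poisson(mean_gap)                  # Poisson-distributed gap to next event
    if pos >= n_bits:
        break
    burst = max(1, int(round(rng.normal(burst_mean, burst_std))))
    errors[pos:pos + burst] ^= 1                  # flip a Gaussian-length burst of bits
    pos += burst

print(int(errors.sum()), "corrupted bits injected")
```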

  14. Neural Network Based Real-time Correction of Transducer Dynamic Errors

    Science.gov (United States)

    Roj, J.

    2013-12-01

    In order to carry out real-time dynamic error correction of transducers described by a linear differential equation, a novel recurrent neural network was developed. The network structure is based on solving this equation with respect to the input quantity when using the state variables. It is shown that such a real-time correction can be carried out using simple linear perceptrons. Due to the use of a neural technique, knowledge of the dynamic parameters of the transducer is not necessary. Theoretical considerations are illustrated by the results of simulation studies performed for a modeled second-order transducer. The most important properties of the neural dynamic error correction are discussed, with emphasis on its fundamental advantages and disadvantages.
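
    For a first-order transducer (an assumed example; the study itself models a second-order one), the inversion that the linear perceptrons implement can be written as:

```latex
\[
  T\,\frac{dy(t)}{dt} + y(t) = x(t)
  \quad\Longrightarrow\quad
  \hat{x}(t) = y(t) + T\,\frac{dy(t)}{dt},
\]
```
    so the input is reconstructed as a linear combination of the output and its state variables; the recurrent network in effect learns these coefficients, which is why the transducer's dynamic parameters need not be known explicitly.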

  15. Tensor Networks and Quantum Error Correction

    Science.gov (United States)

    Ferris, Andrew J.; Poulin, David

    2014-07-01

    We establish several relations between quantum error correction (QEC) and tensor network (TN) methods of quantum many-body physics. We exhibit correspondences between well-known families of QEC codes and TNs, and demonstrate a formal equivalence between decoding a QEC code and contracting a TN. We build on this equivalence to propose a new family of quantum codes and decoding algorithms that generalize and improve upon quantum polar codes and successive cancellation decoding in a natural way.

  16. Neural network error correction for solving coupled ordinary differential equations

    Science.gov (United States)

    Shelton, R. O.; Darsey, J. A.; Sumpter, B. G.; Noid, D. W.

    1992-01-01

    A neural network is presented to learn errors generated by a numerical algorithm for solving coupled nonlinear differential equations. The method is based on using a neural network to correctly learn the error generated by, for example, Runge-Kutta on a model molecular dynamics (MD) problem. The neural network programs used in this study were developed by NASA. Comparisons are made for training the neural network using backpropagation and a new method which was found to converge with fewer iterations. The neural net programs, the MD model and the calculations are discussed.

  17. Applying Gradient Descent in Convolutional Neural Networks

    Science.gov (United States)

    Cui, Nan

    2018-04-01

    With the development of integrated circuits and computer science, people care more and more about solving practical issues via information technologies. Along with that, a new subject called Artificial Intelligence (AI) has come up. One popular research interest in AI is recognition algorithms. In this paper, one of the most common algorithms, the Convolutional Neural Network (CNN), is introduced for image recognition. Understanding its theory and structure is of great significance for every scholar who is interested in this field. A Convolutional Neural Network is an artificial neural network that combines the mathematical method of convolution with a neural network. The hierarchical structure of a CNN provides it with reliable computational speed and a reasonable error rate. The most significant characteristics of CNNs are feature extraction, weight sharing and dimension reduction. Meanwhile, by combining the Back Propagation (BP) mechanism with the Gradient Descent (GD) method, CNNs have the ability to self-train and learn in depth. Basically, BP provides backward feedback for enhancing reliability and GD is used for the self-training process. This paper mainly discusses the CNN and the related BP and GD algorithms, including the basic structure and function of the CNN, details of each layer, the principles and features of BP and GD, and some examples in practice, with a summary at the end.
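
    A minimal numerical sketch of the backpropagation-plus-gradient-descent update discussed above, shown for a single linear neuron with a mean-squared-error loss (the toy data and learning rate are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((64, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w

w = np.zeros(3)
lr = 0.1
for step in range(200):
    y_hat = X @ w                               # forward pass
    grad = 2 * X.T @ (y_hat - y) / len(X)       # backward pass: dL/dw for the MSE loss
    w -= lr * grad                              # gradient-descent update
print(np.round(w, 3))                           # converges towards [ 1.5 -2.   0.5]
```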

  18. Down image recognition based on deep convolutional neural network

    Directory of Open Access Journals (Sweden)

    Wenzhu Yang

    2018-06-01

    Full Text Available Because of the scale and the various shapes of down in the image, it is difficult for traditional image recognition methods to correctly recognize the type of down image and achieve the required recognition accuracy, even for a Traditional Convolutional Neural Network (TCNN). To deal with the above problems, a Deep Convolutional Neural Network (DCNN) for down image classification is constructed, and a new weight initialization method is proposed. Firstly, the salient regions of a down image were cut from the image using the visual saliency model. Then, these salient regions of the image were used to train a sparse autoencoder and obtain a collection of convolutional filters, which accord with the statistical characteristics of the dataset. At last, a DCNN with the Inception module and its variants was constructed. To improve the recognition accuracy, the depth of the network is deepened. The experimental results indicate that the constructed DCNN increases the recognition accuracy by 2.7% compared to the TCNN when recognizing the down in the images. The convergence rate of the proposed DCNN with the new weight initialization method is improved by 25.5% compared to the TCNN. Keywords: Deep convolutional neural network, Weight initialization, Sparse autoencoder, Visual saliency model, Image recognition

  19. An Implementation of Error Minimization Data Transmission in OFDM using Modified Convolutional Code

    Directory of Open Access Journals (Sweden)

    Hendy Briantoro

    2016-04-01

    Full Text Available This paper presents error minimization in an OFDM system. Conventional systems usually use channel coding such as a BCH code or a convolutional code, but the performance of these codes is not good in an implementation of an OFDM system. The bit error rate of the OFDM system without channel coding is 5.77%; a convolutional code with code rate 1/2 reduces the error bits only to 3.85%. We therefore propose an OFDM system with a Modified Convolutional Code. In this implementation, we used Software Defined Radio (SDR), namely the Universal Software Radio Peripheral (USRP) NI 2920, as the transmitter and receiver. The OFDM system using the Modified Convolutional Code is able to recover all received characters, decreasing the error bits to 0%. The performance gain of the Modified Convolutional Code is about 1 dB at a BER of 10^-4 over the BCH code and the convolutional code. Thus, the Modified Convolutional Code performs better than the BCH code or the convolutional code. Keywords: OFDM, BCH Code, Convolutional Code, Modified Convolutional Code, SDR, USRP

  20. Polynomial theory of error correcting codes

    CERN Document Server

    Cancellieri, Giovanni

    2015-01-01

    The book offers an original view on channel coding, based on a unitary approach to block and convolutional codes for error correction. It presents both new concepts and new families of codes. For example, lengthened and modified lengthened cyclic codes are introduced as a bridge towards time-invariant convolutional codes and their extension to time-varying versions. The novel families of codes include turbo codes and low-density parity check (LDPC) codes, the features of which are justified from the structural properties of the component codes. Design procedures for regular LDPC codes are proposed, supported by the presented theory. Quasi-cyclic LDPC codes, in block or convolutional form, represent one of the most original contributions of the book. The use of more than 100 examples allows the reader gradually to gain an understanding of the theory, and the provision of a list of more than 150 definitions, indexed at the end of the book, permits rapid location of sought information.

  1. Rock images classification by using deep convolution neural network

    Science.gov (United States)

    Cheng, Guojian; Guo, Wenhui

    2017-08-01

    Granularity analysis is one of the most essential issues in authentication under the microscope. To improve the efficiency and accuracy of traditional manual work, a convolutional neural network based method is proposed for granularity analysis from thin section images, which chooses and extracts features from image samples while building a classifier to recognize the granularity of input image samples. 4800 samples from the Ordos basin are used for experiments in the HSV, YCbCr and RGB colour spaces, respectively. On the test dataset, the correct rate in the RGB colour space is 98.5%, and the results in the HSV and YCbCr colour spaces are also credible. The results show that the convolutional neural network can classify the rock images with high reliability.

  2. Generalization error analysis: deep convolutional neural network in mammography

    Science.gov (United States)

    Richter, Caleb D.; Samala, Ravi K.; Chan, Heang-Ping; Hadjiiski, Lubomir; Cha, Kenny

    2018-02-01

    We conducted a study to gain understanding of the generalizability of deep convolutional neural networks (DCNNs) given their inherent capability to memorize data. We examined empirically a specific DCNN trained for classification of masses on mammograms. Using a data set of 2,454 lesions from 2,242 mammographic views, a DCNN was trained to classify masses into malignant and benign classes using transfer learning from ImageNet LSVRC-2010. We performed experiments with varying amounts of label corruption and types of pixel randomization to analyze the generalization error for the DCNN. Performance was evaluated using the area under the receiver operating characteristic curve (AUC) with an N-fold cross validation. Comparisons were made between the convergence times, the inference AUCs for both the training set and the test set of the original image patches without corruption, and the root-mean-squared difference (RMSD) in the layer weights of the DCNN trained with different amounts and methods of corruption. Our experiments observed trends which revealed that the DCNN overfitted by memorizing corrupted data. More importantly, this study improved our understanding of DCNN weight updates when learning new patterns or new labels. Although we used a specific classification task with the ImageNet as example, similar methods may be useful for analysis of the DCNN learning processes, especially those that employ transfer learning for medical image analysis where sample size is limited and overfitting risk is high.

  3. Application of structured support vector machine backpropagation to a convolutional neural network for human pose estimation.

    Science.gov (United States)

    Witoonchart, Peerajak; Chongstitvatana, Prabhas

    2017-08-01

    In this study, for the first time, we show how to formulate a structured support vector machine (SSVM) as two layers in a convolutional neural network, where the top layer is a loss augmented inference layer and the bottom layer is the normal convolutional layer. We show that a deformable part model can be learned with the proposed structured SVM neural network by backpropagating the error of the deformable part model to the convolutional neural network. The forward propagation calculates the loss augmented inference and the backpropagation calculates the gradient from the loss augmented inference layer to the convolutional layer. Thus, we obtain a new type of convolutional neural network called a Structured SVM convolutional neural network, which we applied to the human pose estimation problem. This new neural network can be used as the final layers in deep learning. Our method jointly learns the structural model parameters and the appearance model parameters. We implemented our method as a new layer in the existing Caffe library. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Fully Convolutional Networks for Ground Classification from LIDAR Point Clouds

    Science.gov (United States)

    Rizaldy, A.; Persello, C.; Gevaert, C. M.; Oude Elberink, S. J.

    2018-05-01

    Deep Learning has been massively used for image classification in recent years. The use of deep learning for ground classification from LIDAR point clouds has also been recently studied. However, point clouds need to be converted into an image in order to use Convolutional Neural Networks (CNNs). In state-of-the-art techniques, this conversion is slow because each point is converted into a separate image. This approach leads to highly redundant computation during conversion and classification. The goal of this study is to design a more efficient data conversion and ground classification. This goal is achieved by first converting the whole point cloud into a single image. The classification is then performed by a Fully Convolutional Network (FCN), a modified version of CNN designed for pixel-wise image classification. The proposed method is significantly faster than state-of-the-art techniques. On the ISPRS Filter Test dataset, it is 78 times faster for conversion and 16 times faster for classification. Our experimental analysis on the same dataset shows that the proposed method results in 5.22 % of total error, 4.10 % of type I error, and 15.07 % of type II error. Compared to the previous CNN-based technique and LAStools software, the proposed method reduces the total error and type I error (while type II error is slightly higher). The method was also tested on a very high point density LIDAR point cloud, resulting in 4.02 % of total error, 2.15 % of type I error and 6.14 % of type II error.

  5. Error forecasting schemes of error correction at receiver

    International Nuclear Information System (INIS)

    Bhunia, C.T.

    2007-08-01

    To combat errors in computer communication networks, ARQ (Automatic Repeat Request) techniques are used. Recently Chakraborty proposed a simple technique called the packet combining scheme, in which errors are corrected at the receiver from the erroneous copies. The Packet Combining (PC) scheme fails: (i) when the bit error locations in the erroneous copies are the same and (ii) when multiple bit errors occur. Both cases have been addressed recently by two schemes known as the Packet Reversed Packet Combining (PRPC) scheme and the Modified Packet Combining (MPC) scheme, respectively. In this letter, two error forecasting correction schemes are reported, which in combination with PRPC offer higher throughput. (author)
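
    A minimal sketch of the basic packet-combining idea that the letter builds on: XOR two erroneous copies of a packet to locate the positions where they disagree, then search flips over those positions until an appended CRC-32 check passes (the PRPC/MPC refinements and the forecasting schemes themselves are not reproduced here; the packet length and error positions are assumptions):

```python
import random
import zlib
from itertools import combinations

def corrupt(bits, positions):
    out = bits[:]
    for p in positions:
        out[p] ^= 1
    return out

packet = [random.randint(0, 1) for _ in range(64)]
crc = zlib.crc32(bytes(packet))                 # check value known to the receiver
copy1 = corrupt(packet, [5])                    # first received copy, one bit error
copy2 = corrupt(packet, [40])                   # second received copy, different error

candidates = [i for i in range(64) if copy1[i] != copy2[i]]   # disagreeing positions
recovered = None
for r in range(len(candidates) + 1):            # try flipping ever larger subsets
    for subset in combinations(candidates, r):
        trial = corrupt(copy1, subset)
        if zlib.crc32(bytes(trial)) == crc:
            recovered = trial
            break
    if recovered:
        break
print(recovered == packet)   # True; fails if both copies err in the same positions
```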

  6. Applications of deep convolutional neural networks to digitized natural history collections

    Directory of Open Access Journals (Sweden)

    Eric Schuettpelz

    2017-11-01

    Full Text Available Natural history collections contain data that are critical for many scientific endeavors. Recent efforts in mass digitization are generating large datasets from these collections that can provide unprecedented insight. Here, we present examples of how deep convolutional neural networks can be applied in analyses of imaged herbarium specimens. We first demonstrate that a convolutional neural network can detect mercury-stained specimens across a collection with 90% accuracy. We then show that such a network can correctly distinguish two morphologically similar plant families 96% of the time. Discarding the most challenging specimen images increases accuracy to 94% and 99%, respectively. These results highlight the importance of mass digitization and deep learning approaches and reveal how they can together deliver powerful new investigative tools.

  7. Applications of deep convolutional neural networks to digitized natural history collections.

    Science.gov (United States)

    Schuettpelz, Eric; Frandsen, Paul B; Dikow, Rebecca B; Brown, Abel; Orli, Sylvia; Peters, Melinda; Metallo, Adam; Funk, Vicki A; Dorr, Laurence J

    2017-01-01

    Natural history collections contain data that are critical for many scientific endeavors. Recent efforts in mass digitization are generating large datasets from these collections that can provide unprecedented insight. Here, we present examples of how deep convolutional neural networks can be applied in analyses of imaged herbarium specimens. We first demonstrate that a convolutional neural network can detect mercury-stained specimens across a collection with 90% accuracy. We then show that such a network can correctly distinguish two morphologically similar plant families 96% of the time. Discarding the most challenging specimen images increases accuracy to 94% and 99%, respectively. These results highlight the importance of mass digitization and deep learning approaches and reveal how they can together deliver powerful new investigative tools.

  8. REAL-TIME VIDEO SCALING BASED ON CONVOLUTION NEURAL NETWORK ARCHITECTURE

    Directory of Open Access Journals (Sweden)

    S Safinaz

    2017-08-01

    Full Text Available In recent years, video super resolution techniques have become mandatory requirements for obtaining high resolution videos. Many super resolution techniques have been researched, but video super resolution or scaling remains a vital challenge. In this paper, we present real-time video scaling based on a convolution neural network architecture to eliminate the blurriness in images and video frames and to provide better reconstruction quality while scaling large datasets from lower resolution frames to high resolution frames. We compare our outcomes with multiple existing algorithms. Our extensive results for the proposed technique RemCNN (Reconstruction error minimization Convolution Neural Network) show that our model outperforms existing technologies such as bicubic, bilinear and MCResNet and provides better reconstruction of moving images and video frames. The experimental results show that our average PSNR is 47.80474 for upscale-2, 41.70209 for upscale-3 and 36.24503 for upscale-4 on the Myanmar dataset, which is very high in contrast to other existing techniques. These results prove the high efficiency and better performance of the proposed real-time video scaling based on a convolution neural network architecture.

  9. Convolutional Neural Network for Image Recognition

    CERN Document Server

    Seifnashri, Sahand

    2015-01-01

    The aim of this project is to use machine learning techniques, especially Convolutional Neural Networks, for image processing. These techniques can be used for Quark-Gluon discrimination using calorimeter data, but unfortunately I didn't manage to get the calorimeter data and I just used the Jet data from miniAODSIM (ak4 CHS). The Jet data was not good enough for a Convolutional Neural Network, which is designed for 'image' recognition. This report is made of two main parts: part one is mainly about implementing a Convolutional Neural Network on non-physics data such as the MNIST digits and the CIFAR-10 dataset, and part two is about the Jet data.

  10. FULLY CONVOLUTIONAL NETWORKS FOR GROUND CLASSIFICATION FROM LIDAR POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    A. Rizaldy

    2018-05-01

    Full Text Available Deep Learning has been massively used for image classification in recent years. The use of deep learning for ground classification from LIDAR point clouds has also been recently studied. However, point clouds need to be converted into an image in order to use Convolutional Neural Networks (CNNs). In state-of-the-art techniques, this conversion is slow because each point is converted into a separate image. This approach leads to highly redundant computation during conversion and classification. The goal of this study is to design a more efficient data conversion and ground classification. This goal is achieved by first converting the whole point cloud into a single image. The classification is then performed by a Fully Convolutional Network (FCN), a modified version of CNN designed for pixel-wise image classification. The proposed method is significantly faster than state-of-the-art techniques. On the ISPRS Filter Test dataset, it is 78 times faster for conversion and 16 times faster for classification. Our experimental analysis on the same dataset shows that the proposed method results in 5.22 % of total error, 4.10 % of type I error, and 15.07 % of type II error. Compared to the previous CNN-based technique and LAStools software, the proposed method reduces the total error and type I error (while type II error is slightly higher). The method was also tested on a very high point density LIDAR point cloud, resulting in 4.02 % of total error, 2.15 % of type I error and 6.14 % of type II error.

  11. 3D multi-view convolutional neural networks for lung nodule classification

    Science.gov (United States)

    Kang, Guixia; Hou, Beibei; Zhang, Ningbo

    2017-01-01

    The 3D convolutional neural network (CNN) is able to make full use of the spatial 3D context information of lung nodules, and the multi-view strategy has been shown to be useful for improving the performance of 2D CNN in classifying lung nodules. In this paper, we explore the classification of lung nodules using the 3D multi-view convolutional neural networks (MV-CNN) with both chain architecture and directed acyclic graph architecture, including 3D Inception and 3D Inception-ResNet. All networks employ the multi-view-one-network strategy. We conduct a binary classification (benign and malignant) and a ternary classification (benign, primary malignant and metastatic malignant) on Computed Tomography (CT) images from Lung Image Database Consortium and Image Database Resource Initiative database (LIDC-IDRI). All results are obtained via 10-fold cross validation. As regards the MV-CNN with chain architecture, results show that the performance of 3D MV-CNN surpasses that of 2D MV-CNN by a significant margin. Finally, a 3D Inception network achieved an error rate of 4.59% for the binary classification and 7.70% for the ternary classification, both of which represent superior results for the corresponding task. We compare the multi-view-one-network strategy with the one-view-one-network strategy. The results reveal that the multi-view-one-network strategy can achieve a lower error rate than the one-view-one-network strategy. PMID:29145492

  12. Error correcting coding for OTN

    DEFF Research Database (Denmark)

    Justesen, Jørn; Larsen, Knud J.; Pedersen, Lars A.

    2010-01-01

    Forward error correction codes for 100 Gb/s optical transmission are currently receiving much attention from transport network operators and technology providers. We discuss the performance of hard decision decoding using product type codes that cover a single OTN frame or a small number of such frames. In particular we argue that a three-error-correcting BCH code is the best choice for the component code in such systems.

  13. Face recognition: a convolutional neural-network approach.

    Science.gov (United States)

    Lawrence, S; Giles, C L; Tsoi, A C; Back, A D

    1997-01-01

    We present a hybrid neural-network approach for human face recognition which compares favourably with other methods. The system combines local image sampling, a self-organizing map (SOM) neural network, and a convolutional neural network. The SOM provides a quantization of the image samples into a topological space where inputs that are nearby in the original space are also nearby in the output space, thereby providing dimensionality reduction and invariance to minor changes in the image sample, and the convolutional neural network provides partial invariance to translation, rotation, scale, and deformation. The convolutional network extracts successively larger features in a hierarchical set of layers. We present results using the Karhunen-Loeve transform in place of the SOM, and a multilayer perceptron (MLP) in place of the convolutional network for comparison. We use a database of 400 images of 40 individuals which contains quite a high degree of variability in expression, pose, and facial details. We analyze the computational complexity and discuss how new classes could be added to the trained recognizer.

  14. Limitations of a convolution method for modeling geometric uncertainties in radiation therapy. I. The effect of shift invariance

    International Nuclear Information System (INIS)

    Craig, Tim; Battista, Jerry; Van Dyk, Jake

    2003-01-01

    Convolution methods have been used to model the effect of geometric uncertainties on dose delivery in radiation therapy. Convolution assumes shift invariance of the dose distribution. Internal inhomogeneities and surface curvature lead to violations of this assumption. The magnitude of the error resulting from violation of shift invariance is not well documented. This issue is addressed by comparing dose distributions calculated using the Convolution method with dose distributions obtained by Direct Simulation. A comparison of conventional Static dose distributions was also made with Direct Simulation. This analysis was performed for phantom geometries and several clinical tumor sites. A modification to the Convolution method to correct for some of the inherent errors is proposed and tested using example phantoms and patients. We refer to this modified method as the Corrected Convolution. The average maximum dose error in the calculated volume (averaged over different beam arrangements in the various phantom examples) was 21% with the Static dose calculation, 9% with Convolution, and reduced to 5% with the Corrected Convolution. The average maximum dose error in the calculated volume (averaged over four clinical examples) was 9% for the Static method, 13% for Convolution, and 3% for Corrected Convolution. While Convolution can provide a superior estimate of the dose delivered when geometric uncertainties are present, the violation of shift invariance can result in substantial errors near the surface of the patient. The proposed Corrected Convolution modification reduces errors near the surface to 3% or less
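
    In the usual notation (assumed here, not quoted from the paper), the Convolution method computes

```latex
\[
  D_{\mathrm{conv}}(\mathbf{x}) \;=\; \int D_{\mathrm{static}}(\mathbf{x}-\mathbf{s})\,P(\mathbf{s})\,d\mathbf{s},
\]
```
    where P is the probability density of the geometric displacement s. This is exact only when the dose distribution simply translates under a setup displacement; internal inhomogeneities and surface curvature break that shift invariance, which is what the Corrected Convolution modification addresses.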

  15. Transfer Learning with Convolutional Neural Networks for Classification of Abdominal Ultrasound Images.

    Science.gov (United States)

    Cheng, Phillip M; Malhi, Harshawn S

    2017-04-01

    The purpose of this study is to evaluate transfer learning with deep convolutional neural networks for the classification of abdominal ultrasound images. Grayscale images from 185 consecutive clinical abdominal ultrasound studies were categorized into 11 categories based on the text annotation specified by the technologist for the image. Cropped images were rescaled to 256 × 256 resolution and randomized, with 4094 images from 136 studies constituting the training set, and 1423 images from 49 studies constituting the test set. The fully connected layers of two convolutional neural networks based on CaffeNet and VGGNet, previously trained on the 2012 Large Scale Visual Recognition Challenge data set, were retrained on the training set. Weights in the convolutional layers of each network were frozen to serve as fixed feature extractors. Accuracy on the test set was evaluated for each network. A radiologist experienced in abdominal ultrasound also independently classified the images in the test set into the same 11 categories. The CaffeNet network classified 77.3% of the test set images accurately (1100/1423 images), with a top-2 accuracy of 90.4% (1287/1423 images). The larger VGGNet network classified 77.9% of the test set accurately (1109/1423 images), with a top-2 accuracy of 89.7% (1276/1423 images). The radiologist classified 71.7% of the test set images correctly (1020/1423 images). The differences in classification accuracies between both neural networks and the radiologist were statistically significant. Transfer learning with convolutional neural networks may thus be used to construct effective classifiers for abdominal ultrasound images.

  16. One weird trick for parallelizing convolutional neural networks

    OpenAIRE

    Krizhevsky, Alex

    2014-01-01

    I present a new way to parallelize the training of convolutional neural networks across multiple GPUs. The method scales significantly better than all alternatives when applied to modern convolutional neural networks.

  17. Semantic segmentation of bioimages using convolutional neural networks

    CSIR Research Space (South Africa)

    Wiehman, S

    2016-07-01

    Full Text Available Convolutional neural networks have shown great promise in both general image segmentation problems as well as bioimage segmentation. In this paper, the application of different convolutional network architectures is explored on the C. elegans live...

  18. VLSI architectures for modern error-correcting codes

    CERN Document Server

    Zhang, Xinmiao

    2015-01-01

    Error-correcting codes are ubiquitous. They are adopted in almost every modern digital communication and storage system, such as wireless communications, optical communications, Flash memories, computer hard drives, sensor networks, and deep-space probing. New-generation and emerging applications demand codes with better error-correcting capability. On the other hand, the design and implementation of those high-gain error-correcting codes pose many challenges. They usually involve complex mathematical computations, and mapping them directly to hardware often leads to very high complexity. VLSI...

  19. Off-resonance artifacts correction with convolution in k-space (ORACLE).

    Science.gov (United States)

    Lin, Wei; Huang, Feng; Simonotto, Enrico; Duensing, George R; Reykowski, Arne

    2012-06-01

    Off-resonance artifacts hinder the wider applicability of echo-planar imaging and non-Cartesian MRI methods such as radial and spiral. In this work, a general and rapid method is proposed for off-resonance artifacts correction based on data convolution in k-space. The acquired k-space is divided into multiple segments based on their acquisition times. Off-resonance-induced artifact within each segment is removed by applying a convolution kernel, which is the Fourier transform of an off-resonance correcting spatial phase modulation term. The field map is determined from the inverse Fourier transform of a basis kernel, which is calibrated from data fitting in k-space. The technique was demonstrated in phantom and in vivo studies for radial, spiral and echo-planar imaging datasets. For radial acquisitions, the proposed method allows the self-calibration of the field map from the imaging data, when an alternating view-angle ordering scheme is used. An additional advantage for off-resonance artifacts correction based on data convolution in k-space is the reusability of convolution kernels to images acquired with the same sequence but different contrasts. Copyright © 2011 Wiley-Liss, Inc.
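
    In commonly used notation (an assumption, not reproduced from the paper), correcting the k-space segment acquired around time t_m amounts to a spatial phase demodulation, or equivalently a convolution in k-space:

```latex
\[
  \hat{s}_m(\mathbf{k}) \;=\; s_m(\mathbf{k}) \,\ast\, \mathcal{F}\!\left\{ e^{-i 2\pi \,\Delta f(\mathbf{r})\, t_m} \right\},
\]
```
    where \Delta f(\mathbf{r}) is the field map and \ast denotes convolution over k. The paper determines the field map from the inverse Fourier transform of a basis kernel calibrated by data fitting in k-space, which also lets the same kernels be reused for images acquired with the same sequence but different contrasts.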

  20. ID card number detection algorithm based on convolutional neural network

    Science.gov (United States)

    Zhu, Jian; Ma, Hanjie; Feng, Jie; Dai, Leiyan

    2018-04-01

    In this paper, a new detection algorithm based on a Convolutional Neural Network is presented in order to realize fast and convenient ID information extraction in multiple scenarios. The algorithm uses a mobile device equipped with the Android operating system to locate and extract the ID number; the special color distribution of the ID card is used to select the appropriate channel component; image threshold segmentation, noise processing and morphological processing are used to binarize the image; at the same time, image rotation and projection methods are used for horizontal correction when the image is tilted; finally, single characters are extracted by the projection method and recognized using a Convolutional Neural Network. Tests show that the time from extraction to identification of a single ID number image is about 80 ms and the accuracy rate is about 99%, so the method can be applied in practical production and living environments.

  1. Triple-Error-Correcting Codec ASIC

    Science.gov (United States)

    Jones, Robert E.; Segallis, Greg P.; Boyd, Robert

    1994-01-01

    Coder/decoder constructed on single integrated-circuit chip. Handles data in variety of formats at rates up to 300 Mbps, correcting up to 3 errors per data block of 256 to 512 bits. Helps reduce cost of transmitting data. Useful in many high-data-rate, bandwidth-limited communication systems such as: personal communication networks, cellular telephone networks, satellite communication systems, high-speed computing networks, broadcasting, and high-reliability data-communication links.

  2. Deep multi-scale convolutional neural network for hyperspectral image classification

    Science.gov (United States)

    Zhang, Feng-zhe; Yang, Xia

    2018-04-01

    In this paper, we propose a multi-scale convolutional neural network for the hyperspectral image classification task. Firstly, compared with conventional convolution, we utilize multi-scale convolutions, which possess larger receptive fields, to extract the spectral features of the hyperspectral image. We design a deep neural network with a multi-scale convolution layer that contains 3 different convolution kernel sizes. Secondly, to avoid overfitting of the deep neural network, dropout is utilized, which randomly deactivates neurons and contributes to a modest improvement in classification accuracy. In addition, recent deep learning techniques such as ReLU are utilized in this paper. We conduct experiments on the University of Pavia and Salinas datasets, and obtain better classification accuracy compared with other methods.

  3. Detecting atrial fibrillation by deep convolutional neural networks.

    Science.gov (United States)

    Xia, Yong; Wulan, Naren; Wang, Kuanquan; Zhang, Henggui

    2018-02-01

    Atrial fibrillation (AF) is the most common cardiac arrhythmia. The incidence of AF increases with age, causing high risks of stroke and increased morbidity and mortality. Efficient and accurate diagnosis of AF based on the ECG is valuable in clinical settings and remains challenging. In this paper, we proposed a novel method with high reliability and accuracy for AF detection via deep learning. The short-term Fourier transform (STFT) and stationary wavelet transform (SWT) were used to analyze ECG segments to obtain two-dimensional (2-D) matrix input suitable for deep convolutional neural networks. Then, two different deep convolutional neural network models corresponding to STFT output and SWT output were developed. Our new method did not require detection of P or R peaks, nor feature designs for classification, in contrast to existing algorithms. Finally, the performances of the two models were evaluated and compared with those of existing algorithms. Our proposed method demonstrated favorable performances on ECG segments as short as 5 s. The deep convolutional neural network using input generated by STFT, presented a sensitivity of 98.34%, specificity of 98.24% and accuracy of 98.29%. For the deep convolutional neural network using input generated by SWT, a sensitivity of 98.79%, specificity of 97.87% and accuracy of 98.63% was achieved. The proposed method using deep convolutional neural networks shows high sensitivity, specificity and accuracy, and, therefore, is a valuable tool for AF detection. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Fully convolutional network with cluster for semantic segmentation

    Science.gov (United States)

    Ma, Xiao; Chen, Zhongbi; Zhang, Jianlin

    2018-04-01

    At present, image semantic segmentation technology has been an active research topic for scientists in the field of computer vision and artificial intelligence. In particular, the extensive research on deep neural networks in image recognition greatly promotes the development of semantic segmentation. This paper puts forward a method based on a fully convolutional network combined with the k-means clustering algorithm. The clustering algorithm, which uses the image's low-level features and initializes the cluster centers by super-pixel segmentation, is proposed to correct the set of points with low reliability, which are very likely to be misclassified, using the set of points with high reliability in each cluster region. This method refines the segmentation of the target contour and improves the accuracy of the image segmentation.

  5. Convolutional Neural Networks for Text Categorization: Shallow Word-level vs. Deep Character-level

    OpenAIRE

    Johnson, Rie; Zhang, Tong

    2016-01-01

    This paper reports the performances of shallow word-level convolutional neural networks (CNN), our earlier work (2015), on the eight datasets with relatively large training data that were used for testing the very deep character-level CNN in Conneau et al. (2016). Our findings are as follows. The shallow word-level CNNs achieve better error rates than the error rates reported in Conneau et al., though the results should be interpreted with some consideration due to the unique pre-processing o...

  6. Spatiotemporal Recurrent Convolutional Networks for Traffic Prediction in Transportation Networks

    Directory of Open Access Journals (Sweden)

    Haiyang Yu

    2017-06-01

    Full Text Available Predicting large-scale transportation network traffic has become an important and challenging topic in recent decades. Inspired by the domain knowledge of motion prediction, in which the future motion of an object can be predicted based on previous scenes, we propose a network grid representation method that can retain the fine-scale structure of a transportation network. Network-wide traffic speeds are converted into a series of static images and input into a novel deep architecture, namely, spatiotemporal recurrent convolutional networks (SRCNs), for traffic forecasting. The proposed SRCNs inherit the advantages of deep convolutional neural networks (DCNNs) and long short-term memory (LSTM) neural networks. The spatial dependencies of network-wide traffic can be captured by DCNNs, and the temporal dynamics can be learned by LSTMs. An experiment on a Beijing transportation network with 278 links demonstrates that SRCNs outperform other deep learning-based algorithms in both short-term and long-term traffic prediction.

  7. Spatiotemporal Recurrent Convolutional Networks for Traffic Prediction in Transportation Networks.

    Science.gov (United States)

    Yu, Haiyang; Wu, Zhihai; Wang, Shuqin; Wang, Yunpeng; Ma, Xiaolei

    2017-06-26

    Predicting large-scale transportation network traffic has become an important and challenging topic in recent decades. Inspired by the domain knowledge of motion prediction, in which the future motion of an object can be predicted based on previous scenes, we propose a network grid representation method that can retain the fine-scale structure of a transportation network. Network-wide traffic speeds are converted into a series of static images and input into a novel deep architecture, namely, spatiotemporal recurrent convolutional networks (SRCNs), for traffic forecasting. The proposed SRCNs inherit the advantages of deep convolutional neural networks (DCNNs) and long short-term memory (LSTM) neural networks. The spatial dependencies of network-wide traffic can be captured by DCNNs, and the temporal dynamics can be learned by LSTMs. An experiment on a Beijing transportation network with 278 links demonstrates that SRCNs outperform other deep learning-based algorithms in both short-term and long-term traffic prediction.

  8. DCMDN: Deep Convolutional Mixture Density Network

    Science.gov (United States)

    D'Isanto, Antonio; Polsterer, Kai Lars

    2017-09-01

    Deep Convolutional Mixture Density Network (DCMDN) estimates probabilistic photometric redshift directly from multi-band imaging data by combining a version of a deep convolutional network with a mixture density network. The estimates are expressed as Gaussian mixture models representing the probability density functions (PDFs) in the redshift space. In addition to the traditional scores, the continuous ranked probability score (CRPS) and the probability integral transform (PIT) are applied as performance criteria. DCMDN is able to predict redshift PDFs independently from the type of source, e.g. galaxies, quasars or stars and renders pre-classification of objects and feature extraction unnecessary; the method is extremely general and allows the solving of any kind of probabilistic regression problems based on imaging data, such as estimating metallicity or star formation rate in galaxies.

  9. Neurometaplasticity: Glucoallostasis control of plasticity of the neural networks of error commission, detection, and correction modulates neuroplasticity to influence task precision

    Science.gov (United States)

    Welcome, Menizibeya O.; Dane, Şenol; Mastorakis, Nikos E.; Pereverzev, Vladimir A.

    2017-12-01

    The term "metaplasticity" is a recent one, which means plasticity of synaptic plasticity. Correspondingly, neurometaplasticity simply means plasticity of neuroplasticity, indicating that a previous plastic event determines the current plasticity of neurons. Emerging studies suggest that neurometaplasticity underlie many neural activities and neurobehavioral disorders. In our previous work, we indicated that glucoallostasis is essential for the control of plasticity of the neural network that control error commission, detection and correction. Here we review recent works, which suggest that task precision depends on the modulatory effects of neuroplasticity on the neural networks of error commission, detection, and correction. Furthermore, we discuss neurometaplasticity and its role in error commission, detection, and correction.

  10. Prediction of Electricity Usage Using Convolutional Neural Networks

    OpenAIRE

    Hansen, Martin

    2017-01-01

    Master's thesis Information- and communication technology IKT590 - University of Agder 2017 Convolutional Neural Networks are overwhelmingly accurate when attempting to predict numbers using the famous MNIST dataset. In this paper, we are attempting to transcend these results for time-series forecasting, and compare them with several regression models. The Convolutional Neural Network model predicted the same value through the entire time lapse in contrast with the other ...

  11. An Improved Convolutional Neural Network on Crowd Density Estimation

    Directory of Open Access Journals (Sweden)

    Pan Shao-Yun

    2016-01-01

    In this paper, a new method is proposed for crowd density estimation, in which an improved convolutional neural network is combined with traditional texture features. The data computed by the convolutional layer can be treated as a new kind of feature, so more useful information can be extracted from images by combining different features. At the same time, the size of the image has little effect on the result of the convolutional neural network. Experimental results indicate that our scheme has adequate performance to allow for its use in real-world applications.

  12. Single image super-resolution based on convolutional neural networks

    Science.gov (United States)

    Zou, Lamei; Luo, Ming; Yang, Weidong; Li, Peng; Jin, Liujia

    2018-03-01

    We present a deep learning method for single image super-resolution (SISR). The proposed approach learns an end-to-end mapping between low-resolution (LR) and high-resolution (HR) images. The mapping is represented as a deep convolutional neural network that takes the LR image as input and outputs the HR image. Our network uses five convolutional layers with kernel sizes of 5×5, 3×3 and 1×1. In the proposed network, we use residual learning and combine convolution kernels of different sizes within the same layer. Experimental results show that the proposed method outperforms existing methods on benchmark images, both in reconstruction quality metrics and in visual quality.
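
    As a rough illustration of the two ideas highlighted in this abstract, residual learning and mixing kernel sizes within one layer, a PyTorch sketch might look like the following (channel counts and depth are invented and do not reproduce the authors' five-layer network):

        import torch
        import torch.nn as nn

        class MixedKernelLayer(nn.Module):
            """One layer applying 5x5, 3x3 and 1x1 kernels in parallel and concatenating them."""
            def __init__(self, in_ch, out_ch):
                super().__init__()
                c = out_ch // 3
                self.b5 = nn.Conv2d(in_ch, c, 5, padding=2)
                self.b3 = nn.Conv2d(in_ch, c, 3, padding=1)
                self.b1 = nn.Conv2d(in_ch, out_ch - 2 * c, 1)

            def forward(self, x):
                return torch.relu(torch.cat([self.b5(x), self.b3(x), self.b1(x)], dim=1))

        class ResidualSISR(nn.Module):
            """Predicts the HR-LR residual from a bicubically upscaled LR input."""
            def __init__(self):
                super().__init__()
                self.body = nn.Sequential(
                    MixedKernelLayer(1, 48),
                    MixedKernelLayer(48, 48),
                    nn.Conv2d(48, 1, 3, padding=1),
                )

            def forward(self, lr_upscaled):
                return lr_upscaled + self.body(lr_upscaled)   # residual learning

        x = torch.randn(1, 1, 64, 64)
        print(ResidualSISR()(x).shape)                        # torch.Size([1, 1, 64, 64])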

  13. Vision-based mobile robot navigation through deep convolutional neural networks and end-to-end learning

    Science.gov (United States)

    Zhang, Yachu; Zhao, Yuejin; Liu, Ming; Dong, Liquan; Kong, Lingqin; Liu, Lingling

    2017-09-01

    In contrast to humans, who use only visual information for navigation, many mobile robots use laser scanners and ultrasonic sensors along with vision cameras to navigate. This work proposes a vision-based robot control algorithm based on deep convolutional neural networks. We create a large 15-layer convolutional neural network learning system and achieve advanced recognition performance. Our system is trained end to end in a supervised manner to map raw input images to steering directions. The images in the data sets were collected under a wide variety of weather and lighting conditions. In addition, the data sets are augmented with Gaussian and salt-and-pepper noise to avoid overfitting. The algorithm is verified by two experiments, line tracking and obstacle avoidance. The line-tracking experiment tracks a desired path composed of straight and curved lines, while the goal of the obstacle-avoidance experiment is to avoid obstacles indoors. We obtain a 3.29% error rate on the training set and a 5.1% error rate on the test set in the line-tracking experiment, and a 1.8% error rate on the training set and less than 5% error rate on the test set in the obstacle-avoidance experiment. During the actual test, the robot follows the runway centerline outdoors and accurately avoids the obstacle in the room. The results confirm the effectiveness of the algorithm and of our improvements to the network structure and training parameters.

  14. Cooperative MIMO Communication at Wireless Sensor Network: An Error Correcting Code Approach

    Science.gov (United States)

    Islam, Mohammad Rakibul; Han, Young Shin

    2011-01-01

    Cooperative communication in wireless sensor networks (WSNs) explores energy-efficient wireless communication schemes between multiple sensors and a data gathering node (DGN) by exploiting multiple input multiple output (MIMO) and multiple input single output (MISO) configurations. In this paper, an energy-efficient cooperative MIMO (C-MIMO) technique is proposed in which a low-density parity check (LDPC) code is used as the error correcting code. The rate of the LDPC code is varied by varying the length of the message and parity bits. Simulation results show that the cooperative communication scheme outperforms the SISO scheme in the presence of LDPC coding. LDPC codes with different code rates are compared using bit error rate (BER) analysis, and BER is also analyzed under different Nakagami fading scenarios. Energy efficiencies are compared for different targeted probabilities of bit error p_b. It is observed that C-MIMO performs more efficiently when the targeted p_b is smaller, and that a lower encoding rate for the LDPC code offers better error characteristics. PMID:22163732

  15. Cooperative MIMO communication at wireless sensor network: an error correcting code approach.

    Science.gov (United States)

    Islam, Mohammad Rakibul; Han, Young Shin

    2011-01-01

    Cooperative communication in wireless sensor networks (WSNs) explores energy-efficient wireless communication schemes between multiple sensors and a data gathering node (DGN) by exploiting multiple input multiple output (MIMO) and multiple input single output (MISO) configurations. In this paper, an energy-efficient cooperative MIMO (C-MIMO) technique is proposed in which a low-density parity check (LDPC) code is used as the error correcting code. The rate of the LDPC code is varied by varying the length of the message and parity bits. Simulation results show that the cooperative communication scheme outperforms the SISO scheme in the presence of LDPC coding. LDPC codes with different code rates are compared using bit error rate (BER) analysis, and BER is also analyzed under different Nakagami fading scenarios. Energy efficiencies are compared for different targeted probabilities of bit error p(b). It is observed that C-MIMO performs more efficiently when the targeted p(b) is smaller, and that a lower encoding rate for the LDPC code offers better error characteristics.

  16. Classification of urine sediment based on convolution neural network

    Science.gov (United States)

    Pan, Jingjing; Jiang, Cunbo; Zhu, Tiantian

    2018-04-01

    By designing a new convolutional neural network framework, this paper relaxes the constraints of the original convolutional neural network framework, which requires large training samples and inputs of the same size. The input images are shifted and cropped to generate sub-images of a common size, and dropout is then applied to the generated sub-images to increase sample diversity and prevent overfitting. Proper subsets of equal size, no two of them identical, are randomly selected from the sub-image set and used as inputs to the convolutional neural network. Passing them through the convolutional layer, pooling, fully connected layer and output layer yields the classification loss on the test and training sets. In an experiment classifying red blood cells, white blood cells and calcium oxalate crystals, a classification accuracy of 97% or more is obtained.

  17. Phylogenetic convolutional neural networks in metagenomics.

    Science.gov (United States)

    Fioravanti, Diego; Giarratano, Ylenia; Maggio, Valerio; Agostinelli, Claudio; Chierici, Marco; Jurman, Giuseppe; Furlanello, Cesare

    2018-03-08

    Convolutional Neural Networks can be effectively used only when data are endowed with an intrinsic concept of neighbourhood in the input space, as is the case of pixels in images. We introduce here Ph-CNN, a novel deep learning architecture for the classification of metagenomics data based on Convolutional Neural Networks, with the patristic distance defined on the phylogenetic tree being used as the proximity measure. The patristic distance between variables is used together with a sparsified version of MultiDimensional Scaling to embed the phylogenetic tree in a Euclidean space. Ph-CNN is tested with a domain adaptation approach on synthetic data and on a metagenomics collection of gut microbiota of 38 healthy subjects and 222 Inflammatory Bowel Disease patients, divided into 6 subclasses. Classification performance is promising when compared to classical algorithms such as Support Vector Machines and Random Forest, and to a baseline fully connected neural network (the Multi-Layer Perceptron). Ph-CNN represents a novel deep learning approach for the classification of metagenomics data. Operatively, the algorithm has been implemented as a custom Keras layer that passes to the following convolutional layer not only the data but also the ranked list of neighbours of each sample, thus mimicking the case of image data, transparently to the user.

  18. Human Face Recognition Using Convolutional Neural Networks

    Directory of Open Access Journals (Sweden)

    Răzvan-Daniel Albu

    2009-10-01

    In this paper, I present a novel hybrid face recognition approach based on a convolutional neural architecture, designed to robustly detect highly variable face patterns. The convolutional network extracts successively larger features in a hierarchical set of layers. The weights of the trained neural networks are used to create kernel windows for feature extraction in a 3-stage algorithm. I present experimental results illustrating the efficiency of the proposed approach, using a database of 796 images of 159 individuals from Reims University which contains quite a high degree of variability in expression, pose, and facial details.

  19. Machine-learning-assisted correction of correlated qubit errors in a topological code

    Directory of Open Access Journals (Sweden)

    Paul Baireuther

    2018-01-01

    A fault-tolerant quantum computation requires an efficient means to detect and correct errors that accumulate in encoded quantum information. In the context of machine learning, neural networks are a promising new approach to quantum error correction. Here we show that a recurrent neural network can be trained, using only experimentally accessible data, to detect errors in a widely used topological code, the surface code, with a performance above that of the established minimum-weight perfect matching (or blossom) decoder. The performance gain is achieved because the neural network decoder can detect correlations between bit-flip (X) and phase-flip (Z) errors. The machine learning algorithm adapts to the physical system, hence no noise model is needed. The long short-term memory layers of the recurrent neural network maintain their performance over a large number of quantum error correction cycles, making it a practical decoder for forthcoming experimental realizations of the surface code.
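
    A decoder of this flavour, an LSTM that reads a sequence of syndrome measurements and outputs the probability of a logical bit flip, can be sketched as below; the syndrome length, network depth and random training batch are placeholders rather than the experiment's actual surface-code setup:

        import torch
        import torch.nn as nn

        class SyndromeLSTMDecoder(nn.Module):
            """Reads per-cycle syndrome bits and outputs P(logical X error)."""
            def __init__(self, n_syndrome_bits=24, hidden=64):
                super().__init__()
                self.lstm = nn.LSTM(n_syndrome_bits, hidden, num_layers=2, batch_first=True)
                self.head = nn.Linear(hidden, 1)

            def forward(self, syndromes):              # (batch, cycles, n_syndrome_bits)
                out, _ = self.lstm(syndromes.float())
                return torch.sigmoid(self.head(out[:, -1, :])).squeeze(-1)

        decoder = SyndromeLSTMDecoder()
        # Hypothetical training batch: random syndrome histories and logical-flip labels.
        syndromes = torch.randint(0, 2, (16, 10, 24))
        labels = torch.randint(0, 2, (16,)).float()
        loss = nn.functional.binary_cross_entropy(decoder(syndromes), labels)
        loss.backward()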

  20. Deep Recurrent Convolutional Neural Network: Improving Performance For Speech Recognition

    OpenAIRE

    Zhang, Zewang; Sun, Zheng; Liu, Jiaqi; Chen, Jingwen; Huo, Zhao; Zhang, Xiao

    2016-01-01

    Deep learning approaches have been widely applied to sequence modeling problems. In automatic speech recognition (ASR), performance has been improved significantly by larger speech corpora and deeper neural networks. In particular, recurrent neural networks and deep convolutional neural networks have been applied to ASR successfully. Given the arising problem of training speed, we build a novel deep recurrent convolutional network for acoustic modeling and then apply deep resid...

  1. Development and application of deep convolutional neural network in target detection

    Science.gov (United States)

    Jiang, Xiaowei; Wang, Chunping; Fu, Qiang

    2018-04-01

    With the development of big data and algorithms, deep convolutional neural networks with more hidden layers have more powerful feature learning and feature expression abilities than traditional machine learning methods, allowing artificial intelligence to surpass human-level performance in many fields. This paper first reviews the development and application of deep convolutional neural networks in the field of object detection in recent years, then briefly summarizes some existing problems in current research, and finally discusses prospects for the future development of deep convolutional neural networks.

  2. Traffic sign recognition with deep convolutional neural networks

    OpenAIRE

    Karamatić, Boris

    2016-01-01

    The problem of detecting and recognizing traffic signs is becoming important in the development of self-driving cars and advanced driver assistance systems. In this thesis we will develop a system for detection and recognition of traffic signs. For the problem of detection we will use aggregate channel features and for the problem of recognition we will use a deep convolutional neural network. We will describe how convolutional neural networks work, how they are co...

  3. Video Super-Resolution via Bidirectional Recurrent Convolutional Networks.

    Science.gov (United States)

    Huang, Yan; Wang, Wei; Wang, Liang

    2018-04-01

    Super resolving a low-resolution video, namely video super-resolution (SR), is usually handled by either single-image SR or multi-frame SR. Single-image SR deals with each video frame independently and ignores the intrinsic temporal dependency of video frames, which actually plays a very important role in video SR. Multi-frame SR generally extracts motion information, e.g., optical flow, to model the temporal dependency, but often at a high computational cost. Considering that recurrent neural networks (RNNs) can model long-term temporal dependency of video sequences well, we propose a fully convolutional RNN, named bidirectional recurrent convolutional network, for efficient multi-frame SR. Different from vanilla RNNs, 1) the commonly used full feedforward and recurrent connections are replaced with weight-sharing convolutional connections, which greatly reduce the number of network parameters and model the temporal dependency at a finer, patch-based rather than frame-based, level; and 2) connections from input layers at previous timesteps to the current hidden layer are added via 3D feedforward convolutions, which aim to capture discriminative spatio-temporal patterns for short-term fast-varying motions in local adjacent frames. Due to the cheap convolutional operations, our model has low computational complexity and runs orders of magnitude faster than other multi-frame SR methods. With its powerful temporal dependency modeling, our model can super resolve videos with complex motions and achieves good performance.

  4. A Novel Deep Convolutional Neural Network for Spectral-Spatial Classification of Hyperspectral Data

    Science.gov (United States)

    Li, N.; Wang, C.; Zhao, H.; Gong, X.; Wang, D.

    2018-04-01

    Spatial and spectral information are obtained simultaneously by hyperspectral remote sensing, and jointly extracting this information is one of the most important approaches to hyperspectral image classification. In this paper, a novel deep convolutional neural network (CNN) is proposed that extracts spectral-spatial information of hyperspectral images effectively. The proposed model not only learns sufficient knowledge from a limited number of samples but also has powerful generalization ability. The proposed framework, based on three-dimensional convolution, can extract spectral-spatial features of labeled samples effectively. Although CNNs are robust to distortion, they cannot extract features at different scales through the traditional pooling layer, which has only one size of pooling window. Hence, spatial pyramid pooling (SPP) is introduced into the three-dimensional local convolutional filters for hyperspectral classification. Experimental results with a widely used hyperspectral remote sensing dataset show that the proposed model provides competitive performance.
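
    The combination of three-dimensional convolutions with spatial pyramid pooling can be sketched as follows; the band count, pooling levels and channel widths are illustrative assumptions, not the paper's configuration:

        import torch
        import torch.nn as nn

        class SPP3D(nn.Module):
            """Pools 3-D feature maps at several fixed output sizes and concatenates them,
            so inputs of different spatial/spectral extent yield a fixed-length vector."""
            def __init__(self, levels=(1, 2, 4)):
                super().__init__()
                self.pools = nn.ModuleList([nn.AdaptiveMaxPool3d(l) for l in levels])

            def forward(self, x):                              # x: (batch, C, D, H, W)
                return torch.cat([p(x).flatten(1) for p in self.pools], dim=1)

        class SpectralSpatialCNN(nn.Module):
            def __init__(self, n_classes=16):
                super().__init__()
                self.conv = nn.Sequential(
                    nn.Conv3d(1, 8, kernel_size=(7, 3, 3), padding=(3, 1, 1)), nn.ReLU(),
                    nn.Conv3d(8, 16, kernel_size=(7, 3, 3), padding=(3, 1, 1)), nn.ReLU(),
                )
                self.spp = SPP3D()
                self.fc = nn.Linear(16 * (1 + 8 + 64), n_classes)

            def forward(self, patch):                          # (batch, 1, bands, h, w)
                return self.fc(self.spp(self.conv(patch)))

        patch = torch.randn(2, 1, 103, 9, 9)                   # e.g. a 9x9 patch with 103 bands
        print(SpectralSpatialCNN()(patch).shape)               # torch.Size([2, 16])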

  5. High Performance Implementation of 3D Convolutional Neural Networks on a GPU

    Science.gov (United States)

    Wang, Zelong; Wen, Mei; Zhang, Chunyuan; Wang, Yijie

    2017-01-01

    Convolutional neural networks have proven to be highly successful in applications such as image classification, object tracking, and many other tasks based on 2D inputs. Recently, researchers have started to apply convolutional neural networks to video classification, which constitutes a 3D input and requires far larger amounts of memory and much more computation. FFT based methods can reduce the amount of computation, but this generally comes at the cost of an increased memory requirement. On the other hand, the Winograd Minimal Filtering Algorithm (WMFA) can reduce the number of operations required and thus can speed up the computation, without increasing the required memory. This strategy was shown to be successful for 2D neural networks. We implement the algorithm for 3D convolutional neural networks and apply it to a popular 3D convolutional neural network which is used to classify videos and compare it to cuDNN. For our highly optimized implementation of the algorithm, we observe a twofold speedup for most of the 3D convolution layers of our test network compared to the cuDNN version. PMID:29250109

  6. High Performance Implementation of 3D Convolutional Neural Networks on a GPU.

    Science.gov (United States)

    Lan, Qiang; Wang, Zelong; Wen, Mei; Zhang, Chunyuan; Wang, Yijie

    2017-01-01

    Convolutional neural networks have proven to be highly successful in applications such as image classification, object tracking, and many other tasks based on 2D inputs. Recently, researchers have started to apply convolutional neural networks to video classification, which constitutes a 3D input and requires far larger amounts of memory and much more computation. FFT based methods can reduce the amount of computation, but this generally comes at the cost of an increased memory requirement. On the other hand, the Winograd Minimal Filtering Algorithm (WMFA) can reduce the number of operations required and thus can speed up the computation, without increasing the required memory. This strategy was shown to be successful for 2D neural networks. We implement the algorithm for 3D convolutional neural networks and apply it to a popular 3D convolutional neural network which is used to classify videos and compare it to cuDNN. For our highly optimized implementation of the algorithm, we observe a twofold speedup for most of the 3D convolution layers of our test network compared to the cuDNN version.
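
    The core of the Winograd Minimal Filtering Algorithm used in both records above can be shown in one dimension with the standard F(2,3) transforms, which compute two outputs of a 3-tap convolution with four multiplications instead of six. The sketch below is a generic textbook illustration, not the paper's GPU implementation:

        import numpy as np

        # Standard Winograd F(2,3) transform matrices (Lavin & Gray).
        BT = np.array([[1,  0, -1,  0],
                       [0,  1,  1,  0],
                       [0, -1,  1,  0],
                       [0,  1,  0, -1]], dtype=float)
        G = np.array([[1.0,  0.0, 0.0],
                      [0.5,  0.5, 0.5],
                      [0.5, -0.5, 0.5],
                      [0.0,  0.0, 1.0]])
        AT = np.array([[1, 1,  1,  0],
                       [0, 1, -1, -1]], dtype=float)

        def winograd_f23(d, g):
            """Two outputs of valid 1-D correlation of a 4-sample tile d with a 3-tap filter g."""
            return AT @ ((G @ g) * (BT @ d))         # only 4 elementwise multiplications

        d = np.array([1.0, 2.0, 3.0, 4.0])
        g = np.array([0.5, 1.0, -1.0])
        print(winograd_f23(d, g))                    # -> [-0.5, 0.0]
        print(np.correlate(d, g, mode="valid"))      # same result, computed directly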

  7. Segmentation of corneal endothelium images using a U-Net-based convolutional neural network.

    Science.gov (United States)

    Fabijańska, Anna

    2018-04-18

    Diagnostic information regarding the health status of the corneal endothelium may be obtained by analyzing the size and the shape of the endothelial cells in specular microscopy images. Prior to the analysis, the endothelial cells need to be extracted from the image. Up to today, this has been performed manually or semi-automatically. Several approaches to automatic segmentation of endothelial cells exist; however, none of them is perfect. Therefore this paper proposes to perform cell segmentation using a U-Net-based convolutional neural network. Particularly, the network is trained to discriminate pixels located at the borders between cells. The edge probability map outputted by the network is next binarized and skeletonized in order to obtain one-pixel wide edges. The proposed solution was tested on a dataset consisting of 30 corneal endothelial images presenting cells of different sizes, achieving an AUROC level of 0.92. The resulting DICE is on average equal to 0.86, which is a good result, regarding the thickness of the compared edges. The corresponding mean absolute percentage error of cell number is at the level of 4.5% which confirms the high accuracy of the proposed approach. The resulting cell edges are well aligned to the ground truths and require a limited number of manual corrections. This also results in accurate values of the cell morphometric parameters. The corresponding errors range from 5.2% for endothelial cell density, through 6.2% for cell hexagonality to 11.93% for the coefficient of variation of the cell size. Copyright © 2018 Elsevier B.V. All rights reserved.
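
    The post-processing step described here, binarizing the network's edge-probability map and thinning it to one-pixel-wide cell borders, could look roughly like the following with scikit-image; the 0.5 threshold and the random probability map are placeholders:

        import numpy as np
        from skimage.morphology import skeletonize
        from skimage.measure import label

        # Stand-in for the U-Net output: per-pixel probability of lying on a cell border.
        edge_prob = np.random.rand(256, 256)

        binary_edges = edge_prob > 0.5          # assumed threshold; in practice tuned on validation data
        skeleton = skeletonize(binary_edges)    # one-pixel-wide cell borders

        # Cells are the connected components of the non-border region;
        # counting them gives a cell-number estimate of the kind used in the error analysis.
        cells = label(~skeleton, connectivity=1)
        print("estimated number of cells:", cells.max())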

  8. Deformable image registration using convolutional neural networks

    NARCIS (Netherlands)

    Eppenhof, Koen A.J.; Lafarge, Maxime W.; Moeskops, Pim; Veta, Mitko; Pluim, Josien P.W.

    2018-01-01

    Deformable image registration can be time-consuming and often needs extensive parameterization to perform well on a specific application. We present a step towards a registration framework based on a three-dimensional convolutional neural network. The network directly learns transformations between

  9. Research of convolutional neural networks for traffic sign recognition

    OpenAIRE

    Stadalnikas, Kasparas

    2017-01-01

    In this thesis the application of convolutional neural networks to traffic sign recognition is analyzed. The thesis describes the basic operations and techniques that are commonly applied in image classification using convolutional neural networks. It also describes the data sets used for traffic sign recognition and the problems in them that affect the final training results. The paper reviews the most popular existing technologies – frameworks for developing the solution for traffic sign recogni...

  10. Face recognition via Gabor and convolutional neural network

    Science.gov (United States)

    Lu, Tongwei; Wu, Menglu; Lu, Tao

    2018-04-01

    In recent years, the powerful feature learning and classification ability of convolutional neural networks has attracted wide attention. Compared with deep learning, traditional machine learning algorithms have an interpretability that deep learning lacks. Thus, in this paper, we propose extracting features with a traditional algorithm and using them as the input to a convolutional neural network. In order to reduce the complexity of the network, Gabor wavelet kernels are used to extract features at different positions, frequencies and orientations of the target image; they are sensitive to image edges and provide good direction and scale selectivity. Features extracted in eight orientations at a single scale serve as the input to the proposed network. The network benefits from weight sharing and local connectivity, and the texture features of the input image reduce the influence of facial expression, pose and illumination. At the same time, we introduce a layer that combines the results of pooling and convolution to extract deeper features. The network is trained with the open-source Caffe framework, which facilitates feature extraction. Experimental results show that the proposed network structure effectively overcomes the barrier of illumination and is more robust, as well as more accurate and faster, than the traditional algorithm.

  11. Traffic sign recognition based on deep convolutional neural network

    Science.gov (United States)

    Yin, Shi-hao; Deng, Ji-cai; Zhang, Da-wei; Du, Jing-yuan

    2017-11-01

    Traffic sign recognition (TSR) is an important component of automated driving systems. It is a rather challenging task to design a high-performance classifier for the TSR system. In this paper, we propose a new method for the TSR system based on a deep convolutional neural network. In order to enhance the expressiveness of the network, a novel structure (dubbed block-layer below) which combines network-in-network and residual connections is designed. Our network has 10 layers with parameters (a block-layer seen as a single layer): the first seven are alternate convolutional layers and block-layers, and the remaining three are fully-connected layers. We train our TSR network on the German traffic sign recognition benchmark (GTSRB) dataset. To reduce overfitting, we perform data augmentation on the training images and employ a regularization method named "dropout". As the activation function we employ scaled exponential linear units (SELUs), which can induce self-normalizing properties. To speed up the training, we use an efficient GPU to accelerate the convolutional operations. On the GTSRB test dataset, we achieve an accuracy rate of 99.67%, exceeding the state-of-the-art results.
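
    The block-layer described above, a network-in-network style stack of convolutions wrapped in a residual connection with SELU activations, might be sketched as follows (the channel count is an assumption, not the GTSRB network's actual width):

        import torch
        import torch.nn as nn

        class BlockLayer(nn.Module):
            """Network-in-network (3x3 conv followed by 1x1 convs) plus a residual connection."""
            def __init__(self, channels):
                super().__init__()
                self.nin = nn.Sequential(
                    nn.Conv2d(channels, channels, 3, padding=1), nn.SELU(),
                    nn.Conv2d(channels, channels, 1), nn.SELU(),
                    nn.Conv2d(channels, channels, 1),
                )
                self.act = nn.SELU()

            def forward(self, x):
                return self.act(x + self.nin(x))    # residual connection

        x = torch.randn(8, 32, 48, 48)              # a batch of 32-channel feature maps
        print(BlockLayer(32)(x).shape)              # torch.Size([8, 32, 48, 48])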

  12. Convolutional neural networks for vibrational spectroscopic data analysis.

    Science.gov (United States)

    Acquarelli, Jacopo; van Laarhoven, Twan; Gerretzen, Jan; Tran, Thanh N; Buydens, Lutgarde M C; Marchiori, Elena

    2017-02-15

    In this work we show that convolutional neural networks (CNNs) can be efficiently used to classify vibrational spectroscopic data and identify important spectral regions. CNNs are the current state-of-the-art in image classification and speech recognition and can learn interpretable representations of the data. These characteristics make CNNs a good candidate for reducing the need for preprocessing and for highlighting important spectral regions, both of which are crucial steps in the analysis of vibrational spectroscopic data. Chemometric analysis of vibrational spectroscopic data often relies on preprocessing methods involving baseline correction, scatter correction and noise removal, which are applied to the spectra prior to model building. Preprocessing is a critical step because even in simple problems using 'reasonable' preprocessing methods may decrease the performance of the final model. We develop a new CNN based method and provide an accompanying publicly available software. It is based on a simple CNN architecture with a single convolutional layer (a so-called shallow CNN). Our method outperforms standard classification algorithms used in chemometrics (e.g. PLS) in terms of accuracy when applied to non-preprocessed test data (86% average accuracy compared to the 62% achieved by PLS), and it achieves better performance even on preprocessed test data (96% average accuracy compared to the 89% achieved by PLS). For interpretability purposes, our method includes a procedure for finding important spectral regions, thereby facilitating qualitative interpretation of results. Copyright © 2016 Elsevier B.V. All rights reserved.
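
    A shallow CNN of the kind described, a single 1-D convolutional layer over the raw spectrum followed by a linear classifier, can be sketched as below; the spectrum length, kernel width and class count are placeholders rather than the published architecture. Because the learned filters act directly on spectral windows, inspecting their weights is one way to highlight important spectral regions:

        import torch
        import torch.nn as nn

        class ShallowSpectralCNN(nn.Module):
            """Single 1-D convolutional layer over a raw vibrational spectrum, then a linear classifier."""
            def __init__(self, n_wavenumbers=1000, n_filters=8, kernel=21, n_classes=3):
                super().__init__()
                self.conv = nn.Conv1d(1, n_filters, kernel, padding=kernel // 2)
                self.fc = nn.Linear(n_filters * n_wavenumbers, n_classes)

            def forward(self, spectra):                 # (batch, n_wavenumbers)
                h = torch.relu(self.conv(spectra.unsqueeze(1)))
                return self.fc(h.flatten(1))

        model = ShallowSpectralCNN()
        spectra = torch.randn(16, 1000)                 # stand-in for non-preprocessed spectra
        print(model(spectra).shape)                     # torch.Size([16, 3])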

  13. Efficient and Invariant Convolutional Neural Networks for Dense Prediction

    OpenAIRE

    Gao, Hongyang; Ji, Shuiwang

    2017-01-01

    Convolutional neural networks have shown great success on feature extraction from raw input data such as images. Although convolutional neural networks are invariant to translations on the inputs, they are not invariant to other transformations, including rotation and flip. Recent attempts have been made to incorporate more invariance in image recognition applications, but they are not applicable to dense prediction tasks, such as image segmentation. In this paper, we propose a set of methods...

  14. Neural network scatter correction technique for digital radiography

    International Nuclear Information System (INIS)

    Boone, J.M.

    1990-01-01

    This paper presents a scatter correction technique based on artificial neural networks. The technique utilizes the acquisition of a conventional digital radiographic image, coupled with the acquisition of a multiple pencil beam (micro-aperture) digital image. Image subtraction results in a sparsely sampled estimate of the scatter component in the image. The neural network is trained to develop a causal relationship between image data on the low-pass filtered open field image and the sparsely sampled scatter image, and then the trained network is used to correct the entire image (pixel by pixel) in a manner which is operationally similar to but potentially more powerful than convolution. The technique is described and is illustrated using clinical primary component images combined with scatter component images that are realistically simulated using the results from previously reported Monte Carlo investigations. The results indicate that an accurate scatter correction can be realized using this technique

  15. Can we recognize horses by their ocular biometric traits using deep convolutional neural networks?

    Science.gov (United States)

    Trokielewicz, Mateusz; Szadkowski, Mateusz

    2017-08-01

    This paper aims at determining the viability of horse recognition by means of ocular biometrics and deep convolutional neural networks (deep CNNs). Fast and accurate identification of race horses before racing is crucial for ensuring that exactly the horses that were declared are participating, using methods that are non-invasive and friendly to these delicate animals. As typical iris recognition methods require a lot of fine-tuning of the method parameters and high-quality data, CNNs seem like a natural candidate for recognition thanks to their potentially excellent abilities in describing texture, combined with ease of implementation in an end-to-end manner. Also, with such an approach we can easily utilize both iris and periocular features without constructing complicated algorithms for each. We thus present a simple CNN classifier that is able to correctly identify almost 80% of the samples in an identification scenario and gives an equal error rate (EER) of less than 10% in a verification scenario.

  16. Learning text representation using recurrent convolutional neural network with highway layers

    OpenAIRE

    Wen, Ying; Zhang, Weinan; Luo, Rui; Wang, Jun

    2016-01-01

    Recently, the rapid development of word embeddings and neural networks has brought new inspiration to various NLP and IR tasks. In this paper, we describe a staged hybrid model combining Recurrent Convolutional Neural Networks (RCNN) with highway layers. The highway network module, incorporated in the middle, takes the output of the bi-directional Recurrent Neural Network (Bi-RNN) module in the first stage and provides the Convolutional Neural Network (CNN) module in the last stage with the i...

  17. Single Image Super-Resolution Based on Multi-Scale Competitive Convolutional Neural Network.

    Science.gov (United States)

    Du, Xiaofeng; Qu, Xiaobo; He, Yifan; Guo, Di

    2018-03-06

    Deep convolutional neural networks (CNNs) are successful in single-image super-resolution. Traditional CNNs are limited to exploit multi-scale contextual information for image reconstruction due to the fixed convolutional kernel in their building modules. To restore various scales of image details, we enhance the multi-scale inference capability of CNNs by introducing competition among multi-scale convolutional filters, and build up a shallow network under limited computational resources. The proposed network has the following two advantages: (1) the multi-scale convolutional kernel provides the multi-context for image super-resolution, and (2) the maximum competitive strategy adaptively chooses the optimal scale of information for image reconstruction. Our experimental results on image super-resolution show that the performance of the proposed network outperforms the state-of-the-art methods.
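
    The competition among multi-scale filters described here amounts to taking an element-wise maximum over parallel convolution branches with different kernel sizes; a minimal sketch with invented channel counts:

        import torch
        import torch.nn as nn

        class CompetitiveMultiScaleConv(nn.Module):
            """Parallel 3x3, 5x5 and 7x7 convolutions; the element-wise maximum wins,
            so the scale of information is chosen adaptively per location."""
            def __init__(self, in_ch, out_ch):
                super().__init__()
                self.branches = nn.ModuleList(
                    [nn.Conv2d(in_ch, out_ch, k, padding=k // 2) for k in (3, 5, 7)]
                )

            def forward(self, x):
                return torch.stack([b(x) for b in self.branches], dim=0).max(dim=0).values

        layer = CompetitiveMultiScaleConv(1, 16)
        print(layer(torch.randn(2, 1, 64, 64)).shape)   # torch.Size([2, 16, 64, 64])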

  18. A convolutional neural network neutrino event classifier

    International Nuclear Information System (INIS)

    Aurisano, A.; Sousa, A.; Radovic, A.; Vahle, P.; Rocco, D.; Pawloski, G.; Himmel, A.; Niner, E.; Messier, M.D.; Psihas, F.

    2016-01-01

    Convolutional neural networks (CNNs) have been widely applied in the computer vision community to solve complex problems in image recognition and analysis. We describe an application of the CNN technology to the problem of identifying particle interactions in sampling calorimeters used commonly in high energy physics and high energy neutrino physics in particular. Following a discussion of the core concepts of CNNs and recent innovations in CNN architectures related to the field of deep learning, we outline a specific application to the NOvA neutrino detector. This algorithm, CVN (Convolutional Visual Network) identifies neutrino interactions based on their topology without the need for detailed reconstruction and outperforms algorithms currently in use by the NOvA collaboration.

  19. Forecasting short-term data center network traffic load with convolutional neural networks

    Science.gov (United States)

    Ordozgoiti, Bruno; Gómez-Canaval, Sandra

    2018-01-01

    Efficient resource management in data centers is of central importance to content service providers as 90 percent of the network traffic is expected to go through them in the coming years. In this context we propose the use of convolutional neural networks (CNNs) to forecast short-term changes in the amount of traffic crossing a data center network. This value is an indicator of virtual machine activity and can be utilized to shape the data center infrastructure accordingly. The behaviour of network traffic at the seconds scale is highly chaotic and therefore traditional time-series-analysis approaches such as ARIMA fail to obtain accurate forecasts. We show that our convolutional neural network approach can exploit the non-linear regularities of network traffic, providing significant improvements with respect to the mean absolute and standard deviation of the data, and outperforming ARIMA by an increasingly significant margin as the forecasting granularity is above the 16-second resolution. In order to increase the accuracy of the forecasting model, we exploit the architecture of the CNNs using multiresolution input distributed among separate channels of the first convolutional layer. We validate our approach with an extensive set of experiments using a data set collected at the core network of an Internet Service Provider over a period of 5 months, totalling 70 days of traffic at the one-second resolution. PMID:29408936

  20. Forecasting short-term data center network traffic load with convolutional neural networks.

    Science.gov (United States)

    Mozo, Alberto; Ordozgoiti, Bruno; Gómez-Canaval, Sandra

    2018-01-01

    Efficient resource management in data centers is of central importance to content service providers as 90 percent of the network traffic is expected to go through them in the coming years. In this context we propose the use of convolutional neural networks (CNNs) to forecast short-term changes in the amount of traffic crossing a data center network. This value is an indicator of virtual machine activity and can be utilized to shape the data center infrastructure accordingly. The behaviour of network traffic at the seconds scale is highly chaotic and therefore traditional time-series-analysis approaches such as ARIMA fail to obtain accurate forecasts. We show that our convolutional neural network approach can exploit the non-linear regularities of network traffic, providing significant improvements with respect to the mean absolute and standard deviation of the data, and outperforming ARIMA by an increasingly significant margin as the forecasting granularity is above the 16-second resolution. In order to increase the accuracy of the forecasting model, we exploit the architecture of the CNNs using multiresolution input distributed among separate channels of the first convolutional layer. We validate our approach with an extensive set of experiments using a data set collected at the core network of an Internet Service Provider over a period of 5 months, totalling 70 days of traffic at the one-second resolution.
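
    The multiresolution trick described in these two records, feeding the same traffic history at several temporal resolutions through separate channels of the first convolutional layer, can be sketched as follows (window length, pooling factors and layer sizes are assumptions, not the published model):

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        def multiresolution_channels(series, window=64, factors=(1, 4, 16)):
            """Stack the most recent samples at several resolutions into separate channels."""
            channels = []
            for f in factors:
                coarse = F.avg_pool1d(series.unsqueeze(1), kernel_size=f)   # downsample by f
                channels.append(coarse[:, :, -window:])                     # keep the last `window` points
            return torch.cat(channels, dim=1)          # (batch, len(factors), window)

        class TrafficCNN(nn.Module):
            def __init__(self, n_channels=3, window=64):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv1d(n_channels, 16, 5, padding=2), nn.ReLU(),
                    nn.Conv1d(16, 16, 5, padding=2), nn.ReLU(),
                    nn.Flatten(),
                    nn.Linear(16 * window, 1),          # next-interval traffic volume
                )

            def forward(self, x):
                return self.net(x)

        series = torch.randn(8, 2048)                   # stand-in for per-second traffic counts
        x = multiresolution_channels(series)
        print(TrafficCNN()(x).shape)                    # torch.Size([8, 1])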

  1. Isointense infant brain MRI segmentation with a dilated convolutional neural network

    NARCIS (Netherlands)

    Moeskops, P.; Pluim, J.P.W.

    2017-01-01

    Quantitative analysis of brain MRI at the age of 6 months is difficult because of the limited contrast between white matter and gray matter. In this study, we use a dilated triplanar convolutional neural network in combination with a non-dilated 3D convolutional neural network for the segmentation

  2. Laser tracker error determination using a network measurement

    International Nuclear Information System (INIS)

    Hughes, Ben; Forbes, Alistair; Lewis, Andrew; Sun, Wenjuan; Veal, Dan; Nasr, Karim

    2011-01-01

    We report on a fast, easily implemented method to determine all the geometrical alignment errors of a laser tracker, to high precision. The technique requires no specialist equipment and can be performed in less than an hour. The technique is based on the determination of parameters of a geometric model of the laser tracker, using measurements of a set of fixed target locations, from multiple locations of the tracker. After fitting of the model parameters to the observed data, the model can be used to perform error correction of the raw laser tracker data or to derive correction parameters in the format of the tracker manufacturer's internal error map. In addition to determination of the model parameters, the method also determines the uncertainties and correlations associated with the parameters. We have tested the technique on a commercial laser tracker in the following way. We disabled the tracker's internal error compensation, and used a five-position, fifteen-target network to estimate all the geometric errors of the instrument. Using the error map generated from this network test, the tracker was able to pass a full performance validation test, conducted according to a recognized specification standard (ASME B89.4.19-2006). We conclude that the error correction determined from the network test is as effective as the manufacturer's own error correction methodologies

  3. Accurate lithography simulation model based on convolutional neural networks

    Science.gov (United States)

    Watanabe, Yuki; Kimura, Taiki; Matsunawa, Tetsuaki; Nojima, Shigeki

    2017-07-01

    Lithography simulation is an essential technique for today's semiconductor manufacturing process. In order to simulate an entire chip in realistic time, a compact resist model, established for faster calculation, is commonly used. To obtain an accurate compact resist model, it is necessary to specify a complicated non-linear model function, but it is difficult to decide on an appropriate function manually because there are many options. This paper proposes a new compact resist model using CNNs (Convolutional Neural Networks), one of the deep learning techniques. The CNN model makes it possible to determine an appropriate model function and achieve accurate simulation. Experimental results show that the CNN model can reduce CD prediction errors by 70% compared with the conventional model.

  4. Very Deep Convolutional Neural Networks for Morphologic Classification of Erythrocytes.

    Science.gov (United States)

    Durant, Thomas J S; Olson, Eben M; Schulz, Wade L; Torres, Richard

    2017-12-01

    Morphologic profiling of the erythrocyte population is a widely used and clinically valuable diagnostic modality, but one that relies on a slow manual process associated with significant labor cost and limited reproducibility. Automated profiling of erythrocytes from digital images by capable machine learning approaches would augment the throughput and value of morphologic analysis. To this end, we sought to evaluate the performance of leading implementation strategies for convolutional neural networks (CNNs) when applied to classification of erythrocytes based on morphology. Erythrocytes were manually classified into 1 of 10 classes using a custom-developed Web application. Using recent literature to guide architectural considerations for neural network design, we implemented a "very deep" CNN, consisting of >150 layers, with dense shortcut connections. The final database comprised 3737 labeled cells. Ensemble model predictions on unseen data demonstrated a harmonic mean of recall and precision metrics of 92.70% and 89.39%, respectively. Of the 748 cells in the test set, 23 misclassification errors were made, with a correct classification frequency of 90.60%, represented as a harmonic mean across the 10 morphologic classes. These findings indicate that erythrocyte morphology profiles could be measured with a high degree of accuracy with "very deep" CNNs. Further, these data support future efforts to expand classes and optimize practical performance in a clinical environment as a prelude to full implementation as a clinical tool. © 2017 American Association for Clinical Chemistry.

  5. Zero-Echo-Time and Dixon Deep Pseudo-CT (ZeDD CT): Direct Generation of Pseudo-CT Images for Pelvic PET/MRI Attenuation Correction Using Deep Convolutional Neural Networks with Multiparametric MRI.

    Science.gov (United States)

    Leynes, Andrew P; Yang, Jaewon; Wiesinger, Florian; Kaushik, Sandeep S; Shanbhag, Dattesh D; Seo, Youngho; Hope, Thomas A; Larson, Peder E Z

    2018-05-01

    Accurate quantification of uptake on PET images depends on accurate attenuation correction in reconstruction. Current MR-based attenuation correction methods for body PET use a fat and water map derived from a 2-echo Dixon MRI sequence in which bone is neglected. Ultrashort-echo-time or zero-echo-time (ZTE) pulse sequences can capture bone information. We propose the use of patient-specific multiparametric MRI consisting of Dixon MRI and proton-density-weighted ZTE MRI to directly synthesize pseudo-CT images with a deep learning model: we call this method ZTE and Dixon deep pseudo-CT (ZeDD CT). Methods: Twenty-six patients were scanned using an integrated 3-T time-of-flight PET/MRI system. Helical CT images of the patients were acquired separately. A deep convolutional neural network was trained to transform ZTE and Dixon MR images into pseudo-CT images. Ten patients were used for model training, and 16 patients were used for evaluation. Bone and soft-tissue lesions were identified, and the SUV max was measured. The root-mean-squared error (RMSE) was used to compare the MR-based attenuation correction with the ground-truth CT attenuation correction. Results: In total, 30 bone lesions and 60 soft-tissue lesions were evaluated. The RMSE in PET quantification was reduced by a factor of 4 for bone lesions (10.24% for Dixon PET and 2.68% for ZeDD PET) and by a factor of 1.5 for soft-tissue lesions (6.24% for Dixon PET and 4.07% for ZeDD PET). Conclusion: ZeDD CT produces natural-looking and quantitatively accurate pseudo-CT images and reduces error in pelvic PET/MRI attenuation correction compared with standard methods. © 2018 by the Society of Nuclear Medicine and Molecular Imaging.

  6. Adaptive Graph Convolutional Neural Networks

    OpenAIRE

    Li, Ruoyu; Wang, Sheng; Zhu, Feiyun; Huang, Junzhou

    2018-01-01

    Graph Convolutional Neural Networks (Graph CNNs) are generalizations of classical CNNs to handle graph data such as molecular data, point clouds and social networks. Current filters in graph CNNs are built for a fixed and shared graph structure. However, for most real data, the graph structure varies in both size and connectivity. The paper proposes a generalized and flexible graph CNN taking data of arbitrary graph structure as input. In that way a task-driven adaptive graph is learned for eac...

  7. Invariant moments based convolutional neural networks for image analysis

    Directory of Open Access Journals (Sweden)

    Vijayalakshmi G.V. Mahesh

    2017-01-01

    The paper proposes a convolutional neural network method to effectively evaluate the discrimination between face and non-face patterns, gender classification using facial images, and facial expression recognition. The novelty of the method lies in initializing the trainable convolution kernel coefficients from Zernike moments of varying order. The performance of the proposed method was compared with a convolutional neural network architecture that used random kernels as initial training parameters. The multilevel configuration of Zernike moments was significant in extracting the shape information suitable for hierarchical feature learning to carry out image analysis and classification. Furthermore, the results showed an outstanding performance of the Zernike-moment-based kernels in terms of computation time and classification accuracy.

  8. Error-correction coding for digital communications

    Science.gov (United States)

    Clark, G. C., Jr.; Cain, J. B.

    This book is written for the design engineer who must build the coding and decoding equipment and for the communication system engineer who must incorporate this equipment into a system. It is also suitable as a senior-level or first-year graduate text for an introductory one-semester course in coding theory. Fundamental concepts of coding are discussed along with group codes, taking into account basic principles, practical constraints, performance computations, coding bounds, generalized parity check codes, polynomial codes, and important classes of group codes. Other topics explored are related to simple nonalgebraic decoding techniques for group codes, soft decision decoding of block codes, algebraic techniques for multiple error correction, the convolutional code structure and Viterbi decoding, syndrome decoding techniques, and sequential decoding techniques. System applications are also considered, giving attention to concatenated codes, coding for the white Gaussian noise channel, interleaver structures for coded systems, and coding for burst noise channels.
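
    Because convolutional codes and Viterbi decoding are the framing topic of this collection, a compact hard-decision Viterbi decoder for the textbook rate-1/2, constraint-length-3 code with generators (7, 5) in octal may help make that chapter concrete; this is a generic illustration, not code from the book:

        G = (0b111, 0b101)          # generator polynomials (7, 5) in octal, constraint length K = 3
        N_STATES = 4                # 2^(K-1) encoder states

        def conv_encode(bits):
            """Rate-1/2 feed-forward convolutional encoder (two output bits per input bit)."""
            state, out = 0, []
            for b in bits:
                reg = (b << 2) | state
                out += [bin(reg & g).count("1") % 2 for g in G]
                state = reg >> 1
            return out

        def viterbi_decode(received):
            """Hard-decision Viterbi decoding; assumes the encoder was flushed back to state 0."""
            INF = float("inf")
            metric = [0.0] + [INF] * (N_STATES - 1)
            paths = [[] for _ in range(N_STATES)]
            for t in range(len(received) // 2):
                r = received[2 * t: 2 * t + 2]
                new_metric = [INF] * N_STATES
                new_paths = [[] for _ in range(N_STATES)]
                for state in range(N_STATES):
                    if metric[state] == INF:
                        continue
                    for b in (0, 1):
                        reg = (b << 2) | state
                        expected = [bin(reg & g).count("1") % 2 for g in G]
                        branch = sum(e != x for e, x in zip(expected, r))   # Hamming distance
                        nxt = reg >> 1
                        if metric[state] + branch < new_metric[nxt]:
                            new_metric[nxt] = metric[state] + branch
                            new_paths[nxt] = paths[state] + [b]
                metric, paths = new_metric, new_paths
            return paths[0]                                # terminated codes end in state 0

        msg = [1, 0, 1, 1, 0, 0, 1, 0]
        code = conv_encode(msg + [0, 0])                   # two tail bits flush the encoder
        code[3] ^= 1                                       # inject a single channel error
        print(viterbi_decode(code)[:len(msg)] == msg)      # True: the error has been corrected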

  9. Gas Classification Using Deep Convolutional Neural Networks

    Science.gov (United States)

    Peng, Pai; Zhao, Xiaojin; Pan, Xiaofang; Ye, Wenbin

    2018-01-01

    In this work, we propose a novel Deep Convolutional Neural Network (DCNN) tailored for gas classification. Inspired by the great success of DCNNs in the field of computer vision, we designed a DCNN with up to 38 layers. In general, the proposed gas neural network, named GasNet, consists of six convolutional blocks, each consisting of six layers; a pooling layer; and a fully-connected layer. Together, these various layers make up a powerful deep model for gas classification. Experimental results show that the proposed DCNN method is an effective technique for classifying electronic nose data. We also demonstrate that the DCNN method can provide higher classification accuracy than comparable Support Vector Machine (SVM) methods and Multiple Layer Perceptron (MLP). PMID:29316723

  10. Gas Classification Using Deep Convolutional Neural Networks.

    Science.gov (United States)

    Peng, Pai; Zhao, Xiaojin; Pan, Xiaofang; Ye, Wenbin

    2018-01-08

    In this work, we propose a novel Deep Convolutional Neural Network (DCNN) tailored for gas classification. Inspired by the great success of DCNNs in the field of computer vision, we designed a DCNN with up to 38 layers. In general, the proposed gas neural network, named GasNet, consists of six convolutional blocks, each consisting of six layers; a pooling layer; and a fully-connected layer. Together, these various layers make up a powerful deep model for gas classification. Experimental results show that the proposed DCNN method is an effective technique for classifying electronic nose data. We also demonstrate that the DCNN method can provide higher classification accuracy than comparable Support Vector Machine (SVM) methods and Multiple Layer Perceptron (MLP).

  11. Image quality assessment using deep convolutional networks

    Science.gov (United States)

    Li, Yezhou; Ye, Xiang; Li, Yong

    2017-12-01

    This paper proposes a method of accurately assessing image quality without a reference image by using a deep convolutional neural network. Existing training based methods usually utilize a compact set of linear filters for learning features of images captured by different sensors to assess their quality. These methods may not be able to learn the semantic features that are intimately related with the features used in human subject assessment. Observing this drawback, this work proposes training a deep convolutional neural network (CNN) with labelled images for image quality assessment. The ReLU in the CNN allows non-linear transformations for extracting high-level image features, providing a more reliable assessment of image quality than linear filters. To enable the neural network to take images of any arbitrary size as input, the spatial pyramid pooling (SPP) is introduced connecting the top convolutional layer and the fully-connected layer. In addition, the SPP makes the CNN robust to object deformations to a certain extent. The proposed method taking an image as input carries out an end-to-end learning process, and outputs the quality of the image. It is tested on public datasets. Experimental results show that it outperforms existing methods by a large margin and can accurately assess the image quality on images taken by different sensors of varying sizes.

  12. Constructing fine-granularity functional brain network atlases via deep convolutional autoencoder.

    Science.gov (United States)

    Zhao, Yu; Dong, Qinglin; Chen, Hanbo; Iraji, Armin; Li, Yujie; Makkie, Milad; Kou, Zhifeng; Liu, Tianming

    2017-12-01

    State-of-the-art functional brain network reconstruction methods such as independent component analysis (ICA) or sparse coding of whole-brain fMRI data can effectively infer many thousands of volumetric brain network maps from a large number of human brains. However, due to the variability of individual brain networks and the large scale of such networks needed for statistically meaningful group-level analysis, it is still a challenging and open problem to derive group-wise common networks as network atlases. Inspired by the superior spatial pattern description ability of deep convolutional neural networks (CNNs), a novel deep 3D convolutional autoencoder (CAE) network is designed here to extract spatial brain network features effectively, based on which an Apache Spark enabled computational framework is developed for fast clustering of a large number of network maps into fine-granularity atlases. To evaluate this framework, 10 resting state networks (RSNs) were manually labeled from the sparsely decomposed networks of Human Connectome Project (HCP) fMRI data and 5275 network training samples were obtained, in total. Then the deep CAE models are trained on these functional networks' spatial maps, and the learned features are used to refine the original 10 RSNs into 17 network atlases that possess fine-granularity functional network patterns. Interestingly, it turned out that some manually mislabeled outliers in training networks can be corrected by the deep CAE derived features. More importantly, fine granularities of networks can be identified and they reveal unique network patterns specific to different brain task states. By further applying this method to a dataset from a mild traumatic brain injury study, we show that the technique can effectively identify abnormal small networks in brain injury patients in comparison with controls. In general, our work presents a promising deep learning and big data analysis solution for modeling functional connectomes, with

  13. Weed Growth Stage Estimator Using Deep Convolutional Neural Networks

    DEFF Research Database (Denmark)

    Teimouri, Nima; Dyrmann, Mads; Nielsen, Per Rydahl

    2018-01-01

    conditions with regards to soil types, resolution and light settings. Then, 9649 of these images were used for training the computer, which automatically divided the weeds into nine growth classes. The performance of this proposed convolutional neural network approach was evaluated on a further set of 2516...... in estimating the number of leaves and 96% accuracy when accepting a deviation of two leaves. These results show that this new method of using deep convolutional neural networks has a relatively high ability to estimate early growth stages across a wide variety of weed species....

  14. Convolutional Neural Networks - Generalizability and Interpretations

    DEFF Research Database (Denmark)

    Malmgren-Hansen, David

    from data despite it being limited in amount or context representation. Within Machine Learning this thesis focuses on Convolutional Neural Networks for Computer Vision. The research aims to answer how to explore a model's generalizability to the whole population of data samples and how to interpret...

  15. Modeling coherent errors in quantum error correction

    Science.gov (United States)

    Greenbaum, Daniel; Dutton, Zachary

    2018-01-01

    Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ε) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ε^{-(d^n-1)} error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.

  16. Design and Implementation of Behavior Recognition System Based on Convolutional Neural Network

    Directory of Open Access Journals (Sweden)

    Yu Bo

    2017-01-01

    We build a human behavior recognition system based on a convolutional neural network constructed for specific human behaviors in public places. First, videos from the human behavior data set are segmented into images, which we process by background subtraction to extract the moving foreground of the body. Second, the designed convolutional neural network is trained on the training sets, with the deep learning network constructed by stochastic gradient descent. Finally, the various behaviors of the samples are classified and identified with the obtained network model, and the recognition results are compared with current mainstream methods. The results show that the convolutional neural network can learn human behavior models automatically and identify human behaviors without any manually annotated training.

  17. Low-Cost Ultrasonic Distance Sensor Arrays with Networked Error Correction

    Directory of Open Access Journals (Sweden)

    Tianzhou Chen

    2013-09-01

    Distance has been one of the basic factors in manufacturing and control fields, and ultrasonic distance sensors have been widely used as a low-cost measuring tool. However, the propagation of ultrasonic waves is greatly affected by environmental factors such as temperature, humidity and atmospheric pressure. In order to solve the problem of inaccurate measurement, which is significant within industry, this paper presents a novel ultrasonic distance sensor model using networked error correction (NEC) trained on experimental data. This is more accurate than other existing approaches because it uses information from indirect association with neighboring sensors, which has not been considered before. The NEC technique, focusing on optimization of the relationship of the topological structure of sensor arrays, is implemented for the compensation of erroneous measurements caused by the environment. We apply the maximum likelihood method to determine the optimal fusion data set and use a neighbor discovery algorithm to identify neighbor nodes at the top speed. Furthermore, we adopt the NEC optimization algorithm, which takes full advantage of the correlation coefficients for neighbor sensors. The experimental results demonstrate that the ranging errors of the NEC system are within 2.20%; furthermore, the mean absolute percentage error is reduced to 0.01% after three iterations of this method, which means that the proposed method performs extremely well. The optimized method of distance measurement we propose, with the capability of NEC, would bring a significant advantage for intelligent industrial automation.

  18. Plant species classification using deep convolutional neural network

    DEFF Research Database (Denmark)

    Dyrmann, Mads; Karstoft, Henrik; Midtiby, Henrik Skov

    2016-01-01

    Information on which weed species are present within agricultural fields is important for site specific weed management. This paper presents a method that is capable of recognising plant species in colour images by using a convolutional neural network. The network is built from scratch trained an...

  19. A convolutional neural network to filter artifacts in spectroscopic MRI.

    Science.gov (United States)

    Gurbani, Saumya S; Schreibmann, Eduard; Maudsley, Andrew A; Cordova, James Scott; Soher, Brian J; Poptani, Harish; Verma, Gaurav; Barker, Peter B; Shim, Hyunsuk; Cooper, Lee A D

    2018-03-09

    Proton MRSI is a noninvasive modality capable of generating volumetric maps of in vivo tissue metabolism without the need for ionizing radiation or injected contrast agent. Magnetic resonance spectroscopic imaging has been shown to be a viable imaging modality for studying several neuropathologies. However, a key hurdle in the routine clinical adoption of MRSI is the presence of spectral artifacts that can arise from a number of sources, possibly leading to false information. A deep learning model was developed that was capable of identifying and filtering out poor quality spectra. The core of the model used a tiled convolutional neural network that analyzed frequency-domain spectra to detect artifacts. When compared with a panel of MRS experts, our convolutional neural network achieved high sensitivity and specificity with an area under the curve of 0.95. A visualization scheme was implemented to better understand how the convolutional neural network made its judgement on single-voxel or multivoxel MRSI, and the convolutional neural network was embedded into a pipeline capable of producing whole-brain spectroscopic MRI volumes in real time. The fully automated method for assessment of spectral quality provides a valuable tool to support clinical MRSI or spectroscopic MRI studies for use in fields such as adaptive radiation therapy planning. © 2018 International Society for Magnetic Resonance in Medicine.

  20. High-Throughput Classification of Radiographs Using Deep Convolutional Neural Networks.

    Science.gov (United States)

    Rajkomar, Alvin; Lingam, Sneha; Taylor, Andrew G; Blum, Michael; Mongan, John

    2017-02-01

    The study aimed to determine if computer vision techniques rooted in deep learning can use a small set of radiographs to perform clinically relevant image classification with high fidelity. One thousand eight hundred eighty-five chest radiographs on 909 patients obtained between January 2013 and July 2015 at our institution were retrieved and anonymized. The source images were manually annotated as frontal or lateral and randomly divided into training, validation, and test sets. Training and validation sets were augmented to over 150,000 images using standard image manipulations. We then pre-trained a series of deep convolutional networks based on the open-source GoogLeNet with various transformations of the open-source ImageNet (non-radiology) images. These trained networks were then fine-tuned using the original and augmented radiology images. The model with highest validation accuracy was applied to our institutional test set and a publicly available set. Accuracy was assessed by using the Youden Index to set a binary cutoff for frontal or lateral classification. This retrospective study was IRB approved prior to initiation. A network pre-trained on 1.2 million greyscale ImageNet images and fine-tuned on augmented radiographs was chosen. The binary classification method correctly classified 100 % (95 % CI 99.73-100 %) of both our test set and the publicly available images. Classification was rapid, at 38 images per second. A deep convolutional neural network created using non-radiological images, and an augmented set of radiographs is effective in highly accurate classification of chest radiograph view type and is a feasible, rapid method for high-throughput annotation.

  1. Operator quantum error-correcting subsystems for self-correcting quantum memories

    International Nuclear Information System (INIS)

    Bacon, Dave

    2006-01-01

    The most general method for encoding quantum information is not to encode the information into a subspace of a Hilbert space, but to encode information into a subsystem of a Hilbert space. Recently this notion has led to a more general notion of quantum error correction known as operator quantum error correction. In standard quantum error-correcting codes, one requires the ability to apply a procedure which exactly reverses on the error-correcting subspace any correctable error. In contrast, for operator error-correcting subsystems, the correction procedure need not undo the error which has occurred, but instead one must perform corrections only modulo the subsystem structure. This does not lead to codes which differ from subspace codes, but does lead to recovery routines which explicitly make use of the subsystem structure. Here we present two examples of such operator error-correcting subsystems. These examples are motivated by simple spatially local Hamiltonians on square and cubic lattices. In three dimensions we provide evidence, in the form of a simple mean-field theory, that our Hamiltonian gives rise to a system which is self-correcting. Such a system will be a natural high-temperature quantum memory, robust to noise without external intervening quantum error-correction procedures

  2. Adaptive decoding of convolutional codes

    Science.gov (United States)

    Hueske, K.; Geldmacher, J.; Götze, J.

    2007-06-01

    Convolutional codes, which are frequently used as error correction codes in digital transmission systems, are generally decoded using the Viterbi Decoder. On the one hand the Viterbi Decoder is an optimum maximum likelihood decoder, i.e. the most probable transmitted code sequence is obtained. On the other hand the mathematical complexity of the algorithm only depends on the used code, not on the number of transmission errors. To reduce the complexity of the decoding process for good transmission conditions, an alternative syndrome based decoder is presented. The reduction of complexity is realized by two different approaches, the syndrome zero sequence deactivation and the path metric equalization. The two approaches enable an easy adaptation of the decoding complexity for different transmission conditions, which results in a trade-off between decoding complexity and error correction performance.
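    For readers unfamiliar with the baseline the paper starts from, the sketch below implements hard-decision Viterbi decoding for the standard rate-1/2, constraint-length-3 convolutional code with generators (7, 5) in octal. It illustrates the maximum-likelihood decoder mentioned above, not the syndrome-based decoder proposed in the paper; the message and injected error are illustrative.

        import numpy as np

        G = [0b111, 0b101]            # generators (7, 5) octal, constraint length 3

        def encode(bits, state=0):
            """Rate-1/2 convolutional encoder; returns the coded bit stream."""
            out = []
            for b in bits:
                reg = (b << 2) | state                      # newest bit in the top position
                out += [bin(reg & g).count("1") & 1 for g in G]
                state = reg >> 1
            return out

        def viterbi_decode(received, n_bits):
            """Hard-decision Viterbi decoding (minimum Hamming distance path)."""
            n_states = 4
            INF = 10**9
            metric = [0] + [INF] * (n_states - 1)          # encoder starts in the all-zero state
            paths = [[] for _ in range(n_states)]
            for t in range(n_bits):
                r = received[2 * t: 2 * t + 2]
                new_metric = [INF] * n_states
                new_paths = [None] * n_states
                for s in range(n_states):
                    for b in (0, 1):
                        reg = (b << 2) | s
                        expected = [bin(reg & g).count("1") & 1 for g in G]
                        branch = sum(x != y for x, y in zip(expected, r))
                        ns = reg >> 1
                        cand = metric[s] + branch
                        if cand < new_metric[ns]:
                            new_metric[ns] = cand
                            new_paths[ns] = paths[s] + [b]
                metric, paths = new_metric, new_paths
            return paths[int(np.argmin(metric))]

        message = [1, 0, 1, 1, 0, 0, 1, 0]
        coded = encode(message)
        coded[3] ^= 1                                       # inject a single channel error
        assert viterbi_decode(coded, len(message)) == message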

  3. Transfer Error and Correction Approach in Mobile Network

    Science.gov (United States)

    Xiao-kai, Wu; Yong-jin, Shi; Da-jin, Chen; Bing-he, Ma; Qi-li, Zhou

    With the development of information technology and social progress, the human demand for information has become increasingly diverse: wherever and whenever possible, people want to communicate easily, quickly and flexibly via voice, data, images, video and other means. Because visual information gives people a direct and vivid impression, image and video transmission has also received widespread attention. With the emergence of third-generation mobile communication systems and the rapid development of IP networks, video communication is becoming a main service of wireless communications. However, real wireless and IP channels introduce errors, such as errors generated by multipath fading on the wireless channel and packet loss on the IP network. Moreover, due to channel bandwidth limitations, video data must be heavily compressed before transmission, and the compressed data is very sensitive to transmission errors, so channel errors can cause a serious decline in image quality.

  4. Efficient forward propagation of time-sequences in convolutional neural networks using Deep Shifting

    NARCIS (Netherlands)

    K.L. Groenland (Koen); S.M. Bohte (Sander)

    2016-01-01

    textabstractWhen a Convolutional Neural Network is used for on-the-fly evaluation of continuously updating time-sequences, many redundant convolution operations are performed. We propose the method of Deep Shifting, which remembers previously calculated results of convolution operations in order

  5. BrainNetCNN: Convolutional neural networks for brain networks; towards predicting neurodevelopment.

    Science.gov (United States)

    Kawahara, Jeremy; Brown, Colin J; Miller, Steven P; Booth, Brian G; Chau, Vann; Grunau, Ruth E; Zwicker, Jill G; Hamarneh, Ghassan

    2017-02-01

    We propose BrainNetCNN, a convolutional neural network (CNN) framework to predict clinical neurodevelopmental outcomes from brain networks. In contrast to the spatially local convolutions done in traditional image-based CNNs, our BrainNetCNN is composed of novel edge-to-edge, edge-to-node and node-to-graph convolutional filters that leverage the topological locality of structural brain networks. We apply the BrainNetCNN framework to predict cognitive and motor developmental outcome scores from structural brain networks of infants born preterm. Diffusion tensor images (DTI) of preterm infants, acquired between 27 and 46 weeks gestational age, were used to construct a dataset of structural brain connectivity networks. We first demonstrate the predictive capabilities of BrainNetCNN on synthetic phantom networks with simulated injury patterns and added noise. BrainNetCNN outperforms a fully connected neural-network with the same number of model parameters on both phantoms with focal and diffuse injury patterns. We then apply our method to the task of joint prediction of Bayley-III cognitive and motor scores, assessed at 18 months of age, adjusted for prematurity. We show that our BrainNetCNN framework outperforms a variety of other methods on the same data. Furthermore, BrainNetCNN is able to identify an infant's postmenstrual age to within about 2 weeks. Finally, we explore the high-level features learned by BrainNetCNN by visualizing the importance of each connection in the brain with respect to predicting the outcome scores. These findings are then discussed in the context of the anatomy and function of the developing preterm infant brain. Copyright © 2016 Elsevier Inc. All rights reserved.

  6. Resting State fMRI Functional Connectivity-Based Classification Using a Convolutional Neural Network Architecture.

    Science.gov (United States)

    Meszlényi, Regina J; Buza, Krisztian; Vidnyánszky, Zoltán

    2017-01-01

    Machine learning techniques have become increasingly popular in the field of resting state fMRI (functional magnetic resonance imaging) network based classification. However, the application of convolutional networks has been proposed only very recently and has remained largely unexplored. In this paper we describe a convolutional neural network architecture for functional connectome classification called connectome-convolutional neural network (CCNN). Our results on simulated datasets and a publicly available dataset for amnestic mild cognitive impairment classification demonstrate that our CCNN model can efficiently distinguish between subject groups. We also show that the connectome-convolutional network is capable of combining information from diverse functional connectivity metrics and that models using a combination of different connectivity descriptors are able to outperform classifiers using only one metric. From this flexibility it follows that our proposed CCNN model can be easily adapted to a wide range of connectome-based classification or regression tasks, by varying which connectivity descriptor combinations are used to train the network.

  7. Extending Lifetime of Wireless Sensor Networks using Forward Error Correction

    DEFF Research Database (Denmark)

    Donapudi, S U; Obel, C O; Madsen, Jan

    2006-01-01

    Communication between nodes in wireless sensor networks (WSN) is susceptible to transmission errors caused by low signal strength or interference. These errors manifest themselves as lost or corrupt packets. This often leads to retransmission, which in turn results in increased power consumption...

  8. Epileptiform spike detection via convolutional neural networks

    DEFF Research Database (Denmark)

    Johansen, Alexander Rosenberg; Jin, Jing; Maszczyk, Tomasz

    2016-01-01

    The EEG of epileptic patients often contains sharp waveforms called "spikes", occurring between seizures. Detecting such spikes is crucial for diagnosing epilepsy. In this paper, we develop a convolutional neural network (CNN) for detecting spikes in EEG of epileptic patients in an automated...

  9. Towards dropout training for convolutional neural networks.

    Science.gov (United States)

    Wu, Haibing; Gu, Xiaodong

    2015-11-01

    Recently, dropout has seen increasing use in deep learning. For deep convolutional neural networks, dropout is known to work well in fully-connected layers. However, its effect in convolutional and pooling layers is still not clear. This paper demonstrates that max-pooling dropout is equivalent to randomly picking activation based on a multinomial distribution at training time. In light of this insight, we advocate employing our proposed probabilistic weighted pooling, instead of commonly used max-pooling, to act as model averaging at test time. Empirical evidence validates the superiority of probabilistic weighted pooling. We also empirically show that the effect of convolutional dropout is not trivial, despite the dramatically reduced possibility of over-fitting due to the convolutional architecture. Elaborately designing dropout training simultaneously in max-pooling and fully-connected layers, we achieve state-of-the-art performance on MNIST, and very competitive results on CIFAR-10 and CIFAR-100, relative to other approaches without data augmentation. Finally, we compare max-pooling dropout and stochastic pooling, both of which introduce stochasticity based on multinomial distributions at pooling stage. Copyright © 2015 Elsevier Ltd. All rights reserved.
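    A small numerical sketch (not the authors' code) of the two pooling schemes discussed above: at training time, dropout before max-pooling amounts to taking the max over a randomly retained subset of activations, which is equivalent to a multinomial pick; at test time, probabilistic weighted pooling replaces the max by an expectation under those multinomial probabilities. The retain probability and activations are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)
        region = np.array([0.2, 1.5, 0.7, 3.1])   # activations in one pooling region
        p_retain = 0.5

        # Training time: max-pooling dropout = max over the randomly retained units.
        mask = rng.random(region.size) < p_retain
        train_out = region[mask].max() if mask.any() else 0.0

        # Multinomial view: sort ascending; the value at ascending index i is the output
        # exactly when it is retained and every larger value (n-1-i of them) is dropped.
        order = np.argsort(region)
        sorted_vals = region[order]
        q = 1.0 - p_retain
        n = region.size
        probs = np.array([p_retain * q ** (n - 1 - i) for i in range(n)])
        probs = np.append(probs, q ** n)           # probability that every unit is dropped (output 0)

        # Test time: probabilistic weighted pooling = expectation under that multinomial.
        test_out = np.dot(probs, np.append(sorted_vals, 0.0))
        print(train_out, test_out)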

  10. A mixed-scale dense convolutional neural network for image analysis

    NARCIS (Netherlands)

    D.M. Pelt (Daniël); J.A. Sethian (James)

    2016-01-01

    textabstractDeep convolutional neural networks have been successfully applied to many image-processing problems in recent works. Popular network architectures often add additional operations and connections to the standard architecture to enable training deeper networks. To achieve accurate results

  11. Deep Convolutional Neural Networks: Structure, Feature Extraction and Training

    Directory of Open Access Journals (Sweden)

    Namatēvs Ivars

    2017-12-01

    Full Text Available Deep convolutional neural networks (CNNs) are aimed at processing data that have a known, network-like topology. They are widely used to recognise objects in images and diagnose patterns in time series data as well as in sensor data classification. The aim of the paper is to present theoretical and practical aspects of deep CNNs in terms of convolution operation, typical layers and basic methods to be used for training and learning. Some practical applications are included for signal and image classification. Finally, the present paper describes the proposed block structure of CNN for classifying crucial features from 3D sensor data.
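    As a concrete illustration of the convolution operation discussed above, here is a minimal NumPy implementation of a "valid" 2D convolution, implemented, as in most CNN literature, as cross-correlation; the input and filter are illustrative.

        import numpy as np

        def conv2d_valid(image, kernel):
            """'Valid' 2D cross-correlation, the basic operation of a CNN layer."""
            ih, iw = image.shape
            kh, kw = kernel.shape
            oh, ow = ih - kh + 1, iw - kw + 1
            out = np.zeros((oh, ow))
            for i in range(oh):
                for j in range(ow):
                    out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
            return out

        image = np.arange(25, dtype=float).reshape(5, 5)
        edge_kernel = np.array([[1.0, 0.0, -1.0]] * 3)   # simple vertical-edge filter
        print(conv2d_valid(image, edge_kernel))          # 3x3 feature map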

  12. Convolutional neural network with transfer learning for rice type classification

    Science.gov (United States)

    Patel, Vaibhav Amit; Joshi, Manjunath V.

    2018-04-01

    Presently, rice type is identified manually by humans, which is time consuming and error prone. Therefore, there is a need to do this by machine, which makes it faster with greater accuracy. This paper proposes a deep learning based method for classification of rice types. We propose two methods to classify the rice types. In the first method, we train a deep convolutional neural network (CNN) using the given segmented rice images. In the second method, we train a combination of a pretrained VGG16 network and the proposed method, using transfer learning in which the weights of a pretrained network are used to achieve better accuracy. Our approach can also be used for classification of rice grain as broken or fine. We train a 5-class model for classifying rice types using 4000 training images and another 2-class model for the classification of broken and normal rice using 1600 training images. We observe that despite having distinct rice images, our architecture, pretrained on ImageNet data, boosts classification accuracy significantly.
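    The transfer-learning setup in the second method can be sketched as follows, assuming PyTorch/torchvision rather than the authors' actual framework: load VGG16 with ImageNet weights, optionally freeze the convolutional features, and replace the final classifier layer with a 5-way output for the rice types.

        import torch
        import torch.nn as nn
        from torchvision import models

        # Load VGG16 with ImageNet-pretrained weights (framework choice is an assumption).
        model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)

        # Optionally freeze the pretrained convolutional features.
        for param in model.features.parameters():
            param.requires_grad = False

        # Replace the last fully connected layer: 1000 ImageNet classes -> 5 rice types.
        model.classifier[6] = nn.Linear(in_features=4096, out_features=5)

        # Only the new (and any unfrozen) parameters are updated during fine-tuning.
        optimizer = torch.optim.SGD(
            (p for p in model.parameters() if p.requires_grad), lr=1e-3, momentum=0.9
        )

        # A forward pass with a dummy batch of 224x224 RGB images.
        logits = model(torch.rand(2, 3, 224, 224))
        print(logits.shape)   # torch.Size([2, 5])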

  13. A locality aware convolutional neural networks accelerator

    NARCIS (Netherlands)

    Shi, R.; Xu, Z.; Sun, Z.; Peemen, M.C.J.; Li, A.; Corporaal, H.; Wu, D.

    2015-01-01

    The advantages of Convolutional Neural Networks (CNNs) with respect to traditional methods for visual pattern recognition have changed the field of machine vision. The main issue that hinders broad adoption of this technique is the massive computing workload in CNN that prevents real-time

  14. Isointense infant brain MRI segmentation with a dilated convolutional neural network

    OpenAIRE

    Moeskops, Pim; Pluim, Josien P. W.

    2017-01-01

    Quantitative analysis of brain MRI at the age of 6 months is difficult because of the limited contrast between white matter and gray matter. In this study, we use a dilated triplanar convolutional neural network in combination with a non-dilated 3D convolutional neural network for the segmentation of white matter, gray matter and cerebrospinal fluid in infant brain MR images, as provided by the MICCAI grand challenge on 6-month infant brain MRI segmentation.

  15. Adaptive decoding of convolutional codes

    Directory of Open Access Journals (Sweden)

    K. Hueske

    2007-06-01

    Full Text Available Convolutional codes, which are frequently used as error correction codes in digital transmission systems, are generally decoded using the Viterbi Decoder. On the one hand the Viterbi Decoder is an optimum maximum likelihood decoder, i.e. the most probable transmitted code sequence is obtained. On the other hand the mathematical complexity of the algorithm only depends on the used code, not on the number of transmission errors. To reduce the complexity of the decoding process for good transmission conditions, an alternative syndrome based decoder is presented. The reduction of complexity is realized by two different approaches, the syndrome zero sequence deactivation and the path metric equalization. The two approaches enable an easy adaptation of the decoding complexity for different transmission conditions, which results in a trade-off between decoding complexity and error correction performance.

  16. Transforming Musical Signals through a Genre Classifying Convolutional Neural Network

    Science.gov (United States)

    Geng, S.; Ren, G.; Ogihara, M.

    2017-05-01

    Convolutional neural networks (CNNs) have been successfully applied on both discriminative and generative modeling for music-related tasks. For a particular task, the trained CNN contains information representing the decision making or the abstracting process. One can hope to manipulate existing music based on this 'informed' network and create music with new features corresponding to the knowledge obtained by the network. In this paper, we propose a method to utilize the stored information from a CNN trained on musical genre classification task. The network was composed of three convolutional layers, and was trained to classify five-second song clips into five different genres. After training, randomly selected clips were modified by maximizing the sum of outputs from the network layers. In addition to the potential of such CNNs to produce interesting audio transformation, more information about the network and the original music could be obtained from the analysis of the generated features since these features indicate how the network 'understands' the music.

  17. Convolutional Neural Networks for SAR Image Segmentation

    DEFF Research Database (Denmark)

    Malmgren-Hansen, David; Nobel-Jørgensen, Morten

    2015-01-01

    Segmentation of Synthetic Aperture Radar (SAR) images has several uses, but it is a difficult task due to a number of properties related to SAR images. In this article we show how Convolutional Neural Networks (CNNs) can easily be trained for SAR image segmentation with good results. Besides...

  18. Airplane detection in remote sensing images using convolutional neural networks

    Science.gov (United States)

    Ouyang, Chao; Chen, Zhong; Zhang, Feng; Zhang, Yifei

    2018-03-01

    Airplane detection in remote sensing images remains a challenging problem and has attracted great interest from researchers. In this paper we propose an effective method to detect airplanes in remote sensing images using convolutional neural networks. With the rise of deep neural networks in target detection, deep learning methods show clear advantages over traditional methods, and we give an explanation of why this happens. To improve airplane detection performance, we combine a region proposal algorithm with convolutional neural networks. In the training phase, we divide the background into multiple classes rather than one class, which can reduce false alarms. Our experimental results show that the proposed method is effective and robust in detecting airplanes.

  19. High-speed parallel forward error correction for optical transport networks

    DEFF Research Database (Denmark)

    Rasmussen, Anders; Ruepp, Sarah Renée; Berger, Michael Stübert

    2010-01-01

    This paper presents a highly parallelized hardware implementation of the standard OTN Reed-Solomon Forward Error Correction algorithm. The proposed circuit is designed to meet the immense throughput required by OTN4, using commercially available FPGA technology....

  20. Correcting AUC for Measurement Error.

    Science.gov (United States)

    Rosner, Bernard; Tworoger, Shelley; Qiu, Weiliang

    2015-12-01

    Diagnostic biomarkers are used frequently in epidemiologic and clinical work. The ability of a diagnostic biomarker to discriminate between subjects who develop disease (cases) and subjects who do not (controls) is often measured by the area under the receiver operating characteristic curve (AUC). The diagnostic biomarkers are usually measured with error. Ignoring measurement error can cause biased estimation of AUC, which results in misleading interpretation of the efficacy of a diagnostic biomarker. Several methods have been proposed to correct AUC for measurement error, most of which required the normality assumption for the distributions of diagnostic biomarkers. In this article, we propose a new method to correct AUC for measurement error and derive approximate confidence limits for the corrected AUC. The proposed method does not require the normality assumption. Both real data analyses and simulation studies show good performance of the proposed measurement error correction method.
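    For context, the AUC referred to above is equivalent to the Mann-Whitney statistic: the probability that a randomly chosen case has a higher biomarker value than a randomly chosen control. The sketch below computes this empirical (uncorrected) AUC on made-up data; it does not implement the paper's measurement-error correction.

        import numpy as np

        def empirical_auc(cases, controls):
            """AUC as the Mann-Whitney probability P(case > control), ties counted as 1/2."""
            cases = np.asarray(cases, dtype=float)
            controls = np.asarray(controls, dtype=float)
            diff = cases[:, None] - controls[None, :]
            return np.mean(diff > 0) + 0.5 * np.mean(diff == 0)

        # Hypothetical biomarker values measured with error.
        rng = np.random.default_rng(1)
        cases = rng.normal(1.0, 1.0, size=200)      # subjects who develop disease
        controls = rng.normal(0.0, 1.0, size=200)   # subjects who do not
        print(f"empirical AUC: {empirical_auc(cases, controls):.3f}")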

  1. Evolutionary image simplification for lung nodule classification with convolutional neural networks.

    Science.gov (United States)

    Lückehe, Daniel; von Voigt, Gabriele

    2018-05-29

    Understanding decisions of deep learning techniques is important. Especially in the medical field, the reasons for a decision in a classification task are as crucial as the pure classification results. In this article, we propose a new approach to compute relevant parts of a medical image. Knowing the relevant parts makes it easier to understand decisions. In our approach, a convolutional neural network is employed to learn structures of images of lung nodules. Then, an evolutionary algorithm is applied to compute a simplified version of an unknown image based on the learned structures by the convolutional neural network. In the simplified version, irrelevant parts are removed from the original image. In the results, we show simplified images which allow the observer to focus on the relevant parts. In these images, more than 50% of the pixels are simplified. The simplified pixels do not change the meaning of the images based on the learned structures by the convolutional neural network. An experimental analysis shows the potential of the approach. Besides the examples of simplified images, we analyze the run time development. Simplified images make it easier to focus on relevant parts and to find reasons for a decision. The combination of an evolutionary algorithm employing a learned convolutional neural network is well suited for the simplification task. From a research perspective, it is interesting which areas of the images are simplified and which parts are taken as relevant.

  2. Acral melanoma detection using a convolutional neural network for dermoscopy images.

    Science.gov (United States)

    Yu, Chanki; Yang, Sejung; Kim, Wonoh; Jung, Jinwoong; Chung, Kee-Yang; Lee, Sang Wook; Oh, Byungho

    2018-01-01

    Acral melanoma is the most common type of melanoma in Asians, and usually results in a poor prognosis due to late diagnosis. We applied a convolutional neural network to dermoscopy images of acral melanoma and benign nevi on the hands and feet and evaluated its usefulness for the early diagnosis of these conditions. A total of 724 dermoscopy images comprising acral melanoma (350 images from 81 patients) and benign nevi (374 images from 194 patients), and confirmed by histopathological examination, were analyzed in this study. To perform the 2-fold cross validation, we split them into two mutually exclusive subsets: half of the total image dataset was selected for training and the rest for testing, and we calculated the diagnostic accuracy, comparing it with the evaluations of a dermatologist and a non-expert. The accuracy (percentage of true positives and true negatives among all images) of the convolutional neural network was 83.51% and 80.23%, which was higher than the non-expert's evaluation (67.84%, 62.71%) and close to that of the expert (81.08%, 81.64%). Moreover, the convolutional neural network showed area-under-the-curve values of 0.8 and 0.84 and Youden's index values of 0.6795 and 0.6073, similar to the expert's scores. Although further data analysis is necessary to improve their accuracy, convolutional neural networks would be helpful for detecting acral melanoma in dermoscopy images of the hands and feet.

  3. Quantum error correction for beginners

    International Nuclear Information System (INIS)

    Devitt, Simon J; Nemoto, Kae; Munro, William J

    2013-01-01

    Quantum error correction (QEC) and fault-tolerant quantum computation represent one of the most vital theoretical aspects of quantum information processing. It was well known from the early developments of this exciting field that the fragility of coherent quantum systems would be a catastrophic obstacle to the development of large-scale quantum computers. The introduction of quantum error correction in 1995 showed that active techniques could be employed to mitigate this fatal problem. However, quantum error correction and fault-tolerant computation is now a much larger field and many new codes, techniques, and methodologies have been developed to implement error correction for large-scale quantum algorithms. In response, we have attempted to summarize the basic aspects of quantum error correction and fault-tolerance, not as a detailed guide, but rather as a basic introduction. The development in this area has been so pronounced that many in the field of quantum information, specifically researchers who are new to quantum information or people focused on the many other important issues in quantum computation, have found it difficult to keep up with the general formalisms and methodologies employed in this area. Rather than introducing these concepts from a rigorous mathematical and computer science framework, we instead examine error correction and fault-tolerance largely through detailed examples, which are more relevant to experimentalists today and in the near future. (review article)

  4. Learning time-dependent noise to reduce logical errors: real time error rate estimation in quantum error correction

    Science.gov (United States)

    Huo, Ming-Xia; Li, Ying

    2017-12-01

    Quantum error correction is important to quantum information processing, which allows us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from the knowledge of error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction. Any adaptation of the quantum error correction code or its implementation circuit is not required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g. surface code. A Gaussian processes algorithm is used to estimate and predict error rates based on error correction data in the past. We find that using these estimated error rates, the probability of error correction failures can be significantly reduced by a factor increasing with the code distance.
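    A minimal sketch of the idea of tracking a slowly drifting error rate with Gaussian process regression, using scikit-learn as a stand-in for the authors' implementation; the synthetic drift, kernel choice and noise level are assumptions for illustration.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        rng = np.random.default_rng(0)

        # Synthetic "observed" error rates from past correction cycles: a slow drift plus noise.
        t = np.arange(0, 200, 5.0).reshape(-1, 1)              # cycle index
        true_rate = 0.01 + 0.005 * np.sin(t.ravel() / 40.0)
        observed = true_rate + rng.normal(0, 0.001, size=true_rate.shape)

        # Fit a GP and predict (with uncertainty) the error rate at future cycles.
        kernel = 1.0 * RBF(length_scale=30.0) + WhiteKernel(noise_level=1e-6)
        gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, observed)

        t_future = np.array([[210.0], [220.0], [230.0]])
        mean, std = gp.predict(t_future, return_std=True)
        for ti, m, s in zip(t_future.ravel(), mean, std):
            print(f"cycle {ti:.0f}: predicted error rate {m:.4f} +/- {s:.4f}")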

  5. Video Error Correction Using Steganography

    Science.gov (United States)

    Robie, David L.; Mersereau, Russell M.

    2002-12-01

    The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real time nature must deal with these errors without retransmission of the corrupted data. The error can be handled using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2 compliant codec uses data hiding to transmit error correction information and several error concealment techniques in the decoder. The decoder resynchronizes more quickly with fewer errors than traditional resynchronization techniques. It also allows for perfect recovery of differentially encoded DCT-DC components and motion vectors. This provides for a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment.

  6. Fast Automatic Airport Detection in Remote Sensing Images Using Convolutional Neural Networks

    Directory of Open Access Journals (Sweden)

    Fen Chen

    2018-03-01

    Full Text Available Fast and automatic detection of airports from remote sensing images is useful for many military and civilian applications. In this paper, a fast automatic detection method is proposed to detect airports from remote sensing images based on convolutional neural networks using the Faster R-CNN algorithm. This method first applies a convolutional neural network to generate candidate airport regions. Based on the features extracted from these proposals, it then uses another convolutional neural network to perform airport detection. By taking the typical elongated linear geometric shape of airports into consideration, some specific improvements to the method are proposed. These approaches successfully improve the quality of positive samples and achieve a better accuracy in the final detection results. Experimental results on an airport dataset, Landsat 8 images, and a Gaofen-1 satellite scene demonstrate the effectiveness and efficiency of the proposed method.

  7. Advanced hardware design for error correcting codes

    CERN Document Server

    Coussy, Philippe

    2015-01-01

    This book provides thorough coverage of error correcting techniques. It includes essential basic concepts and the latest advances on key topics in design, implementation, and optimization of hardware/software systems for error correction. The book’s chapters are written by internationally recognized experts in this field. Topics include evolution of error correction techniques, industrial user needs, architectures, and design approaches for the most advanced error correcting codes (Polar Codes, Non-Binary LDPC, Product Codes, etc). This book provides access to recent results, and is suitable for graduate students and researchers of mathematics, computer science, and engineering. • Examines how to optimize the architecture of hardware design for error correcting codes; • Presents error correction codes from theory to optimized architecture for the current and the next generation standards; • Provides coverage of industrial user needs advanced error correcting techniques.

  8. Enhancement of digital radiography image quality using a convolutional neural network.

    Science.gov (United States)

    Sun, Yuewen; Li, Litao; Cong, Peng; Wang, Zhentao; Guo, Xiaojing

    2017-01-01

    Digital radiography systems are widely used for noninvasive security checks and medical imaging examinations. However, such systems are limited by lower image quality in terms of spatial resolution and signal to noise ratio. In this study, we explored whether the image quality acquired by the digital radiography system can be improved with a modified convolutional neural network that generates high-resolution images with reduced noise from the original low-quality images. The experiment, evaluated on a test dataset containing 5 X-ray images, showed that the proposed method outperformed the traditional methods (i.e., bicubic interpolation and the 3D block-matching approach) by about 1.3 dB in peak signal to noise ratio (PSNR) while keeping the processing time within one second. Experimental results demonstrated that a residual to residual (RTR) convolutional neural network remarkably improved the image quality of object structural details by increasing the image resolution and reducing image noise. Thus, this study indicated that applying this RTR convolutional neural network system is useful for improving the image quality acquired by the digital radiography system.

  9. Detected-jump-error-correcting quantum codes, quantum error designs, and quantum computation

    International Nuclear Information System (INIS)

    Alber, G.; Mussinger, M.; Beth, Th.; Charnes, Ch.; Delgado, A.; Grassl, M.

    2003-01-01

    The recently introduced detected-jump-correcting quantum codes are capable of stabilizing qubit systems against spontaneous decay processes arising from couplings to statistically independent reservoirs. These embedded quantum codes exploit classical information about which qubit has emitted spontaneously and correspond to an active error-correcting code embedded in a passive error-correcting code. The construction of a family of one-detected-jump-error-correcting quantum codes is shown and the optimal redundancy, encoding, and recovery as well as general properties of detected-jump-error-correcting quantum codes are discussed. By the use of design theory, multiple-jump-error-correcting quantum codes can be constructed. The performance of one-jump-error-correcting quantum codes under nonideal conditions is studied numerically by simulating a quantum memory and Grover's algorithm

  10. Improving deep convolutional neural networks with mixed maxout units.

    Directory of Open Access Journals (Sweden)

    Hui-Zhen Zhao

    Full Text Available Motivated by insights from the maxout-units-based deep Convolutional Neural Network (CNN) that "non-maximal features are unable to deliver" and "feature mapping subspace pooling is insufficient," we present a novel mixed variant of the recently introduced maxout unit called a mixout unit. Specifically, we do so by calculating the exponential probabilities of feature mappings gained by applying different convolutional transformations over the same input and then calculating the expected values according to their exponential probabilities. Moreover, we introduce the Bernoulli distribution to balance the maximum values with the expected values of the feature mappings subspace. Finally, we design a simple model to verify the pooling ability of mixout units and a Mixout-units-based Network-in-Network (NiN) model to analyze the feature learning ability of the mixout models. We argue that our proposed units improve the pooling ability and that mixout models can achieve better feature learning and classification performance.
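    As the record describes it, a mixout unit blends the maxout output (the max over k parallel feature mappings) with a softmax-weighted expectation of those mappings, switching between the two with a Bernoulli variable. The NumPy sketch below is a literal reading of that description, not the authors' code; k, the activations and the Bernoulli rate are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)

        def mixout(z, bernoulli_p=0.5):
            """Blend maxout with a softmax-weighted expectation of k feature mappings z."""
            z = np.asarray(z, dtype=float)
            probs = np.exp(z - z.max())                 # exponential ("softmax") probabilities
            probs /= probs.sum()
            expected = np.dot(probs, z)                 # expectation under those probabilities
            use_max = rng.random() < bernoulli_p        # Bernoulli switch between max and expectation
            return z.max() if use_max else expected

        # k = 4 feature mappings produced by different convolutional transformations of one input.
        z = np.array([0.3, 1.2, -0.5, 0.9])
        print("maxout     :", z.max())
        print("mixout draw:", mixout(z))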

  11. Finding strong lenses in CFHTLS using convolutional neural networks

    Science.gov (United States)

    Jacobs, C.; Glazebrook, K.; Collett, T.; More, A.; McCarthy, C.

    2017-10-01

    We train and apply convolutional neural networks, a machine learning technique developed to learn from and classify image data, to Canada-France-Hawaii Telescope Legacy Survey (CFHTLS) imaging for the identification of potential strong lensing systems. An ensemble of four convolutional neural networks was trained on images of simulated galaxy-galaxy lenses. The training sets consisted of a total of 62 406 simulated lenses and 64 673 non-lens negative examples generated with two different methodologies. An ensemble of trained networks was applied to all of the 171 deg2 of the CFHTLS wide field image data, identifying 18 861 candidates including 63 known and 139 other potential lens candidates. A second search of 1.4 million early-type galaxies selected from the survey catalogue as potential deflectors, identified 2465 candidates including 117 previously known lens candidates, 29 confirmed lenses/high-quality lens candidates, 266 novel probable or potential lenses and 2097 candidates we classify as false positives. For the catalogue-based search we estimate a completeness of 21-28 per cent with respect to detectable lenses and a purity of 15 per cent, with a false-positive rate of 1 in 671 images tested. We predict a human astronomer reviewing candidates produced by the system would identify 20 probable lenses and 100 possible lenses per hour in a sample selected by the robot. Convolutional neural networks are therefore a promising tool for use in the search for lenses in current and forthcoming surveys such as the Dark Energy Survey and the Large Synoptic Survey Telescope.

  12. Video Error Correction Using Steganography

    Directory of Open Access Journals (Sweden)

    Robie David L

    2002-01-01

    Full Text Available The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real time nature must deal with these errors without retransmission of the corrupted data. The error can be handled using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2 compliant codec uses data hiding to transmit error correction information and several error concealment techniques in the decoder. The decoder resynchronizes more quickly with fewer errors than traditional resynchronization techniques. It also allows for perfect recovery of differentially encoded DCT-DC components and motion vectors. This provides for a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment.

  13. Design of nanophotonic circuits for autonomous subsystem quantum error correction

    Energy Technology Data Exchange (ETDEWEB)

    Kerckhoff, J; Pavlichin, D S; Chalabi, H; Mabuchi, H, E-mail: jkerc@stanford.edu [Edward L Ginzton Laboratory, Stanford University, Stanford, CA 94305 (United States)

    2011-05-15

    We reapply our approach to designing nanophotonic quantum memories in order to formulate an optical network that autonomously protects a single logical qubit against arbitrary single-qubit errors. Emulating the nine-qubit Bacon-Shor subsystem code, the network replaces the traditionally discrete syndrome measurement and correction steps by continuous, time-independent optical interactions and coherent feedback of unitarily processed optical fields.

  14. DeepFix: A Fully Convolutional Neural Network for Predicting Human Eye Fixations.

    Science.gov (United States)

    Kruthiventi, Srinivas S S; Ayush, Kumar; Babu, R Venkatesh

    2017-09-01

    Understanding and predicting the human visual attention mechanism is an active area of research in the fields of neuroscience and computer vision. In this paper, we propose DeepFix, a fully convolutional neural network, which models the bottom-up mechanism of visual attention via saliency prediction. Unlike classical works, which characterize the saliency map using various hand-crafted features, our model automatically learns features in a hierarchical fashion and predicts the saliency map in an end-to-end manner. DeepFix is designed to capture semantics at multiple scales while taking global context into account, by using network layers with very large receptive fields. Generally, fully convolutional nets are spatially invariant; this prevents them from modeling location-dependent patterns (e.g., centre-bias). Our network handles this by incorporating a novel location-biased convolutional layer. We evaluate our model on multiple challenging saliency data sets and show that it achieves the state-of-the-art results.

  15. Detection and diagnosis of colitis on computed tomography using deep convolutional neural networks.

    Science.gov (United States)

    Liu, Jiamin; Wang, David; Lu, Le; Wei, Zhuoshi; Kim, Lauren; Turkbey, Evrim B; Sahiner, Berkman; Petrick, Nicholas A; Summers, Ronald M

    2017-09-01

    Colitis refers to inflammation of the inner lining of the colon that is frequently associated with infection and allergic reactions. In this paper, we propose deep convolutional neural networks methods for lesion-level colitis detection and a support vector machine (SVM) classifier for patient-level colitis diagnosis on routine abdominal CT scans. The recently developed Faster Region-based Convolutional Neural Network (Faster RCNN) is utilized for lesion-level colitis detection. For each 2D slice, rectangular region proposals are generated by region proposal networks (RPN). Then, each region proposal is jointly classified and refined by a softmax classifier and bounding-box regressor. Two convolutional neural networks, eight layers of ZF net and 16 layers of VGG net are compared for colitis detection. Finally, for each patient, the detections on all 2D slices are collected and a SVM classifier is applied to develop a patient-level diagnosis. We trained and evaluated our method with 80 colitis patients and 80 normal cases using 4 × 4-fold cross validation. For lesion-level colitis detection, with ZF net, the mean of average precisions (mAP) were 48.7% and 50.9% for RCNN and Faster RCNN, respectively. The detection system achieved sensitivities of 51.4% and 54.0% at two false positives per patient for RCNN and Faster RCNN, respectively. With VGG net, Faster RCNN increased the mAP to 56.9% and increased the sensitivity to 58.4% at two false positive per patient. For patient-level colitis diagnosis, with ZF net, the average areas under the ROC curve (AUC) were 0.978 ± 0.009 and 0.984 ± 0.008 for RCNN and Faster RCNN method, respectively. The difference was not statistically significant with P = 0.18. At the optimal operating point, the RCNN method correctly identified 90.4% (72.3/80) of the colitis patients and 94.0% (75.2/80) of normal cases. The sensitivity improved to 91.6% (73.3/80) and the specificity improved to 95.0% (76.0/80) for the Faster RCNN

  16. On the Reduction of Computational Complexity of Deep Convolutional Neural Networks

    Directory of Open Access Journals (Sweden)

    Partha Maji

    2018-04-01

    Full Text Available Deep convolutional neural networks (ConvNets), which are at the heart of many new emerging applications, achieve remarkable performance in audio and visual recognition tasks. Unfortunately, achieving accuracy often implies significant computational costs, limiting deployability. In modern ConvNets it is typical for the convolution layers to consume the vast majority of computational resources during inference. This has made the acceleration of these layers an important research area in academia and industry. In this paper, we examine the effects of co-optimizing the internal structures of the convolutional layers and the underlying implementation of the fundamental convolution operation. We demonstrate that a combination of these methods can have a big impact on the overall speedup of a ConvNet, achieving a ten-fold increase over baseline. We also introduce a new class of fast one-dimensional (1D) convolutions for ConvNets using the Toom–Cook algorithm. We show that our proposed scheme is mathematically well-grounded, robust, and does not require any time-consuming retraining, while still achieving speedups solely from convolutional layers with no loss in baseline accuracy.
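    The fast 1D convolutions referred to above are based on Toom-Cook style transforms. As an illustration (not the paper's exact scheme), the classic F(2,3) form computes two outputs of a 3-tap sliding dot product from four inputs using only four multiplications instead of six; a check against the direct computation is included.

        import numpy as np

        def f23_toom_cook(d, g):
            """Compute two outputs of a 3-tap correlation from 4 inputs with 4 multiplies."""
            m1 = (d[0] - d[2]) * g[0]
            m2 = (d[1] + d[2]) * (g[0] + g[1] + g[2]) / 2.0
            m3 = (d[2] - d[1]) * (g[0] - g[1] + g[2]) / 2.0
            m4 = (d[1] - d[3]) * g[2]
            return np.array([m1 + m2 + m3, m2 - m3 - m4])

        rng = np.random.default_rng(0)
        d = rng.standard_normal(4)                    # 4 consecutive input samples
        g = rng.standard_normal(3)                    # 3-tap filter

        direct = np.array([np.dot(d[0:3], g), np.dot(d[1:4], g)])   # direct sliding dot product
        assert np.allclose(f23_toom_cook(d, g), direct)
        print(f23_toom_cook(d, g))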

  17. MR-based synthetic CT generation using a deep convolutional neural network method.

    Science.gov (United States)

    Han, Xiao

    2017-04-01

    Interests have been rapidly growing in the field of radiotherapy to replace CT with magnetic resonance imaging (MRI), due to superior soft tissue contrast offered by MRI and the desire to reduce unnecessary radiation dose. MR-only radiotherapy also simplifies clinical workflow and avoids uncertainties in aligning MR with CT. Methods, however, are needed to derive CT-equivalent representations, often known as synthetic CT (sCT), from patient MR images for dose calculation and DRR-based patient positioning. Synthetic CT estimation is also important for PET attenuation correction in hybrid PET-MR systems. We propose in this work a novel deep convolutional neural network (DCNN) method for sCT generation and evaluate its performance on a set of brain tumor patient images. The proposed method builds upon recent developments of deep learning and convolutional neural networks in the computer vision literature. The proposed DCNN model has 27 convolutional layers interleaved with pooling and unpooling layers and 35 million free parameters, which can be trained to learn a direct end-to-end mapping from MR images to their corresponding CTs. Training such a large model on our limited data is made possible through the principle of transfer learning and by initializing model weights from a pretrained model. Eighteen brain tumor patients with both CT and T1-weighted MR images are used as experimental data and a sixfold cross-validation study is performed. Each sCT generated is compared against the real CT image of the same patient on a voxel-by-voxel basis. Comparison is also made with respect to an atlas-based approach that involves deformable atlas registration and patch-based atlas fusion. The proposed DCNN method produced a mean absolute error (MAE) below 85 HU for 13 of the 18 test subjects. The overall average MAE was 84.8 ± 17.3 HU for all subjects, which was found to be significantly better than the average MAE of 94.5 ± 17.8 HU for the atlas-based method. The DCNN

  18. Segmentation of Drosophila Heart in Optical Coherence Microscopy Images Using Convolutional Neural Networks

    OpenAIRE

    Duan, Lian; Qin, Xi; He, Yuanhao; Sang, Xialin; Pan, Jinda; Xu, Tao; Men, Jing; Tanzi, Rudolph E.; Li, Airong; Ma, Yutao; Zhou, Chao

    2018-01-01

    Convolutional neural networks are powerful tools for image segmentation and classification. Here, we use this method to identify and mark the heart region of Drosophila at different developmental stages in the cross-sectional images acquired by a custom optical coherence microscopy (OCM) system. With our well-trained convolutional neural network model, the heart regions through multiple heartbeat cycles can be marked with an intersection over union (IOU) of ~86%. Various morphological and dyn...

  19. Holographic quantum error-correcting codes: toy models for the bulk/boundary correspondence

    Energy Technology Data Exchange (ETDEWEB)

    Pastawski, Fernando; Yoshida, Beni [Institute for Quantum Information & Matter and Walter Burke Institute for Theoretical Physics,California Institute of Technology,1200 E. California Blvd., Pasadena CA 91125 (United States); Harlow, Daniel [Princeton Center for Theoretical Science, Princeton University,400 Jadwin Hall, Princeton NJ 08540 (United States); Preskill, John [Institute for Quantum Information & Matter and Walter Burke Institute for Theoretical Physics,California Institute of Technology,1200 E. California Blvd., Pasadena CA 91125 (United States)

    2015-06-23

    We propose a family of exactly solvable toy models for the AdS/CFT correspondence based on a novel construction of quantum error-correcting codes with a tensor network structure. Our building block is a special type of tensor with maximal entanglement along any bipartition, which gives rise to an isometry from the bulk Hilbert space to the boundary Hilbert space. The entire tensor network is an encoder for a quantum error-correcting code, where the bulk and boundary degrees of freedom may be identified as logical and physical degrees of freedom respectively. These models capture key features of entanglement in the AdS/CFT correspondence; in particular, the Ryu-Takayanagi formula and the negativity of tripartite information are obeyed exactly in many cases. That bulk logical operators can be represented on multiple boundary regions mimics the Rindler-wedge reconstruction of boundary operators from bulk operators, realizing explicitly the quantum error-correcting features of AdS/CFT recently proposed in http://dx.doi.org/10.1007/JHEP04(2015)163.

  20. Adaptive Forward Error Correction for Energy Efficient Optical Transport Networks

    DEFF Research Database (Denmark)

    Rasmussen, Anders; Ruepp, Sarah Renée; Berger, Michael Stübert

    2013-01-01

    In this paper we propose a novel scheme for on the fly code rate adjustment for forward error correcting (FEC) codes on optical links. The proposed scheme makes it possible to adjust the code rate independently for each optical frame. This allows for seamless rate adaption based on the link state...

  1. Histopathological Breast-Image Classification Using Local and Frequency Domains by Convolutional Neural Network

    Directory of Open Access Journals (Sweden)

    Abdullah-Al Nahid

    2018-01-01

    Full Text Available Identification of the malignancy of tissues from Histopathological images has always been an issue of concern to doctors and radiologists. This task is time-consuming, tedious and moreover very challenging. Success in finding malignancy from Histopathological images primarily depends on long-term experience, though sometimes experts disagree on their decisions. However, Computer Aided Diagnosis (CAD) techniques help the radiologist to give a second opinion that can increase the reliability of the radiologist’s decision. Among the different image analysis techniques, classification of the images has always been a challenging task. Due to the intense complexity of biomedical images, it is always very challenging to provide a reliable decision about an image. The state-of-the-art Convolutional Neural Network (CNN) technique has had great success in natural image classification. Utilizing advanced engineering techniques along with the CNN, in this paper, we have classified a set of Histopathological Breast-Cancer (BC) images utilizing a state-of-the-art CNN model containing a residual block. Conventional CNN operation takes raw images as input and extracts the global features; however, the object oriented local features also contain significant information—for example, the Local Binary Pattern (LBP) represents the effective textural information, Histogram represent the pixel strength distribution, Contourlet Transform (CT) gives much detailed information about the smoothness about the edges, and Discrete Fourier Transform (DFT) derives frequency-domain information from the image. Utilizing these advantages, along with our proposed novel CNN model, we have examined the performance of the novel CNN model as Histopathological image classifier. To do so, we have introduced five cases: (a) Convolutional Neural Network Raw Image (CNN-I); (b) Convolutional Neural Network CT Histogram (CNN-CH); (c) Convolutional Neural Network CT LBP (CNN-CL); (d) Convolutional

  2. Attenuation correction for brain PET imaging using deep neural network based on dixon and ZTE MR images.

    Science.gov (United States)

    Gong, Kuang; Yang, Jaewon; Kim, Kyungsang; El Fakhri, Georges; Seo, Youngho; Li, Quanzheng

    2018-05-23

    Positron Emission Tomography (PET) is a functional imaging modality widely used in neuroscience studies. To obtain meaningful quantitative results from PET images, attenuation correction is necessary during image reconstruction. For PET/MR hybrid systems, PET attenuation is challenging as Magnetic Resonance (MR) images do not reflect attenuation coefficients directly. To address this issue, we present deep neural network methods to derive the continuous attenuation coefficients for brain PET imaging from MR images. With only Dixon MR images as the network input, the existing U-net structure was adopted and analysis using forty patient data sets shows it is superior than other Dixon based methods. When both Dixon and zero echo time (ZTE) images are available, we have proposed a modified U-net structure, named GroupU-net, to efficiently make use of both Dixon and ZTE information through group convolution modules when the network goes deeper. Quantitative analysis based on fourteen real patient data sets demonstrates that both network approaches can perform better than the standard methods, and the proposed network structure can further reduce the PET quantification error compared to the U-net structure. © 2018 Institute of Physics and Engineering in Medicine.
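    The group convolution modules mentioned for the GroupU-net can be sketched with a standard grouped 2D convolution, in which two blocks of input channels (e.g., Dixon-derived and ZTE-derived features) are processed by separate filter groups before being mixed. This is a hedged illustration of the building block, not the authors' architecture; the channel counts and group count are assumptions.

        import torch
        import torch.nn as nn

        # Two input channel blocks processed by separate filter groups:
        # groups=2 means no cross-talk between the two halves inside this layer.
        grouped = nn.Conv2d(in_channels=32, out_channels=32, kernel_size=3,
                            padding=1, groups=2)

        # A following 1x1 convolution (groups=1) mixes information across the two groups.
        mix = nn.Conv2d(in_channels=32, out_channels=32, kernel_size=1)

        x = torch.rand(1, 32, 64, 64)       # e.g., 16 Dixon-derived + 16 ZTE-derived feature maps
        y = mix(grouped(x))
        print(y.shape)                       # torch.Size([1, 32, 64, 64])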

  3. Combining morphometric features and convolutional networks fusion for glaucoma diagnosis

    Science.gov (United States)

    Perdomo, Oscar; Arevalo, John; González, Fabio A.

    2017-11-01

    Glaucoma is an eye condition that leads to loss of vision and blindness. The ophthalmoscopy exam evaluates the shape, color and proportion between the optic disc and physiologic cup, but the lack of agreement among experts is still the main diagnostic problem. The application of deep convolutional neural networks combined with automatic extraction of features such as the cup-to-disc distance in the four quadrants, the perimeter, area, eccentricity, the major radius and the minor radius of the optic disc and cup, in addition to all the ratios among the previous parameters, may help with better automatic grading of glaucoma. This paper presents a strategy to merge morphological features and deep convolutional neural networks as a novel methodology to support glaucoma diagnosis in eye fundus images.

  4. Relative location prediction in CT scan images using convolutional neural networks.

    Science.gov (United States)

    Guo, Jiajia; Du, Hongwei; Zhu, Jianyue; Yan, Ting; Qiu, Bensheng

    2018-07-01

    Relative location prediction in computed tomography (CT) scan images is a challenging problem. Many traditional machine learning methods have been applied in attempts to alleviate this problem. However, the accuracy and speed of these methods cannot meet the requirement of medical scenario. In this paper, we propose a regression model based on one-dimensional convolutional neural networks (CNN) to determine the relative location of a CT scan image both quickly and precisely. In contrast to other common CNN models that use a two-dimensional image as an input, the input of this CNN model is a feature vector extracted by a shape context algorithm with spatial correlation. Normalization via z-score is first applied as a pre-processing step. Then, in order to prevent overfitting and improve model's performance, 20% of the elements of the feature vectors are randomly set to zero. This CNN model consists primarily of three one-dimensional convolutional layers, three dropout layers and two fully-connected layers with appropriate loss functions. A public dataset is employed to validate the performance of the proposed model using a 5-fold cross validation. Experimental results demonstrate an excellent performance of the proposed model when compared with contemporary techniques, achieving a median absolute error of 1.04 cm and mean absolute error of 1.69 cm. The time taken for each relative location prediction is approximately 2 ms. Results indicate that the proposed CNN method can contribute to a quick and accurate relative location prediction in CT scan images, which can improve efficiency of the medical picture archiving and communication system in the future. Copyright © 2018 Elsevier B.V. All rights reserved.
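    A structural sketch of the model described above, assuming PyTorch: a z-scored shape-context feature vector as a one-channel 1D input, three 1D convolutional layers each followed by dropout, and two fully connected layers producing a single location value. The feature length, channel widths and placement of the dropout are taken loosely from the text and are otherwise assumptions.

        import torch
        import torch.nn as nn

        class SliceLocator(nn.Module):
            """1D-CNN regressor mapping a shape-context feature vector to a relative location."""
            def __init__(self, feature_len=180):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(), nn.Dropout(0.2),
                    nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.Dropout(0.2),
                    nn.Conv1d(32, 32, kernel_size=5, padding=2), nn.ReLU(), nn.Dropout(0.2),
                    nn.Flatten(),
                    nn.Linear(32 * feature_len, 64), nn.ReLU(),
                    nn.Linear(64, 1),                     # relative location (regression output)
                )

            def forward(self, x):
                return self.net(x)

        # z-score normalisation of a hypothetical batch of shape-context feature vectors.
        features = torch.rand(4, 1, 180)
        features = (features - features.mean(dim=-1, keepdim=True)) / features.std(dim=-1, keepdim=True)

        model = SliceLocator(feature_len=180)
        loss = nn.L1Loss()(model(features), torch.rand(4, 1))   # absolute-error loss, as reported in cm
        loss.backward()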

  5. Target recognition based on convolutional neural network

    Science.gov (United States)

    Wang, Liqiang; Wang, Xin; Xi, Fubiao; Dong, Jian

    2017-11-01

    An important part of object target recognition is feature extraction, which can be divided into manual feature extraction and automatic feature extraction. The traditional neural network is one of the automatic feature extraction methods, but its global connectivity makes over-fitting highly likely. The deep learning algorithm used in this paper is a hierarchical automatic feature extraction method, trained layer by layer as a convolutional neural network (CNN), which can extract features from lower layers up to higher layers. The resulting features are more discriminative, which benefits object target recognition.

  6. Errors, error detection, error correction and hippocampal-region damage: data and theories.

    Science.gov (United States)

    MacKay, Donald G; Johnson, Laura W

    2013-11-01

    This review and perspective article outlines 15 observational constraints on theories of errors, error detection, and error correction, and their relation to hippocampal-region (HR) damage. The core observations come from 10 studies with H.M., an amnesic with cerebellar and HR damage but virtually no neocortical damage. Three studies examined the detection of errors planted in visual scenes (e.g., a bird flying in a fish bowl in a school classroom) and sentences (e.g., I helped themselves to the birthday cake). In all three experiments, H.M. detected reliably fewer errors than carefully matched memory-normal controls. Other studies examined the detection and correction of self-produced errors, with controls for comprehension of the instructions, impaired visual acuity, temporal factors, motoric slowing, forgetting, excessive memory load, lack of motivation, and deficits in visual scanning or attention. In these studies, H.M. corrected reliably fewer errors than memory-normal and cerebellar controls, and his uncorrected errors in speech, object naming, and reading aloud exhibited two consistent features: omission and anomaly. For example, in sentence production tasks, H.M. omitted one or more words in uncorrected encoding errors that rendered his sentences anomalous (incoherent, incomplete, or ungrammatical) reliably more often than controls. Besides explaining these core findings, the theoretical principles discussed here explain H.M.'s retrograde amnesia for once familiar episodic and semantic information; his anterograde amnesia for novel information; his deficits in visual cognition, sentence comprehension, sentence production, sentence reading, and object naming; and effects of aging on his ability to read isolated low frequency words aloud. These theoretical principles also explain a wide range of other data on error detection and correction and generate new predictions for future tests. Copyright © 2013 Elsevier Ltd. All rights reserved.

  7. Deep Galaxy: Classification of Galaxies based on Deep Convolutional Neural Networks

    OpenAIRE

    Khalifa, Nour Eldeen M.; Taha, Mohamed Hamed N.; Hassanien, Aboul Ella; Selim, I. M.

    2017-01-01

    In this paper, a deep convolutional neural network architecture for galaxy classification is presented. Galaxies can be classified based on their features into three main categories: elliptical, spiral, and irregular. The proposed deep galaxy architecture consists of 8 layers, with one main convolutional layer for feature extraction with 96 filters, followed by two principal fully connected layers for classification. It is trained over 1356 images and achieved 97.272% testing accuracy. A c...

  8. Classification of stroke disease using convolutional neural network

    Science.gov (United States)

    Marbun, J. T.; Seniman; Andayani, U.

    2018-03-01

    Stroke is a condition that occurs when the blood supply stops flowing to part of the brain because of a blockage or a ruptured blood vessel. Symptoms that occur when experiencing a stroke include reduced consciousness, disrupted vision and a paralyzed body. The usual examination to get a picture of the part of the brain affected by stroke uses a Computerized Tomography (CT) scan. The images produced by CT must be checked manually, under proper lighting, by a doctor to determine the type of stroke, which is why a method is needed to classify stroke from CT images automatically. The method proposed in this research is the Convolutional Neural Network. CT images of the brain are used as the input for image processing. The stages before classification are image processing (grayscaling, scaling, and Contrast Limited Adaptive Histogram Equalization); the images are then classified with a Convolutional Neural Network. The results show that the method can be used as a tool to classify stroke disease and to distinguish the type of stroke from CT images.
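
    A minimal sketch of the preprocessing pipeline described above (grayscaling, scaling, CLAHE), assuming OpenCV; the target size and CLAHE parameters are illustrative choices, not the authors' settings.

```python
import cv2

def preprocess_ct_slice(path, size=(128, 128)):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)       # grayscaling
    img = cv2.resize(img, size)                        # scaling
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(img)                            # contrast-limited equalization

# The equalized slice would then be fed to the CNN classifier.
```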

  9. Multi-Input Convolutional Neural Network for Flower Grading

    Directory of Open Access Journals (Sweden)

    Yu Sun

    2017-01-01

    Full Text Available Flower grading is a significant task because it is extremely convenient for managing flowers in greenhouses and markets. With the development of computer vision, flower grading has become an interdisciplinary focus in both botany and computer vision. A new dataset named BjfuGloxinia contains three quality grades; each grade consists of 107 samples and 321 images. A multi-input convolutional neural network is designed for large-scale flower grading. The multi-input CNN achieves a satisfactory accuracy of 89.6% on BjfuGloxinia after data augmentation. Compared with a single-input CNN, the accuracy of the multi-input CNN is increased by 5% on average, demonstrating that the multi-input convolutional neural network is a promising model for flower grading. Although data augmentation contributes to the model, accuracy is still limited by a lack of sample diversity. The majority of misclassifications arise from the medium grade. Image-processing-based bud detection is useful for reducing misclassification, increasing the accuracy of flower grading to approximately 93.9%.
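
    A rough sketch of the multi-input idea in PyTorch (layer sizes and the number of views per sample are assumptions, not the authors' model): each image of a flower sample passes through its own convolutional branch, and the branch features are concatenated before the grading classifier.

```python
import torch
import torch.nn as nn

class MultiInputCNN(nn.Module):
    def __init__(self, num_views=3, num_grades=3):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
                nn.Flatten(),
            )
            for _ in range(num_views)
        ])
        self.classifier = nn.Linear(32 * num_views, num_grades)

    def forward(self, views):                     # list of (N, 3, H, W) tensors
        feats = [b(v) for b, v in zip(self.branches, views)]
        return self.classifier(torch.cat(feats, dim=1))

model = MultiInputCNN()
views = [torch.randn(4, 3, 64, 64) for _ in range(3)]
print(model(views).shape)                         # (4, 3) grade logits
```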

  10. Convolutional Neural Network for Histopathological Analysis of Osteosarcoma.

    Science.gov (United States)

    Mishra, Rashika; Daescu, Ovidiu; Leavey, Patrick; Rakheja, Dinesh; Sengupta, Anita

    2018-03-01

    Pathologists often deal with high complexity and sometimes disagreement over osteosarcoma tumor classification due to cellular heterogeneity in the dataset. Segmentation and classification of histology tissue in H&E stained tumor image datasets is a challenging task because of intra-class variations, inter-class similarity, crowded context, and noisy data. In recent years, deep learning approaches have led to encouraging results in breast cancer and prostate cancer analysis. In this article, we propose a convolutional neural network (CNN) as a tool to improve the efficiency and accuracy of osteosarcoma tumor classification into tumor classes (viable tumor, necrosis) versus nontumor. The proposed CNN architecture contains eight learned layers: three sets of stacked pairs of convolutional layers interspersed with max pooling layers for feature extraction, and two fully connected layers, with data augmentation strategies to boost performance. The use of a neural network results in a higher average accuracy of 92% for the classification. We compare the proposed architecture with three existing and proven CNN architectures for image classification: AlexNet, LeNet, and VGGNet. We also provide a pipeline to calculate the percentage of necrosis in a given whole slide image. We conclude that the use of neural networks can assure both high accuracy and efficiency in osteosarcoma classification.
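
    The following PyTorch sketch mirrors the kind of layout described above: three blocks of two stacked convolutions with max pooling, followed by two fully connected layers and three output classes. Channel counts and the input tile size are assumptions for illustration, not the published architecture.

```python
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(),
        nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
    )

model = nn.Sequential(
    conv_block(3, 32),     # block 1
    conv_block(32, 64),    # block 2
    conv_block(64, 128),   # block 3
    nn.Flatten(),
    nn.Linear(128 * 16 * 16, 256), nn.ReLU(),   # assumes 128x128 input tiles
    nn.Linear(256, 3),     # viable tumor / necrosis / non-tumor
)

logits = model(torch.randn(4, 3, 128, 128))
print(logits.shape)   # (4, 3)
```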

  11. Efficient airport detection using region-based fully convolutional neural networks

    Science.gov (United States)

    Xin, Peng; Xu, Yuelei; Zhang, Xulei; Ma, Shiping; Li, Shuai; Lv, Chao

    2018-04-01

    This paper presents a model for airport detection using region-based fully convolutional neural networks. To achieve fast detection with high accuracy, we share the convolutional layers between the region proposal procedure and the airport detection procedure and use graphics processing units (GPUs) to speed up training and testing. Due to the lack of labeled data, we transfer the convolutional layers of a ZF net pretrained on ImageNet to initialize the shared convolutional layers, and then retrain the model using an alternating optimization training strategy. The proposed model has been tested on an airport dataset consisting of 600 images. Experiments show that the proposed method can distinguish airports in our dataset from similar background scenes almost in real time with high accuracy, which is much better than traditional methods.

  12. Dimensionality-varied convolutional neural network for spectral-spatial classification of hyperspectral data

    Science.gov (United States)

    Liu, Wanjun; Liang, Xuejian; Qu, Haicheng

    2017-11-01

    Hyperspectral image (HSI) classification is one of the most popular topics in the remote sensing community. Traditional and deep learning-based classification methods have been proposed constantly in recent years. In order to improve the classification accuracy and robustness, a dimensionality-varied convolutional neural network (DVCNN) was proposed in this paper. DVCNN was a novel deep architecture based on the convolutional neural network (CNN). The input of DVCNN was a set of 3D patches selected from the HSI which contained spectral-spatial joint information. In the following feature extraction process, each patch was transformed into several different 1D vectors by 3D convolution kernels, which were able to extract features from spectral-spatial data. The rest of DVCNN was about the same as a general CNN and processed the 2D matrix constituted by all the 1D data. Thus the DVCNN could not only extract more accurate and richer features than a CNN, but also fuse spectral-spatial information to improve classification accuracy. Moreover, the robustness of the network on water-absorption bands was enhanced in the process of spectral-spatial fusion by 3D convolution, and the calculation was simplified by the dimensionality-varied convolution. Experiments were performed on both the Indian Pines and Pavia University scene datasets, and the results showed that the classification accuracy of DVCNN improved by 32.87% on Indian Pines and by 19.63% on the Pavia University scene compared with a spectral-only CNN. The maximum accuracy improvement of DVCNN over other state-of-the-art HSI classification methods was 13.72%, and the robustness of DVCNN to noise on water-absorption bands was demonstrated.

  13. Seismic signal auto-detecing from different features by using Convolutional Neural Network

    Science.gov (United States)

    Huang, Y.; Zhou, Y.; Yue, H.; Zhou, S.

    2017-12-01

    We apply Convolutional Neural Networks to detect certain features of seismic data and compare their efficiency. The features include whether a signal is a seismic signal or noise, and the arrival times of the P and S phases; each feature corresponds to a Convolutional Neural Network. We first use the traditional STA/LTA method to recognize some events and then use template matching to find more events as a training set for the neural network. To make the training set more varied, we add noise to the seismic data and generate synthetic seismic data and noise. The 3-component raw signal and a time-frequency analysis are used as the input data for our neural network. Training is performed on GPUs to achieve efficient convergence. Our method improves the precision in comparison with STA/LTA and template matching. We will move to recurrent neural networks to see whether that kind of network is better at detecting the P and S phases.

  14. Very deep recurrent convolutional neural network for object recognition

    Science.gov (United States)

    Brahimi, Sourour; Ben Aoun, Najib; Ben Amar, Chokri

    2017-03-01

    In recent years, computer vision has become a very active field. This field includes methods for processing, analyzing, and understanding images. The most challenging problems in computer vision are image classification and object recognition. This paper presents a new approach for the object recognition task. This approach exploits the success of the Very Deep Convolutional Neural Network for object recognition. In fact, it improves the convolutional layers by adding recurrent connections. The proposed approach was evaluated on two object recognition benchmarks: Pascal VOC 2007 and CIFAR-10. The experimental results prove the efficiency of our method in comparison with state-of-the-art methods.

  15. Robust Vehicle Detection in Aerial Images Based on Cascaded Convolutional Neural Networks.

    Science.gov (United States)

    Zhong, Jiandan; Lei, Tao; Yao, Guangle

    2017-11-24

    Vehicle detection in aerial images is an important and challenging task. Traditionally, many target detection models based on a sliding-window approach were developed and achieved acceptable performance, but these models are time-consuming in the detection phase. Recently, with the great success of convolutional neural networks (CNNs) in computer vision, many state-of-the-art detectors have been designed based on deep CNNs. However, these CNN-based detectors are inefficient when applied to aerial image data because the existing CNN-based models struggle with small-size object detection and precise localization. To improve the detection accuracy without decreasing speed, we propose a CNN-based detection model combining two independent convolutional neural networks, where the first network is applied to generate a set of vehicle-like regions from multi-feature maps of different hierarchies and scales. Because the multi-feature maps combine the advantages of the deep and shallow convolutional layers, the first network performs well at locating small targets in aerial image data. Then, the generated candidate regions are fed into the second network for feature extraction and decision making. Comprehensive experiments are conducted on the Vehicle Detection in Aerial Imagery (VEDAI) dataset and the Munich vehicle dataset. The proposed cascaded detection model yields high performance, not only in detection accuracy but also in detection speed.

  16. Automatic classification of ovarian cancer types from cytological images using deep convolutional neural networks.

    Science.gov (United States)

    Wu, Miao; Yan, Chuanbo; Liu, Huiqiang; Liu, Qian

    2018-06-29

    Ovarian cancer is one of the most common gynecologic malignancies. Accurate classification of ovarian cancer types (serous carcinoma, mucinous carcinoma, endometrioid carcinoma, clear cell carcinoma) is an essential part of the differential diagnosis. Computer-aided diagnosis (CADx) can provide useful advice for pathologists to determine the diagnosis correctly. In our study, we employed a Deep Convolutional Neural Network (DCNN) based on AlexNet to automatically classify the different types of ovarian cancers from cytological images. The DCNN consists of five convolutional layers, three max pooling layers, and two fully connected layers. We then trained the model with two groups of input data separately: one was the original image data and the other was augmented image data produced by image enhancement and image rotation. The testing results were obtained by 10-fold cross-validation, showing that the accuracy of the classification models improved from 72.76% to 78.20% by using augmented images as training data. The developed scheme is useful for classifying ovarian cancers from cytological images. © 2018 The Author(s).

  17. Color encoding in biologically-inspired convolutional neural networks.

    Science.gov (United States)

    Rafegas, Ivet; Vanrell, Maria

    2018-05-11

    Convolutional Neural Networks have been proposed as suitable frameworks to model biological vision. Some of these artificial networks have shown representational properties that rival primate performance in object recognition. In this paper we explore how color is encoded in a trained artificial network. This is done by estimating a color selectivity index for each neuron, which allows us to describe the neuron's activity for color input stimuli. The index allows us to classify whether neurons are color selective or not, and whether they are selective to a single color or to a double color. We have determined that all five convolutional layers of the network have a large number of color-selective neurons. Color opponency clearly emerges in the first layer, presenting 4 main axes (Black-White, Red-Cyan, Blue-Yellow and Magenta-Green), but this is reduced and rotated as we go deeper into the network. In layer 2 we find a denser hue sampling of color neurons and opponency is reduced almost to one new main axis, the Bluish-Orangish, coinciding with the dataset bias. In layers 3, 4 and 5 color neurons are similar amongst themselves, presenting different types of neurons that detect specific colored objects (e.g., orangish faces), specific surrounds (e.g., blue sky) or specific colored or contrasted object-surround configurations (e.g., a blue blob in a green surround). Overall, our work concludes that color and shape representation are successively entangled through all the layers of the studied network, revealing certain parallelisms with the reported evidence in primate brains that can provide useful insight into intermediate hierarchical spatio-chromatic representations. Copyright © 2018 Elsevier Ltd. All rights reserved.

  18. Deformable image registration using convolutional neural networks

    Science.gov (United States)

    Eppenhof, Koen A. J.; Lafarge, Maxime W.; Moeskops, Pim; Veta, Mitko; Pluim, Josien P. W.

    2018-03-01

    Deformable image registration can be time-consuming and often needs extensive parameterization to perform well on a specific application. We present a step towards a registration framework based on a three-dimensional convolutional neural network. The network directly learns transformations between pairs of three-dimensional images. The outputs of the network are three maps for the x, y, and z components of a thin plate spline transformation grid. The network is trained on synthetic random transformations, which are applied to a small set of representative images for the desired application. Training therefore does not require manually annotated ground truth deformation information. The methodology is demonstrated on public data sets of inspiration-expiration lung CT image pairs, which come with annotated corresponding landmarks for evaluation of the registration accuracy. Advantages of this methodology are its fast registration times and its minimal parameterization.

  19. Air Temperature Error Correction Based on Solar Radiation in an Economical Meteorological Wireless Sensor Network.

    Science.gov (United States)

    Sun, Xingming; Yan, Shuangshuang; Wang, Baowei; Xia, Li; Liu, Qi; Zhang, Hui

    2015-07-24

    Air temperature (AT) is an extremely vital factor in meteorology, agriculture, the military, etc., being used for the prediction of weather disasters such as drought, flood and frost. Many efforts have been made to monitor the temperature of the atmosphere, such as automatic weather stations (AWS). Nevertheless, due to the high cost of specialized AT sensors, they cannot be deployed with a high spatial density. A novel method, a meteorological wireless sensor network relying on sensing nodes, has been proposed for the purpose of reducing the cost of AT monitoring. However, the temperature sensor on the sensing node can easily be influenced by environmental factors. Previous research has confirmed that there is a close relation between AT and solar radiation (SR). Therefore, this paper presents a method to decrease the error of sensed AT by taking SR into consideration. In this work, we analyzed all of the collected AT and SR data from May 2014 and found the numerical correspondence between the AT error (ATE) and SR. This corresponding relation was used to calculate the real-time ATE according to real-time SR and to correct the error of AT in other months.
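
    A minimal sketch of the correction idea: fit the relation between solar radiation (SR) and air-temperature error (ATE) on reference data, then use real-time SR to correct the sensed AT. A linear fit and the toy numbers below are assumptions for illustration; the paper derives its own numerical correspondence.

```python
import numpy as np

# reference period (e.g. May): solar radiation and the corresponding AT error
sr_ref  = np.array([120.0, 340.0, 560.0, 780.0, 900.0])   # W/m^2 (toy values)
ate_ref = np.array([0.3,   0.9,   1.6,   2.2,   2.6])     # sensed AT - true AT (°C)

slope, intercept = np.polyfit(sr_ref, ate_ref, deg=1)      # ATE ≈ slope*SR + intercept

def correct_at(sensed_at, sr_now):
    """Subtract the SR-predicted error from the sensed air temperature."""
    return sensed_at - (slope * sr_now + intercept)

print(correct_at(sensed_at=26.4, sr_now=650.0))
```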

  20. Air Temperature Error Correction Based on Solar Radiation in an Economical Meteorological Wireless Sensor Network

    Directory of Open Access Journals (Sweden)

    Xingming Sun

    2015-07-01

    Full Text Available Air temperature (AT) is an extremely vital factor in meteorology, agriculture, military, etc., being used for the prediction of weather disasters, such as drought, flood, frost, etc. Many efforts have been made to monitor the temperature of the atmosphere, like automatic weather stations (AWS). Nevertheless, due to the high cost of specialized AT sensors, they cannot be deployed within a large spatial density. A novel method named the meteorology wireless sensor network relying on a sensing node has been proposed for the purpose of reducing the cost of AT monitoring. However, the temperature sensor on the sensing node can be easily influenced by environmental factors. Previous research has confirmed that there is a close relation between AT and solar radiation (SR). Therefore, this paper presents a method to decrease the error of sensed AT, taking SR into consideration. In this work, we analyzed all of the collected data of AT and SR in May 2014 and found the numerical correspondence between AT error (ATE) and SR. This corresponding relation was used to calculate real-time ATE according to real-time SR and to correct the error of AT in other months.

  1. Korean letter handwritten recognition using deep convolutional neural network on android platform

    Science.gov (United States)

    Purnamawati, S.; Rachmawati, D.; Lumanauw, G.; Rahmat, R. F.; Taqyuddin, R.

    2018-03-01

    Currently, the popularity of Korean culture attracts many people to learn everything about Korea, particularly its language. To acquire the Korean language, every learner needs to be able to understand Korean non-Latin characters. A digital approach is needed in order to make the Korean learning process easier. This study is done by using a Deep Convolutional Neural Network (DCNN). The DCNN performs the recognition process on an image based on a model that has already been trained, such as the Inception-v3 model. Subsequently, a re-training process using a transfer learning technique with the trained and re-trained values of the model is carried out in order to develop a new model with better performance and without specific systematic errors. The testing accuracy of this research is 86.9%.

  2. Real-time object tracking system based on field-programmable gate array and convolution neural network

    Directory of Open Access Journals (Sweden)

    Congyi Lyu

    2016-12-01

    Full Text Available Vision-based object tracking has many applications in robotics, such as surveillance, navigation, motion capture, and so on. However, existing object tracking systems still suffer from the challenging problem of the high computational cost of the image processing algorithms. This problem can prevent current systems from being used in many robotic applications which have payload and power limitations, for example, micro air vehicles. In these applications, central processing unit- or graphics processing unit-based computers are not good choices due to their high weight and power consumption. To address this problem, this article proposes a real-time object tracking system based on a field-programmable gate array, a convolutional neural network, and visual servo technology. The time-consuming image processing algorithms, such as distortion correction, the color space converter, the Sobel edge detector, the Harris corner feature detector, and the convolutional neural network, were redesigned using the programmable gates in the field-programmable gate array. Based on the field-programmable gate array-based image processing, an image-based visual servo controller was designed to drive a two-degree-of-freedom manipulator to track the target in real time. Finally, experiments on the proposed system were performed to illustrate the effectiveness of the real-time object tracking system.

  3. Improved Iterative Decoding of Network-Channel Codes for Multiple-Access Relay Channel.

    Science.gov (United States)

    Majumder, Saikat; Verma, Shrish

    2015-01-01

    Cooperative communication using relay nodes is one of the most effective means of exploiting space diversity for low-cost nodes in a wireless network. In cooperative communication, users, besides communicating their own information, also relay the information of other users. In this paper we investigate a scheme where cooperation is achieved using a common relay node which performs network coding to provide space diversity for two information nodes transmitting to a base station. We propose a scheme which uses a Reed-Solomon error-correcting code for encoding the information bits at the user nodes and a convolutional code as the network code, instead of XOR-based network coding. Based on this encoder, we propose iterative soft decoding of the joint network-channel code by treating it as a concatenated Reed-Solomon convolutional code. Simulation results show a significant improvement in performance compared to an existing scheme based on compound codes.

  4. Random access to mobile networks with advanced error correction

    Science.gov (United States)

    Dippold, Michael

    1990-01-01

    A random access scheme for unreliable data channels is investigated in conjunction with an adaptive Hybrid-II Automatic Repeat Request (ARQ) scheme using Rate Compatible Punctured Codes (RCPC) for Forward Error Correction (FEC). A simple scheme with fixed frame length and equal slot sizes is chosen, and reservation is implicit in the first packet transmitted randomly in a free slot, similar to Reservation Aloha. This allows the further transmission of redundancy if the last decoding attempt failed. Results show that high channel utilization and superior throughput can be achieved with this scheme, which has quite low implementation complexity. For the example of an interleaved Rayleigh channel with soft decisions, utilization and mean delay are calculated. A utilization of 40 percent may be achieved for a frame with the number of slots equal to half the number of stations under high traffic load. The effects of feedback channel errors and some countermeasures are discussed.

  5. Beyond hypercorrection: remembering corrective feedback for low-confidence errors.

    Science.gov (United States)

    Griffiths, Lauren; Higham, Philip A

    2018-02-01

    Correcting errors based on corrective feedback is essential to successful learning. Previous studies have found that corrections to high-confidence errors are better remembered than low-confidence errors (the hypercorrection effect). The aim of this study was to investigate whether corrections to low-confidence errors can also be successfully retained in some cases. Participants completed an initial multiple-choice test consisting of control, trick and easy general-knowledge questions, rated their confidence after answering each question, and then received immediate corrective feedback. After a short delay, they were given a cued-recall test consisting of the same questions. In two experiments, we found high-confidence errors to control questions were better corrected on the second test compared to low-confidence errors - the typical hypercorrection effect. However, low-confidence errors to trick questions were just as likely to be corrected as high-confidence errors. Most surprisingly, we found that memory for the feedback and original responses, not confidence or surprise, were significant predictors of error correction. We conclude that for some types of material, there is an effortful process of elaboration and problem solving prior to making low-confidence errors that facilitates memory of corrective feedback.

  6. Classifying images using restricted Boltzmann machines and convolutional neural networks

    Science.gov (United States)

    Zhao, Zhijun; Xu, Tongde; Dai, Chenyu

    2017-07-01

    To improve the feature recognition ability of deep model transfer learning, we propose a hybrid deep transfer learning method for image classification based on restricted Boltzmann machines (RBM) and convolutional neural networks (CNNs). It integrates the learning abilities of the two models, performing subject classification by extracting structural higher-order statistical features of images. When the method transfers the trained convolutional neural networks to the target datasets, the fully-connected layers can be replaced by restricted Boltzmann machine layers; then the restricted Boltzmann machine layers and the Softmax classifier are retrained, and a BP neural network can be used to fine-tune the hybrid model. The restricted Boltzmann machine layers not only fully integrate the whole feature maps, but also learn the statistical features of the target datasets in the sense of maximum log-likelihood, thus removing the effects caused by the content differences between datasets. The experimental results show that the proposed method improves the accuracy of image classification, outperforming other methods on the Pascal VOC2007 and Caltech101 datasets.

  7. Error Correcting Codes

    Indian Academy of Sciences (India)

    Science and Automation at ... the Reed-Solomon code contained 223 bytes of data, (a byte ... then you have a data storage system with error correction, that ..... practical codes, storing such a table is infeasible, as it is generally too large.

  8. Deep learning for steganalysis via convolutional neural networks

    Science.gov (United States)

    Qian, Yinlong; Dong, Jing; Wang, Wei; Tan, Tieniu

    2015-03-01

    Current work on steganalysis for digital images is focused on the construction of complex handcrafted features. This paper proposes a new paradigm for steganalysis that learns features automatically via deep learning models. We propose a novel customized Convolutional Neural Network for steganalysis. The proposed model can capture the complex dependencies that are useful for steganalysis. Compared with existing schemes, this model can automatically learn feature representations with several convolutional layers. The feature extraction and classification steps are unified under a single architecture, which means the guidance of classification can be used during the feature extraction step. We demonstrate the effectiveness of the proposed model on three state-of-the-art spatial domain steganographic algorithms - HUGO, WOW, and S-UNIWARD. Compared to the Spatial Rich Model (SRM), our model achieves comparable performance on BOSSbase and on the realistic and large ImageNet database.

  9. Convolutional Encoder and Viterbi Decoder Using SOPC For Variable Constraint Length

    DEFF Research Database (Denmark)

    Kulkarni, Anuradha; Dnyaneshwar, Mantri; Prasad, Neeli R.

    2013-01-01

    The convolutional encoder and Viterbi decoder are basic and important blocks in any Code Division Multiple Access (CDMA) system. They are widely used in communication systems due to their error-correcting capability, but performance degrades with variable constraint length. In this context, to provide a detailed analysis, this paper deals with the implementation of a convolutional encoder and Viterbi decoder using a system on a programmable chip (SOPC). It uses variable constraint lengths of 7, 8 and 9 bits for 1/2 and 1/3 code rates. By analyzing the Viterbi algorithm it is seen that our algorithm has a better...
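
    For reference, a software sketch of the kind of encoder such hardware implements: a rate-1/2, constraint-length-7 convolutional encoder in plain Python. The generator polynomials 171/133 (octal) are a common choice and an assumption here, since the record does not state which polynomials are used; swapping in a different generator tuple changes the code rate and constraint length.

```python
G, K = (0o171, 0o133), 7      # assumed generators; len(G) sets the 1/len(G) code rate

def conv_encode(bits, gens=G, k=K):
    state, out = 0, []
    for b in bits + [0] * (k - 1):                   # tail bits flush the encoder
        state = ((state << 1) | b) & ((1 << k) - 1)  # k-bit shift register
        out += [bin(state & g).count("1") & 1 for g in gens]  # parity of tapped bits
    return out

print(conv_encode([1, 0, 1, 1, 0]))   # two output bits per input (plus tail) at rate 1/2
```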

  10. Convolutional Neural Networks for Human Activity Recognition Using Body-Worn Sensors

    Directory of Open Access Journals (Sweden)

    Fernando Moya Rueda

    2018-05-01

    Full Text Available Human activity recognition (HAR) is a classification task for recognizing human movements. Methods of HAR are of great interest as they have become tools for measuring occurrences and durations of human actions, which are the basis of smart assistive technologies and manual process analysis. Recently, deep neural networks have been deployed for HAR in the context of activities of daily living using multichannel time-series. These time-series are acquired from body-worn devices, which are composed of different types of sensors. The deep architectures process these measurements for finding basic and complex features in human corporal movements, and for classifying them into a set of human actions. As the devices are worn at different parts of the human body, we propose a novel deep neural network for HAR. This network handles sequence measurements from different body-worn devices separately. An evaluation of the architecture is performed on three datasets, the Opportunity, Pamap2, and an industrial dataset, outperforming the state of the art. In addition, different network configurations are also evaluated. We find that applying convolutions per sensor channel and per body-worn device improves the capabilities of convolutional neural networks (CNNs).

  11. Deep-Learning Convolutional Neural Networks Accurately Classify Genetic Mutations in Gliomas.

    Science.gov (United States)

    Chang, P; Grinband, J; Weinberg, B D; Bardis, M; Khy, M; Cadena, G; Su, M-Y; Cha, S; Filippi, C G; Bota, D; Baldi, P; Poisson, L M; Jain, R; Chow, D

    2018-05-10

    The World Health Organization has recently placed new emphasis on the integration of genetic information for gliomas. While tissue sampling remains the criterion standard, noninvasive imaging techniques may provide complementary insight into clinically relevant genetic mutations. Our aim was to train a convolutional neural network to independently predict underlying molecular genetic mutation status in gliomas with high accuracy and identify the most predictive imaging features for each mutation. MR imaging data and molecular information were retrospectively obtained from The Cancer Imaging Archives for 259 patients with either low- or high-grade gliomas. A convolutional neural network was trained to classify isocitrate dehydrogenase 1 (IDH1) mutation status, 1p/19q codeletion, and O6-methylguanine-DNA methyltransferase (MGMT) promoter methylation status. Principal component analysis of the final convolutional neural network layer was used to extract the key imaging features critical for successful classification. Classification had high accuracy: IDH1 mutation status, 94%; 1p/19q codeletion, 92%; and MGMT promoter methylation status, 83%. Each genetic category was also associated with distinctive imaging features such as definition of tumor margins, T1 and FLAIR suppression, extent of edema, extent of necrosis, and textural features. Our results indicate that for The Cancer Imaging Archives dataset, machine-learning approaches allow classification of individual genetic mutations of both low- and high-grade gliomas. We show that relevant MR imaging features acquired from an added dimensionality-reduction technique demonstrate that neural networks are capable of learning key imaging components without prior feature selection or human-directed training. © 2018 by American Journal of Neuroradiology.

  12. Opportunistic Error Correction for WLAN Applications

    NARCIS (Netherlands)

    Shao, X.; Schiphorst, Roelof; Slump, Cornelis H.

    2008-01-01

    The current error correction layer of IEEE 802.11a WLAN is designed for worst case scenarios, which often do not apply. In this paper, we propose a new opportunistic error correction layer based on Fountain codes and a resolution adaptive ADC. The key part in the new proposed system is that only

  13. Correction of the tip convolution effects in the imaging of nanostructures studied through scanning force microscopy

    International Nuclear Information System (INIS)

    Canet-Ferrer, Josep; Coronado, Eugenio; Forment-Aliaga, Alicia; Pinilla-Cienfuegos, Elena

    2014-01-01

    AFM images are always affected by artifacts arising from tip convolution effects, resulting in a decrease in the lateral resolution of this technique. The magnitude of such effects is described by means of geometrical considerations, thereby providing a better understanding of the convolution phenomenon. We demonstrate that, for a constant tip radius, the convolution error increases with the object height, mainly for the narrowest motifs. A certain influence of the object shape is observed between rectangular and elliptical objects of the same height. Such moderate differences are essentially expected among elongated objects; in contrast, they are reduced as the object aspect ratio is increased. Finally, we propose an algorithm to study the influence of the size, shape and aspect ratio of different nanometric motifs on a flat substrate. Indeed, with this algorithm, the study of convolution artifacts can be extended to any kind of motif, including real surface roughness. From the simulation results we demonstrate that in most cases the real motif's width can be estimated from AFM images without knowing its shape in detail. (paper)
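
    As a concrete illustration of why the apparent width grows with object height, a common geometric estimate (not the authors' algorithm) models the tip apex as a sphere of radius R: a step-like motif of height h ≤ R appears broadened by roughly sqrt(h(2R − h)) on each side.

```python
import numpy as np

def apparent_width(true_width_nm, height_nm, tip_radius_nm):
    """Spherical-apex dilation estimate; valid for feature heights up to the tip radius."""
    h = np.minimum(height_nm, tip_radius_nm)
    return true_width_nm + 2.0 * np.sqrt(h * (2.0 * tip_radius_nm - h))

# A 20 nm wide, 5 nm tall motif imaged with a 10 nm tip appears roughly 37 nm wide.
print(apparent_width(true_width_nm=20.0, height_nm=5.0, tip_radius_nm=10.0))
```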

  14. Training Convolutional Neural Networks for Translational Invariance on SAR ATR

    DEFF Research Database (Denmark)

    Malmgren-Hansen, David; Engholm, Rasmus; Østergaard Pedersen, Morten

    2016-01-01

    In this paper we present a comparison of the robustness of Convolutional Neural Networks (CNNs) with that of other classifiers in the presence of uncertainty in the object's localization in SAR images. We present a framework for simulating simple SAR images, translating the object of interest systematically...

  15. Design and Implementation of Convolutional Encoder and Viterbi Decoder Using FPGA.

    Directory of Open Access Journals (Sweden)

    Riham Ali Zbaid

    2018-01-01

    Full Text Available Preserving the fidelity of data is the most significant concern in communication. Many factors affect the accuracy of data transmitted over a communication channel, such as noise; channel coding is used to overcome these effects. The type of channel coding used in this paper is convolutional codes. Convolutional encoding is a Forward Error Correction (FEC) method used in continuous one-way and real-time communication links. It can offer a great improvement in bit error rates, enabling small, low-energy, and cheap transmission devices when used in applications such as satellites. This paper highlights the design, simulation and implementation of a convolutional encoder and Viterbi decoder using MATLAB (2011). The SIMULINK HDL Coder is used to convert MATLAB-SIMULINK models to VHDL on the Altera Cyclone II DE2-70 board. Simulation and evaluation of the implementation show that the results coincide with the design results.
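
    To complement the encoder sketch given earlier, below is a hard-decision Viterbi decoder in plain Python for the same assumed rate-1/2, constraint-length-7 code (generators 171/133 octal). It illustrates the add-compare-select recursion that FPGA implementations realize in hardware; a compact copy of the encoder is included so the round trip is runnable.

```python
G, K = (0o171, 0o133), 7

def conv_encode(bits, gens=G, k=K):
    state, out = 0, []
    for b in bits + [0] * (k - 1):                        # tail bits flush the encoder
        state = ((state << 1) | b) & ((1 << k) - 1)
        out += [bin(state & g).count("1") & 1 for g in gens]
    return out

def viterbi_decode(received, gens=G, k=K):
    n_states, n_out, INF = 1 << (k - 1), len(gens), float("inf")
    metrics = [0.0] + [INF] * (n_states - 1)              # encoder starts in state 0
    paths = [[] for _ in range(n_states)]
    for t in range(0, len(received), n_out):
        r = received[t:t + n_out]
        new_m, new_p = [INF] * n_states, [None] * n_states
        for s in range(n_states):
            if metrics[s] == INF:
                continue
            for b in (0, 1):
                reg = (s << 1) | b                        # k-bit register content
                expect = [bin(reg & g).count("1") & 1 for g in gens]
                m = metrics[s] + sum(e != x for e, x in zip(expect, r))  # Hamming metric
                ns = reg & (n_states - 1)                 # next trellis state
                if m < new_m[ns]:                         # add-compare-select
                    new_m[ns], new_p[ns] = m, paths[s] + [b]
        metrics, paths = new_m, new_p
    best = min(range(n_states), key=lambda s: metrics[s])
    return paths[best][:-(k - 1)]                         # drop the tail bits

msg = [1, 0, 1, 1, 0, 0, 1, 0]
rx = conv_encode(msg)
rx[3] ^= 1; rx[10] ^= 1                                   # inject two channel errors
assert viterbi_decode(rx) == msg
```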

  16. Convolutional neural networks for transient candidate vetting in large-scale surveys

    Science.gov (United States)

    Gieseke, Fabian; Bloemen, Steven; van den Bogaard, Cas; Heskes, Tom; Kindler, Jonas; Scalzo, Richard A.; Ribeiro, Valério A. R. M.; van Roestel, Jan; Groot, Paul J.; Yuan, Fang; Möller, Anais; Tucker, Brad E.

    2017-12-01

    Current synoptic sky surveys monitor large areas of the sky to find variable and transient astronomical sources. As the number of detections per night at a single telescope easily exceeds several thousand, current detection pipelines make intensive use of machine learning algorithms to classify the detected objects and to filter out the most interesting candidates. A number of upcoming surveys will produce up to three orders of magnitude more data, which renders high-precision classification systems essential to reduce the manual and, hence, expensive vetting by human experts. We present an approach based on convolutional neural networks to discriminate between true astrophysical sources and artefacts in reference-subtracted optical images. We show that relatively simple networks are already competitive with state-of-the-art systems and that their quality can further be improved via slightly deeper networks and additional pre-processing steps - eventually yielding models outperforming state-of-the-art systems. In particular, our best model correctly classifies about 97.3 per cent of all 'real' and 99.7 per cent of all 'bogus' instances on a test set containing 1942 'bogus' and 227 'real' instances in total. Furthermore, the networks considered in this work can also successfully classify these objects at hand without relying on difference images, which might pave the way for future detection pipelines not containing image subtraction steps at all.

  17. Error suppression and error correction in adiabatic quantum computation: non-equilibrium dynamics

    International Nuclear Information System (INIS)

    Sarovar, Mohan; Young, Kevin C

    2013-01-01

    While adiabatic quantum computing (AQC) has some robustness to noise and decoherence, it is widely believed that encoding, error suppression and error correction will be required to scale AQC to large problem sizes. Previous works have established at least two different techniques for error suppression in AQC. In this paper we derive a model for describing the dynamics of encoded AQC and show that previous constructions for error suppression can be unified with this dynamical model. In addition, the model clarifies the mechanisms of error suppression and allows the identification of its weaknesses. In the second half of the paper, we utilize our description of non-equilibrium dynamics in encoded AQC to construct methods for error correction in AQC by cooling local degrees of freedom (qubits). While this is shown to be possible in principle, we also identify the key challenge to this approach: the requirement of high-weight Hamiltonians. Finally, we use our dynamical model to perform a simplified thermal stability analysis of concatenated-stabilizer-code encoded many-body systems for AQC or quantum memories. This work is a companion paper to ‘Error suppression and error correction in adiabatic quantum computation: techniques and challenges (2013 Phys. Rev. X 3 041013)’, which provides a quantum information perspective on the techniques and limitations of error suppression and correction in AQC. In this paper we couch the same results within a dynamical framework, which allows for a detailed analysis of the non-equilibrium dynamics of error suppression and correction in encoded AQC. (paper)

  18. End-to-end unsupervised deformable image registration with a convolutional neural network

    NARCIS (Netherlands)

    de Vos, Bob D.; Berendsen, Floris; Viergever, Max A.; Staring, Marius; Išgum, Ivana

    2017-01-01

    In this work we propose a deep learning network for deformable image registration (DIRNet). The DIRNet consists of a convolutional neural network (ConvNet) regressor, a spatial transformer, and a resampler. The ConvNet analyzes a pair of fixed and moving images and outputs parameters for the spatial

  19. Error Correcting Codes

    Indian Academy of Sciences (India)

    Error Correcting Codes - Reed Solomon Codes. Priti Shankar, Department of Computer Science and Automation, Indian Institute of Science, Bangalore 560 012, India. Resonance – Journal of Science Education, Volume 2, Issue 3, March ...

  20. Brain tumor segmentation in multi-spectral MRI using convolutional neural networks (CNN).

    Science.gov (United States)

    Iqbal, Sajid; Ghani, M Usman; Saba, Tanzila; Rehman, Amjad

    2018-04-01

    A tumor could be found in any area of the brain and could be of any size, shape, and contrast. Multiple tumors of different types may exist in a human brain at the same time. Accurate tumor area segmentation is considered the primary step for treatment of brain tumors. Deep learning is a set of promising techniques that could provide better results than non-deep-learning techniques for segmenting the tumorous part inside a brain. This article presents a deep convolutional neural network (CNN) to segment brain tumors in MRIs. The proposed network uses the BRATS segmentation challenge dataset, which is composed of images obtained through four different modalities. Accordingly, we present an extended version of an existing network to solve the segmentation problem. The network architecture consists of multiple neural network layers connected in sequential order, with convolutional feature maps fed in at the peer level. Experimental results on the BRATS 2015 benchmark data show the usability of the proposed approach and its superiority over the other approaches in this area of research. © 2018 Wiley Periodicals, Inc.

  1. Defect detection and classification of galvanized stamping parts based on fully convolution neural network

    Science.gov (United States)

    Xiao, Zhitao; Leng, Yanyi; Geng, Lei; Xi, Jiangtao

    2018-04-01

    In this paper, a new convolutional neural network method is proposed for the inspection and classification of galvanized stamping parts. Firstly, all workpieces are divided into normal and defective by image processing, and then the region of interest (ROI) extracted from each defective workpiece is input to trained fully convolutional networks (FCN). The network uses end-to-end, pixel-to-pixel trained convolutions, currently the most advanced technique in semantic segmentation, and predicts a result for each pixel. Secondly, we mark different pixel values for the workpiece, defect and background in the training images, and use the pixel values and the number of pixels to recognize the defects in the output image. Finally, a threshold on the defect area, chosen according to the needs of the project, is set to achieve the specific classification of the workpiece. The experimental results show that the proposed method can successfully achieve defect detection and classification of galvanized stamping parts under ordinary camera and illumination conditions, and its accuracy can reach 99.6%. Moreover, it avoids complex image preprocessing and difficult feature extraction, and shows better adaptability.
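
    A small sketch of the post-processing step described above: given the per-pixel label map predicted by the FCN, count the defect pixels and apply a project-specific area threshold. The label values and threshold are assumptions for illustration.

```python
import numpy as np

def classify_from_mask(pred_mask, defect_label=2, area_threshold=50):
    """0 = background, 1 = workpiece, 2 = defect (assumed label encoding)."""
    defect_area = int(np.sum(pred_mask == defect_label))   # number of defect pixels
    return ("defective" if defect_area >= area_threshold else "normal"), defect_area

mask = np.zeros((64, 64), dtype=np.uint8)
mask[10:20, 10:17] = 2                                     # a 70-pixel defect region
print(classify_from_mask(mask))                            # ('defective', 70)
```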

  2. Siamese convolutional networks for tracking the spine motion

    Science.gov (United States)

    Liu, Yuan; Sui, Xiubao; Sun, Yicheng; Liu, Chengwei; Hu, Yong

    2017-09-01

    Deep learning models have demonstrated great success in various computer vision tasks such as image classification and object tracking. However, tracking the lumbar spine by digitalized video fluoroscopic imaging (DVFI), which can quantitatively analyze the motion of the spine to diagnose lumbar instability, has not yet been well developed due to the lack of a steady and robust tracking method. In this paper, we propose a novel visual tracking algorithm for lumbar vertebra motion based on a Siamese convolutional neural network (CNN) model. We train a fully-convolutional neural network offline to learn generic image features. The network is trained to learn a similarity function that compares the labeled target in the first frame with the candidate patches in the current frame. The similarity function returns a high score if the two images depict the same object. Once learned, the similarity function is used to track a previously unseen object without any online adaptation. In the current frame, tracking is performed by evaluating candidate rotated patches sampled around the previous frame's target position and presenting a rotated bounding box to locate the predicted target precisely. Results indicate that the proposed tracking method can detect the lumbar vertebra steadily and robustly. Especially for images with low contrast and cluttered background, the presented tracker can still achieve good tracking performance. Further, the proposed algorithm operates at high speed for real-time tracking.
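
    The scoring idea can be sketched in PyTorch in the spirit of fully-convolutional Siamese trackers (this is not the authors' exact network, and it omits the rotated-patch sampling): the same embedding branch is applied to the labelled target patch and to the search region, and their embeddings are cross-correlated to produce a similarity map whose peak gives the predicted displacement.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

embed = nn.Sequential(                       # shared, assumed embedding branch
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
)

target_patch = torch.randn(1, 1, 32, 32)     # labelled vertebra in the first frame
search_region = torch.randn(1, 1, 96, 96)    # region around the previous position

z = embed(target_patch)                      # (1, 32, 32, 32)
x = embed(search_region)                     # (1, 32, 96, 96)
score_map = F.conv2d(x, z)                   # cross-correlation -> (1, 1, 65, 65)
peak = score_map.flatten(2).argmax(-1)       # flattened index of the similarity peak
```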

  3. Open quantum systems and error correction

    Science.gov (United States)

    Shabani Barzegar, Alireza

    Quantum effects can be harnessed to manipulate information in a desired way. Quantum systems designed for this purpose suffer from harmful interactions with their surrounding environment and from inaccuracy in control forces. Engineering methods to combat errors in quantum devices is in high demand. In this thesis, I focus on realistic formulations of quantum error correction methods; a realistic formulation is one that incorporates experimental challenges. This thesis is presented in two sections, on open quantum systems and on quantum error correction. Chapters 2 and 3 cover the material on open quantum system theory: it is essential to first study a noise process and then to contemplate methods to cancel its effect. In the second chapter, I present the non-completely positive formulation of quantum maps. Most of these results are published in [Shabani and Lidar, 2009b,a], except a subsection on the geometric characterization of the positivity domain of a quantum map. The real-time formulation of the dynamics is the topic of the third chapter. After introducing the concept of the Markovian regime, a new post-Markovian quantum master equation is derived, published in [Shabani and Lidar, 2005a]. The section on quantum error correction is presented in chapters 4, 5, 6 and 7. In chapter 4, we introduce a generalized theory of decoherence-free subspaces and subsystems (DFSs), which do not require accurate initialization (published in [Shabani and Lidar, 2005b]). In chapter 5, we present a semidefinite program optimization approach to quantum error correction that yields codes and recovery procedures that are robust against significant variations in the noise channel. Our approach allows us to optimize the encoding, recovery, or both, and is amenable to approximations that significantly improve computational cost while retaining fidelity (see [Kosut et al., 2008] for a published version). Chapter 6 is devoted to a theory of quantum error correction (QEC

  4. Characterizing the velocity of a wandering black hole and properties of the surrounding medium using convolutional neural networks

    Science.gov (United States)

    González, J. A.; Guzmán, F. S.

    2018-03-01

    We present a method for estimating the velocity of a wandering black hole and the equation of state for the gas around it based on a catalog of numerical simulations. The method uses machine-learning methods based on convolutional neural networks applied to the classification of images resulting from numerical simulations. Specifically we focus on the supersonic velocity regime and choose the direction of the black hole to be parallel to its spin. We build a catalog of 900 simulations by numerically solving Euler's equations onto the fixed space-time background of a black hole, for two parameters: the adiabatic index Γ with values in the range [1.1, 5/3], and the asymptotic relative velocity of the black hole with respect to the surroundings v∞, with values within [0.2, 0.8]c. For each simulation we produce a 2D image of the gas density once the process of accretion has approached a stationary regime. The results obtained show that the implemented convolutional neural networks are able to correctly classify the adiabatic index 87.78% of the time within an uncertainty of ±0.0284, while the prediction of the velocity is correct 96.67% of the time within an uncertainty of ±0.03c. We expect that this combination of a massive number of numerical simulations and machine-learning methods will help us analyze more complicated scenarios related to future high-resolution observations of black holes, like those from the Event Horizon Telescope.

  5. Relay Backpropagation for Effective Learning of Deep Convolutional Neural Networks

    OpenAIRE

    Shen, Li; Lin, Zhouchen; Huang, Qingming

    2015-01-01

    Learning deeper convolutional neural networks has become a tendency in recent years. However, much empirical evidence suggests that performance improvement cannot be gained by simply stacking more layers. In this paper, we consider the issue from an information-theoretical perspective, and propose a novel method, Relay Backpropagation, which encourages the propagation of effective information through the network during the training stage. By virtue of the method, we achieved the first place in ILSVRC 2015...

  6. ELHnet: a convolutional neural network for classifying cochlear endolymphatic hydrops imaged with optical coherence tomography.

    Science.gov (United States)

    Liu, George S; Zhu, Michael H; Kim, Jinkyung; Raphael, Patrick; Applegate, Brian E; Oghalai, John S

    2017-10-01

    Detection of endolymphatic hydrops is important for diagnosing Meniere's disease, and can be performed non-invasively using optical coherence tomography (OCT) in animal models as well as potentially in the clinic. Here, we developed ELHnet, a convolutional neural network to classify endolymphatic hydrops in a mouse model using learned features from OCT images of mice cochleae. We trained ELHnet on 2159 training and validation images from 17 mice, using only the image pixels and observer-determined labels of endolymphatic hydrops as the inputs. We tested ELHnet on 37 images from 37 mice that were previously not used, and found that the neural network correctly classified 34 of the 37 mice. This demonstrates an improvement in performance from previous work on computer-aided classification of endolymphatic hydrops. To the best of our knowledge, this is the first deep CNN designed for endolymphatic hydrops classification.

  7. Network Intrusion Detection through Stacking Dilated Convolutional Autoencoders

    Directory of Open Access Journals (Sweden)

    Yang Yu

    2017-01-01

    Full Text Available Network intrusion detection is one of the most important parts of cyber security, protecting computer systems against malicious attacks. With the emergence of numerous sophisticated and new attacks, however, network intrusion detection techniques are facing several significant challenges. The overall objective of this study is to learn useful feature representations automatically and efficiently from large amounts of unlabeled raw network traffic data by using deep learning approaches. We propose a novel network intrusion model by stacking dilated convolutional autoencoders and evaluate our method on two new intrusion detection datasets. Several experiments were carried out to check the effectiveness of our approach. The comparative experimental results demonstrate that the proposed model can achieve considerably high performance which meets the demand for high accuracy and adaptability of network intrusion detection systems (NIDSs). It is quite promising to apply our model in large-scale and real-world network environments.
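
    One stage of the stacked model can be sketched as follows (an illustrative assumption, not the authors' exact architecture): dilated 1D convolutions over normalized traffic bytes enlarge the receptive field without pooling, and a decoder reconstructs the input so the stage can be trained without labels before stacking.

```python
import torch
import torch.nn as nn

class DilatedConvAE(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, channels, 3, padding=1, dilation=1), nn.ReLU(),
            nn.Conv1d(channels, channels, 3, padding=2, dilation=2), nn.ReLU(),
            nn.Conv1d(channels, channels, 3, padding=4, dilation=4), nn.ReLU(),
        )
        self.decoder = nn.Conv1d(channels, 1, 3, padding=1)

    def forward(self, x):
        return self.decoder(self.encoder(x))

ae = DilatedConvAE()
x = torch.rand(8, 1, 256)                      # 8 flows, 256 normalized header bytes
loss = nn.functional.mse_loss(ae(x), x)        # unsupervised reconstruction loss
```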

  8. Detection and recognition of bridge crack based on convolutional neural network

    Directory of Open Access Journals (Sweden)

    Honggong LIU

    2016-10-01

    Full Text Available Aiming at the backward manual visual inspection of bridge cracks in China, which carries a high risk coefficient, a digital and intelligent detection method that improves diagnostic efficiency and reduces the risk coefficient is studied. Combining machine vision and convolutional neural network technology, a Raspberry Pi is used to acquire and pre-process images, and the crack images are analyzed; the processing algorithm with the best detection and recognition performance is selected; the convolutional neural network (CNN) for crack classification is optimized; finally, a new intelligent crack detection method is put forward. The experimental results show that the system can find all cracks beyond the maximum limit and effectively identify the type of fracture, with a recognition rate above 90%. The study provides reference data for engineering inspection.

  9. Convolutional neural network architectures for predicting DNA–protein binding

    Science.gov (United States)

    Zeng, Haoyang; Edwards, Matthew D.; Liu, Ge; Gifford, David K.

    2016-01-01

    Motivation: Convolutional neural networks (CNN) have outperformed conventional methods in modeling the sequence specificity of DNA–protein binding. Yet inappropriate CNN architectures can yield poorer performance than simpler models. Thus an in-depth understanding of how to match CNN architecture to a given task is needed to fully harness the power of CNNs for computational biology applications. Results: We present a systematic exploration of CNN architectures for predicting DNA sequence binding using a large compendium of transcription factor datasets. We identify the best-performing architectures by varying CNN width, depth and pooling designs. We find that adding convolutional kernels to a network is important for motif-based tasks. We show the benefits of CNNs in learning rich higher-order sequence features, such as secondary motifs and local sequence context, by comparing network performance on multiple modeling tasks ranging in difficulty. We also demonstrate how careful construction of sequence benchmark datasets, using approaches that control potentially confounding effects like positional or motif strength bias, is critical in making fair comparisons between competing methods. We explore how to establish the sufficiency of training data for these learning tasks, and we have created a flexible cloud-based framework that permits the rapid exploration of alternative neural network architectures for problems in computational biology. Availability and Implementation: All the models analyzed are available at http://cnn.csail.mit.edu. Contact: gifford@mit.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27307608
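
    As a toy illustration of the basic building block examined above (widths, kernel counts and pooling are arbitrary choices, not any of the benchmarked architectures), a one-dimensional convolution over a one-hot-encoded DNA sequence acts as a learned motif scanner, followed by max pooling over positions and a classification layer.

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv1d(4, 32, kernel_size=12, padding=6),  # 4 channels: A, C, G, T
    nn.ReLU(),
    nn.AdaptiveMaxPool1d(1),                      # max over sequence positions
    nn.Flatten(),
    nn.Linear(32, 1),                             # binding / non-binding logit
)

one_hot = torch.zeros(2, 4, 101)                  # two sequences of length 101
one_hot[:, 0, :] = 1.0                            # toy input: all 'A'
print(model(one_hot).shape)                       # (2, 1)
```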

  10. Deep convolutional neural networks for detection of rail surface defects

    NARCIS (Netherlands)

    Faghih Roohi, S.; Hajizadeh, S.; Nunez Vicencio, Alfredo; Babuska, R.; De Schutter, B.H.K.; Estevez, Pablo A.; Angelov, Plamen P.; Del Moral Hernandez, Emilio

    2016-01-01

    In this paper, we propose a deep convolutional neural network solution to the analysis of image data for the detection of rail surface defects. The images are obtained from many hours of automated video recordings. This huge amount of data makes it impossible to manually inspect the images and

  11. The Convolutional Visual Network for Identification and Reconstruction of NOvA Events

    Energy Technology Data Exchange (ETDEWEB)

    Psihas, Fernanda [Indiana U.

    2017-11-22

    In 2016 the NOvA experiment released results for the observation of oscillations in the vμ and ve channels as well as ve cross section measurements using neutrinos from Fermilab’s NuMI beam. These and other measurements in progress rely on the accurate identification and reconstruction of the neutrino flavor and energy recorded by our detectors. This presentation describes the first application of convolutional neural network technology for event identification and reconstruction in particle detectors like NOvA. The Convolutional Visual Network (CVN) Algorithm was developed for identification, categorization, and reconstruction of NOvA events. It increased the selection efficiency of the ve appearance signal by 40% and studies show potential impact to the vμ disappearance analysis.

  12. New decoding methods of interleaved burst error-correcting codes

    Science.gov (United States)

    Nakano, Y.; Kasahara, M.; Namekawa, T.

    1983-04-01

    A probabilistic method of single burst error correction, using the syndrome correlation of subcodes which constitute the interleaved code, is presented. This method makes it possible to realize a high capability of burst error correction with less decoding delay. By generalizing this method it is possible to obtain a probabilistic method of multiple (m-fold) burst error correction. After estimating the burst error positions using the syndrome correlation of subcodes which are interleaved m-fold burst error detecting codes, this second method corrects erasure errors in each subcode and m-fold burst errors. The performance of these two methods is analyzed via computer simulation, and their effectiveness is demonstrated.
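
    The decoding methods above rely on interleaving to spread a channel burst across subcodes. The toy Python sketch below (not the paper's decoder) only illustrates that principle: a burst of length b in the interleaved stream touches each of the m subcodes in at most ceil(b/m) positions.

    # Toy demonstration of why interleaving helps against burst errors:
    # a burst of length b in the channel hits each of the m subcodes
    # in at most ceil(b/m) positions after deinterleaving.
    m, n = 4, 12              # interleaving depth (number of subcodes), subcode length
    codeword = [[(i, j) for j in range(n)] for i in range(m)]   # m subcode words

    # Interleave: transmit column by column
    channel_stream = [codeword[i][j] for j in range(n) for i in range(m)]

    # A burst corrupts 6 consecutive transmitted symbols
    burst_start, burst_len = 17, 6
    hit = set(channel_stream[burst_start:burst_start + burst_len])

    # Deinterleave and count corrupted positions per subcode
    errors_per_subcode = [sum(1 for sym in codeword[i] if sym in hit) for i in range(m)]
    print(errors_per_subcode)   # each subcode sees at most ceil(6/4) = 2 errors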

  13. Semantically Secure Symmetric Encryption with Error Correction for Distributed Storage

    Directory of Open Access Journals (Sweden)

    Juha Partala

    2017-01-01

    Full Text Available A distributed storage system (DSS) is a fundamental building block in many distributed applications. It applies linear network coding to achieve an optimal tradeoff between storage and repair bandwidth when node failures occur. Additively homomorphic encryption is compatible with linear network coding. The homomorphic property ensures that a linear combination of ciphertext messages decrypts to the same linear combination of the corresponding plaintext messages. In this paper, we construct a linearly homomorphic symmetric encryption scheme that is designed for a DSS. Our proposal provides simultaneous encryption and error correction by applying linear error correcting codes. We show its IND-CPA security for a limited number of messages based on binary Goppa codes and the following assumption: when dividing a scrambled generator matrix Ĝ into two parts Ĝ1 and Ĝ2, it is infeasible to distinguish Ĝ2 from random and to find a statistical connection between Ĝ1 and Ĝ2. Our infeasibility assumptions are closely related to those underlying the McEliece public key cryptosystem but are considerably weaker. We believe that the proposed problem has independent cryptographic interest.

  14. Image inpainting and super-resolution using non-local recursive deep convolutional network with skip connections

    Science.gov (United States)

    Liu, Miaofeng

    2017-07-01

    In recent years, deep convolutional neural networks have come into use for image inpainting and super-resolution in many fields. Unlike most earlier methods, which require prior knowledge of the locations of corrupted pixels, we propose a 20-layer fully convolutional network that learns an end-to-end mapping from a dataset of damaged/ground-truth subimage pairs, realizing non-local blind inpainting and super-resolution. Because images with severe corruption, or inpainting on low-resolution images, often defeat existing approaches, we also share parameters within local groups of layers to achieve spatial recursion and enlarge the receptive field. To ease the training of this deep network, skip connections between symmetric convolutional layers are designed. Experimental results show that the proposed method outperforms state-of-the-art methods under diverse corruption and low-resolution conditions, and it works well when performing super-resolution and image inpainting simultaneously.
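
    A much shallower tf.keras sketch of the skip-connection idea described above, with additive connections between symmetric convolutional layers; the depth and filter counts are illustrative and do not reproduce the 20-layer network of the record.

    # Sketch of a fully convolutional encoder-decoder with skip connections
    # between symmetric layers (depth reduced for brevity).
    import tensorflow as tf
    L = tf.keras.layers

    inp = L.Input(shape=(None, None, 3))          # accepts any image size
    e1 = L.Conv2D(64, 3, padding="same", activation="relu")(inp)
    e2 = L.Conv2D(64, 3, padding="same", activation="relu")(e1)
    e3 = L.Conv2D(64, 3, padding="same", activation="relu")(e2)
    d3 = L.Conv2D(64, 3, padding="same", activation="relu")(e3)
    d2 = L.Conv2D(64, 3, padding="same", activation="relu")(L.Add()([d3, e2]))  # skip e2 -> d2
    d1 = L.Conv2D(64, 3, padding="same", activation="relu")(L.Add()([d2, e1]))  # skip e1 -> d1
    out = L.Conv2D(3, 3, padding="same")(d1)      # restored image
    model = tf.keras.Model(inp, out)
    model.compile(optimizer="adam", loss="mse")   # Euclidean loss against ground truth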

  15. Shallow and deep convolutional networks for saliency prediction

    OpenAIRE

    Pan, Junting; Sayrol Clols, Elisa; Giró Nieto, Xavier; McGuinness, Kevin; O'Connor, Noel

    2016-01-01

    The prediction of salient areas in images has been traditionally addressed with hand-crafted features based on neuroscience principles. This paper, however, addresses the problem with a completely data-driven approach by training a convolutional neural network (convnet). The learning process is formulated as a minimization of a loss function that measures the Euclidean distance of the predicted saliency map with the provided ground truth. The recent publication of large datasets of saliency p...

  16. Errors and Correction of Precipitation Measurements in China

    Institute of Scientific and Technical Information of China (English)

    REN Zhihua; LI Mingqin

    2007-01-01

    In order to discover the range of various errors in Chinese precipitation measurements and seek a correction method, 30 precipitation evaluation stations were set up countrywide before 1993. All the stations are reference stations in China. To seek a correction method for wind-induced error, a precipitation correction instrument called the "horizontal precipitation gauge" was devised beforehand. Field intercomparison observations regarding 29,000 precipitation events have been conducted using one pit gauge, two elevated operational gauges and one horizontal gauge at the above 30 stations. The range of precipitation measurement errors in China is obtained by analysis of intercomparison measurement results. The distribution of random errors and systematic errors in precipitation measurements is studied in this paper. A correction method, especially for wind-induced errors, is developed. The results prove that a power-function correlation exists between the precipitation amount caught by the horizontal gauge and the absolute difference of observations implemented by the operational gauge and pit gauge. The correlation coefficient is 0.99. For operational observations, precipitation correction can be carried out only by parallel observation with a horizontal precipitation gauge. The precipitation accuracy after correction approaches that of the pit gauge. The correction method developed is simple and feasible.
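
    An illustrative numerical sketch of the reported power-function relation, fitted by least squares in log-log space; the sample values and variable names are synthetic and serve only to show the form of the correction.

    # Illustrative fit of a power-function relation between the horizontal-gauge
    # catch H and the absolute difference D between the operational gauge and
    # the pit gauge: D = a * H**b.  All values below are synthetic.
    import numpy as np

    H = np.array([0.5, 1.0, 2.0, 4.0, 8.0])      # horizontal gauge catch (mm), synthetic
    D = np.array([0.12, 0.25, 0.52, 1.05, 2.10]) # |operational - pit| (mm), synthetic

    b, log_a = np.polyfit(np.log(H), np.log(D), 1)   # linear fit in log-log space
    a = np.exp(log_a)

    def corrected(operational, horizontal):
        """Add the estimated wind-induced loss back onto the operational reading."""
        return operational + a * horizontal**b

    print(f"a = {a:.3f}, b = {b:.3f}")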

  17. Improved PPP Ambiguity Resolution Considering the Stochastic Characteristics of Atmospheric Corrections from Regional Networks

    Science.gov (United States)

    Li, Yihe; Li, Bofeng; Gao, Yang

    2015-01-01

    With the increased availability of regional reference networks, Precise Point Positioning (PPP) can achieve fast ambiguity resolution (AR) and precise positioning by assimilating the satellite fractional cycle biases (FCBs) and atmospheric corrections derived from these networks. In such processing, the atmospheric corrections are usually treated as deterministic quantities. This is however unrealistic since the estimated atmospheric corrections obtained from the network data are random and furthermore the interpolated corrections diverge from the realistic corrections. This paper is dedicated to the stochastic modelling of atmospheric corrections and analyzing their effects on the PPP AR efficiency. The random errors of the interpolated corrections are processed as two components: one is from the random errors of estimated corrections at reference stations, while the other arises from the atmospheric delay discrepancies between reference stations and users. The interpolated atmospheric corrections are then applied by users as pseudo-observations with the estimated stochastic model. Two data sets are processed to assess the performance of interpolated corrections with the estimated stochastic models. The results show that when the stochastic characteristics of interpolated corrections are properly taken into account, the successful fix rate reaches 93.3% within 5 min for a medium inter-station distance network and 80.6% within 10 min for a long inter-station distance network. PMID:26633400

  18. Fine-grained vehicle type recognition based on deep convolution neural networks

    Directory of Open Access Journals (Sweden)

    Hongcai CHEN

    2017-12-01

    Full Text Available Public security and traffic departments put forward higher requirements for the real-time performance and accuracy of vehicle type recognition in complex traffic scenes. Aiming at the problems of heavy demands on police forces, low retrieval efficiency, and the lack of intelligent means for dealing with false licenses, fake-plate vehicles and vehicles without plates, this paper proposes a fine-grained vehicle type recognition method based on GoogleNet deep convolutional neural networks. The filter sizes and numbers of the convolutional neural network are designed, the activation function and vehicle type classifier are optimally selected, and a new network framework is constructed for fine-grained vehicle type recognition. The experimental results show that the proposed method achieves 97% accuracy for fine-grained vehicle type recognition, a clear improvement over the original GoogleNet model. Moreover, the new model effectively reduces the number of training parameters and saves computer memory. Fine-grained vehicle type recognition can be used in intelligent traffic management, and has important theoretical research value and practical significance.

  19. Detection of bars in galaxies using a deep convolutional neural network

    Science.gov (United States)

    Abraham, Sheelu; Aniyan, A. K.; Kembhavi, Ajit K.; Philip, N. S.; Vaghmare, Kaustubh

    2018-06-01

    We present an automated method for the detection of bar structure in optical images of galaxies using a deep convolutional neural network that is easy to use and provides good accuracy. In our study, we use a sample of 9346 galaxies in the redshift range of 0.009-0.2 from the Sloan Digital Sky Survey (SDSS), which has 3864 barred galaxies, the rest being unbarred. We reach a top precision of 94 per cent in identifying bars in galaxies using the trained network. This accuracy matches the accuracy reached by human experts on the same data without additional information about the images. Since deep convolutional neural networks can be scaled to handle large volumes of data, the method is expected to have great relevance in an era where astronomy data is rapidly increasing in terms of volume, variety, volatility, and velocity along with other V's that characterize big data. With the trained model, we have constructed a catalogue of barred galaxies from SDSS and made it available online.

  20. View-invariant gait recognition method by three-dimensional convolutional neural network

    Science.gov (United States)

    Xing, Weiwei; Li, Ying; Zhang, Shunli

    2018-01-01

    Gait as an important biometric feature can identify a human at a long distance. View change is one of the most challenging factors for gait recognition. To address the cross view issues in gait recognition, we propose a view-invariant gait recognition method by three-dimensional (3-D) convolutional neural network. First, 3-D convolutional neural network (3DCNN) is introduced to learn view-invariant feature, which can capture the spatial information and temporal information simultaneously on normalized silhouette sequences. Second, a network training method based on cross-domain transfer learning is proposed to solve the problem of the limited gait training samples. We choose the C3D as the basic model, which is pretrained on the Sports-1M and then fine-tune C3D model to adapt gait recognition. In the recognition stage, we use the fine-tuned model to extract gait features and use Euclidean distance to measure the similarity of gait sequences. Sufficient experiments are carried out on the CASIA-B dataset and the experimental results demonstrate that our method outperforms many other methods.

  1. Low-complexity object detection with deep convolutional neural network for embedded systems

    Science.gov (United States)

    Tripathi, Subarna; Kang, Byeongkeun; Dane, Gokce; Nguyen, Truong

    2017-09-01

    We investigate low-complexity convolutional neural networks (CNNs) for object detection for embedded vision applications. It is well known that consolidation of an embedded system for CNN-based object detection is more challenging than for problems like image classification because of the computation and memory requirements. To achieve these requirements, we design and develop an end-to-end TensorFlow (TF)-based fully convolutional deep neural network for the generic object detection task, inspired by one of the fastest frameworks, YOLO. The proposed network predicts the localization of every object by regressing the coordinates of the corresponding bounding box, as in YOLO. Hence, the network is able to detect objects without any limitation on their size. However, unlike YOLO, all the layers in the proposed network are fully convolutional. Thus, it is able to take input images of any size. We pick face detection as a use case. We evaluate the proposed model for face detection on the FDDB dataset and the Widerface dataset. As another use case of generic object detection, we evaluate its performance on the PASCAL VOC dataset. The experimental results demonstrate that the proposed network can predict object instances of different sizes and poses in a single frame. Moreover, the results show that the proposed method achieves accuracy comparable to the state-of-the-art CNN-based object detection methods while reducing the model size by 3× and memory bandwidth by 3-4× compared with one of the best real-time CNN-based object detectors, YOLO. Our 8-bit fixed-point TF model provides an additional 4× memory reduction while keeping the accuracy nearly as good as the floating-point model. Moreover, the fixed-point model is capable of achieving 20× faster inference speed compared with the floating-point model. Thus, the proposed method is promising for embedded implementations.
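
    A minimal NumPy sketch of symmetric 8-bit fixed-point quantization of a weight tensor, illustrating the kind of conversion the record's 8-bit model refers to; it is not the authors' toolchain, and the per-tensor scaling shown here is one common choice among several.

    # Symmetric 8-bit quantization of a weight tensor with a single per-tensor scale.
    import numpy as np

    def quantize_int8(w):
        scale = np.max(np.abs(w)) / 127.0          # map the largest weight to +/-127
        q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
        return q, scale

    def dequantize(q, scale):
        return q.astype(np.float32) * scale

    w = np.random.randn(64, 3, 3, 3).astype(np.float32)   # e.g. a conv kernel
    q, s = quantize_int8(w)
    print("max abs error:", np.max(np.abs(dequantize(q, s) - w)))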

  2. A Study of Recurrent and Convolutional Neural Networks in the Native Language Identification Task

    KAUST Repository

    Werfelmann, Robert

    2018-01-01

    around the world. The neural network models consisted of Long Short-Term Memory and Convolutional networks using the sentences of each document as the input. Additional statistical features were generated from the text to complement the predictions

  3. Weed Growth Stage Estimator Using Deep Convolutional Neural Networks.

    Science.gov (United States)

    Teimouri, Nima; Dyrmann, Mads; Nielsen, Per Rydahl; Mathiassen, Solvejg Kopp; Somerville, Gayle J; Jørgensen, Rasmus Nyholm

    2018-05-16

    This study outlines a new method of automatically estimating weed species and growth stages (from cotyledon until eight leaves are visible) of in situ images covering 18 weed species or families. Images of weeds growing within a variety of crops were gathered across variable environmental conditions with regards to soil types, resolution and light settings. Then, 9649 of these images were used for training the computer, which automatically divided the weeds into nine growth classes. The performance of this proposed convolutional neural network approach was evaluated on a further set of 2516 images, which also varied in terms of crop, soil type, image resolution and light conditions. The overall performance of this approach achieved a maximum accuracy of 78% for identifying Polygonum spp. and a minimum accuracy of 46% for blackgrass. In addition, it achieved an average 70% accuracy rate in estimating the number of leaves and 96% accuracy when accepting a deviation of two leaves. These results show that this new method of using deep convolutional neural networks has a relatively high ability to estimate early growth stages across a wide variety of weed species.

  4. Paediatric frontal chest radiograph screening with fine-tuned convolutional neural networks

    CSIR Research Space (South Africa)

    Gerrand, Jonathan D

    2017-07-01

    Full Text Available of fine-tuned convolutional neural networks (CNN). We use two popular CNN models that are pre-trained on a large natural image dataset and two distinct datasets containing paediatric and adult radiographs respectively. Evaluation is performed using a 5...

  5. Error Correction for Non-Abelian Topological Quantum Computation

    Directory of Open Access Journals (Sweden)

    James R. Wootton

    2014-03-01

    Full Text Available The possibility of quantum computation using non-Abelian anyons has been considered for over a decade. However, the question of how to obtain and process information about what errors have occurred in order to negate their effects has not yet been considered. This is in stark contrast with quantum computation proposals for Abelian anyons, for which decoding algorithms have been tailor-made for many topological error-correcting codes and error models. Here, we address this issue by considering the properties of non-Abelian error correction, in general. We also choose a specific anyon model and error model to probe the problem in more detail. The anyon model is the charge submodel of D(S_3). This shares many properties with important models such as the Fibonacci anyons, making our method more generally applicable. The error model is a straightforward generalization of those used in the case of Abelian anyons for initial benchmarking of error correction methods. It is found that error correction is possible under a threshold value of 7% for the total probability of an error on each physical spin. This is remarkably comparable with the thresholds for Abelian models.

  6. Correction of refractive errors

    Directory of Open Access Journals (Sweden)

    Vladimir Pfeifer

    2005-10-01

    Full Text Available Background: Spectacles and contact lenses are the most frequently used, safest and cheapest means of correcting refractive errors. The development of keratorefractive surgery has brought new opportunities for correcting refractive errors in patients who need to be less dependent on spectacles or contact lenses. Until recently, RK was the most commonly performed refractive procedure for nearsighted patients. Conclusions: The introduction of the excimer laser in refractive surgery has provided new ways of remodelling the cornea. The laser energy can be delivered on the stromal surface, as in PRK, or deeper in the corneal stroma by means of lamellar surgery. In LASIK the flap is created with a microkeratome, in LASEK with ethanol, and in epi-LASIK the ultra-thin flap is created mechanically.

  7. A pre-trained convolutional neural network based method for thyroid nodule diagnosis.

    Science.gov (United States)

    Ma, Jinlian; Wu, Fa; Zhu, Jiang; Xu, Dong; Kong, Dexing

    2017-01-01

    In ultrasound images, most thyroid nodules have heterogeneous appearances with various internal components and also have vague boundaries, so it is difficult for physicians to discriminate malignant thyroid nodules from benign ones. In this study, we propose a hybrid method for thyroid nodule diagnosis, which is a fusion of two pre-trained convolutional neural networks (CNNs) with different convolutional layers and fully-connected layers. Firstly, the two networks pre-trained on the ImageNet database are separately trained. Secondly, we fuse feature maps learned by the trained convolutional filters, pooling and normalization operations of the two CNNs. Finally, with the fused feature maps, a softmax classifier is used to diagnose thyroid nodules. The proposed method is validated on 15,000 ultrasound images collected from two local hospitals. Experimental results show that the proposed CNN-based method can accurately and effectively diagnose thyroid nodules. In addition, the fusion of the two CNN-based models leads to significant performance improvement, with an accuracy of 83.02%±0.72%. These results demonstrate the potential clinical applications of this method. Copyright © 2016 Elsevier B.V. All rights reserved.
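
    A tf.keras sketch of the general fusion pattern described above: features from two ImageNet-pretrained CNNs are concatenated and fed to a softmax classifier. The backbone choices, input size and omission of per-backbone preprocessing are illustrative assumptions, not the networks used in the study.

    # Fuse pooled features from two pretrained CNNs and classify with softmax.
    # Backbone-specific preprocessing is omitted for brevity.
    import tensorflow as tf

    inp = tf.keras.Input(shape=(224, 224, 3))
    f1 = tf.keras.applications.VGG16(include_top=False, weights="imagenet", pooling="avg")(inp)
    f2 = tf.keras.applications.ResNet50(include_top=False, weights="imagenet", pooling="avg")(inp)
    fused = tf.keras.layers.Concatenate()([f1, f2])              # fused feature vector
    out = tf.keras.layers.Dense(2, activation="softmax")(fused)  # e.g. benign vs malignant
    model = tf.keras.Model(inp, out)
    model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])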

  8. Continuous quantum error correction for non-Markovian decoherence

    International Nuclear Information System (INIS)

    Oreshkov, Ognyan; Brun, Todd A.

    2007-01-01

    We study the effect of continuous quantum error correction in the case where each qubit in a codeword is subject to a general Hamiltonian interaction with an independent bath. We first consider the scheme in the case of a trivial single-qubit code, which provides useful insights into the workings of continuous error correction and the difference between Markovian and non-Markovian decoherence. We then study the model of a bit-flip code with each qubit coupled to an independent bath qubit and subject to continuous correction, and find its solution. We show that for sufficiently large error-correction rates, the encoded state approximately follows an evolution of the type of a single decohering qubit, but with an effectively decreased coupling constant. The factor by which the coupling constant is decreased scales quadratically with the error-correction rate. This is compared to the case of Markovian noise, where the decoherence rate is effectively decreased by a factor which scales only linearly with the rate of error correction. The quadratic enhancement depends on the existence of a Zeno regime in the Hamiltonian evolution which is absent in purely Markovian dynamics. We analyze the range of validity of this result and identify two relevant time scales. Finally, we extend the result to more general codes and argue that the performance of continuous error correction will exhibit the same qualitative characteristics

  9. Semantic Segmentation of Convolutional Neural Network for Supervised Classification of Multispectral Remote Sensing

    Science.gov (United States)

    Xue, L.; Liu, C.; Wu, Y.; Li, H.

    2018-04-01

    Semantic segmentation is a fundamental research topic in remote sensing image processing. Because of the complex maritime environment, classifying roads, vegetation, buildings and water from remote sensing imagery is a challenging task. Although neural networks have achieved excellent semantic segmentation performance in recent years, few works use CNNs for ground object segmentation and the results could be further improved. This paper uses a convolutional neural network named U-Net, whose structure has a contracting path and an expansive path to produce high-resolution output. In the network we added batch normalization (BN) layers, which are more conducive to backward propagation. Moreover, after the upsampling convolutions we add dropout layers to prevent overfitting. Together these changes yield more precise segmentation results. To verify this network architecture, we used a Kaggle dataset. Experimental results show that U-Net achieved good performance compared with other architectures, especially on high-resolution remote sensing imagery.
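
    A minimal tf.keras sketch of a U-Net-style segment with the two modifications described above (batch normalization after the convolutions, dropout after the upsampling convolution); the depth, channel counts and five-class output are illustrative assumptions.

    # Tiny U-Net-style block: BN after convolutions, dropout after upsampling.
    import tensorflow as tf
    L = tf.keras.layers

    def conv_bn(x, filters):
        x = L.Conv2D(filters, 3, padding="same", use_bias=False)(x)
        x = L.BatchNormalization()(x)
        return L.ReLU()(x)

    inp = L.Input(shape=(256, 256, 3))
    c1 = conv_bn(inp, 32)
    p1 = L.MaxPooling2D()(c1)
    c2 = conv_bn(p1, 64)                            # bottom of the "U"
    u1 = L.Conv2DTranspose(32, 2, strides=2, padding="same")(c2)
    u1 = L.Dropout(0.5)(u1)                         # dropout after the upsampling convolution
    m1 = L.Concatenate()([u1, c1])                  # skip connection from the contracting path
    c3 = conv_bn(m1, 32)
    out = L.Conv2D(5, 1, activation="softmax")(c3)  # e.g. road/vegetation/building/water/other
    model = tf.keras.Model(inp, out)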

  10. DeepNAT: Deep convolutional neural network for segmenting neuroanatomy.

    Science.gov (United States)

    Wachinger, Christian; Reuter, Martin; Klein, Tassilo

    2018-04-15

    We introduce DeepNAT, a 3D deep convolutional neural network for the automatic segmentation of NeuroAnaTomy in T1-weighted magnetic resonance images. DeepNAT is an end-to-end learning-based approach to brain segmentation that jointly learns an abstract feature representation and a multi-class classification. We propose a 3D patch-based approach, where we predict not only the center voxel of the patch but also its neighbors, which is formulated as multi-task learning. To address a class imbalance problem, we arrange two networks hierarchically, where the first one separates foreground from background, and the second one identifies 25 brain structures on the foreground. Since patches lack spatial context, we augment them with coordinates. To this end, we introduce a novel intrinsic parameterization of the brain volume, formed by eigenfunctions of the Laplace-Beltrami operator. As network architecture, we use three convolutional layers with pooling, batch normalization, and non-linearities, followed by fully connected layers with dropout. The final segmentation is inferred from the probabilistic output of the network with a 3D fully connected conditional random field, which ensures label agreement between close voxels. The roughly 2.7 million parameters in the network are learned with stochastic gradient descent. Our results show that DeepNAT compares favorably to state-of-the-art methods. Finally, the purely learning-based method may have a high potential for the adaptation to young, old, or diseased brains by fine-tuning the pre-trained network with a small training sample on the target application, where the availability of larger datasets with manual annotations may boost the overall segmentation accuracy in the future. Copyright © 2017 Elsevier Inc. All rights reserved.

  11. Deep convolutional neural networks for automatic classification of gastric carcinoma using whole slide images in digital histopathology.

    Science.gov (United States)

    Sharma, Harshita; Zerbe, Norman; Klempert, Iris; Hellwich, Olaf; Hufnagl, Peter

    2017-11-01

    Deep learning using convolutional neural networks is an actively emerging field in histological image analysis. This study explores deep learning methods for computer-aided classification in H&E stained histopathological whole slide images of gastric carcinoma. An introductory convolutional neural network architecture is proposed for two computerized applications, namely, cancer classification based on immunohistochemical response and necrosis detection based on the existence of tumor necrosis in the tissue. Classification performance of the developed deep learning approach is quantitatively compared with traditional image analysis methods in digital histopathology requiring prior computation of handcrafted features, such as statistical measures using gray level co-occurrence matrix, Gabor filter-bank responses, LBP histograms, gray histograms, HSV histograms and RGB histograms, followed by random forest machine learning. Additionally, the widely known AlexNet deep convolutional framework is comparatively analyzed for the corresponding classification problems. The proposed convolutional neural network architecture reports favorable results, with an overall classification accuracy of 0.6990 for cancer classification and 0.8144 for necrosis detection. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Unitary Application of the Quantum Error Correction Codes

    International Nuclear Information System (INIS)

    You Bo; Xu Ke; Wu Xiaohua

    2012-01-01

    For applying the perfect code to transmit quantum information over a noisy channel, the standard protocol contains four steps: the encoding, the noise channel, the error-correction operation, and the decoding. In the present work, we show that this protocol can be simplified. The error-correction operation is not necessary if the decoding is realized by the so-called complete unitary transformation. We also offer a quantum circuit, which can correct arbitrary single-qubit errors.

  13. Multi-Branch Fully Convolutional Network for Face Detection

    KAUST Repository

    Bai, Yancheng

    2017-07-20

    Face detection is a fundamental problem in computer vision. It is still a challenging task in unconstrained conditions due to significant variations in scale, pose, expressions, and occlusion. In this paper, we propose a multi-branch fully convolutional network (MB-FCN) for face detection, which considers both efficiency and effectiveness in the design process. Our MB-FCN detector can deal with faces at all scale ranges with only a single pass through the backbone network. As such, our MB-FCN model saves computation and thus is more efficient, compared to previous methods that make multiple passes. For each branch, the specific skip connections of the convolutional feature maps at different layers are exploited to represent faces in specific scale ranges. Specifically, small faces can be represented with both shallow fine-grained and deep powerful coarse features. With this representation, superior improvement in performance is registered for the task of detecting small faces. We test our MB-FCN detector on two public face detection benchmarks, including FDDB and WIDER FACE. Extensive experiments show that our detector outperforms state-of-the-art methods on all these datasets in general and by a substantial margin on the most challenging among them (e.g. WIDER FACE Hard subset). Also, MB-FCN runs at 15 FPS on a GPU for images of size 640 x 480 with no assumption on the minimum detectable face size.

  14. Spatial and Time Domain Feature of ERP Speller System Extracted via Convolutional Neural Network.

    Science.gov (United States)

    Yoon, Jaehong; Lee, Jungnyun; Whang, Mincheol

    2018-01-01

    The features of the event-related potential (ERP) are not completely understood, and the illiteracy problem remains unsolved. To date, the P300 peak has been used as the ERP feature in most brain-computer interface applications, but subjects who do not show such a peak are common. The recent development of convolutional neural networks provides a way to analyze the spatial and temporal features of the ERP. Here, we train a convolutional neural network with two convolutional layers whose feature maps represent the spatial and temporal features of the event-related potential. We found that nonilliterate subjects' ERPs show high correlation between the occipital and parietal lobes, whereas illiterate subjects only show correlation between neural activities from the frontal and central lobes. The nonilliterates showed peaks at P300, P500, and P700, whereas illiterates mostly showed peaks around P700. P700 was strong in both groups. We found that the P700 peak may be the key feature of the ERP, as it appears in both illiterate and nonilliterate subjects.

  15. Training Deep Convolutional Neural Networks with Resistive Cross-Point Devices.

    Science.gov (United States)

    Gokmen, Tayfun; Onen, Murat; Haensch, Wilfried

    2017-01-01

    In a previous work we have detailed the requirements for obtaining maximal deep learning performance benefit by implementing fully connected deep neural networks (DNN) in the form of arrays of resistive devices. Here we extend the concept of Resistive Processing Unit (RPU) devices to convolutional neural networks (CNNs). We show how to map the convolutional layers to fully connected RPU arrays such that the parallelism of the hardware can be fully utilized in all three cycles of the backpropagation algorithm. We find that the noise and bound limitations imposed by the analog nature of the computations performed on the arrays significantly affect the training accuracy of the CNNs. Noise and bound management techniques are presented that mitigate these problems without introducing any additional complexity in the analog circuits and that can be addressed by the digital circuits. In addition, we discuss digitally programmable update management and device variability reduction techniques that can be used selectively for some of the layers in a CNN. We show that a combination of all those techniques enables a successful application of the RPU concept for training CNNs. The techniques discussed here are more general and can be applied beyond CNN architectures and therefore enables applicability of the RPU approach to a large class of neural network architectures.

  16. Training Deep Convolutional Neural Networks with Resistive Cross-Point Devices

    Science.gov (United States)

    Gokmen, Tayfun; Onen, Murat; Haensch, Wilfried

    2017-01-01

    In a previous work we have detailed the requirements for obtaining maximal deep learning performance benefit by implementing fully connected deep neural networks (DNN) in the form of arrays of resistive devices. Here we extend the concept of Resistive Processing Unit (RPU) devices to convolutional neural networks (CNNs). We show how to map the convolutional layers to fully connected RPU arrays such that the parallelism of the hardware can be fully utilized in all three cycles of the backpropagation algorithm. We find that the noise and bound limitations imposed by the analog nature of the computations performed on the arrays significantly affect the training accuracy of the CNNs. Noise and bound management techniques are presented that mitigate these problems without introducing any additional complexity in the analog circuits and that can be addressed by the digital circuits. In addition, we discuss digitally programmable update management and device variability reduction techniques that can be used selectively for some of the layers in a CNN. We show that a combination of all those techniques enables a successful application of the RPU concept for training CNNs. The techniques discussed here are more general and can be applied beyond CNN architectures and therefore enables applicability of the RPU approach to a large class of neural network architectures. PMID:29066942

  17. Convolutional neural networks applied to neutrino events in a liquid argon time projection chamber

    International Nuclear Information System (INIS)

    Acciarri, R.; Adams, C.; An, R.; Asaadi, J.; Auger, M.

    2017-01-01

    Here, we present several studies of convolutional neural networks applied to data coming from the MicroBooNE detector, a liquid argon time projection chamber (LArTPC). The algorithms studied include the classification of single particle images, the localization of single particle and neutrino interactions in an image, and the detection of a simulated neutrino event overlaid with cosmic ray backgrounds taken from real detector data. These studies demonstrate the potential of convolutional neural networks for particle identification or event detection on simulated neutrino interactions. Lastly, we also address technical issues that arise when applying this technique to data from a large LArTPC at or near ground level.

  18. Convolutional neural networks applied to neutrino events in a liquid argon time projection chamber

    Energy Technology Data Exchange (ETDEWEB)

    Acciarri, R.; Adams, C.; An, R.; Asaadi, J.; Auger, M.; Bagby, L.; Baller, B.; Barr, G.; Bass, M.; Bay, F.; Bishai, M.; Blake, A.; Bolton, T.; Bugel, L.; Camilleri, L.; Caratelli, D.; Carls, B.; Fernandez, R. Castillo; Cavanna, F.; Chen, H.; Church, E.; Cianci, D.; Collin, G. H.; Conrad, J. M.; Convery, M.; Crespo-Anadón, J. I.; Del Tutto, M.; Devitt, D.; Dytman, S.; Eberly, B.; Ereditato, A.; Sanchez, L. Escudero; Esquivel, J.; Fleming, B. T.; Foreman, W.; Furmanski, A. P.; Garvey, G. T.; Genty, V.; Goeldi, D.; Gollapinni, S.; Graf, N.; Gramellini, E.; Greenlee, H.; Grosso, R.; Guenette, R.; Hackenburg, A.; Hamilton, P.; Hen, O.; Hewes, J.; Hill, C.; Ho, J.; Horton-Smith, G.; James, C.; de Vries, J. Jan; Jen, C. -M.; Jiang, L.; Johnson, R. A.; Jones, B. J. P.; Joshi, J.; Jostlein, H.; Kaleko, D.; Karagiorgi, G.; Ketchum, W.; Kirby, B.; Kirby, M.; Kobilarcik, T.; Kreslo, I.; Laube, A.; Li, Y.; Lister, A.; Littlejohn, B. R.; Lockwitz, S.; Lorca, D.; Louis, W. C.; Luethi, M.; Lundberg, B.; Luo, X.; Marchionni, A.; Mariani, C.; Marshall, J.; Caicedo, D. A. Martinez; Meddage, V.; Miceli, T.; Mills, G. B.; Moon, J.; Mooney, M.; Moore, C. D.; Mousseau, J.; Murrells, R.; Naples, D.; Nienaber, P.; Nowak, J.; Palamara, O.; Paolone, V.; Papavassiliou, V.; Pate, S. F.; Pavlovic, Z.; Porzio, D.; Pulliam, G.; Qian, X.; Raaf, J. L.; Rafique, A.; Rochester, L.; von Rohr, C. Rudolf; Russell, B.; Schmitz, D. W.; Schukraft, A.; Seligman, W.; Shaevitz, M. H.; Sinclair, J.; Snider, E. L.; Soderberg, M.; Söldner-Rembold, S.; Soleti, S. R.; Spentzouris, P.; Spitz, J.; St. John, J.; Strauss, T.; Szelc, A. M.; Tagg, N.; Terao, K.; Thomson, M.; Toups, M.; Tsai, Y. -T.; Tufanli, S.; Usher, T.; Van de Water, R. G.; Viren, B.; Weber, M.; Weston, J.; Wickremasinghe, D. A.; Wolbers, S.; Wongjirad, T.; Woodruff, K.; Yang, T.; Zeller, G. P.; Zennamo, J.; Zhang, C.

    2017-03-01

    We present several studies of convolutional neural networks applied to data coming from the MicroBooNE detector, a liquid argon time projection chamber (LArTPC). The algorithms studied include the classification of single particle images, the localization of single particle and neutrino interactions in an image, and the detection of a simulated neutrino event overlaid with cosmic ray backgrounds taken from real detector data. These studies demonstrate the potential of convolutional neural networks for particle identification or event detection on simulated neutrino interactions. We also address technical issues that arise when applying this technique to data from a large LArTPC at or near ground level.

  19. Automatic segmentation of MR brain images with a convolutional neural network

    NARCIS (Netherlands)

    Moeskops, P.; Viergever, M.A.; Mendrik, A.M.; de Vries, L.S.; Benders, M.J.N.L.; Išgum, I.

    2016-01-01

    Automatic segmentation in MR brain images is important for quantitative analysis in large-scale studies with images acquired at all ages. This paper presents a method for the automatic segmentation of MR brain images into a number of tissue classes using a convolutional neural network. To ensure

  20. Time-dependent phase error correction using digital waveform synthesis

    Science.gov (United States)

    Doerry, Armin W.; Buskirk, Stephen

    2017-10-10

    The various technologies presented herein relate to correcting a time-dependent phase error generated as part of the formation of a radar waveform. A waveform can be pre-distorted to facilitate correction of an error induced into the waveform by a downstream operation/component in a radar system. For example, an amplifier power droop effect can engender a time-dependent phase error in a waveform as part of a radar signal generating operation. The error can be quantified and a corresponding complementary distortion can be applied to the waveform to facilitate negation of the error during the subsequent processing of the waveform. A time domain correction can be applied by a phase error correction lookup table incorporated into a waveform phase generator.
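
    A NumPy sketch of the pre-distortion idea: the waveform is multiplied by the negative of a known time-dependent phase error so that the downstream error cancels. The chirp parameters and the droop-like error profile below are made-up illustrations, not values from the patent.

    # Pre-distort a chirp with the negative of a known time-dependent phase error
    # (e.g. from amplifier power droop) so the error cancels downstream.
    import numpy as np

    n = 4096
    t = np.linspace(0.0, 1e-3, n)                      # 1 ms pulse
    chirp = np.exp(1j * np.pi * 1e9 * t**2)            # baseband LFM waveform

    phase_error_lut = 0.3 * (1.0 - np.exp(-t / 2e-4))  # droop-like phase error (rad), assumed
    predistorted = chirp * np.exp(-1j * phase_error_lut)

    # After the (modeled) downstream error is applied, the phases cancel:
    received = predistorted * np.exp(1j * phase_error_lut)
    print(np.max(np.abs(np.angle(received * np.conj(chirp)))))   # ~0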

  1. Correcting quantum errors with entanglement.

    Science.gov (United States)

    Brun, Todd; Devetak, Igor; Hsieh, Min-Hsiu

    2006-10-20

    We show how entanglement shared between encoder and decoder can simplify the theory of quantum error correction. The entanglement-assisted quantum codes we describe do not require the dual-containing constraint necessary for standard quantum error-correcting codes, thus allowing us to "quantize" all of classical linear coding theory. In particular, efficient modern classical codes that attain the Shannon capacity can be made into entanglement-assisted quantum codes attaining the hashing bound (closely related to the quantum capacity). For systems without large amounts of shared entanglement, these codes can also be used as catalytic codes, in which a small amount of initial entanglement enables quantum communication.

  2. Abnormality Detection in Mammography using Deep Convolutional Neural Networks

    OpenAIRE

    Xi, Pengcheng; Shu, Chang; Goubran, Rafik

    2018-01-01

    Breast cancer is the most common cancer in women worldwide. The most common screening technology is mammography. To reduce the cost and workload of radiologists, we propose a computer aided detection approach for classifying and localizing calcifications and masses in mammogram images. To improve on conventional approaches, we apply deep convolutional neural networks (CNN) for automatic feature learning and classifier building. In computer-aided mammography, deep CNN classifiers cannot be tra...

  3. Recurrent Convolutional Neural Networks: A Better Model of Biological Object Recognition.

    Science.gov (United States)

    Spoerer, Courtney J; McClure, Patrick; Kriegeskorte, Nikolaus

    2017-01-01

    Feedforward neural networks provide the dominant model of how the brain performs visual object recognition. However, these networks lack the lateral and feedback connections, and the resulting recurrent neuronal dynamics, of the ventral visual pathway in the human and non-human primate brain. Here we investigate recurrent convolutional neural networks with bottom-up (B), lateral (L), and top-down (T) connections. Combining these types of connections yields four architectures (B, BT, BL, and BLT), which we systematically test and compare. We hypothesized that recurrent dynamics might improve recognition performance in the challenging scenario of partial occlusion. We introduce two novel occluded object recognition tasks to test the efficacy of the models, digit clutter (where multiple target digits occlude one another) and digit debris (where target digits are occluded by digit fragments). We find that recurrent neural networks outperform feedforward control models (approximately matched in parametric complexity) at recognizing objects, both in the absence of occlusion and in all occlusion conditions. Recurrent networks were also found to be more robust to the inclusion of additive Gaussian noise. Recurrent neural networks are better in two respects: (1) they are more neurobiologically realistic than their feedforward counterparts; (2) they are better in terms of their ability to recognize objects, especially under challenging conditions. This work shows that computer vision can benefit from using recurrent convolutional architectures and suggests that the ubiquitous recurrent connections in biological brains are essential for task performance.

  4. Effective image differencing with convolutional neural networks for real-time transient hunting

    Science.gov (United States)

    Sedaghat, Nima; Mahabal, Ashish

    2018-06-01

    Large sky surveys are increasingly relying on image subtraction pipelines for real-time (and archival) transient detection. In this process one has to contend with a varying point-spread function (PSF) and small brightness variations in many sources, as well as artefacts resulting from saturated stars and, in general, matching errors. Very often the differencing is done with a reference image that is deeper than individual images and the attendant difference in noise characteristics can also lead to artefacts. We present here a deep-learning approach to transient detection that encapsulates all the steps of a traditional image-subtraction pipeline - image registration, background subtraction, noise removal, PSF matching and subtraction - in a single real-time convolutional network. Once trained, the method works lightning-fast and, given that it performs multiple steps in one go, the time saved and false positives eliminated for multi-CCD surveys like Zwicky Transient Facility and Large Synoptic Survey Telescope will be immense, as millions of subtractions will be needed per night.

  5. Co-trained convolutional neural networks for automated detection of prostate cancer in multi-parametric MRI.

    Science.gov (United States)

    Yang, Xin; Liu, Chaoyue; Wang, Zhiwei; Yang, Jun; Min, Hung Le; Wang, Liang; Cheng, Kwang-Ting Tim

    2017-12-01

    Multi-parameter magnetic resonance imaging (mp-MRI) is increasingly popular for prostate cancer (PCa) detection and diagnosis. However, interpreting mp-MRI data which typically contains multiple unregistered 3D sequences, e.g. apparent diffusion coefficient (ADC) and T2-weighted (T2w) images, is time-consuming and demands special expertise, limiting its usage for large-scale PCa screening. Therefore, solutions to computer-aided detection of PCa in mp-MRI images are highly desirable. Most recent advances in automated methods for PCa detection employ a handcrafted feature based two-stage classification flow, i.e. voxel-level classification followed by a region-level classification. This work presents an automated PCa detection system which can concurrently identify the presence of PCa in an image and localize lesions based on deep convolutional neural network (CNN) features and a single-stage SVM classifier. Specifically, the developed co-trained CNNs consist of two parallel convolutional networks for ADC and T2w images respectively. Each network is trained using images of a single modality in a weakly-supervised manner by providing a set of prostate images with image-level labels indicating only the presence of PCa without priors of lesions' locations. Discriminative visual patterns of lesions can be learned effectively from clutters of prostate and surrounding tissues. A cancer response map with each pixel indicating the likelihood to be cancerous is explicitly generated at the last convolutional layer of the network for each modality. A new back-propagated error E is defined to enforce both optimized classification results and consistent cancer response maps for different modalities, which help capture highly representative PCa-relevant features during the CNN feature learning process. The CNN features of each modality are concatenated and fed into a SVM classifier. For images which are classified to contain cancers, non-maximum suppression and adaptive

  6. Cardiac Arrhythmia Classification by Multi-Layer Perceptron and Convolution Neural Networks

    Directory of Open Access Journals (Sweden)

    Shalin Savalia

    2018-05-01

    Full Text Available The electrocardiogram (ECG) plays an imperative role in the medical field, as it records the heart signal over time and is used to discover numerous cardiovascular diseases. If a documented ECG signal has a certain irregularity in its predefined features, this is called arrhythmia, the types of which include tachycardia, bradycardia, supraventricular arrhythmias, and ventricular arrhythmias. This has encouraged us to do research that consists of distinguishing between several arrhythmias by using deep neural network algorithms such as the multi-layer perceptron (MLP) and the convolution neural network (CNN). The TensorFlow library that was established by Google for deep learning and machine learning is used in Python to implement the algorithms proposed here. The ECG databases accessible at PhysioBank.com and kaggle.com were used for training, testing, and validation of the MLP and CNN algorithms. The proposed algorithm consists of four hidden layers with weights and biases in the MLP, and four-layer convolution neural networks which map ECG samples to the different classes of arrhythmia. The accuracy of the algorithm surpasses the performance of current algorithms developed by other cardiologists in both sensitivity and precision.

  7. Cardiac Arrhythmia Classification by Multi-Layer Perceptron and Convolution Neural Networks.

    Science.gov (United States)

    Savalia, Shalin; Emamian, Vahid

    2018-05-04

    The electrocardiogram (ECG) plays an imperative role in the medical field, as it records the heart signal over time and is used to discover numerous cardiovascular diseases. If a documented ECG signal has a certain irregularity in its predefined features, this is called arrhythmia, the types of which include tachycardia, bradycardia, supraventricular arrhythmias, and ventricular arrhythmias. This has encouraged us to do research that consists of distinguishing between several arrhythmias by using deep neural network algorithms such as multi-layer perceptron (MLP) and convolution neural network (CNN). The TensorFlow library that was established by Google for deep learning and machine learning is used in Python to implement the algorithms proposed here. The ECG databases accessible at PhysioBank.com and kaggle.com were used for training, testing, and validation of the MLP and CNN algorithms. The proposed algorithm consists of four hidden layers with weights and biases in the MLP, and four-layer convolution neural networks which map ECG samples to the different classes of arrhythmia. The accuracy of the algorithm surpasses the performance of current algorithms developed by other cardiologists in both sensitivity and precision.
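
    A tf.keras sketch of a four-hidden-layer MLP that maps fixed-length ECG segments to arrhythmia classes, in the spirit of the MLP described above; the segment length, layer widths and number of classes are illustrative assumptions, not the authors' exact configuration.

    # Four-hidden-layer MLP mapping fixed-length ECG segments to arrhythmia classes.
    import tensorflow as tf

    segment_len, n_classes = 187, 5                  # illustrative values
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(segment_len,)),
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])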

  8. Synthetic bootstrapping of convolutional neural networks for semantic plant part segmentation

    NARCIS (Netherlands)

    Barth, R.; IJsselmuiden, J.; Hemming, J.; Henten, Van E.J.

    2017-01-01

    A current bottleneck of state-of-the-art machine learning methods for image segmentation in agriculture, e.g. convolutional neural networks (CNNs), is the requirement of large manually annotated datasets on a per-pixel level. In this paper, we investigated how related synthetic images can be used to

  9. Detection of high-grade small bowel obstruction on conventional radiography with convolutional neural networks.

    Science.gov (United States)

    Cheng, Phillip M; Tejura, Tapas K; Tran, Khoa N; Whang, Gilbert

    2018-05-01

    The purpose of this pilot study is to determine whether a deep convolutional neural network can be trained with limited image data to detect high-grade small bowel obstruction patterns on supine abdominal radiographs. Grayscale images from 3663 clinical supine abdominal radiographs were categorized into obstructive and non-obstructive categories independently by three abdominal radiologists, and the majority classification was used as ground truth; 74 images were found to be consistent with small bowel obstruction. Images were rescaled and randomized, with 2210 images constituting the training set (39 with small bowel obstruction) and 1453 images constituting the test set (35 with small bowel obstruction). Weight parameters for the final classification layer of the Inception v3 convolutional neural network, previously trained on the 2014 Large Scale Visual Recognition Challenge dataset, were retrained on the training set. After training, the neural network achieved an AUC of 0.84 on the test set (95% CI 0.78-0.89). At the maximum Youden index (sensitivity + specificity-1), the sensitivity of the system for small bowel obstruction is 83.8%, with a specificity of 68.1%. The results demonstrate that transfer learning with convolutional neural networks, even with limited training data, may be used to train a detector for high-grade small bowel obstruction gas patterns on supine radiographs.

  10. Mapping and correction of the CMM workspace error with the use of an electronic gyroscope and neural networks--practical application.

    Science.gov (United States)

    Swornowski, Pawel J

    2013-01-01

    The article presents the application of neural networks in determining and correction of the deformation of a coordinate measuring machine (CMM) workspace. The information about the CMM errors is acquired using an ADXRS401 electronic gyroscope. A test device (PS-20 module) was built and integrated with a commercial measurement system based on the SP25M passive scanning probe and with a PH10M module (Renishaw). The proposed solution was tested on a Kemco 600 CMM and on a DEA Global Clima CMM. In the former case, correction of the CMM errors was performed using the source code of WinIOS software owned by The Institute of Advanced Manufacturing Technology, Cracow, Poland and in the latter on an external PC. Optimum parameters of full and simplified mapping of a given layer of the CMM workspace were determined for practical applications. The proposed method can be employed for the interim check (ISO 10360-2 procedure) or to detect local CMM deformations, occurring when the CMM works at high scanning speeds (>20 mm/s). © Wiley Periodicals, Inc.

  11. Convolutional neural networks for segmentation and object detection of human semen

    DEFF Research Database (Denmark)

    Nissen, Malte Stær; Krause, Oswin; Almstrup, Kristian

    2017-01-01

    We compare a set of convolutional neural network (CNN) architectures for the task of segmenting and detecting human sperm cells in an image taken from a semen sample. In contrast to previous work, samples are not stained or washed to allow for full sperm quality analysis, making analysis harder due...

  12. Iterative optimization of quantum error correcting codes

    International Nuclear Information System (INIS)

    Reimpell, M.; Werner, R.F.

    2005-01-01

    We introduce a convergent iterative algorithm for finding the optimal coding and decoding operations for an arbitrary noisy quantum channel. This algorithm does not require any error syndrome to be corrected completely, and hence also finds codes outside the usual Knill-Laflamme definition of error correcting codes. The iteration is shown to improve the figure of merit 'channel fidelity' in every step

  13. Matching of Remote Sensing Images with Complex Background Variations via Siamese Convolutional Neural Network

    Directory of Open Access Journals (Sweden)

    Haiqing He

    2018-02-01

    Full Text Available Feature-based matching methods have been widely used in remote sensing image matching given their capability to achieve excellent performance despite image geometric and radiometric distortions. However, most of the feature-based methods are unreliable for complex background variations, because the gradient or other image grayscale information used to construct the feature descriptor is sensitive to image background variations. Recently, deep learning-based methods have been proven suitable for high-level feature representation and comparison in image matching. Inspired by the progresses made in deep learning, a new technical framework for remote sensing image matching based on the Siamese convolutional neural network is presented in this paper. First, a Siamese-type network architecture is designed to simultaneously learn the features and the corresponding similarity metric from labeled training examples of matching and non-matching true-color patch pairs. In the proposed network, two streams of convolutional and pooling layers sharing identical weights are arranged without the manually designed features. The number of convolutional layers is determined based on the factors that affect image matching. The sigmoid function is employed to compute the matching and non-matching probabilities in the output layer. Second, a gridding sub-pixel Harris algorithm is used to obtain the accurate localization of candidate matches. Third, a Gaussian pyramid coupling quadtree is adopted to gradually narrow down the searching space of the candidate matches, and multiscale patches are compared synchronously. Subsequently, a similarity measure based on the output of the sigmoid is adopted to find the initial matches. Finally, the random sample consensus algorithm and the whole-to-local quadratic polynomial constraints are used to remove false matches. In the experiments, different types of satellite datasets, such as ZY3, GF1, IKONOS, and Google Earth images
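
    A tf.keras sketch of the Siamese pattern described above: two patches pass through a single weight-shared convolutional branch, and a sigmoid output gives the matching probability. The patch size, filter counts and concatenation-based similarity head are illustrative assumptions, not the network of the paper.

    # Siamese matching sketch: one weight-shared branch, sigmoid match probability.
    import tensorflow as tf
    L = tf.keras.layers

    def branch():
        return tf.keras.Sequential([
            L.Conv2D(32, 3, activation="relu"),
            L.MaxPooling2D(),
            L.Conv2D(64, 3, activation="relu"),
            L.MaxPooling2D(),
            L.Flatten(),
            L.Dense(128, activation="relu"),
        ])

    shared = branch()                                 # one set of weights for both streams
    a = L.Input(shape=(64, 64, 3))
    b = L.Input(shape=(64, 64, 3))
    fa, fb = shared(a), shared(b)
    merged = L.Concatenate()([fa, fb])
    match_prob = L.Dense(1, activation="sigmoid")(merged)
    model = tf.keras.Model([a, b], match_prob)
    model.compile(optimizer="adam", loss="binary_crossentropy")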

  14. Rank error-correcting pairs

    DEFF Research Database (Denmark)

    Martinez Peñas, Umberto; Pellikaan, Ruud

    2017-01-01

    Error-correcting pairs were introduced as a general method of decoding linear codes with respect to the Hamming metric using coordinatewise products of vectors, and are used for many well-known families of codes. In this paper, we define new types of vector products, extending the coordinatewise ...

  15. Hierarchical graphical-based human pose estimation via local multi-resolution convolutional neural network

    Science.gov (United States)

    Zhu, Aichun; Wang, Tian; Snoussi, Hichem

    2018-03-01

    This paper addresses the problems of the graphical-based human pose estimation in still images, including the diversity of appearances and confounding background clutter. We present a new architecture for estimating human pose using a Convolutional Neural Network (CNN). Firstly, a Relative Mixture Deformable Model (RMDM) is defined by each pair of connected parts to compute the relative spatial information in the graphical model. Secondly, a Local Multi-Resolution Convolutional Neural Network (LMR-CNN) is proposed to train and learn the multi-scale representation of each body parts by combining different levels of part context. Thirdly, a LMR-CNN based hierarchical model is defined to explore the context information of limb parts. Finally, the experimental results demonstrate the effectiveness of the proposed deep learning approach for human pose estimation.

  16. Hierarchical graphical-based human pose estimation via local multi-resolution convolutional neural network

    Directory of Open Access Journals (Sweden)

    Aichun Zhu

    2018-03-01

    Full Text Available This paper addresses the problems of graphical-based human pose estimation in still images, including the diversity of appearances and confounding background clutter. We present a new architecture for estimating human pose using a Convolutional Neural Network (CNN). Firstly, a Relative Mixture Deformable Model (RMDM) is defined by each pair of connected parts to compute the relative spatial information in the graphical model. Secondly, a Local Multi-Resolution Convolutional Neural Network (LMR-CNN) is proposed to train and learn the multi-scale representation of each body part by combining different levels of part context. Thirdly, an LMR-CNN-based hierarchical model is defined to explore the context information of limb parts. Finally, the experimental results demonstrate the effectiveness of the proposed deep learning approach for human pose estimation.

  17. A New Missing Values Estimation Algorithm in Wireless Sensor Networks Based on Convolution

    Directory of Open Access Journals (Sweden)

    Feng Liu

    2013-04-01

    Full Text Available Nowadays, with the rapid development of Internet of Things (IoT) applications, the data missing phenomenon has become very common in wireless sensor networks. This problem can greatly and directly threaten the stability and usability of Internet of Things applications that are constructed based on wireless sensor networks. How to estimate the missing values has attracted wide interest, and some solutions have been proposed. Different from previous works, in this paper we propose a new convolution-based missing value estimation algorithm. Convolution theory, which is usually used in the area of signal and image processing, can also be a practical and efficient way to estimate missing sensor data. The results show that the proposed algorithm is practical and effective, and can estimate the missing values accurately.
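
    The record above does not give the algorithm's details, but the basic idea of estimating a missing reading from its convolved neighbors can be sketched with a normalized convolution in NumPy; the kernel and the sample trace are assumptions.

      import numpy as np

      def estimate_missing(series, kernel=np.array([1.0, 2.0, 3.0, 2.0, 1.0])):
          """Fill NaN readings by convolving the known samples with a smoothing kernel
          and normalizing by the convolved availability mask (normalized convolution)."""
          x = np.asarray(series, dtype=float)
          mask = ~np.isnan(x)                                   # 1 where a reading exists
          num = np.convolve(np.where(mask, x, 0.0), kernel, mode="same")
          den = np.convolve(mask.astype(float), kernel, mode="same")
          est = np.divide(num, den, out=np.zeros_like(num), where=den > 0)
          return np.where(mask, x, est)                         # keep observed values, fill gaps

      # Example: a temperature trace with two dropped packets.
      readings = [21.0, 21.3, np.nan, 21.9, 22.1, np.nan, 22.6]
      print(estimate_missing(readings))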

  18. Deep Convolutional Neural Networks for Classifying Body Constitution Based on Face Image.

    Science.gov (United States)

    Huan, Er-Yang; Wen, Gui-Hua; Zhang, Shi-Jun; Li, Dan-Yang; Hu, Yang; Chang, Tian-Yuan; Wang, Qing; Huang, Bing-Lin

    2017-01-01

    Body constitution classification is the basis and core content of traditional Chinese medicine constitution research. Its aim is to extract the relevant laws from the complex constitution phenomenon and ultimately to build a constitution classification system. Traditional identification methods, such as questionnaires, have the disadvantages of inefficiency and low accuracy. This paper proposes a body constitution recognition algorithm based on a deep convolutional neural network, which can classify individual constitution types according to face images. The proposed model first uses the convolutional neural network to extract features of the face image and then combines the extracted features with color features. Finally, the fused features are input to a Softmax classifier to obtain the classification result. Comparison experiments show that the proposed algorithm can achieve an accuracy of 65.29% for constitution classification, and its performance was accepted by Chinese medicine practitioners.

  19. Training Deep Convolutional Neural Networks with Resistive Cross-Point Devices

    Directory of Open Access Journals (Sweden)

    Tayfun Gokmen

    2017-10-01

    Full Text Available In a previous work we have detailed the requirements for obtaining maximal deep learning performance benefit by implementing fully connected deep neural networks (DNNs) in the form of arrays of resistive devices. Here we extend the concept of Resistive Processing Unit (RPU) devices to convolutional neural networks (CNNs). We show how to map the convolutional layers to fully connected RPU arrays such that the parallelism of the hardware can be fully utilized in all three cycles of the backpropagation algorithm. We find that the noise and bound limitations imposed by the analog nature of the computations performed on the arrays significantly affect the training accuracy of the CNNs. Noise and bound management techniques are presented that mitigate these problems without introducing any additional complexity in the analog circuits and that can be addressed by the digital circuits. In addition, we discuss digitally programmable update management and device variability reduction techniques that can be used selectively for some of the layers in a CNN. We show that a combination of all those techniques enables a successful application of the RPU concept for training CNNs. The techniques discussed here are more general and can be applied beyond CNN architectures, and therefore enable the applicability of the RPU approach to a large class of neural network architectures.
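
    The mapping of a convolutional layer onto a fully connected (matrix-multiply) array mentioned above can be illustrated with the standard im2col trick; the sketch below is a generic NumPy illustration, not the RPU-specific mapping of the paper.

      import numpy as np

      def im2col(x, k):
          """Unfold every k x k patch of a single-channel image into a column."""
          H, W = x.shape
          cols = [x[i:i + k, j:j + k].ravel()
                  for i in range(H - k + 1) for j in range(W - k + 1)]
          return np.stack(cols, axis=1)                 # shape (k*k, number_of_patches)

      x = np.random.rand(6, 6)
      w = np.random.rand(3, 3)                          # one 3x3 convolution kernel

      # Convolution expressed as one matrix-vector product, i.e. the operation a crossbar array performs.
      y_matmul = (w.ravel() @ im2col(x, 3)).reshape(4, 4)

      # Reference: direct sliding-window computation (valid mode, no kernel flip).
      y_direct = np.array([[np.sum(x[i:i + 3, j:j + 3] * w) for j in range(4)] for i in range(4)])
      assert np.allclose(y_matmul, y_direct)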

  20. THE SELF-CORRECTION OF ENGLISH SPEECH ERRORS IN SECOND LANGUANGE LEARNING

    Directory of Open Access Journals (Sweden)

    Ketut Santi Indriani

    2015-05-01

    Full Text Available The process of second language (L2) learning is strongly influenced by the factors of error reconstruction that occur when the language is learned. Errors will definitely appear in the learning process. However, errors can be used as a step to accelerate the process of understanding the language. Doing self-correction (with or without giving cues) is one of the examples. In the aspect of speaking, self-correction is done immediately after the error appears. This study is aimed at finding (i) what speech errors the L2 speakers are able to identify, (ii) of the errors identified, what speech errors the L2 speakers are able to self-correct and (iii) whether the self-correction of speech errors is able to immediately improve the L2 learning. Based on the data analysis, it was found that the majority of identified errors are related to nouns (plurality), subject-verb agreement, grammatical structure and pronunciation. B2 speakers tend to correct errors properly. Of the 78% identified speech errors, as much as 66% could be self-corrected accurately by the L2 speakers. Based on the analysis, it was also found that self-correction is able to improve L2 learning ability directly. This is evidenced by the absence of repetition of the same error after the error had been corrected.

  1. Spatial and Time Domain Feature of ERP Speller System Extracted via Convolutional Neural Network

    Directory of Open Access Journals (Sweden)

    Jaehong Yoon

    2018-01-01

    Full Text Available The features of the event-related potential (ERP) have not been completely understood, and the illiteracy problem remains unsolved. To this end, the P300 peak has been used as the feature of the ERP in most brain–computer interface applications, but subjects who do not show such a peak are common. The recent development of convolutional neural networks provides a way to analyze the spatial and temporal features of the ERP. Here, we train a convolutional neural network with 2 convolutional layers whose feature maps represent spatial and temporal features of the event-related potential. We have found that nonilliterate subjects' ERPs show a high correlation between the occipital lobe and parietal lobe, whereas illiterate subjects only show correlation between neural activities from the frontal lobe and central lobe. The nonilliterates showed peaks in P300, P500, and P700, whereas illiterates mostly showed peaks around P700. P700 was strong in both groups. We found that the P700 peak may be the key feature of the ERP as it appears in both illiterate and nonilliterate subjects.

  2. Strabismus Recognition Using Eye-Tracking Data and Convolutional Neural Networks

    Directory of Open Access Journals (Sweden)

    Zenghai Chen

    2018-01-01

    Full Text Available Strabismus is one of the most common vision diseases and can cause amblyopia and even permanent vision loss. Timely diagnosis is crucial for treating strabismus well. In contrast to manual diagnosis, automatic recognition can significantly reduce labor cost and increase diagnosis efficiency. In this paper, we propose to recognize strabismus using eye-tracking data and convolutional neural networks. In particular, an eye tracker is first exploited to record a subject's eye movements. A gaze deviation (GaDe) image is then proposed to characterize the subject's eye-tracking data according to the accuracies of gaze points. The GaDe image is fed to a convolutional neural network (CNN) that has been trained on a large image database called ImageNet. The outputs of the fully connected layers of the CNN are used as the GaDe image's features for strabismus recognition. A dataset containing eye-tracking data of both strabismic subjects and normal subjects is established for experiments. Experimental results demonstrate that natural image features can be well transferred to represent eye-tracking data, and that strabismus can be effectively recognized by our proposed method.

  3. Yarn-dyed fabric defect classification based on convolutional neural network

    Science.gov (United States)

    Jing, Junfeng; Dong, Amei; Li, Pengfei; Zhang, Kaibing

    2017-09-01

    Considering that manual inspection of the yarn-dyed fabric can be time consuming and inefficient, we propose a yarn-dyed fabric defect classification method by using a convolutional neural network (CNN) based on a modified AlexNet. CNN shows powerful ability in performing feature extraction and fusion by simulating the learning mechanism of human brain. The local response normalization layers in AlexNet are replaced by the batch normalization layers, which can enhance both the computational efficiency and classification accuracy. In the training process of the network, the characteristics of the defect are extracted step by step and the essential features of the image can be obtained from the fusion of the edge details with several convolution operations. Then the max-pooling layers, the dropout layers, and the fully connected layers are employed in the classification model to reduce the computation cost and extract more precise features of the defective fabric. Finally, the results of the defect classification are predicted by the softmax function. The experimental results show promising performance with an acceptable average classification rate and strong robustness on yarn-dyed fabric defect classification.
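
    The substitution described above (batch normalization in place of AlexNet's local response normalization) can be sketched in PyTorch as follows; the channel counts follow AlexNet's first block, but the rest is an illustrative assumption.

      import torch
      import torch.nn as nn

      # AlexNet-style first block with local response normalization ...
      lrn_block = nn.Sequential(
          nn.Conv2d(3, 96, kernel_size=11, stride=4), nn.ReLU(),
          nn.LocalResponseNorm(size=5, alpha=1e-4, beta=0.75, k=2.0),
          nn.MaxPool2d(kernel_size=3, stride=2),
      )

      # ... and the modified block, with batch normalization swapped in.
      bn_block = nn.Sequential(
          nn.Conv2d(3, 96, kernel_size=11, stride=4),
          nn.BatchNorm2d(96), nn.ReLU(),
          nn.MaxPool2d(kernel_size=3, stride=2),
      )

      x = torch.rand(2, 3, 227, 227)
      assert lrn_block(x).shape == bn_block(x).shape    # same spatial behavior, different normalization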

  4. Convolutional neural networks and face recognition task

    Science.gov (United States)

    Sochenkova, A.; Sochenkov, I.; Makovetskii, A.; Vokhmintsev, A.; Melnikov, A.

    2017-09-01

    Computer vision tasks have remained very important over the last couple of years. One of the most complicated problems in computer vision is face recognition, which could be used in security systems to provide safety and to identify a person among others. There is a variety of different approaches to solve this task, but there is still no universal solution that gives adequate results in all cases. The current paper presents the following approach. Firstly, we extract an area containing the face, then we use the Canny edge detector. At the next stage we use convolutional neural networks (CNN) to finally solve the face recognition and person identification task.

  5. The impact of measurement errors in the identification of regulatory networks

    Directory of Open Access Journals (Sweden)

    Sato João R

    2009-12-01

    Full Text Available Abstract Background There are several studies in the literature depicting measurement error in gene expression data and also several others about regulatory network models. However, only a small fraction describes a combination of measurement error in mathematical regulatory networks and shows how to identify these networks under different rates of noise. Results This article investigates the effects of measurement error on the estimation of the parameters in regulatory networks. Simulation studies indicate that, in both time series (dependent) and non-time series (independent) data, the measurement error strongly affects the estimated parameters of the regulatory network models, biasing them as predicted by the theory. Moreover, when testing the parameters of the regulatory network models, p-values computed by ignoring the measurement error are not reliable, since the rate of false positives is not controlled under the null hypothesis. In order to overcome these problems, we present an improved version of the Ordinary Least Squares estimator for independent (regression) and dependent (autoregressive) models when the variables are subject to noise. Moreover, measurement error estimation procedures for microarrays are also described. Simulation results also show that both corrected methods perform better than the standard ones (i.e., ignoring measurement error). The proposed methodologies are illustrated using microarray data from lung cancer patients and mouse liver time series data. Conclusions Measurement error dangerously affects the identification of regulatory network models; thus, it must be reduced or taken into account in order to avoid erroneous conclusions. This could be one of the reasons for the high biological false positive rates identified in actual regulatory network models.
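
    As a concrete illustration of why ignoring measurement error biases the estimates, the sketch below shows the textbook errors-in-variables correction of an OLS slope when the noise variance of the regressor is known; it is a simplified stand-in, not the improved estimator of the article, and all numbers are made up.

      import numpy as np

      rng = np.random.default_rng(0)
      n, beta_true, sigma_u = 5000, 2.0, 0.5

      x_true = rng.normal(size=n)                               # true regulator expression
      y = beta_true * x_true + rng.normal(scale=0.3, size=n)
      x_obs = x_true + rng.normal(scale=sigma_u, size=n)        # expression measured with error

      # Naive OLS slope is attenuated (biased toward zero) by the measurement error.
      beta_naive = np.cov(x_obs, y)[0, 1] / np.var(x_obs, ddof=1)

      # Corrected slope: subtract the known measurement-error variance from the denominator.
      beta_corr = np.cov(x_obs, y)[0, 1] / (np.var(x_obs, ddof=1) - sigma_u**2)

      print(f"naive {beta_naive:.2f}   corrected {beta_corr:.2f}   true {beta_true}")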

  6. Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition

    OpenAIRE

    Francisco Javier Ordóñez; Daniel Roggen

    2016-01-01

    Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing this temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we pro...

  7. Subsidence feature discrimination using deep convolutional neural networks in synthetic aperture radar imagery

    CSIR Research Space (South Africa)

    Schwegmann, Colin P

    2017-07-01

    Full Text Available Presented at the International Geoscience and Remote Sensing Symposium (IGARSS), 23-28 July 2017, Fort Worth, TX, USA.

  8. CSRNet: Dilated Convolutional Neural Networks for Understanding the Highly Congested Scenes

    OpenAIRE

    Li, Yuhong; Zhang, Xiaofan; Chen, Deming

    2018-01-01

    We propose a network for Congested Scene Recognition called CSRNet to provide a data-driven and deep learning method that can understand highly congested scenes and perform accurate count estimation as well as present high-quality density maps. The proposed CSRNet is composed of two major components: a convolutional neural network (CNN) as the front-end for 2D feature extraction and a dilated CNN for the back-end, which uses dilated kernels to deliver larger reception fields and to replace po...
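
    A small PyTorch sketch of the dilated-kernel idea the back-end relies on: with dilation, a 3x3 kernel covers a larger receptive field at the same parameter count. The channel sizes are illustrative and not taken from CSRNet.

      import torch
      import torch.nn as nn

      x = torch.rand(1, 64, 32, 32)                     # a hypothetical front-end feature map

      plain = nn.Conv2d(64, 64, kernel_size=3, padding=1)                 # 3x3 receptive field
      dilated = nn.Conv2d(64, 64, kernel_size=3, padding=2, dilation=2)   # 5x5 receptive field

      assert plain(x).shape == dilated(x).shape == x.shape
      # Identical parameter counts despite the larger reception field of the dilated layer.
      print(sum(p.numel() for p in plain.parameters()),
            sum(p.numel() for p in dilated.parameters()))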

  9. User-generated content curation with deep convolutional neural networks

    OpenAIRE

    Tous Liesa, Rubén; Wust, Otto; Gómez, Mauro; Poveda, Jonatan; Elena, Marc; Torres Viñals, Jordi; Makni, Mouna; Ayguadé Parra, Eduard

    2016-01-01

    In this paper, we report on work that uses deep convolutional neural networks (CNNs) for curating and filtering photos posted by social media users (Instagram and Twitter). The final goal is to facilitate searching and discovering user-generated content (UGC) with potential value for digital marketing tasks. The images are captured in real time and automatically annotated with multiple CNNs. Some of the CNNs perform generic object recognition tasks while others perform what we call v...

  10. Haptic Data Processing for Teleoperation Systems: Prediction, Compression and Error Correction

    OpenAIRE

    Lee, Jae-young

    2013-01-01

    This thesis explores haptic data processing methods for teleoperation systems, including prediction, compression, and error correction. In the proposed haptic data prediction method, unreliable network conditions, such as time-varying delay and packet loss, are detected by a transport layer protocol. Given the information from the transport layer, a Bayesian approach is introduced to predict position and force data in haptic teleoperation systems. Stability of the proposed method within stoch...

  11. Reed-Solomon error-correction as a software patch mechanism.

    Energy Technology Data Exchange (ETDEWEB)

    Pendley, Kevin D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2013-11-01

    This report explores how error-correction data generated by a Reed-Solomon code may be used as a mechanism to apply changes to an existing installed codebase. Using the Reed-Solomon code to generate error-correction data for a changed or updated codebase will allow the error-correction data to be applied to an existing codebase to both validate and introduce changes or updates from some upstream source to the existing installed codebase.
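
    A rough illustration of the idea, using the third-party reedsolo package as an assumed implementation (the report does not name one): parity bytes computed over the updated codebase can repair an installed copy whose differences stay within the code's correction capability.

      from reedsolo import RSCodec      # pip install reedsolo (an illustrative choice, not from the report)

      NSYM = 16                          # 16 parity bytes -> corrects up to 8 differing bytes
      rsc = RSCodec(NSYM)

      new_codebase = bytearray(b"print('hello, world v2')        ")   # updated file (padded to fixed length)
      old_codebase = bytearray(b"print('hello, world v1')        ")   # installed file, differs in one byte

      # The "patch" consists only of the parity bytes computed over the new codebase.
      parity = rsc.encode(new_codebase)[-NSYM:]

      # Applying the patch: treat old file + new parity as a corrupted codeword and decode it.
      recovered = rsc.decode(old_codebase + parity)[0]   # recent reedsolo returns (message, full, errata)
      assert bytes(recovered) == bytes(new_codebase)     # validated and updated in one step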

  12. Study on the influence of stochastic properties of correction terms on the reliability of instantaneous network RTK

    Science.gov (United States)

    Próchniewicz, Dominik

    2014-03-01

    The reliability of precision GNSS positioning primarily depends on correct carrier-phase ambiguity resolution. Optimal estimation and correct validation of the ambiguities necessitate a proper definition of the mathematical positioning model. Of particular importance in the model definition is accounting for atmospheric errors (ionospheric and tropospheric refraction) as well as orbital errors. The use of a network of reference stations in kinematic positioning, known as the Network-based Real-Time Kinematic (Network RTK) solution, facilitates the modeling of such errors and their incorporation, in the form of correction terms, into the functional description of the positioning model. Lowered accuracy of the corrections, especially during atmospheric disturbances, results in the occurrence of unaccounted biases, the so-called residual errors. Taking such errors into account in the Network RTK positioning model is possible by incorporating the accuracy characteristics of the correction terms into the stochastic model of observations. In this paper we investigate the impact of expanding the stochastic model to include correction term variances on the reliability of the model solution. In particular, the instantaneous solution, which utilizes only a single epoch of GPS observations, is analyzed. Due to the low number of degrees of freedom, such a solution mode is very sensitive to an inappropriate mathematical model definition, and thus a high level of solution reliability is very difficult to achieve. Numerical tests performed for a test network located in a mountain area during ionospheric disturbances allow the described method to be verified under poor measurement conditions. The results of the ambiguity resolution as well as the rover positioning accuracy show that the proposed method of stochastic modeling can increase the reliability of instantaneous Network RTK performance.
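
    A minimal numerical sketch of the stochastic-model extension discussed above: each observation's weight is formed from the observation variance plus the variance of its network correction term, and the weighted least-squares solution uses those combined weights. All symbols and numbers are illustrative, not from the paper.

      import numpy as np

      # Linearized observation model y = A x + e with per-observation variances.
      A = np.array([[1.0, 0.4], [1.0, 1.1], [1.0, 1.9], [1.0, 2.7]])
      y = np.array([2.1, 3.4, 5.2, 6.9])

      sigma_obs2 = np.full(4, 0.02**2)                          # carrier-phase observation variance
      sigma_corr2 = np.array([1.0, 4.0, 2.0, 9.0]) * 0.01**2    # variance of the interpolated corrections

      W_naive = np.diag(1.0 / sigma_obs2)                  # correction terms treated as error-free
      W_ext = np.diag(1.0 / (sigma_obs2 + sigma_corr2))    # extended stochastic model

      solve = lambda W: np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
      print("naive   :", solve(W_naive))
      print("extended:", solve(W_ext))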

  13. DeepCotton: in-field cotton segmentation using deep fully convolutional network

    Science.gov (United States)

    Li, Yanan; Cao, Zhiguo; Xiao, Yang; Cremers, Armin B.

    2017-09-01

    Automatic ground-based in-field cotton (IFC) segmentation is a challenging task in precision agriculture that has not been well addressed. Nearly all existing methods rely on hand-crafted features, and their limited discriminative power results in unsatisfactory performance. To address this, a coarse-to-fine cotton segmentation method termed "DeepCotton" is proposed. It contains two modules, a fully convolutional network (FCN) stream and an interference region removal stream. First, the FCN is employed to predict an initially coarse map in an end-to-end manner. The convolutional networks involved in the FCN guarantee powerful feature description capability; simultaneously, the regression ability of the neural network ensures segmentation accuracy. To our knowledge, we are the first to introduce deep learning to IFC segmentation. Second, our proposed "UP" algorithm, composed of unary brightness transformation and pairwise region comparison, is used to obtain an interference map, which is then applied to refine the coarse map. Experiments on the constructed IFC dataset demonstrate that our method outperforms other state-of-the-art approaches, both in different common scenarios and with single/multiple plants. More remarkably, the "UP" algorithm greatly improves the quality of the coarse result, with average improvements of 2.6% and 2.4% in accuracy and 8.1% and 5.5% in intersection over union for common scenarios and multiple plants, respectively.

  14. Convolutional neural networks for event-related potential detection: impact of the architecture.

    Science.gov (United States)

    Cecotti, H

    2017-07-01

    The detection of brain responses at the single-trial level in the electroencephalogram (EEG), such as event-related potentials (ERPs), is a difficult problem that requires different processing steps to extract relevant discriminant features. While most of the signal and classification techniques for the detection of brain responses are based on linear algebra, different pattern recognition techniques such as the convolutional neural network (CNN), a type of deep learning technique, have attracted interest as they are able to process the signal after limited pre-processing. In this study, we propose to investigate the performance of CNNs in relation to their architecture and to how they are evaluated: a single system for each subject, or a system for all the subjects. More particularly, we want to address the change in performance that can be observed between dedicating a neural network to a single subject and considering a neural network for a group of subjects, taking advantage of a larger number of trials from different subjects. The results support the conclusion that a convolutional neural network trained on different subjects can lead to an AUC above 0.9 by using an appropriate architecture with spatial filtering and shift-invariant layers.

  15. Forecasting Flare Activity Using Deep Convolutional Neural Networks

    Science.gov (United States)

    Hernandez, T.

    2017-12-01

    Current operational flare forecasting relies on human morphological analysis of active regions and the persistence of solar flare activity through time (i.e. that the Sun will continue to do what it is doing right now: flaring or remaining calm). In this talk we present the results of applying deep Convolutional Neural Networks (CNNs) to the problem of solar flare forecasting. CNNs operate by training a set of tunable spatial filters that, in combination with neural layer interconnectivity, allow CNNs to automatically identify significant spatial structures predictive for classification and regression problems. We will start by discussing the applicability and success rate of the approach, the advantages it has over non-automated forecasts, and how mining our trained neural network provides a fresh look into the mechanisms behind magnetic energy storage and release.

  16. Deep Fully Convolutional Networks for the Detection of Informal Settlements in VHR Images

    NARCIS (Netherlands)

    Persello, Claudio; Stein, Alfred

    2017-01-01

    This letter investigates fully convolutional networks (FCNs) for the detection of informal settlements in very high resolution (VHR) satellite images. Informal settlements or slums are proliferating in developing countries and their detection and classification provides vital information for

  17. Convolutional networks for fast, energy-efficient neuromorphic computing.

    Science.gov (United States)

    Esser, Steven K; Merolla, Paul A; Arthur, John V; Cassidy, Andrew S; Appuswamy, Rathinakumar; Andreopoulos, Alexander; Berg, David J; McKinstry, Jeffrey L; Melano, Timothy; Barch, Davis R; di Nolfo, Carmelo; Datta, Pallab; Amir, Arnon; Taba, Brian; Flickner, Myron D; Modha, Dharmendra S

    2016-10-11

    Deep networks are now able to achieve human-level performance on a broad spectrum of recognition tasks. Independently, neuromorphic computing has now demonstrated unprecedented energy-efficiency through a new chip architecture based on spiking neurons, low precision synapses, and a scalable communication network. Here, we demonstrate that neuromorphic computing, despite its novel architectural primitives, can implement deep convolution networks that (i) approach state-of-the-art classification accuracy across eight standard datasets encompassing vision and speech, (ii) perform inference while preserving the hardware's underlying energy-efficiency and high throughput, running on the aforementioned datasets at between 1,200 and 2,600 frames/s and using between 25 and 275 mW (effectively >6,000 frames/s per Watt), and (iii) can be specified and trained using backpropagation with the same ease-of-use as contemporary deep learning. This approach allows the algorithmic power of deep learning to be merged with the efficiency of neuromorphic processors, bringing the promise of embedded, intelligent, brain-inspired computing one step closer.

  18. Clinical Assistant Diagnosis for Electronic Medical Record Based on Convolutional Neural Network.

    Science.gov (United States)

    Yang, Zhongliang; Huang, Yongfeng; Jiang, Yiran; Sun, Yuxi; Zhang, Yu-Jin; Luo, Pengcheng

    2018-04-20

    Automatically extracting useful information from electronic medical records along with conducting disease diagnoses is a promising task for both clinical decision support (CDS) and natural language processing (NLP). Most of the existing systems are based on artificially constructed knowledge bases, and auxiliary diagnosis is then done by rule matching. In this study, we present a clinical intelligent decision approach based on Convolutional Neural Networks (CNN), which can automatically extract high-level semantic information from electronic medical records and then perform automatic diagnosis without artificial construction of rules or knowledge bases. We use 18,590 collected real-world clinical electronic medical records to train and test the proposed model. Experimental results show that the proposed model can achieve 98.67% accuracy and 96.02% recall, which strongly supports the feasibility and effectiveness of using a convolutional neural network to automatically learn high-level semantic features of electronic medical records and then conduct assisted diagnosis.

  19. Statistical mechanics of error-correcting codes

    Science.gov (United States)

    Kabashima, Y.; Saad, D.

    1999-01-01

    We investigate the performance of error-correcting codes, where the code word comprises products of K bits selected from the original message and decoding is carried out utilizing a connectivity tensor with C connections per index. Shannon's bound for the channel capacity is recovered for large K and zero temperature when the code rate K/C is finite. Close to optimal error-correcting capability is obtained for finite K and C. We examine the finite-temperature case to assess the use of simulated annealing for decoding and extend the analysis to accommodate other types of noisy channels.

  20. Training strategy for convolutional neural networks in pedestrian gender classification

    Science.gov (United States)

    Ng, Choon-Boon; Tay, Yong-Haur; Goi, Bok-Min

    2017-06-01

    In this work, we studied a strategy for training a convolutional neural network in pedestrian gender classification with limited amount of labeled training data. Unsupervised learning by k-means clustering on pedestrian images was used to learn the filters to initialize the first layer of the network. As a form of pre-training, supervised learning for the related task of pedestrian classification was performed. Finally, the network was fine-tuned for gender classification. We found that this strategy improved the network's generalization ability in gender classification, achieving better test results when compared to random weights initialization and slightly more beneficial than merely initializing the first layer filters by unsupervised learning. This shows that unsupervised learning followed by pre-training with pedestrian images is an effective strategy to learn useful features for pedestrian gender classification.
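
    A rough sketch of the unsupervised first-layer initialization described above: random image patches are clustered with k-means and the normalized centroids serve as the initial convolution filters. The patch size, filter count, and crop dimensions are assumptions.

      import numpy as np
      from sklearn.cluster import KMeans

      def kmeans_filters(images, n_filters=32, patch=5, n_patches=20000, seed=0):
          """Learn first-layer conv filters as k-means centroids of random grayscale patches."""
          rng = np.random.default_rng(seed)
          H, W = images.shape[1:]
          patches = np.empty((n_patches, patch * patch))
          for i in range(n_patches):
              img = images[rng.integers(len(images))]
              r, c = rng.integers(H - patch + 1), rng.integers(W - patch + 1)
              p = img[r:r + patch, c:c + patch].ravel()
              patches[i] = (p - p.mean()) / (p.std() + 1e-8)    # per-patch normalization
          km = KMeans(n_clusters=n_filters, n_init=10, random_state=seed).fit(patches)
          return km.cluster_centers_.reshape(n_filters, patch, patch)   # -> initial conv1 weights

      # Usage with hypothetical 64x48 grayscale pedestrian crops.
      images = np.random.rand(500, 64, 48)
      print(kmeans_filters(images).shape)                       # (32, 5, 5)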

  1. Error-correction coding

    Science.gov (United States)

    Hinds, Erold W. (Principal Investigator)

    1996-01-01

    This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.

  2. Joint Schemes for Physical Layer Security and Error Correction

    Science.gov (United States)

    Adamo, Oluwayomi

    2011-01-01

    The major challenges facing resource constraint wireless devices are error resilience, security and speed. Three joint schemes are presented in this research which could be broadly divided into error correction based and cipher based. The error correction based ciphers take advantage of the properties of LDPC codes and Nordstrom Robinson code. A…

  3. A fast button surface defects detection method based on convolutional neural network

    Science.gov (United States)

    Liu, Lizhe; Cao, Danhua; Wu, Songlin; Wu, Yubin; Wei, Taoran

    2018-01-01

    Considering the complexity of button surface texture and the variety of buttons and defects, we propose a fast visual method for button surface defect detection based on a convolutional neural network (CNN). The CNN has the ability to extract essential features by training, avoiding the design of complex feature operators adapted to different kinds of buttons, textures and defects. Firstly, we obtain the normalized button region and then use an HOG-SVM method to identify the front and back side of the button. Finally, a convolutional neural network is developed to recognize the defects. Aiming at detecting subtle defects, we propose a network structure with multiple feature channels as input. To deal with defects of different scales, we adopt a strategy of multi-scale image block detection. The experimental results show that our method is valid for a variety of buttons and able to recognize all kinds of defects that occurred, including dents, cracks, stains, holes, wrong paint and unevenness. The detection rate exceeds 96%, which is much better than traditional methods based on SVM and on template matching. Our method can reach a speed of 5 fps on a DSP-based smart camera with 600 MHz frequency.

  4. Automated segmentation of geographic atrophy using deep convolutional neural networks

    Science.gov (United States)

    Hu, Zhihong; Wang, Ziyuan; Sadda, SriniVas R.

    2018-02-01

    Geographic atrophy (GA) is an end-stage manifestation of advanced age-related macular degeneration (AMD), the leading cause of blindness and visual impairment in developed nations. Techniques to rapidly and precisely detect and quantify GA would appear to be of critical importance in advancing the understanding of its pathogenesis. In this study, we develop an automated supervised classification system using deep convolutional neural networks (CNNs) for segmenting GA in fundus autofluorescence (FAF) images. More specifically, to enhance the contrast of GA relative to the background, we apply contrast limited adaptive histogram equalization. Blood vessels may cause GA segmentation errors because their intensity level is similar to that of GA. A tensor-voting technique is performed to identify the blood vessels and a vessel inpainting technique is applied to suppress the GA segmentation errors due to the blood vessels. To handle the large variation of GA lesion sizes, three deep CNNs with three different input image patch sizes are applied. Fifty randomly chosen FAF images were obtained from fifty subjects with GA. The algorithm-defined GA regions are compared with manual delineation by a certified grader. Two-fold cross-validation is applied to evaluate the algorithm performance. The mean segmentation accuracy, true positive rate (i.e. sensitivity), true negative rate (i.e. specificity), positive predictive value, false discovery rate, and overlap ratio between the algorithm- and manually-defined GA regions are 0.97 +/- 0.02, 0.89 +/- 0.08, 0.98 +/- 0.02, 0.87 +/- 0.12, 0.13 +/- 0.12, and 0.79 +/- 0.12 respectively, demonstrating a high level of agreement.
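
    The contrast-enhancement step mentioned above can be reproduced with OpenCV's CLAHE implementation; the clip limit, tile grid, and file names below are common defaults and placeholders, not the study's exact settings.

      import cv2

      # Load a fundus autofluorescence image as grayscale (the path is a placeholder).
      faf = cv2.imread("faf_image.png", cv2.IMREAD_GRAYSCALE)

      # Contrast Limited Adaptive Histogram Equalization boosts GA-to-background contrast
      # locally, while the clip limit prevents over-amplification of noise.
      clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
      enhanced = clahe.apply(faf)

      cv2.imwrite("faf_clahe.png", enhanced)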

  5. Method for decoupling error correction from privacy amplification

    Energy Technology Data Exchange (ETDEWEB)

    Lo, Hoi-Kwong [Department of Electrical and Computer Engineering and Department of Physics, University of Toronto, 10 King's College Road, Toronto, Ontario, Canada, M5S 3G4 (Canada)

    2003-04-01

    In a standard quantum key distribution (QKD) scheme such as BB84, two procedures, error correction and privacy amplification, are applied to extract a final secure key from a raw key generated from quantum transmission. To simplify the study of protocols, it is commonly assumed that the two procedures can be decoupled from each other. While such a decoupling assumption may be valid for individual attacks, it is actually unproven in the context of ultimate or unconditional security, which is the Holy Grail of quantum cryptography. In particular, this means that the application of standard efficient two-way error-correction protocols like Cascade is not proven to be unconditionally secure. Here, I provide the first proof of such a decoupling principle in the context of unconditional security. The method requires Alice and Bob to share some initial secret string and use it to encrypt their communications in the error correction stage using one-time-pad encryption. Consequently, I prove the unconditional security of the interactive Cascade protocol proposed by Brassard and Salvail for error correction and modified by one-time-pad encryption of the error syndrome, followed by the random matrix protocol for privacy amplification. This is an efficient protocol in terms of both computational power and key generation rate. My proof uses the entanglement purification approach to security proofs of QKD. The proof applies to all adaptive symmetric methods for error correction, which cover all existing methods proposed for BB84. In terms of the net key generation rate, the new method is as efficient as the standard Shor-Preskill proof.

  6. Method for decoupling error correction from privacy amplification

    International Nuclear Information System (INIS)

    Lo, Hoi-Kwong

    2003-01-01

    In a standard quantum key distribution (QKD) scheme such as BB84, two procedures, error correction and privacy amplification, are applied to extract a final secure key from a raw key generated from quantum transmission. To simplify the study of protocols, it is commonly assumed that the two procedures can be decoupled from each other. While such a decoupling assumption may be valid for individual attacks, it is actually unproven in the context of ultimate or unconditional security, which is the Holy Grail of quantum cryptography. In particular, this means that the application of standard efficient two-way error-correction protocols like Cascade is not proven to be unconditionally secure. Here, I provide the first proof of such a decoupling principle in the context of unconditional security. The method requires Alice and Bob to share some initial secret string and use it to encrypt their communications in the error correction stage using one-time-pad encryption. Consequently, I prove the unconditional security of the interactive Cascade protocol proposed by Brassard and Salvail for error correction and modified by one-time-pad encryption of the error syndrome, followed by the random matrix protocol for privacy amplification. This is an efficient protocol in terms of both computational power and key generation rate. My proof uses the entanglement purification approach to security proofs of QKD. The proof applies to all adaptive symmetric methods for error correction, which cover all existing methods proposed for BB84. In terms of the net key generation rate, the new method is as efficient as the standard Shor-Preskill proof
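
    A toy sketch of the syndrome-encryption step described in the two records above: the error-correction messages exchanged during reconciliation are XORed with part of a pre-shared secret string before transmission. It only illustrates the one-time-pad operation, not the Cascade protocol itself.

      import secrets

      def one_time_pad(bits, pad):
          """XOR a bit string with an equal-length pre-shared pad (encryption equals decryption)."""
          assert len(bits) == len(pad), "the pad must be as long as the syndrome"
          return [b ^ p for b, p in zip(bits, pad)]

      syndrome = [1, 0, 1, 1, 0, 0, 1, 0]                   # parity bits from one error-correction round
      pad = [secrets.randbelow(2) for _ in syndrome]        # taken from the pre-shared secret string

      ciphertext = one_time_pad(syndrome, pad)              # what is sent over the public channel
      assert one_time_pad(ciphertext, pad) == syndrome      # the receiver recovers the syndrome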

  7. LFNet: A Novel Bidirectional Recurrent Convolutional Neural Network for Light-Field Image Super-Resolution.

    Science.gov (United States)

    Wang, Yunlong; Liu, Fei; Zhang, Kunbo; Hou, Guangqi; Sun, Zhenan; Tan, Tieniu

    2018-09-01

    The low spatial resolution of light-field image poses significant difficulties in exploiting its advantage. To mitigate the dependency of accurate depth or disparity information as priors for light-field image super-resolution, we propose an implicitly multi-scale fusion scheme to accumulate contextual information from multiple scales for super-resolution reconstruction. The implicitly multi-scale fusion scheme is then incorporated into bidirectional recurrent convolutional neural network, which aims to iteratively model spatial relations between horizontally or vertically adjacent sub-aperture images of light-field data. Within the network, the recurrent convolutions are modified to be more effective and flexible in modeling the spatial correlations between neighboring views. A horizontal sub-network and a vertical sub-network of the same network structure are ensembled for final outputs via stacked generalization. Experimental results on synthetic and real-world data sets demonstrate that the proposed method outperforms other state-of-the-art methods by a large margin in peak signal-to-noise ratio and gray-scale structural similarity indexes, which also achieves superior quality for human visual systems. Furthermore, the proposed method can enhance the performance of light field applications such as depth estimation.

  8. Deep Convolutional Neural Network-Based Early Automated Detection of Diabetic Retinopathy Using Fundus Image.

    Science.gov (United States)

    Xu, Kele; Feng, Dawei; Mi, Haibo

    2017-11-23

    The automatic detection of diabetic retinopathy is of vital importance, as it is the main cause of irreversible vision loss in the working-age population in the developed world. The early detection of diabetic retinopathy occurrence can be very helpful for clinical treatment; although several different feature extraction approaches have been proposed, the classification task for retinal images is still tedious even for trained clinicians. Recently, deep convolutional neural networks have manifested superior performance in image classification compared to previous handcrafted feature-based image classification methods. Thus, in this paper, we explored the use of deep convolutional neural network methodology for the automatic classification of diabetic retinopathy using color fundus images, and obtained an accuracy of 94.5% on our dataset, outperforming the results obtained by using classical approaches.

  9. Error correction and statistical analyses for intra-host comparisons of feline immunodeficiency virus diversity from high-throughput sequencing data.

    Science.gov (United States)

    Liu, Yang; Chiaromonte, Francesca; Ross, Howard; Malhotra, Raunaq; Elleder, Daniel; Poss, Mary

    2015-06-30

    Infection with feline immunodeficiency virus (FIV) causes an immunosuppressive disease whose consequences are less severe if cats are co-infected with an attenuated FIV strain (PLV). We use virus diversity measurements, which reflect replication ability and the virus response to various conditions, to test whether diversity of virulent FIV in lymphoid tissues is altered in the presence of PLV. Our data consisted of the 3' half of the FIV genome from three tissues of animals infected with FIV alone, or with FIV and PLV, sequenced by 454 technology. Since rare variants dominate virus populations, we had to carefully distinguish sequence variation from errors due to experimental protocols and sequencing. We considered an exponential-normal convolution model used for background correction of microarray data, and modified it to formulate an error correction approach for minor allele frequencies derived from high-throughput sequencing. Similar to accounting for over-dispersion in counts, this accounts for error-inflated variability in frequencies - and quite effectively reproduces empirically observed distributions. After obtaining error-corrected minor allele frequencies, we applied ANalysis Of VAriance (ANOVA) based on a linear mixed model and found that conserved sites and transition frequencies in FIV genes differ among tissues of dual and single infected cats. Furthermore, analysis of minor allele frequencies at individual FIV genome sites revealed 242 sites significantly affected by infection status (dual vs. single) or infection status by tissue interaction. All together, our results demonstrated a decrease in FIV diversity in bone marrow in the presence of PLV. Importantly, these effects were weakened or undetectable when error correction was performed with other approaches (thresholding of minor allele frequencies; probabilistic clustering of reads). We also queried the data for cytidine deaminase activity on the viral genome, which causes an asymmetric increase

  10. Joint multiple fully connected convolutional neural network with extreme learning machine for hepatocellular carcinoma nuclei grading.

    Science.gov (United States)

    Li, Siqi; Jiang, Huiyan; Pang, Wenbo

    2017-05-01

    Accurate cell grading of cancerous tissue pathological image is of great importance in medical diagnosis and treatment. This paper proposes a joint multiple fully connected convolutional neural network with extreme learning machine (MFC-CNN-ELM) architecture for hepatocellular carcinoma (HCC) nuclei grading. First, in preprocessing stage, each grayscale image patch with the fixed size is obtained using center-proliferation segmentation (CPS) method and the corresponding labels are marked under the guidance of three pathologists. Next, a multiple fully connected convolutional neural network (MFC-CNN) is designed to extract the multi-form feature vectors of each input image automatically, which considers multi-scale contextual information of deep layer maps sufficiently. After that, a convolutional neural network extreme learning machine (CNN-ELM) model is proposed to grade HCC nuclei. Finally, a back propagation (BP) algorithm, which contains a new up-sample method, is utilized to train MFC-CNN-ELM architecture. The experiment comparison results demonstrate that our proposed MFC-CNN-ELM has superior performance compared with related works for HCC nuclei grading. Meanwhile, external validation using ICPR 2014 HEp-2 cell dataset shows the good generalization of our MFC-CNN-ELM architecture. Copyright © 2017 Elsevier Ltd. All rights reserved.
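
    A compact sketch of the extreme-learning-machine classifier used at the end of the pipeline: a random, fixed hidden layer and a closed-form least-squares solution for the output weights. The feature dimensionality and number of grades are placeholders for the CNN features described above.

      import numpy as np

      class ELM:
          """Extreme learning machine: random hidden projection, least-squares output weights."""
          def __init__(self, n_hidden=200, seed=0):
              self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

          def _hidden(self, X):
              return np.tanh(X @ self.W + self.b)            # fixed random projection + nonlinearity

          def fit(self, X, y, n_classes):
              self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
              self.b = self.rng.normal(size=self.n_hidden)
              T = np.eye(n_classes)[y]                       # one-hot targets
              self.beta = np.linalg.pinv(self._hidden(X)) @ T   # closed-form output weights
              return self

          def predict(self, X):
              return np.argmax(self._hidden(X) @ self.beta, axis=1)

      # Usage with hypothetical 128-D CNN feature vectors and 3 nuclei grades.
      X = np.random.rand(300, 128)
      y = np.random.randint(0, 3, size=300)
      print(ELM().fit(X, y, n_classes=3).predict(X[:5]))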

  11. A Deep Convolutional Coupling Network for Change Detection Based on Heterogeneous Optical and Radar Images.

    Science.gov (United States)

    Liu, Jia; Gong, Maoguo; Qin, Kai; Zhang, Puzhao

    2018-03-01

    We propose an unsupervised deep convolutional coupling network for change detection based on two heterogeneous images acquired by optical sensors and radars on different dates. Most existing change detection methods are based on homogeneous images. Due to the complementary properties of optical and radar sensors, there is an increasing interest in change detection based on heterogeneous images. The proposed network is symmetric, with each side consisting of one convolutional layer and several coupling layers. The two input images, connected with the two sides of the network, respectively, are transformed into a feature space where their feature representations become more consistent. In this feature space, the difference map is calculated, which then leads to the final detection map by applying a thresholding algorithm. The network parameters are learned by optimizing a coupling function. The learning process is unsupervised, which is different from most existing change detection methods based on heterogeneous images. Experimental results on both homogeneous and heterogeneous images demonstrate the promising performance of the proposed network compared with several existing approaches.

  12. Efficient error correction for next-generation sequencing of viral amplicons.

    Science.gov (United States)

    Skums, Pavel; Dimitrova, Zoya; Campo, David S; Vaughan, Gilberto; Rossi, Livia; Forbi, Joseph C; Yokosawa, Jonny; Zelikovsky, Alex; Khudyakov, Yury

    2012-06-25

    Next-generation sequencing allows the analysis of an unprecedented number of viral sequence variants from infected patients, presenting a novel opportunity for understanding virus evolution, drug resistance and immune escape. However, sequencing in bulk is error prone. Thus, the generated data require error identification and correction. Most error-correction methods to date are not optimized for amplicon analysis and assume that the error rate is randomly distributed. Recent quality assessment of amplicon sequences obtained using 454-sequencing showed that the error rate is strongly linked to the presence and size of homopolymers, position in the sequence and length of the amplicon. All these parameters are strongly sequence specific and should be incorporated into the calibration of error-correction algorithms designed for amplicon sequencing. In this paper, we present two new efficient error correction algorithms optimized for viral amplicons: (i) k-mer-based error correction (KEC) and (ii) empirical frequency threshold (ET). Both were compared to a previously published clustering algorithm (SHORAH), in order to evaluate their relative performance on 24 experimental datasets obtained by 454-sequencing of amplicons with known sequences. All three algorithms show similar accuracy in finding true haplotypes. However, KEC and ET were significantly more efficient than SHORAH in removing false haplotypes and estimating the frequency of true ones. Both algorithms, KEC and ET, are highly suitable for rapid recovery of error-free haplotypes obtained by 454-sequencing of amplicons from heterogeneous viruses.The implementations of the algorithms and data sets used for their testing are available at: http://alan.cs.gsu.edu/NGS/?q=content/pyrosequencing-error-correction-algorithm.
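
    A toy sketch of the k-mer counting step at the core of KEC-style error detection: k-mers whose observed frequency falls below a threshold flag likely sequencing errors. Both k and the threshold are illustrative; the published algorithms calibrate them to amplicon-specific error profiles.

      from collections import Counter

      def low_frequency_kmers(reads, k=8, threshold=2):
          """Count all k-mers across the reads and return those seen fewer than `threshold` times."""
          counts = Counter(read[i:i + k] for read in reads for i in range(len(read) - k + 1))
          return {kmer for kmer, c in counts.items() if c < threshold}

      reads = [
          "ACGTACGTGGTACCA",
          "ACGTACGTGGTACCA",
          "ACGTACGAGGTACCA",    # one substitution -> produces rare, suspicious k-mers
      ]
      suspicious = low_frequency_kmers(reads)
      flagged = [r for r in reads if any(r[i:i + 8] in suspicious for i in range(len(r) - 7))]
      print(len(suspicious), "rare k-mers;", len(flagged), "read(s) flagged for correction")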

  13. Hierarchical Recurrent Neural Hashing for Image Retrieval With Hierarchical Convolutional Features.

    Science.gov (United States)

    Lu, Xiaoqiang; Chen, Yaxiong; Li, Xuelong

    Hashing has been an important and effective technology in image retrieval due to its computational efficiency and fast search speed. The traditional hashing methods usually learn hash functions to obtain binary codes by exploiting hand-crafted features, which cannot optimally represent the information of the sample. Recently, deep learning methods can achieve better performance, since deep learning architectures can learn more effective image representation features. However, these methods only use semantic features to generate hash codes by shallow projection but ignore texture details. In this paper, we propose a novel hashing method, namely hierarchical recurrent neural hashing (HRNH), which exploits a hierarchical recurrent neural network to generate effective hash codes. This paper makes three contributions. First, a deep hashing method is proposed to extensively exploit both spatial details and semantic information, in which we leverage hierarchical convolutional features to construct an image pyramid representation. Second, our proposed deep network can directly exploit convolutional feature maps as input to preserve their spatial structure. Finally, we propose a new loss function that considers the quantization error of binarizing the continuous embeddings into discrete binary codes, and simultaneously maintains the semantic similarity and balanceable property of the hash codes. Experimental results on four widely used data sets demonstrate that the proposed HRNH can achieve superior performance over other state-of-the-art hashing methods.

  14. Optimal JPWL Forward Error Correction Rate Allocation for Robust JPEG 2000 Images and Video Streaming over Mobile Ad Hoc Networks

    Directory of Open Access Journals (Sweden)

    Benoit Macq

    2008-07-01

    Full Text Available Based on the analysis of real mobile ad hoc network (MANET) traces, we derive in this paper an optimal wireless JPEG 2000 compliant forward error correction (FEC) rate allocation scheme for robust streaming of images and videos over MANETs. The packet-based proposed scheme has low complexity and is compliant with JPWL, the 11th part of the JPEG 2000 standard. The effectiveness of the proposed method is evaluated using a wireless Motion JPEG 2000 client/server application, and the ability of the optimal scheme to guarantee quality of service (QoS) to wireless clients is demonstrated.

  15. CORRECTING ERRORS: THE RELATIVE EFFICACY OF DIFFERENT FORMS OF ERROR FEEDBACK IN SECOND LANGUAGE WRITING

    Directory of Open Access Journals (Sweden)

    Chitra Jayathilake

    2013-01-01

    Full Text Available Error correction in ESL (English as a Second Language) classes has been a focal phenomenon in SLA (Second Language Acquisition) research due to some controversial research results and diverse feedback practices. This paper presents a study which explored the relative efficacy of three forms of error correction employed in ESL writing classes: focusing on the acquisition of one grammar element in both immediate and delayed language contexts, and collecting data from university undergraduates, the study employed an experimental research design with a pretest-treatment-posttest structure. The research revealed that the degree of success in acquiring L2 (Second Language) grammar through error correction differs according to the form of the correction and to learning contexts. While the findings are discussed in relation to the previous literature, this paper concludes by proposing a cline of error correction forms to be promoted in Sri Lankan L2 writing contexts, particularly in ESL contexts in universities.

  16. REAL-TIME VIDEO SCALING BASED ON CONVOLUTION NEURAL NETWORK ARCHITECTURE

    OpenAIRE

    S Safinaz; A V Ravi Kumar

    2017-01-01

    In recent years, video super-resolution techniques have become a mandatory requirement for obtaining high-resolution videos. Many super-resolution techniques have been researched, but video super-resolution, or scaling, remains a vital challenge. In this paper, we present a real-time video scaling method based on a convolutional neural network architecture to eliminate blurriness in images and video frames and to provide better reconstruction quality while scaling large datasets from lower-resolution frames t...

  17. Deep-HiTS: Rotation Invariant Convolutional Neural Network for Transient Detection

    Science.gov (United States)

    Cabrera-Vives, Guillermo; Reyes, Ignacio; Förster, Francisco; Estévez, Pablo A.; Maureira, Juan-Carlos

    2017-02-01

    We introduce Deep-HiTS, a rotation-invariant convolutional neural network (CNN) model for classifying images of transient candidates into artifacts or real sources for the High cadence Transient Survey (HiTS). CNNs have the advantage of learning the features automatically from the data while achieving high performance. We compare our CNN model against a feature engineering approach using random forests (RFs). We show that our CNN significantly outperforms the RF model, reducing the error by almost half. Furthermore, for a fixed number of approximately 2000 allowed false transient candidates per night, we are able to reduce the misclassified real transients by approximately one-fifth. To the best of our knowledge, this is the first time CNNs have been used to detect astronomical transient events. Our approach will be very useful when processing images from next generation instruments such as the Large Synoptic Survey Telescope. We have made all our code and data available to the community for the sake of allowing further developments and comparisons at https://github.com/guille-c/Deep-HiTS. Deep-HiTS is licensed under the terms of the GNU General Public License v3.0.

  18. Salient regions detection using convolutional neural networks and color volume

    Science.gov (United States)

    Liu, Guang-Hai; Hou, Yingkun

    2018-03-01

    Convolutional neural network is an important technique in machine learning, pattern recognition and image processing. In order to reduce the computational burden and extend the classical LeNet-5 model to the field of saliency detection, we propose a simple and novel computing model based on LeNet-5 network. In the proposed model, hue, saturation and intensity are utilized to extract depth cues, and then we integrate depth cues and color volume to saliency detection following the basic structure of the feature integration theory. Experimental results show that the proposed computing model outperforms some existing state-of-the-art methods on MSRA1000 and ECSSD datasets.

  19. Repeat-aware modeling and correction of short read errors.

    Science.gov (United States)

    Yang, Xiao; Aluru, Srinivas; Dorman, Karin S

    2011-02-15

    High-throughput short read sequencing is revolutionizing genomics and systems biology research by enabling cost-effective deep coverage sequencing of genomes and transcriptomes. Error detection and correction are crucial to many short read sequencing applications including de novo genome sequencing, genome resequencing, and digital gene expression analysis. Short read error detection is typically carried out by counting the observed frequencies of kmers in reads and validating those with frequencies exceeding a threshold. In case of genomes with high repeat content, an erroneous kmer may be frequently observed if it has few nucleotide differences with valid kmers with multiple occurrences in the genome. Error detection and correction were mostly applied to genomes with low repeat content and this remains a challenging problem for genomes with high repeat content. We develop a statistical model and a computational method for error detection and correction in the presence of genomic repeats. We propose a method to infer genomic frequencies of kmers from their observed frequencies by analyzing the misread relationships among observed kmers. We also propose a method to estimate the threshold useful for validating kmers whose estimated genomic frequency exceeds the threshold. We demonstrate that superior error detection is achieved using these methods. Furthermore, we break away from the common assumption of uniformly distributed errors within a read, and provide a framework to model position-dependent error occurrence frequencies common to many short read platforms. Lastly, we achieve better error correction in genomes with high repeat content. The software is implemented in C++ and is freely available under GNU GPL3 license and Boost Software V1.0 license at "http://aluru-sun.ece.iastate.edu/doku.php?id = redeem". We introduce a statistical framework to model sequencing errors in next-generation reads, which led to promising results in detecting and correcting errors

  20. Forged Signature Distinction Using Convolutional Neural Network for Feature Extraction

    Directory of Open Access Journals (Sweden)

    Seungsoo Nam

    2018-01-01

    Full Text Available This paper proposes a dynamic verification scheme for finger-drawn signatures in smartphones. As a dynamic feature, the movement of the smartphone is recorded with the smartphone's accelerometer sensors, in addition to the moving coordinates of the signature. To extract high-level longitudinal and topological features, the proposed scheme uses a convolutional neural network (CNN) for feature extraction, not as a conventional classifier. We assume that a CNN trained with forged signatures can extract effective features (called the S-vector), which are common in forging activities such as hesitation and delay before drawing the complicated part. The proposed scheme also exploits an autoencoder (AE) as a classifier, and the S-vector is used as the input vector to the AE. An AE has high accuracy for one-class distinction problems such as signature verification, and is also greatly dependent on the accuracy of the input data. The S-vector is valuable as the input of the AE and, consequently, could lead to improved verification accuracy, especially for distinguishing forged signatures. Compared to the previous work, i.e., the MLP-based finger-drawn signature verification scheme, the proposed scheme decreases the equal error rate by 13.7 percentage points, specifically from 18.1% to 4.4%, for discriminating forged signatures.

  1. Classification of breast cancer cytological specimen using convolutional neural network

    Science.gov (United States)

    Żejmo, Michał; Kowal, Marek; Korbicz, Józef; Monczak, Roman

    2017-01-01

    The paper presents a deep learning approach for automatic classification of breast tumors based on fine needle cytology. The main aim of the system is to distinguish benign from malignant cases based on microscopic images. Experiment was carried out on cytological samples derived from 50 patients (25 benign cases + 25 malignant cases) diagnosed in Regional Hospital in Zielona Góra. To classify microscopic images, we used convolutional neural networks (CNN) of two types: GoogLeNet and AlexNet. Due to the very large size of images of cytological specimen (on average 200000 × 100000 pixels), they were divided into smaller patches of size 256 × 256 pixels. Breast cancer classification usually is based on morphometric features of nuclei. Therefore, training and validation patches were selected using Support Vector Machine (SVM) so that suitable amount of cell material was depicted. Neural classifiers were tuned using GPU accelerated implementation of gradient descent algorithm. Training error was defined as a cross-entropy classification loss. Classification accuracy was defined as the percentage ratio of successfully classified validation patches to the total number of validation patches. The best accuracy rate of 83% was obtained by GoogLeNet model. We observed that more misclassified patches belong to malignant cases.

  2. Understanding the Convolutional Neural Networks with Gradient Descent and Backpropagation

    Science.gov (United States)

    Zhou, XueFei

    2018-04-01

    With the development of computer technology, the applications of machine learning are more and more extensive, and machine learning is providing endless opportunities to develop new applications. One of those applications is image recognition using Convolutional Neural Networks (CNNs). The CNN is one of the most common algorithms in image recognition, and it is important for every scholar interested in this field to understand its theory and structure. CNNs are mainly used in computer identification, especially in voice and text recognition and other applications. They utilize a hierarchical structure with different layers to accelerate computing speed. In addition, the greatest features of CNNs are weight sharing and dimension reduction, and these consolidate the high effectiveness and efficiency of CNNs with ideal computing speed and error rate. With the help of other learning algorithms, CNNs can be used in several scenarios for machine learning, especially for deep learning. Based on a general introduction to the background and the core solution, the CNN, this paper focuses on summarizing how Gradient Descent and Backpropagation work, and how they contribute to the high performance of CNNs. Some practical applications are also discussed in the following parts. The last section presents the conclusion and some perspectives on future work.
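
    As a concrete illustration of the mechanics summarized above, the numpy sketch below trains a one-hidden-layer network by gradient descent with hand-coded backpropagation; the toy data, layer sizes and learning rate are illustrative choices, not taken from the paper.

    # Minimal numpy sketch of gradient descent with backpropagation for a
    # one-hidden-layer network (sigmoid units, squared error); illustrative only.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(8, 2))                      # toy inputs
    y = (X[:, :1] * X[:, 1:2] > 0).astype(float)     # toy targets: 1 when both inputs share a sign

    W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
    W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    lr = 0.5
    for _ in range(2000):
        # forward pass
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        loss = np.mean((out - y) ** 2)
        # backward pass (chain rule, layer by layer)
        d_out = 2 * (out - y) / len(X) * out * (1 - out)
        d_h = d_out @ W2.T * h * (1 - h)
        W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)
    print("final loss:", loss)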

  3. Texture synthesis using convolutional neural networks with long-range consistency and spectral constraints

    NARCIS (Netherlands)

    Schreiber, Shaun; Geldenhuys, Jaco; Villiers, De Hendrik

    2017-01-01

    Procedural texture generation enables the creation of richer and more detailed virtual environments without the help of an artist. However, finding a flexible generative model of real world textures remains an open problem. We present a novel Convolutional Neural Network based texture model...

  4. Alcoholism Detection by Data Augmentation and Convolutional Neural Network with Stochastic Pooling.

    Science.gov (United States)

    Wang, Shui-Hua; Lv, Yi-Ding; Sui, Yuxiu; Liu, Shuai; Wang, Su-Jing; Zhang, Yu-Dong

    2017-11-17

    Alcohol use disorder (AUD) is an important brain disease that alters the brain structure. Recently, scholars have tended to use computer vision based techniques to detect AUD. We collected 235 subjects, 114 alcoholic and 121 non-alcoholic. Among the 235 images, 100 were used as the training set, to which a data augmentation method was applied. The remaining 135 images were used as the test set. Further, we chose the latest powerful technique, the convolutional neural network (CNN), based on convolutional layers, rectified linear unit layers, pooling layers, fully connected layers, and a softmax layer. We also compared three different pooling techniques: max pooling, average pooling, and stochastic pooling. The results showed that our method achieved a sensitivity of 96.88%, a specificity of 97.18%, and an accuracy of 97.04%. Our method was better than three state-of-the-art approaches. Moreover, stochastic pooling performed better than max pooling and average pooling. We validated that a CNN with five convolution layers and two fully connected layers performed the best. The GPU yielded a 149× acceleration in training and a 166× acceleration in testing, compared to the CPU.
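
    The sketch below illustrates the stochastic pooling operation compared above: activations in a pooling window are sampled with probability proportional to their magnitude, in contrast to max or average pooling. The window values are hypothetical and the implementation follows the commonly cited formulation rather than the authors' code.

    # Sketch of stochastic pooling on one 2x2 window; illustrative only.
    import numpy as np

    def stochastic_pool(window, rng):
        a = np.maximum(window.ravel(), 0.0)      # assume post-ReLU activations
        if a.sum() == 0:
            return 0.0
        p = a / a.sum()                          # probabilities proportional to magnitude
        return rng.choice(a, p=p)                # sample one activation

    rng = np.random.default_rng(0)
    window = np.array([[0.1, 0.9],
                       [0.0, 0.4]])
    print("max:", window.max(),
          "avg:", window.mean(),
          "stochastic:", stochastic_pool(window, rng))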

  5. Quantum mean-field decoding algorithm for error-correcting codes

    International Nuclear Information System (INIS)

    Inoue, Jun-ichi; Saika, Yohei; Okada, Masato

    2009-01-01

    We numerically examine a quantum version of TAP (Thouless-Anderson-Palmer)-like mean-field algorithm for the problem of error-correcting codes. For a class of the so-called Sourlas error-correcting codes, we check the usefulness to retrieve the original bit-sequence (message) with a finite length. The decoding dynamics is derived explicitly and we evaluate the average-case performance through the bit-error rate (BER).

  6. Entanglement renormalization, quantum error correction, and bulk causality

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Isaac H. [IBM T.J. Watson Research Center,1101 Kitchawan Rd., Yorktown Heights, NY (United States); Kastoryano, Michael J. [NBIA, Niels Bohr Institute, University of Copenhagen, Blegdamsvej 17, 2100 Copenhagen (Denmark)

    2017-04-07

    Entanglement renormalization can be viewed as an encoding circuit for a family of approximate quantum error correcting codes. The logical information becomes progressively more well-protected against erasure errors at larger length scales. In particular, an approximate variant of holographic quantum error correcting code emerges at low energy for critical systems. This implies that two operators that are largely separated in scales behave as if they are spatially separated operators, in the sense that they obey a Lieb-Robinson type locality bound under a time evolution generated by a local Hamiltonian.

  7. Recurrent Spatial Transformer Networks

    DEFF Research Database (Denmark)

    Sønderby, Søren Kaae; Sønderby, Casper Kaae; Maaløe, Lars

    2015-01-01

    We integrate the recently proposed spatial transformer network (SPN) [Jaderberg et al. 2015] into a recurrent neural network (RNN) to form an RNN-SPN model. We use the RNN-SPN to classify digits in cluttered MNIST sequences. The proposed model achieves a single digit error of 1.5% compared to 2.9% for a convolutional network and 2.0% for convolutional networks with SPN layers. The SPN outputs a zoomed, rotated and skewed version of the input image. We investigate different down-sampling factors (ratio of pixels in input and output) for the SPN and show that the RNN-SPN model is able to down-sample the input...

  8. Thermodynamics of Error Correction

    Directory of Open Access Journals (Sweden)

    Pablo Sartori

    2015-12-01

    Full Text Available Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and work dissipated by the system during wrong incorporations. Its derivation is based on the second law of thermodynamics; hence, its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.

  9. Volterra Filtering for ADC Error Correction

    Directory of Open Access Journals (Sweden)

    J. Saliga

    2001-09-01

    Full Text Available Dynamic non-linearity of analog-to-digital converters (ADC) contributes significantly to the distortion of digitized signals. This paper introduces a new effective method for compensating such distortion based on the application of Volterra filtering. Considering an a-priori error model of the ADC allows finding an efficient inverse Volterra model for error correction. The efficiency of the proposed method is demonstrated on experimental results.

  10. Hardware-efficient bosonic quantum error-correcting codes based on symmetry operators

    Science.gov (United States)

    Niu, Murphy Yuezhen; Chuang, Isaac L.; Shapiro, Jeffrey H.

    2018-03-01

    We establish a symmetry-operator framework for designing quantum error-correcting (QEC) codes based on fundamental properties of the underlying system dynamics. Based on this framework, we propose three hardware-efficient bosonic QEC codes that are suitable for χ(2)-interaction based quantum computation in multimode Fock bases: the χ(2) parity-check code, the χ(2) embedded error-correcting code, and the χ(2) binomial code. All of these QEC codes detect photon-loss or photon-gain errors by means of photon-number parity measurements, and then correct them via χ(2) Hamiltonian evolutions and linear-optics transformations. Our symmetry-operator framework provides a systematic procedure for finding QEC codes that are not stabilizer codes, and it enables convenient extension of a given encoding to higher-dimensional qudit bases. The χ(2) binomial code is of special interest because, with m ≤ N identified from channel monitoring, it can correct m-photon-loss errors, or m-photon-gain errors, or (m-1)th-order dephasing errors using logical qudits that are encoded in O(N) photons. In comparison, other bosonic QEC codes require O(N²) photons to correct the same degree of bosonic errors. Such improved photon efficiency underscores the additional error-correction power that can be provided by channel monitoring. We develop quantum Hamming bounds for photon-loss errors in the code subspaces associated with the χ(2) parity-check code and the χ(2) embedded error-correcting code, and we prove that these codes saturate their respective bounds. Our χ(2) QEC codes exhibit hardware efficiency in that they address the principal error mechanisms and exploit the available physical interactions of the underlying hardware, thus reducing the physical resources required for implementing their encoding, decoding, and error-correction operations, and their universal encoded-basis gate sets.

  11. Dual Temporal Scale Convolutional Neural Network for Micro-Expression Recognition

    Directory of Open Access Journals (Sweden)

    Min Peng

    2017-10-01

    Full Text Available Facial micro-expression is a brief involuntary facial movement and can reveal the genuine emotion that people try to conceal. Traditional methods of spontaneous micro-expression recognition rely excessively on sophisticated hand-crafted feature design, and the recognition rate is not high enough for practical application. In this paper, we propose a Dual Temporal Scale Convolutional Neural Network (DTSCNN) for spontaneous micro-expression recognition. The DTSCNN is a two-stream network, in which different streams are used to adapt to the different frame rates of micro-expression video clips. Each stream of the DTSCNN consists of an independent shallow network to avoid the overfitting problem. Meanwhile, we fed the networks with optical-flow sequences to ensure that the shallow networks can further acquire higher-level features. Experimental results on spontaneous micro-expression databases (CASME I/II) showed that our method can achieve a recognition rate almost 10% higher than state-of-the-art methods.

  12. Dual Temporal Scale Convolutional Neural Network for Micro-Expression Recognition.

    Science.gov (United States)

    Peng, Min; Wang, Chongyang; Chen, Tong; Liu, Guangyuan; Fu, Xiaolan

    2017-01-01

    Facial micro-expression is a brief involuntary facial movement and can reveal the genuine emotion that people try to conceal. Traditional methods of spontaneous micro-expression recognition rely excessively on sophisticated hand-crafted feature design, and the recognition rate is not high enough for practical application. In this paper, we propose a Dual Temporal Scale Convolutional Neural Network (DTSCNN) for spontaneous micro-expression recognition. The DTSCNN is a two-stream network, in which different streams are used to adapt to the different frame rates of micro-expression video clips. Each stream of the DTSCNN consists of an independent shallow network to avoid the overfitting problem. Meanwhile, we fed the networks with optical-flow sequences to ensure that the shallow networks can further acquire higher-level features. Experimental results on spontaneous micro-expression databases (CASME I/II) showed that our method can achieve a recognition rate almost 10% higher than state-of-the-art methods.

  13. Dual Temporal Scale Convolutional Neural Network for Micro-Expression Recognition

    Science.gov (United States)

    Peng, Min; Wang, Chongyang; Chen, Tong; Liu, Guangyuan; Fu, Xiaolan

    2017-01-01

    Facial micro-expression is a brief involuntary facial movement and can reveal the genuine emotion that people try to conceal. Traditional methods of spontaneous micro-expression recognition rely excessively on sophisticated hand-crafted feature design, and the recognition rate is not high enough for practical application. In this paper, we propose a Dual Temporal Scale Convolutional Neural Network (DTSCNN) for spontaneous micro-expression recognition. The DTSCNN is a two-stream network, in which different streams are used to adapt to the different frame rates of micro-expression video clips. Each stream of the DTSCNN consists of an independent shallow network to avoid the overfitting problem. Meanwhile, we fed the networks with optical-flow sequences to ensure that the shallow networks can further acquire higher-level features. Experimental results on spontaneous micro-expression databases (CASME I/II) showed that our method can achieve a recognition rate almost 10% higher than state-of-the-art methods. PMID:29081753

  14. Detecting and correcting partial errors: Evidence for efficient control without conscious access.

    Science.gov (United States)

    Rochet, N; Spieser, L; Casini, L; Hasbroucq, T; Burle, B

    2014-09-01

    Appropriate reactions to erroneous actions are essential to keeping behavior adaptive. Erring, however, is not an all-or-none process: electromyographic (EMG) recordings of the responding muscles have revealed that covert incorrect response activations (termed "partial errors") occur on a proportion of overtly correct trials. The occurrence of such "partial errors" shows that incorrect response activations could be corrected online, before turning into overt errors. In the present study, we showed that, unlike overt errors, such "partial errors" are poorly consciously detected by participants, who could report only one third of their partial errors. Two parameters of the partial errors were found to predict detection: the surface of the incorrect EMG burst (larger for detected) and the correction time (between the incorrect and correct EMG onsets; longer for detected). These two parameters provided independent information. The correct(ive) responses associated with detected partial errors were larger than the "pure-correct" ones, and this increase was likely a consequence, rather than a cause, of the detection. The respective impacts of the two parameters predicting detection (incorrect surface and correction time), along with the underlying physiological processes subtending partial-error detection, are discussed.

  15. Chinese Sentence Classification Based on Convolutional Neural Network

    Science.gov (United States)

    Gu, Chengwei; Wu, Ming; Zhang, Chuang

    2017-10-01

    Sentence classification is one of the significant issues in Natural Language Processing (NLP), and feature extraction is often regarded as its key point. Traditional machine-learning approaches, such as the naive Bayes model, cannot take high-level features into consideration. A neural network for sentence classification can make use of contextual information to achieve better results in sentence classification tasks. In this paper, we focus on classifying Chinese sentences and propose a novel Convolutional Neural Network (CNN) architecture for Chinese sentence classification. In particular, whereas most previous methods use a softmax classifier for prediction, we embed a linear support vector machine in place of softmax in the deep neural network model, minimizing a margin-based loss to obtain a better result, and we use tanh as the activation function instead of ReLU. The CNN model improves the results of Chinese sentence classification tasks. Experimental results on a Chinese news title database validate the effectiveness of our model.
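
    The following numpy sketch shows the margin-based (multiclass hinge) loss that replaces the softmax objective in the model described above; the scores, labels and margin are illustrative, and the snippet covers only the loss, not the full CNN.

    # Sketch of a multiclass hinge (linear-SVM) loss used in place of softmax.
    import numpy as np

    def multiclass_hinge_loss(scores, labels, margin=1.0):
        """scores: (batch, n_classes) outputs of the last layer; labels: (batch,)."""
        correct = scores[np.arange(len(labels)), labels][:, None]
        margins = np.maximum(0.0, scores - correct + margin)
        margins[np.arange(len(labels)), labels] = 0.0   # do not penalize the true class
        return margins.sum(axis=1).mean()

    scores = np.array([[2.0, 0.5, -1.0],
                       [0.1, 1.2,  0.3]])
    labels = np.array([0, 1])
    print(multiclass_hinge_loss(scores, labels))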

  16. Convolutional networks for fast, energy-efficient neuromorphic computing

    Science.gov (United States)

    Esser, Steven K.; Merolla, Paul A.; Arthur, John V.; Cassidy, Andrew S.; Appuswamy, Rathinakumar; Andreopoulos, Alexander; Berg, David J.; McKinstry, Jeffrey L.; Melano, Timothy; Barch, Davis R.; di Nolfo, Carmelo; Datta, Pallab; Amir, Arnon; Taba, Brian; Flickner, Myron D.; Modha, Dharmendra S.

    2016-01-01

    Deep networks are now able to achieve human-level performance on a broad spectrum of recognition tasks. Independently, neuromorphic computing has now demonstrated unprecedented energy-efficiency through a new chip architecture based on spiking neurons, low precision synapses, and a scalable communication network. Here, we demonstrate that neuromorphic computing, despite its novel architectural primitives, can implement deep convolution networks that (i) approach state-of-the-art classification accuracy across eight standard datasets encompassing vision and speech, (ii) perform inference while preserving the hardware’s underlying energy-efficiency and high throughput, running on the aforementioned datasets at between 1,200 and 2,600 frames/s and using between 25 and 275 mW (effectively >6,000 frames/s per Watt), and (iii) can be specified and trained using backpropagation with the same ease-of-use as contemporary deep learning. This approach allows the algorithmic power of deep learning to be merged with the efficiency of neuromorphic processors, bringing the promise of embedded, intelligent, brain-inspired computing one step closer. PMID:27651489

  17. Virus Particle Detection by Convolutional Neural Network in Transmission Electron Microscopy Images.

    Science.gov (United States)

    Ito, Eisuke; Sato, Takaaki; Sano, Daisuke; Utagawa, Etsuko; Kato, Tsuyoshi

    2018-06-01

    A new computational method for the detection of virus particles in transmission electron microscopy (TEM) images is presented. Our approach is to use a convolutional neural network that transforms a TEM image to a probabilistic map that indicates where virus particles exist in the image. Our proposed approach automatically and simultaneously learns both discriminative features and classifier for virus particle detection by machine learning, in contrast to existing methods that are based on handcrafted features that yield many false positives and require several postprocessing steps. The detection performance of the proposed method was assessed against a dataset of TEM images containing feline calicivirus particles and compared with several existing detection methods, and the state-of-the-art performance of the developed method for detecting virus was demonstrated. Since our method is based on supervised learning that requires both the input images and their corresponding annotations, it is basically used for detection of already-known viruses. However, the method is highly flexible, and the convolutional networks can adapt themselves to any virus particles by learning automatically from an annotated dataset.

  18. Phase transitions in glassy systems via convolutional neural networks

    Science.gov (United States)

    Fang, Chao

    Machine learning is a powerful approach commonplace in industry to tackle large data sets. Most recently, it has found its way into condensed matter physics, allowing for the first time the study of, e.g., topological phase transitions and strongly-correlated electron systems. The study of spin glasses is plagued by finite-size effects due to the long thermalization times needed. Here we use convolutional neural networks in an attempt to detect a phase transition in three-dimensional Ising spin glasses. Our results are compared to traditional approaches.

  19. Weed Growth Stage Estimator Using Deep Convolutional Neural Networks

    DEFF Research Database (Denmark)

    Teimouri, Nima; Dyrmann, Mads; Nielsen, Per Rydahl

    2018-01-01

    This study outlines a new method of automatically estimating weed species and growth stages (from cotyledon until eight leaves are visible) of in situ images covering 18 weed species or families. Images of weeds growing within a variety of crops were gathered across variable environmental conditions ... in estimating the number of leaves and 96% accuracy when accepting a deviation of two leaves. These results show that this new method of using deep convolutional neural networks has a relatively high ability to estimate early growth stages across a wide variety of weed species...

  20. Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition

    Directory of Open Access Journals (Sweden)

    Francisco Javier Ordóñez

    2016-01-01

    Full Text Available Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing this temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we propose a generic deep framework for activity recognition based on convolutional and LSTM recurrent units, which: (i) is suitable for multimodal wearable sensors; (ii) can perform sensor fusion naturally; (iii) does not require expert knowledge in designing features; and (iv) explicitly models the temporal dynamics of feature activations. We evaluate our framework on two datasets, one of which has been used in a public activity recognition challenge. Our results show that our framework outperforms competing deep non-recurrent networks on the challenge dataset by 4% on average; outperforming some of the previous reported results by up to 9%. Our results show that the framework can be applied to homogeneous sensor modalities, but can also fuse multimodal sensors to improve performance. We characterise key architectural hyperparameters' influence on performance to provide insights about their optimisation.
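
    A minimal PyTorch sketch of the convolutional-plus-LSTM idea is given below: 1D convolutions over the sensor time axis followed by an LSTM layer and a classifier. The channel counts, window length and class count are placeholders, not the authors' exact architecture.

    # Minimal PyTorch sketch of a convolutional + LSTM model for wearable sensor
    # windows; layer sizes are illustrative only.
    import torch
    import torch.nn as nn

    class ConvLSTM(nn.Module):
        def __init__(self, n_channels=9, n_classes=6, hidden=64):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv1d(n_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
                nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            )
            self.lstm = nn.LSTM(input_size=64, hidden_size=hidden, batch_first=True)
            self.fc = nn.Linear(hidden, n_classes)

        def forward(self, x):                 # x: (batch, time, channels)
            x = self.conv(x.transpose(1, 2))  # convolve over time -> (batch, 64, time)
            x, _ = self.lstm(x.transpose(1, 2))
            return self.fc(x[:, -1])          # classify from the last time step

    model = ConvLSTM()
    windows = torch.randn(4, 128, 9)          # 4 windows, 128 samples, 9 sensor channels
    print(model(windows).shape)               # -> torch.Size([4, 6])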

  1. Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition.

    Science.gov (United States)

    Ordóñez, Francisco Javier; Roggen, Daniel

    2016-01-18

    Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing this temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we propose a generic deep framework for activity recognition based on convolutional and LSTM recurrent units, which: (i) is suitable for multimodal wearable sensors; (ii) can perform sensor fusion naturally; (iii) does not require expert knowledge in designing features; and (iv) explicitly models the temporal dynamics of feature activations. We evaluate our framework on two datasets, one of which has been used in a public activity recognition challenge. Our results show that our framework outperforms competing deep non-recurrent networks on the challenge dataset by 4% on average; outperforming some of the previous reported results by up to 9%. Our results show that the framework can be applied to homogeneous sensor modalities, but can also fuse multimodal sensors to improve performance. We characterise key architectural hyperparameters' influence on performance to provide insights about their optimisation.

  2. Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition

    Science.gov (United States)

    Ordóñez, Francisco Javier; Roggen, Daniel

    2016-01-01

    Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing this temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we propose a generic deep framework for activity recognition based on convolutional and LSTM recurrent units, which: (i) is suitable for multimodal wearable sensors; (ii) can perform sensor fusion naturally; (iii) does not require expert knowledge in designing features; and (iv) explicitly models the temporal dynamics of feature activations. We evaluate our framework on two datasets, one of which has been used in a public activity recognition challenge. Our results show that our framework outperforms competing deep non-recurrent networks on the challenge dataset by 4% on average; outperforming some of the previous reported results by up to 9%. Our results show that the framework can be applied to homogeneous sensor modalities, but can also fuse multimodal sensors to improve performance. We characterise key architectural hyperparameters’ influence on performance to provide insights about their optimisation. PMID:26797612

  3. Automatic Quality Assessment of Echocardiograms Using Convolutional Neural Networks: Feasibility on the Apical Four-Chamber View.

    Science.gov (United States)

    Abdi, Amir H; Luong, Christina; Tsang, Teresa; Allan, Gregory; Nouranian, Saman; Jue, John; Hawley, Dale; Fleming, Sarah; Gin, Ken; Swift, Jody; Rohling, Robert; Abolmaesumi, Purang

    2017-06-01

    Echocardiography (echo) is a skilled technical procedure that depends on the experience of the operator. The aim of this paper is to reduce user variability in data acquisition by automatically computing a score of echo quality for operator feedback. To do this, a deep convolutional neural network model, trained on a large set of samples, was developed for scoring apical four-chamber (A4C) echo. In this paper, 6,916 end-systolic echo images were manually studied by an expert cardiologist and were assigned a score between 0 (not acceptable) and 5 (excellent). The images were divided into two independent training-validation and test sets. The network architecture and its parameters were based on the stochastic approach of the particle swarm optimization on the training-validation data. The mean absolute error between the scores from the ultimately trained model and the expert's manual scores was 0.71 ± 0.58. The reported error was comparable to the measured intra-rater reliability. The learned features of the network were visually interpretable and could be mapped to the anatomy of the heart in the A4C echo, giving confidence in the training result. The computation time for the proposed network architecture, running on a graphics processing unit, was less than 10 ms per frame, sufficient for real-time deployment. The proposed approach has the potential to facilitate the widespread use of echo at the point-of-care and enable early and timely diagnosis and treatment. Finally, the approach did not use any specific assumptions about the A4C echo, so it could be generalizable to other standard echo views.

  4. Wavelet-enhanced convolutional neural network: a new idea in a deep learning paradigm.

    Science.gov (United States)

    Savareh, Behrouz Alizadeh; Emami, Hassan; Hajiabadi, Mohamadreza; Azimi, Seyed Majid; Ghafoori, Mahyar

    2018-05-29

    Manual brain tumor segmentation is a challenging task that requires the use of machine learning techniques. One of the machine learning techniques that has been given much attention is the convolutional neural network (CNN). The performance of the CNN can be enhanced by combining other data analysis tools such as wavelet transform. In this study, one of the famous implementations of CNN, a fully convolutional network (FCN), was used in brain tumor segmentation and its architecture was enhanced by wavelet transform. In this combination, a wavelet transform was used as a complementary and enhancing tool for CNN in brain tumor segmentation. Comparing the performance of basic FCN architecture against the wavelet-enhanced form revealed a remarkable superiority of enhanced architecture in brain tumor segmentation tasks. Using mathematical functions and enhancing tools such as wavelet transform and other mathematical functions can improve the performance of CNN in any image processing task such as segmentation and classification.
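
    One simple way to realize the "wavelet as a complementary tool" idea is to stack wavelet sub-bands with the image as extra CNN input channels; the numpy sketch below does this with a hand-written single-level Haar transform. This is a generic illustration under that assumption, not the authors' wavelet-enhanced FCN architecture.

    # Sketch: single-level Haar decomposition stacked as extra CNN input channels.
    import numpy as np

    def haar_level1(img):
        """Single-level 2D Haar transform of an even-sized image."""
        a = (img[0::2, :] + img[1::2, :]) / 2.0    # row averages
        d = (img[0::2, :] - img[1::2, :]) / 2.0    # row details
        ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
        lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
        hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
        hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
        return ll, lh, hl, hh

    img = np.random.rand(128, 128)                 # hypothetical MRI slice
    subbands = haar_level1(img)
    # Upsample sub-bands back to the image size and stack as additional channels.
    channels = [img] + [np.kron(s, np.ones((2, 2))) for s in subbands]
    cnn_input = np.stack(channels, axis=0)         # shape (5, 128, 128)
    print(cnn_input.shape)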

  5. Production-Level Facial Performance Capture Using Deep Convolutional Neural Networks

    OpenAIRE

    Laine, Samuli; Karras, Tero; Aila, Timo; Herva, Antti; Saito, Shunsuke; Yu, Ronald; Li, Hao; Lehtinen, Jaakko

    2016-01-01

    We present a real-time deep learning framework for video-based facial performance capture -- the dense 3D tracking of an actor's face given a monocular video. Our pipeline begins with accurately capturing a subject using a high-end production facial capture pipeline based on multi-view stereo tracking and artist-enhanced animations. With 5-10 minutes of captured footage, we train a convolutional neural network to produce high-quality output, including self-occluded regions, from a monocular video...

  6. Phase correction and error estimation in InSAR time series analysis

    Science.gov (United States)

    Zhang, Y.; Fattahi, H.; Amelung, F.

    2017-12-01

    During the last decade several InSAR time series approaches have been developed in response to the non-ideal acquisition strategy of SAR satellites, such as large spatial and temporal baselines with irregular acquisitions. The small baseline tubes and regular acquisitions of new SAR satellites such as Sentinel-1 allow us to form fully connected networks of interferograms and simplify the time series analysis into a weighted least squares inversion of an over-determined system. Such a robust inversion allows us to focus more on understanding the different components in InSAR time series and their uncertainties. We present an open-source python-based package for InSAR time series analysis, called PySAR (https://yunjunz.github.io/PySAR/), with unique functionalities for obtaining unbiased ground displacement time series, geometrical and atmospheric correction of InSAR data, and quantifying the InSAR uncertainty. Our implemented strategy contains several features including: 1) improved spatial coverage using a coherence-based network of interferograms, 2) unwrapping error correction using phase closure or bridging, 3) tropospheric delay correction using weather models and empirical approaches, 4) DEM error correction, 5) optimal selection of the reference date and automatic outlier detection, 6) InSAR uncertainty due to the residual tropospheric delay, decorrelation and residual DEM error, and 7) the variance-covariance matrix of final products for geodetic inversion. We demonstrate the performance using SAR datasets acquired by Cosmo-SkyMed, TerraSAR-X, Sentinel-1 and ALOS/ALOS-2, with application to the highly non-linear volcanic deformation in Japan and Ecuador (figure 1). Our result shows precursory deformation before the 2015 eruptions of Cotopaxi volcano, with a maximum uplift of 3.4 cm on the western flank (fig. 1b), with a standard deviation of 0.9 cm (fig. 1a), supporting the finding by Morales-Rivera et al. (2017, GRL); and a post-eruptive subsidence on the same ...
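
    The core inversion mentioned above (a weighted least-squares solution of an over-determined, fully connected interferogram network) can be sketched in a few lines of numpy; the acquisition dates, interferogram pairs, weights and phases below are toy values, not real InSAR data or PySAR code.

    # Toy weighted least-squares inversion of an interferogram network.
    import numpy as np

    dates = 4
    # Interferogram i connects dates (m, s): phase_i = d[s] - d[m] (first date fixed to 0).
    pairs = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
    A = np.zeros((len(pairs), dates - 1))
    for i, (m, s) in enumerate(pairs):
        if s > 0: A[i, s - 1] = 1.0
        if m > 0: A[i, m - 1] = -1.0

    true_d = np.array([0.0, 1.0, 2.5, 4.0])        # toy displacement time series
    phase = np.array([true_d[s] - true_d[m] for m, s in pairs])
    phase += np.random.default_rng(0).normal(0, 0.1, len(pairs))   # add noise

    w = np.full(len(pairs), 1.0)                   # e.g. coherence-based weights
    W = np.sqrt(np.diag(w))
    d_hat, *_ = np.linalg.lstsq(W @ A, W @ phase, rcond=None)
    print("estimated displacements:", np.r_[0.0, d_hat])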

  7. Classifying magnetic resonance image modalities with convolutional neural networks

    Science.gov (United States)

    Remedios, Samuel; Pham, Dzung L.; Butman, John A.; Roy, Snehashis

    2018-02-01

    Magnetic Resonance (MR) imaging allows the acquisition of images with different contrast properties depending on the acquisition protocol and the magnetic properties of tissues. Many MR brain image processing techniques, such as tissue segmentation, require multiple MR contrasts as inputs, and each contrast is treated differently. Thus it is advantageous to automate the identification of image contrasts for various purposes, such as facilitating image processing pipelines, and managing and maintaining large databases via content-based image retrieval (CBIR). Most automated CBIR techniques focus on a two-step process: extracting features from data and classifying the image based on these features. We present a novel 3D deep convolutional neural network (CNN)- based method for MR image contrast classification. The proposed CNN automatically identifies the MR contrast of an input brain image volume. Specifically, we explored three classification problems: (1) identify T1-weighted (T1-w), T2-weighted (T2-w), and fluid-attenuated inversion recovery (FLAIR) contrasts, (2) identify pre vs postcontrast T1, (3) identify pre vs post-contrast FLAIR. A total of 3418 image volumes acquired from multiple sites and multiple scanners were used. To evaluate each task, the proposed model was trained on 2137 images and tested on the remaining 1281 images. Results showed that image volumes were correctly classified with 97.57% accuracy.

  8. Photon beam convolution using polyenergetic energy deposition kernels

    International Nuclear Information System (INIS)

    Hoban, P.W.; Murray, D.C.; Round, W.H.

    1994-01-01

    In photon beam convolution calculations where polyenergetic energy deposition kernels (EDKs) are used, the primary photon energy spectrum should be correctly accounted for in Monte Carlo generation of EDKs. This requires the probability of interaction, determined by the linear attenuation coefficient, μ, to be taken into account when primary photon interactions are forced to occur at the EDK origin. The use of primary and scattered EDKs generated with a fixed photon spectrum can give rise to an error in the dose calculation due to neglecting the effects of beam hardening with depth. The proportion of primary photon energy that is transferred to secondary electrons increases with depth of interaction, due to the increase in the ratio μ_ab/μ as the beam hardens. Convolution depth-dose curves calculated using polyenergetic EDKs generated for the primary photon spectra which exist at depths of 0, 20 and 40 cm in water, show a fall-off which is too steep when compared with EGS4 Monte Carlo results. A beam hardening correction factor applied to primary and scattered 0 cm EDKs, based on the ratio of kerma to terma at each depth, gives primary, scattered and total dose in good agreement with Monte Carlo results. (Author)

  9. A Parallel Strategy for Convolutional Neural Network Based on Heterogeneous Cluster for Mobile Information System

    Directory of Open Access Journals (Sweden)

    Jilin Zhang

    2017-01-01

    Full Text Available With the development of mobile systems, we gain many benefits and conveniences from mobile devices; at the same time, the information gathered by smartphones, such as location and environment, is also valuable for businesses to provide more intelligent services for customers. More and more machine learning methods have been used in the field of mobile information systems to study user behavior and classify usage patterns, especially the convolutional neural network. With the increase of model training parameters and data scale, traditional single-machine training methods cannot meet the time-complexity requirements of practical application scenarios. Current training frameworks often use simple data-parallel or model-parallel methods to speed up the training process, which is why heterogeneous computing resources have not been fully utilized. To solve these problems, our paper proposes a delayed-synchronization convolutional neural network parallel strategy, which leverages heterogeneous systems. The strategy is based on both synchronous and asynchronous parallel approaches; the model training process reduces its dependence on the heterogeneous architecture while ensuring model convergence, so the convolutional neural network framework is more adaptive to different heterogeneous system environments. The experimental results show that the proposed delayed-synchronization strategy can achieve at least a three-times speedup compared to traditional data parallelism.
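
    The toy Python sketch below illustrates only the delayed-synchronization idea: workers of different speeds apply updates asynchronously, and a synchronization barrier is enforced once the gap between the fastest and slowest worker exceeds a staleness bound. The gradients are random stand-ins and the scheme is a strong simplification of the strategy proposed in the paper.

    # Toy simulation of a staleness-bounded (delayed) synchronization rule.
    import numpy as np

    rng = np.random.default_rng(0)
    n_workers, dim, lr, max_staleness = 4, 3, 0.05, 2

    w = np.zeros(dim)                                # shared parameters
    local_steps = np.zeros(n_workers, dtype=int)     # local step counter per worker

    for _ in range(20):
        for k in range(n_workers):
            # heterogeneous speeds: fast workers complete more mini-batches per round
            for _ in range(rng.integers(1, 4)):
                grad = rng.normal(size=dim)          # stand-in for a real gradient
                w -= lr * grad                       # asynchronous update of shared parameters
                local_steps[k] += 1
        if local_steps.max() - local_steps.min() > max_staleness:
            # delayed synchronization: block fast workers until slow ones catch up
            local_steps[:] = local_steps.max()

    print("final parameters:", w, "steps:", local_steps)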

  10. Quantum algorithms and quantum maps - implementation and error correction

    International Nuclear Information System (INIS)

    Alber, G.; Shepelyansky, D.

    2005-01-01

    Full text: We investigate the dynamics of the quantum tent map under the influence of errors and explore the possibilities of quantum error correcting methods for the purpose of stabilizing this quantum algorithm. It is known that static but uncontrollable inter-qubit couplings between the qubits of a quantum information processor lead to a rapid Gaussian decay of the fidelity of the quantum state. We present a new error correcting method which slows down this fidelity decay to a linear-in-time exponential one. One of its advantages is that it does not require redundancy so that all physical qubits involved can be used for logical purposes. We also study the influence of decoherence due to spontaneous decay processes which can be corrected by quantum jump-codes. It is demonstrated how universal encoding can be performed in these code spaces. For this purpose we discuss a new entanglement gate which can be used for lowest level encoding in concatenated error-correcting architectures. (author)

  11. Do Convolutional Neural Networks Learn Class Hierarchy?

    Science.gov (United States)

    Bilal, Alsallakh; Jourabloo, Amin; Ye, Mao; Liu, Xiaoming; Ren, Liu

    2018-01-01

    Convolutional Neural Networks (CNNs) currently achieve state-of-the-art accuracy in image classification. With a growing number of classes, the accuracy usually drops as the possibilities of confusion increase. Interestingly, the class confusion patterns follow a hierarchical structure over the classes. We present visual-analytics methods to reveal and analyze this hierarchy of similar classes in relation with CNN-internal data. We found that this hierarchy not only dictates the confusion patterns between the classes, it furthermore dictates the learning behavior of CNNs. In particular, the early layers in these networks develop feature detectors that can separate high-level groups of classes quite well, even after a few training epochs. In contrast, the latter layers require substantially more epochs to develop specialized feature detectors that can separate individual classes. We demonstrate how these insights are key to significant improvement in accuracy by designing hierarchy-aware CNNs that accelerate model convergence and alleviate overfitting. We further demonstrate how our methods help in identifying various quality issues in the training data.

  12. Linear-regression convolutional neural network for fully automated coronary lumen segmentation in intravascular optical coherence tomography

    Science.gov (United States)

    Yong, Yan Ling; Tan, Li Kuo; McLaughlin, Robert A.; Chee, Kok Han; Liew, Yih Miin

    2017-12-01

    Intravascular optical coherence tomography (OCT) is an optical imaging modality commonly used in the assessment of coronary artery diseases during percutaneous coronary intervention. Manual segmentation to assess luminal stenosis from OCT pullback scans is challenging and time consuming. We propose a linear-regression convolutional neural network to automatically perform vessel lumen segmentation, parameterized in terms of radial distances from the catheter centroid in polar space. Benchmarked against gold-standard manual segmentation, our proposed algorithm achieves average locational accuracy of the vessel wall of 22 microns, and 0.985 and 0.970 in Dice coefficient and Jaccard similarity index, respectively. The average absolute error of luminal area estimation is 1.38%. The processing rate is 40.6 ms per image, suggesting the potential to be incorporated into a clinical workflow and to provide quantitative assessment of vessel lumen in an intraoperative time frame.
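
    The reported overlap metrics can be computed as in the numpy sketch below, which evaluates the Dice coefficient and Jaccard similarity index between a predicted and a reference lumen mask; the masks here are random toy arrays, not OCT segmentations.

    # Sketch of the Dice and Jaccard overlap metrics on binary masks.
    import numpy as np

    def dice_jaccard(pred, ref):
        pred, ref = pred.astype(bool), ref.astype(bool)
        inter = np.logical_and(pred, ref).sum()
        dice = 2.0 * inter / (pred.sum() + ref.sum())
        jaccard = inter / np.logical_or(pred, ref).sum()
        return dice, jaccard

    rng = np.random.default_rng(0)
    ref = rng.random((256, 256)) > 0.5
    pred = ref.copy()
    pred[:8] = ~pred[:8]                      # perturb a few rows to mimic segmentation errors
    print("Dice %.3f, Jaccard %.3f" % dice_jaccard(pred, ref))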

  13. Controlling qubit drift by recycling error correction syndromes

    Science.gov (United States)

    Blume-Kohout, Robin

    2015-03-01

    Physical qubits are susceptible to systematic drift, above and beyond the stochastic Markovian noise that motivates quantum error correction. This parameter drift must be compensated - if it is ignored, error rates will rise to intolerable levels - but compensation requires knowing the parameters' current value, which appears to require halting experimental work to recalibrate (e.g. via quantum tomography). Fortunately, this is untrue. I show how to perform on-the-fly recalibration on the physical qubits in an error correcting code, using only information from the error correction syndromes. The algorithm for detecting and compensating drift is very simple - yet, remarkably, when used to compensate Brownian drift in the qubit Hamiltonian, it achieves a stabilized error rate very close to the theoretical lower bound. Against 1/f noise, it is less effective only because 1/f noise is (like white noise) dominated by high-frequency fluctuations that are uncompensatable. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE

  14. Energy efficiency of error correction on wireless systems

    NARCIS (Netherlands)

    Havinga, Paul J.M.

    1999-01-01

    Since high error rates are inevitable to the wireless environment, energy-efficient error-control is an important issue for mobile computing systems. We have studied the energy efficiency of two different error correction mechanisms and have measured the efficiency of an implementation in software.

  15. Radio frequency interference mitigation using deep convolutional neural networks

    Science.gov (United States)

    Akeret, J.; Chang, C.; Lucchi, A.; Refregier, A.

    2017-01-01

    We propose a novel approach for mitigating radio frequency interference (RFI) signals in radio data using the latest advances in deep learning. We employ a special type of Convolutional Neural Network, the U-Net, that enables the classification of clean signal and RFI signatures in 2D time-ordered data acquired from a radio telescope. We train and assess the performance of this network using the HIDE & SEEK radio data simulation and processing packages, as well as early Science Verification data acquired with the 7m single-dish telescope at the Bleien Observatory. We find that our U-Net implementation shows competitive accuracy compared to classical RFI mitigation algorithms such as SEEK's SUMTHRESHOLD implementation. We publish our U-Net software package on GitHub under the GPLv3 license.

  16. Accounting for optical errors in microtensiometry.

    Science.gov (United States)

    Hinton, Zachary R; Alvarez, Nicolas J

    2018-09-15

    Drop shape analysis (DSA) techniques measure interfacial tension subject to error in image analysis and the optical system. While considerable efforts have been made to minimize image analysis errors, very little work has treated optical errors. There are two main sources of error when considering the optical system: the angle of misalignment and the choice of focal plane. Due to the convoluted nature of these sources, small angles of misalignment can lead to large errors in measured curvature. We demonstrate using microtensiometry the contributions of these sources to measured errors in radius, and, more importantly, deconvolute the effects of misalignment and focal plane. Our findings are expected to have broad implications on all optical techniques measuring interfacial curvature. A geometric model is developed to analytically determine the contributions of misalignment angle and choice of focal plane on measurement error for spherical cap interfaces. This work utilizes a microtensiometer to validate the geometric model and to quantify the effect of both sources of error. For the case of a microtensiometer, an empirical calibration is demonstrated that corrects for optical errors and drastically simplifies implementation. The combination of geometric modeling and experimental results reveal a convoluted relationship between the true and measured interfacial radius as a function of the misalignment angle and choice of focal plane. The validated geometric model produces a full operating window that is strongly dependent on the capillary radius and spherical cap height. In all cases, the contribution of optical errors is minimized when the height of the spherical cap is equivalent to the capillary radius, i.e. a hemispherical interface. The understanding of these errors allow for correct measure of interfacial curvature and interfacial tension regardless of experimental setup. For the case of microtensiometry, this greatly decreases the time for experimental setup

  17. Software for Correcting the Dynamic Error of Force Transducers

    Directory of Open Access Journals (Sweden)

    Naoki Miyashita

    2014-07-01

    Full Text Available Software which corrects the dynamic error of force transducers in impact force measurements using their own output signal has been developed. The software corrects the output waveform of the transducers using the output waveform itself, estimates its uncertainty and displays the results. In the experiment, the dynamic errors of three transducers of the same model are evaluated using the Levitation Mass Method (LMM), in which the impact forces applied to the transducers are accurately determined as the inertial force of the moving part of an aerostatic linear bearing. The parameters for correcting the dynamic error are determined from the results of one set of impact measurements of one transducer. Then, the validity of the obtained parameters is evaluated using the results of the other sets of measurements of all three transducers. The uncertainties in the uncorrected force and those in the corrected force are also estimated. If manufacturers determine the correction parameters for each model using the proposed method, and provide the software with the parameters corresponding to each model, then users can obtain the waveform corrected against dynamic error and its uncertainty. The present status and future prospects of the developed software are discussed in this paper.

  18. Joint Multi-scale Convolution Neural Network for Scene Classification of High Resolution Remote Sensing Imagery

    Directory of Open Access Journals (Sweden)

    ZHENG Zhuo

    2018-05-01

    Full Text Available High resolution remote sensing imagery scene classification is important for automatic complex scene recognition, which is a key technology for military and disaster-relief applications. In this paper, we propose a novel joint multi-scale convolution neural network (JMCNN) method using a limited amount of image data for high resolution remote sensing imagery scene classification. Different from a traditional convolutional neural network, the proposed JMCNN is an end-to-end training model with jointly enhanced high-level feature representation, which includes a multi-channel feature extractor, joint multi-scale feature fusion and a Softmax classifier. First, multi-channel and multi-scale convolutional extractors are used to extract mid-level scene features. Then, in order to achieve enhanced high-level feature representation on a limited dataset, joint multi-scale feature fusion is proposed to combine multi-channel and multi-scale features using two feature fusions. Finally, the enhanced high-level feature representation can be used for classification by the Softmax classifier. Experiments were conducted using two limited public datasets, UCM and SIRI. Compared to state-of-the-art methods, the JMCNN achieved improved performance and great robustness with average accuracies of 89.3% and 88.3% on the two datasets.
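
    The sketch below gives a minimal PyTorch rendering of the multi-channel, multi-scale fusion idea: parallel convolutional branches with different kernel sizes whose pooled features are concatenated before the classifier. Branch sizes and the class count are illustrative and do not reproduce the exact JMCNN.

    # Minimal multi-scale branch fusion sketch in PyTorch.
    import torch
    import torch.nn as nn

    class MultiScaleNet(nn.Module):
        def __init__(self, n_classes=21):
            super().__init__()
            self.branches = nn.ModuleList([
                nn.Sequential(nn.Conv2d(3, 16, k, padding=k // 2), nn.ReLU(),
                              nn.AdaptiveAvgPool2d(1))
                for k in (3, 5, 7)                      # three receptive-field scales
            ])
            self.classifier = nn.Linear(16 * 3, n_classes)

        def forward(self, x):
            feats = [b(x).flatten(1) for b in self.branches]   # fuse by concatenation
            return self.classifier(torch.cat(feats, dim=1))

    model = MultiScaleNet()
    print(model(torch.randn(2, 3, 64, 64)).shape)       # -> torch.Size([2, 21])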

  19. Spherical convolutions and their application in molecular modelling

    DEFF Research Database (Denmark)

    Boomsma, Wouter; Frellsen, Jes

    2017-01-01

    Convolutional neural networks are increasingly used outside the domain of image analysis, in particular in various areas of the natural sciences concerned with spatial data. Such networks often work out-of-the-box, and in some cases entire model architectures from image analysis can be carried over ... to other problem domains almost unaltered. Unfortunately, this convenience does not trivially extend to data in non-euclidean spaces, such as spherical data. In this paper, we introduce two strategies for conducting convolutions on the sphere, using either a spherical-polar grid or a grid based ... of spherical convolutions in the context of molecular modelling, by considering structural environments within proteins. We show that the models are capable of learning non-trivial functions in these molecular environments, and that our spherical convolutions generally outperform standard 3D convolutions...

  20. Analysis of error-correction constraints in an optical disk

    Science.gov (United States)

    Roberts, Jonathan D.; Ryley, Alan; Jones, David M.; Burke, David

    1996-07-01

    The compact disk read-only memory (CD-ROM) is a mature storage medium with complex error control. It comprises four levels of Reed Solomon codes allied to a sequence of sophisticated interleaving strategies and 8:14 modulation coding. New storage media are being developed and introduced that place still further demands on signal processing for error correction. It is therefore appropriate to explore thoroughly the limit of existing strategies to assess future requirements. We describe a simulation of all stages of the CD-ROM coding, modulation, and decoding. The results of decoding the burst error of a prescribed number of modulation bits are discussed in detail. Measures of residual uncorrected error within a sector are displayed by C1, C2, P, and Q error counts and by the status of the final cyclic redundancy check (CRC). Where each data sector is encoded separately, it is shown that error-correction performance against burst errors depends critically on the position of the burst within a sector. The C1 error measures the burst length, whereas C2 errors reflect the burst position. The performance of Reed Solomon product codes is shown by the P and Q statistics. It is shown that synchronization loss is critical near the limits of error correction. An example is given of miscorrection that is identified by the CRC check.
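
    The final CRC stage mentioned above can be illustrated with a generic CRC-32 (not the CD-ROM's specific EDC polynomial): a sector-level checksum flags residual errors, such as a burst that the Reed-Solomon layers failed to correct. The sector contents and burst position below are toy values.

    # Sketch: a sector-level CRC flags an uncorrected burst error.
    import zlib

    sector = bytes(range(256)) * 8                 # 2048-byte toy data sector
    crc_stored = zlib.crc32(sector)

    # Simulate an uncorrected burst error spanning a few bytes of the sector.
    corrupted = bytearray(sector)
    for i in range(100, 108):
        corrupted[i] ^= 0xFF

    print("clean sector passes CRC:   ", zlib.crc32(sector) == crc_stored)
    print("burst-damaged sector fails:", zlib.crc32(bytes(corrupted)) == crc_stored)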

  1. Minimal-memory realization of pearl-necklace encoders of general quantum convolutional codes

    International Nuclear Information System (INIS)

    Houshmand, Monireh; Hosseini-Khayat, Saied

    2011-01-01

    Quantum convolutional codes, like their classical counterparts, promise to offer higher error correction performance than block codes of equivalent encoding complexity, and are expected to find important applications in reliable quantum communication where a continuous stream of qubits is transmitted. Grassl and Roetteler devised an algorithm to encode a quantum convolutional code with a "pearl-necklace" encoder. Despite their algorithm's theoretical significance as a neat way of representing quantum convolutional codes, it is not well suited to practical realization. In fact, there is no straightforward way to implement any given pearl-necklace structure. This paper closes the gap between theoretical representation and practical implementation. In our previous work, we presented an efficient algorithm to find a minimal-memory realization of a pearl-necklace encoder for Calderbank-Shor-Steane (CSS) convolutional codes. This work is an extension of our previous work and presents an algorithm for turning a pearl-necklace encoder for a general (non-CSS) quantum convolutional code into a realizable quantum convolutional encoder. We show that a minimal-memory realization depends on the commutativity relations between the gate strings in the pearl-necklace encoder. We find a realization by means of a weighted graph which details the noncommutative paths through the pearl necklace. The weight of the longest path in this graph is equal to the minimal amount of memory needed to implement the encoder. The algorithm has a polynomial-time complexity in the number of gate strings in the pearl-necklace encoder.
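
    The graph step described above reduces to a longest-path computation on a weighted directed acyclic graph; the Python sketch below performs that computation on a small hypothetical graph of gate strings, where the longest-path weight corresponds to the minimal memory.

    # Sketch: longest path in a weighted DAG via memoized recursion.
    from functools import lru_cache

    edges = {          # node -> list of (successor, edge weight); hypothetical example
        "g1": [("g2", 2), ("g3", 1)],
        "g2": [("g4", 3)],
        "g3": [("g4", 1)],
        "g4": [],
    }

    @lru_cache(maxsize=None)
    def longest_from(node):
        return max((w + longest_from(nxt) for nxt, w in edges[node]), default=0)

    minimal_memory = max(longest_from(n) for n in edges)
    print("longest-path weight (minimal memory):", minimal_memory)   # -> 5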

  2. An Efficient Implementation of Deep Convolutional Neural Networks for MRI Segmentation.

    Science.gov (United States)

    Hoseini, Farnaz; Shahbahrami, Asadollah; Bayat, Peyman

    2018-02-27

    Image segmentation is one of the most common steps in digital image processing, classifying a digital image into different segments. The main goal of this paper is to segment brain tumors in magnetic resonance images (MRI) using deep learning. Tumors of different shapes, sizes, brightness and textures can appear anywhere in the brain. These complexities are the reasons to choose a high-capacity Deep Convolutional Neural Network (DCNN) containing more than one layer. The proposed DCNN contains two parts: the architecture and the learning algorithms, which are used to design a network model and to optimize parameters for the network training phase, respectively. The architecture contains five convolutional layers, all using 3 × 3 kernels, and one fully connected layer. Stacking small kernels achieves the effect of larger kernels with a smaller number of parameters and fewer computations. Using the Dice Similarity Coefficient metric, we report accuracy results on the BRATS 2016 brain tumor segmentation challenge dataset for the complete, core, and enhancing regions as 0.90, 0.85, and 0.84 respectively. The learning algorithm includes task-level parallelism. All the pixels of an MR image are classified using a patch-based approach for segmentation. We attain good performance, and the experimental results show that the proposed DCNN increases the segmentation accuracy compared to previous techniques.

  3. Object recognition using deep convolutional neural networks with complete transfer and partial frozen layers

    NARCIS (Netherlands)

    Kruithof, M.C.; Bouma, H.; Fischer, N.M.; Schutte, K.

    2016-01-01

    Object recognition is important to understand the content of video and allow flexible querying in a large number of cameras, especially for security applications. Recent benchmarks show that deep convolutional neural networks are excellent approaches for object recognition. This paper describes an

  4. Deep convolutional neural networks for estimating porous material parameters with ultrasound tomography

    Science.gov (United States)

    Lähivaara, Timo; Kärkkäinen, Leo; Huttunen, Janne M. J.; Hesthaven, Jan S.

    2018-02-01

    We study the feasibility of data-based machine learning applied to ultrasound tomography to estimate water-saturated porous material parameters. In this work, the data to train the neural networks are simulated by solving wave propagation in coupled poroviscoelastic-viscoelastic-acoustic media. As the forward model, we consider a high-order discontinuous Galerkin method, while deep convolutional neural networks are used to solve the parameter estimation problem. In the numerical experiment, we estimate the material porosity and tortuosity, while the remaining parameters, which are of less interest, are successfully marginalized in the neural-network-based inversion. Computational examples confirm the feasibility and accuracy of this approach.

  5. Spectral-spatial classification of hyperspectral image using three-dimensional convolution network

    Science.gov (United States)

    Liu, Bing; Yu, Xuchu; Zhang, Pengqiang; Tan, Xiong; Wang, Ruirui; Zhi, Lu

    2018-01-01

    Recently, hyperspectral image (HSI) classification has become a focus of research. However, the complex structure of an HSI makes feature extraction difficult to achieve. Most current methods build classifiers based on complex handcrafted features computed from the raw inputs. The design of an improved 3-D convolutional neural network (3D-CNN) model for HSI classification is described. This model extracts features from both the spectral and spatial dimensions through the application of 3-D convolutions, thereby capturing the important discrimination information encoded in multiple adjacent bands. The designed model views the HSI cube data altogether without relying on any pre- or postprocessing. In addition, the model is trained in an end-to-end fashion without any handcrafted features. The designed model was applied to three widely used HSI datasets. The experimental results demonstrate that the 3D-CNN-based method outperforms conventional methods even with limited labeled training samples.

  6. A convolutional neural network-based screening tool for X-ray serial crystallography.

    Science.gov (United States)

    Ke, Tsung Wei; Brewster, Aaron S; Yu, Stella X; Ushizima, Daniela; Yang, Chao; Sauter, Nicholas K

    2018-05-01

    A new tool is introduced for screening macromolecular X-ray crystallography diffraction images produced at an X-ray free-electron laser light source. Based on a data-driven deep learning approach, the proposed tool executes a convolutional neural network to detect Bragg spots. The automatic image-processing algorithms described can enable the classification of large data sets acquired under realistic conditions, consisting of noisy data with experimental artifacts. Outcomes are compared for different data regimes, including samples from multiple instruments and differing amounts of training data for neural network optimization.

  7. Automatic recognition of holistic functional brain networks using iteratively optimized convolutional neural networks (IO-CNN) with weak label initialization.

    Science.gov (United States)

    Zhao, Yu; Ge, Fangfei; Liu, Tianming

    2018-07-01

    fMRI data decomposition techniques have advanced significantly from shallow models such as Independent Component Analysis (ICA) and Sparse Coding and Dictionary Learning (SCDL) to deep learning models such as Deep Belief Networks (DBN) and Deep Convolutional Autoencoders (DCAE). However, the interpretation of those decomposed networks remains an open question due to the lack of functional brain atlases, the lack of correspondence between decomposed or reconstructed networks across different subjects, and significant individual variability. Recent studies showed that deep learning, especially deep convolutional neural networks (CNN), has an extraordinary ability to accommodate spatial object patterns; e.g., our recent work using 3D CNNs for fMRI-derived network classification achieved high accuracy with a remarkable tolerance for mistakenly labelled training brain networks. However, training data preparation is one of the biggest obstacles in these supervised deep learning models for functional brain network map recognition, since manual labelling is tedious and time-consuming and sometimes even introduces label mistakes. Especially for mapping functional networks in large-scale datasets, such as the hundreds of thousands of brain networks used in this paper, manual labelling becomes almost infeasible. In response, in this work we tackled both the network recognition and the training data labelling tasks by proposing a new iteratively optimized deep learning CNN (IO-CNN) framework with automatic weak label initialization, which turns functional brain network recognition into a fully automatic, large-scale classification procedure. Our extensive experiments based on fMRI data from 1099 brains in ABIDE-II showed the great promise of our IO-CNN framework. Copyright © 2018 Elsevier B.V. All rights reserved.

  8. Reducing weight precision of convolutional neural networks towards large-scale on-chip image recognition

    Science.gov (United States)

    Ji, Zhengping; Ovsiannikov, Ilia; Wang, Yibing; Shi, Lilong; Zhang, Qiang

    2015-05-01

    In this paper, we develop a server-client quantization scheme to reduce the bit resolution of a deep learning architecture, i.e., Convolutional Neural Networks, for image recognition tasks. Low bit resolution is an important factor in bringing deep learning neural networks into hardware implementation, as it directly determines the cost and power consumption. We aim to reduce the bit resolution of the network without sacrificing its performance. To this end, we design a new quantization algorithm called supervised iterative quantization to reduce the bit resolution of the learned network weights. In the training stage, the supervised iterative quantization is conducted in two steps on the server: applying k-means-based adaptive quantization to the learned network weights, and retraining the network based on the quantized weights. These two steps are alternated until the convergence criterion is met. In the testing stage, the network configuration and low-bit weights are loaded onto the client hardware device to recognize incoming input in real time, where optimized but expensive quantization becomes infeasible. Considering this, we adopt a uniform quantization for the inputs and internal network responses (called feature maps) to maintain low on-chip expenses. The Convolutional Neural Network with reduced weight and input/response precision is demonstrated in recognizing two types of images: hand-written digit images and real-life images of office scenarios. Both results show that the new network is able to achieve the performance of the neural network with full bit resolution, even though the bit resolution of both weights and inputs is significantly reduced, e.g., from 64 bits to 4-5 bits.
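
    The server-side step of the scheme described above alternates k-means quantization of the weights with retraining, while the client side applies cheap uniform quantization to inputs and feature maps. The following sketch shows only the two quantizers, with illustrative bit widths and without the retraining loop; it is an assumption-laden reconstruction, not the authors' code.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def kmeans_quantize(weights, n_bits=4):
        """Cluster one layer's weights into 2**n_bits shared values (server side).

        Returns the quantized weights and the codebook; in the scheme described,
        this step would alternate with retraining until convergence.
        """
        codebook_size = 2 ** n_bits
        w = weights.reshape(-1, 1)
        km = KMeans(n_clusters=codebook_size, n_init=10, random_state=0).fit(w)
        quantized = km.cluster_centers_[km.labels_].reshape(weights.shape)
        return quantized, km.cluster_centers_.ravel()

    def uniform_quantize(x, n_bits=5):
        """Cheap uniform quantization for inputs/feature maps on the client device."""
        lo, hi = x.min(), x.max()
        levels = 2 ** n_bits - 1
        step = (hi - lo) / levels if hi > lo else 1.0
        return lo + np.round((x - lo) / step) * step

    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.1, size=(64, 3, 3, 3))        # a toy conv-layer weight tensor
    w_q, codebook = kmeans_quantize(w, n_bits=4)
    print("distinct weight values after quantization:", np.unique(w_q).size)  # 16
    x_q = uniform_quantize(rng.normal(size=(1, 3, 32, 32)), n_bits=5)
    ```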

  9. Chromatin accessibility prediction via convolutional long short-term memory networks with k-mer embedding.

    Science.gov (United States)

    Min, Xu; Zeng, Wanwen; Chen, Ning; Chen, Ting; Jiang, Rui

    2017-07-15

    Experimental techniques for measuring chromatin accessibility are expensive and time consuming, motivating the development of computational approaches to predict open chromatin regions from DNA sequences. Along this direction, existing methods fall into two classes: one based on handcrafted k-mer features and the other based on convolutional neural networks. Although both categories have shown good performance in specific applications thus far, a comprehensive framework that integrates useful k-mer co-occurrence information with recent advances in deep learning is still lacking. We fill this gap by addressing the problem of chromatin accessibility prediction with a convolutional Long Short-Term Memory (LSTM) network with k-mer embedding. We first split DNA sequences into k-mers and pre-train k-mer embedding vectors based on the co-occurrence matrix of k-mers by using an unsupervised representation learning approach. We then construct a supervised deep learning architecture comprised of an embedding layer, three convolutional layers and a Bidirectional LSTM (BLSTM) layer for feature learning and classification. We demonstrate that our method gains high-quality fixed-length features from variable-length sequences and consistently outperforms baseline methods. We show that k-mer embedding can effectively enhance model performance by exploring different embedding strategies. We also prove the efficacy of both the convolution and the BLSTM layers by comparing two variations of the network architecture. We confirm the robustness of our model to hyper-parameters by performing sensitivity analysis. We hope our method can eventually reinforce our understanding of employing deep learning in genomic studies and shed light on research regarding mechanisms of chromatin accessibility. The source code can be downloaded from https://github.com/minxueric/ismb2017_lstm . tingchen@tsinghua.edu.cn or ruijiang@tsinghua.edu.cn. Supplementary materials are available at
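
    A minimal PyTorch sketch of the pipeline described above — k-mer tokenization, an embedding layer, a convolution, and a bidirectional LSTM — is given below. The vocabulary size, embedding dimension and use of a single convolutional layer (the paper uses three) are simplifying assumptions.

    ```python
    import torch
    import torch.nn as nn

    def to_kmers(seq, k=6, stride=1):
        """Split a DNA sequence into overlapping k-mers (tokenization step)."""
        return [seq[i:i + k] for i in range(0, len(seq) - k + 1, stride)]

    class ConvBLSTM(nn.Module):
        """Embedding -> convolution -> bidirectional LSTM -> binary output (sketch only)."""
        def __init__(self, vocab_size=4096, emb_dim=100, n_filters=64, hidden=32):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)   # pre-trained k-mer vectors would be loaded here
            self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size=8, padding=4)
            self.blstm = nn.LSTM(n_filters, hidden, batch_first=True, bidirectional=True)
            self.out = nn.Linear(2 * hidden, 1)              # open vs. closed chromatin

        def forward(self, tokens):                           # tokens: (batch, seq_len) k-mer indices
            x = self.embed(tokens).transpose(1, 2)           # (batch, emb_dim, seq_len)
            x = torch.relu(self.conv(x)).transpose(1, 2)     # (batch, seq_len, n_filters)
            _, (h, _) = self.blstm(x)                        # final hidden states of both directions
            h = torch.cat([h[0], h[1]], dim=1)
            return torch.sigmoid(self.out(h))

    print(to_kmers("ACGTACGT", k=4))                         # ['ACGT', 'CGTA', 'GTAC', 'TACG', 'ACGT']
    probs = ConvBLSTM()(torch.randint(0, 4096, (2, 200)))    # two sequences of 200 k-mers
    ```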

  10. Fully convolutional neural networks improve abdominal organ segmentation

    Science.gov (United States)

    Bobo, Meg F.; Bao, Shunxing; Huo, Yuankai; Yao, Yuang; Virostko, Jack; Plassard, Andrew J.; Lyu, Ilwoo; Assad, Albert; Abramson, Richard G.; Hilmes, Melissa A.; Landman, Bennett A.

    2018-03-01

    Abdominal image segmentation is a challenging, yet important clinical problem. Variations in body size, position, and relative organ positions greatly complicate the segmentation process. Historically, multi-atlas methods have achieved leading results across imaging modalities and anatomical targets. However, deep learning is rapidly overtaking classical approaches for image segmentation. Recently, Zhou et al. showed that fully convolutional networks produce excellent results in abdominal organ segmentation of computed tomography (CT) scans. Yet, deep learning approaches have not been applied to whole-abdomen magnetic resonance imaging (MRI) segmentation. Herein, we evaluate the applicability of an existing fully convolutional neural network (FCNN) designed for CT imaging to segment abdominal organs on T2-weighted (T2w) MRIs with two examples. In the primary example, we compare a classical multi-atlas approach with the FCNN on forty-five T2w MRIs acquired from splenomegaly patients with five organs labeled (liver, spleen, left kidney, right kidney, and stomach). Thirty-six images were used for training while nine were used for testing. The FCNN resulted in a Dice similarity coefficient (DSC) of 0.930 in spleens, 0.730 in left kidneys, 0.780 in right kidneys, 0.913 in livers, and 0.556 in stomachs. The performance measures for livers, spleens, right kidneys, and stomachs were significantly better than multi-atlas (p < 0.05, Wilcoxon rank-sum test). In a secondary example, we compare the multi-atlas approach with the FCNN on 138 distinct T2w MRIs with manually labeled pancreases (one label). On the pancreas dataset, the FCNN resulted in a median DSC of 0.691 in pancreases versus 0.287 for multi-atlas. The results are highly promising given the relatively limited training data and the lack of task-specific training of the FCNN model, and illustrate the potential of deep learning approaches to transcend imaging modalities.

  11. PIV-DCNN: cascaded deep convolutional neural networks for particle image velocimetry

    Science.gov (United States)

    Lee, Yong; Yang, Hua; Yin, Zhouping

    2017-12-01

    Velocity estimation (extracting the displacement vector information) from the particle image pairs is of critical importance for particle image velocimetry. This problem is mostly transformed into finding the sub-pixel peak in a correlation map. To address the original displacement extraction problem, we propose a different evaluation scheme (PIV-DCNN) with four-level regression deep convolutional neural networks. At each level, the networks are trained to predict a vector from two input image patches. The low-level network is skilled at large displacement estimation and the high-level networks are devoted to improving the accuracy. Outlier replacement and symmetric window offset operation glue the well-functioning networks in a cascaded manner. Through comparison with the standard PIV methods (one-pass cross-correlation method, three-pass window deformation), the practicability of the proposed PIV-DCNN is verified by the application to a diversity of synthetic and experimental PIV images.

  12. Digital Tomosynthesis System Geometry Analysis Using Convolution-Based Blur-and-Add (BAA) Model.

    Science.gov (United States)

    Wu, Meng; Yoon, Sungwon; Solomon, Edward G; Star-Lack, Josh; Pelc, Norbert; Fahrig, Rebecca

    2016-01-01

    Digital tomosynthesis is a three-dimensional imaging technique with a lower radiation dose than computed tomography (CT). Due to the missing data in tomosynthesis systems, out-of-plane structures in the depth direction cannot be completely removed by the reconstruction algorithms. In this work, we analyzed the impulse responses of common tomosynthesis systems on a plane-to-plane basis and proposed a fast and accurate convolution-based blur-and-add (BAA) model to simulate the backprojected images. In addition, the analysis formalism describing the impulse response of out-of-plane structures can be generalized to both rotating and parallel gantries. We implemented a ray tracing forward projection and backprojection (ray-based model) algorithm and the convolution-based BAA model to simulate the shift-and-add (backproject) tomosynthesis reconstructions. The convolution-based BAA model with proper geometry distortion correction provides reasonably accurate estimates of the tomosynthesis reconstruction. A numerical comparison indicates that the simulated images using the two models differ by less than 6% in terms of the root-mean-squared error. This convolution-based BAA model can be used in efficient system geometry analysis, reconstruction algorithm design, out-of-plane artifacts suppression, and CT-tomosynthesis registration.
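
    The blur-and-add idea can be made concrete with a few lines of NumPy/SciPy: each object plane is blurred by a plane-dependent kernel and the results are summed. The Gaussian blur and the particular sigmas below are placeholders; the paper derives the actual plane-to-plane impulse responses from the system geometry.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def blur_and_add(planes, blur_sigmas, weights=None):
        """Approximate a backprojected tomosynthesis slice as a weighted sum of the
        in-focus plane and blurred out-of-plane contributions (illustrative only)."""
        if weights is None:
            weights = np.ones(len(planes))
        recon = np.zeros_like(planes[0], dtype=float)
        for plane, sigma, w in zip(planes, blur_sigmas, weights):
            blurred = plane if sigma == 0 else gaussian_filter(plane.astype(float), sigma)
            recon += w * blurred          # "blur" each plane, then "add"
        return recon

    rng = np.random.default_rng(1)
    planes = [rng.random((64, 64)) for _ in range(5)]
    sigmas = [4.0, 2.0, 0.0, 2.0, 4.0]    # blur grows with distance from the in-focus plane
    slice_estimate = blur_and_add(planes, sigmas)
    ```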

  13. Autonomous Quantum Error Correction with Application to Quantum Metrology

    Science.gov (United States)

    Reiter, Florentin; Sorensen, Anders S.; Zoller, Peter; Muschik, Christine A.

    2017-04-01

    We present a quantum error correction scheme that stabilizes a qubit by coupling it to an engineered environment which protects it against spin flips or phase flips. Our scheme uses always-on couplings that run continuously in time and operates in a fully autonomous fashion, without the need to perform measurements or feedback operations on the system. The correction of errors takes place entirely at the microscopic level through a built-in feedback mechanism. Our dissipative error correction scheme can be implemented in a system of trapped ions and can be used for improving high-precision sensing. We show that the enhanced coherence time that results from the coupling to the engineered environment translates into a significantly enhanced precision for measuring weak fields. In a broader context, this work constitutes a stepping stone towards the paradigm of self-correcting quantum information processing.

  14. NP-hardness of decoding quantum error-correction codes

    Science.gov (United States)

    Hsieh, Min-Hsiu; Le Gall, François

    2011-05-01

    Although the theory of quantum error correction is intimately related to classical coding theory and, in particular, one can construct quantum error-correction codes (QECCs) from classical codes with the dual-containing property, this does not necessarily imply that the computational complexity of decoding QECCs is the same as that of their classical counterparts. Instead, decoding QECCs can be very different from decoding classical codes due to the degeneracy property. Intuitively, one expects that degeneracy would simplify the decoding, since two different errors might not and need not be distinguished in order to correct them. However, we show that the general quantum decoding problem is NP-hard regardless of whether the quantum codes are degenerate or nondegenerate. This finding implies that no considerably fast decoding algorithm exists for general quantum decoding problems and suggests the existence of a quantum cryptosystem based on the hardness of decoding QECCs.

  15. NP-hardness of decoding quantum error-correction codes

    International Nuclear Information System (INIS)

    Hsieh, Min-Hsiu; Le Gall, Francois

    2011-01-01

    Although the theory of quantum error correction is intimately related to classical coding theory and, in particular, one can construct quantum error-correction codes (QECCs) from classical codes with the dual-containing property, this does not necessarily imply that the computational complexity of decoding QECCs is the same as that of their classical counterparts. Instead, decoding QECCs can be very different from decoding classical codes due to the degeneracy property. Intuitively, one expects that degeneracy would simplify the decoding, since two different errors might not and need not be distinguished in order to correct them. However, we show that the general quantum decoding problem is NP-hard regardless of whether the quantum codes are degenerate or nondegenerate. This finding implies that no considerably fast decoding algorithm exists for general quantum decoding problems and suggests the existence of a quantum cryptosystem based on the hardness of decoding QECCs.

  16. Static facial expression recognition with convolution neural networks

    Science.gov (United States)

    Zhang, Feng; Chen, Zhong; Ouyang, Chao; Zhang, Yifei

    2018-03-01

    Facial expression recognition is a currently active research topic in the fields of computer vision, pattern recognition and artificial intelligence. In this paper, we develop a convolutional neural network (CNN) for classifying human emotions from static facial expressions into one of seven facial emotion categories. We pre-train our CNN model on the combined FER2013 dataset formed by its training, validation and test sets, and fine-tune it on the extended Cohn-Kanade database. In order to reduce overfitting of the models, we utilized different techniques including dropout and batch normalization in addition to data augmentation. According to the experimental results, our CNN model has excellent classification performance and robustness for facial expression recognition.

  17. An Ensemble of 2D Convolutional Neural Networks for Tumor Segmentation

    DEFF Research Database (Denmark)

    Lyksborg, Mark; Puonti, Oula; Agn, Mikael

    2015-01-01

    Accurate tumor segmentation plays an important role in radiosurgery planning and the assessment of radiotherapy treatment efficacy. In this paper we propose a method combining an ensemble of 2D convolutional neural networks for volumetric segmentation of magnetic resonance images. The segmentation is done in three steps: first, the full tumor region is segmented from the background by a voxel-wise merging of the decisions of three networks learned from three orthogonal planes; next, the segmentation is refined using a cellular automaton-based seed growing method known as growcut; finally, within-tumor sub-regions are segmented using an additional ensemble of networks trained for the task. We demonstrate the method on the MICCAI Brain Tumor Segmentation Challenge dataset of 2014, and show improved segmentation accuracy compared to an axially trained 2D network and an ensemble segmentation...

  18. Neonatal Seizure Detection Using Deep Convolutional Neural Networks.

    Science.gov (United States)

    Ansari, Amir H; Cherian, Perumpillichira J; Caicedo, Alexander; Naulaers, Gunnar; De Vos, Maarten; Van Huffel, Sabine

    2018-04-02

    Identifying a core set of features is one of the most important steps in the development of an automated seizure detector. In most of the published studies describing features and seizure classifiers, the features were hand-engineered, which may not be optimal. The main goal of the present paper is to use deep convolutional neural networks (CNNs) and random forests to automatically optimize feature selection and classification. The input of the proposed classifier is raw multi-channel EEG and the output is the class label: seizure/nonseizure. By training this network, the required features are optimized while a nonlinear classifier is fitted on the features. After training the network with EEG recordings of 26 neonates, the five end layers performing the classification were replaced with a random forest classifier in order to improve the performance. This resulted in a false alarm rate of 0.9 per hour and a seizure detection rate of 77% using a test set of EEG recordings of 22 neonates that also included dubious seizures. The newly proposed CNN classifier outperformed three data-driven feature-based approaches and performed similarly to a previously developed heuristic method.

  19. Multi-focus image fusion with the all convolutional neural network

    Science.gov (United States)

    Du, Chao-ben; Gao, She-sheng

    2018-01-01

    A decision map contains complete and clear information about the image to be fused, which is crucial to various image fusion issues, especially multi-focus image fusion. However, obtaining an accurate decision map is necessary for a satisfactory fusion result and is usually difficult. In this letter, we address this problem with a convolutional neural network (CNN), aiming to get a state-of-the-art decision map. The main idea is that the max-pooling layers of the CNN are replaced by convolution layers, the residuals are propagated backwards by gradient descent, and the training parameters of the individual layers of the CNN are updated layer by layer. Based on this, we propose a new all-CNN (ACNN)-based multi-focus image fusion method in the spatial domain. We demonstrate that the decision map obtained from the ACNN is reliable and can lead to high-quality fusion results. Experimental results clearly validate that the proposed algorithm can obtain state-of-the-art fusion performance in terms of both qualitative and quantitative evaluations.
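
    The core architectural change mentioned above — replacing max-pooling with a (learnable) strided convolution — can be illustrated with a short PyTorch comparison; the layer sizes are arbitrary and only the downsampling mechanism is the point.

    ```python
    import torch
    import torch.nn as nn

    # Conventional downsampling: convolution followed by max-pooling.
    pooled = nn.Sequential(
        nn.Conv2d(1, 16, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(kernel_size=2),
    )

    # "All convolutional" variant: the pooling layer is replaced by a learnable
    # strided convolution, so the downsampling itself is trained by gradient descent.
    all_conv = nn.Sequential(
        nn.Conv2d(1, 16, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1),  # stands in for max-pooling
        nn.ReLU(inplace=True),
    )

    x = torch.randn(1, 1, 64, 64)
    print(pooled(x).shape, all_conv(x).shape)   # both: torch.Size([1, 16, 32, 32])
    ```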

  20. Deep convolutional neural network based antenna selection in multiple-input multiple-output system

    Science.gov (United States)

    Cai, Jiaxin; Li, Yan; Hu, Ying

    2018-03-01

    Antenna selection in wireless communication systems has attracted increasing attention due to the challenge of keeping a balance between communication performance and computational complexity in large-scale Multiple-Input Multiple-Output (MIMO) antenna systems. Recently, deep learning based methods have achieved promising performance for large-scale data processing and analysis in many application fields. This paper is the first attempt to introduce the deep learning technique into the field of Multiple-Input Multiple-Output antenna selection in wireless communications. First, the labels of the attenuation-coefficient channel matrices are generated by minimizing the key performance indicator of the training antenna systems. Then, a deep convolutional neural network that explicitly exploits the massive latent cues of the attenuation coefficients is learned on the training antenna systems. Finally, we use the adopted deep convolutional neural network to classify the channel matrix labels of test antennas and select the optimal antenna subset. Simulation results demonstrate that our method can achieve better performance than the state-of-the-art baselines for data-driven wireless antenna selection.
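
    To make the labelling step concrete, the sketch below assigns each channel matrix the index of the antenna subset that scores best under an assumed key performance indicator — Shannon capacity, chosen here purely for illustration since the abstract does not name the KPI it optimizes. These labels would then supervise the CNN classifier.

    ```python
    import numpy as np
    from itertools import combinations

    def label_channel_matrix(H, n_select, snr=10.0):
        """Label a channel (attenuation-coefficient) matrix with the index of the
        receive-antenna subset maximizing an assumed KPI (Shannon capacity)."""
        n_rx = H.shape[0]
        subsets = list(combinations(range(n_rx), n_select))
        best, best_cap = 0, -np.inf
        for idx, subset in enumerate(subsets):
            Hs = H[list(subset), :]
            # capacity of the reduced MIMO channel with equal power allocation
            cap = np.log2(np.linalg.det(np.eye(n_select)
                                        + (snr / H.shape[1]) * Hs @ Hs.conj().T)).real
            if cap > best_cap:
                best, best_cap = idx, cap
        return best, subsets[best]

    rng = np.random.default_rng(2)
    H = (rng.normal(size=(8, 4)) + 1j * rng.normal(size=(8, 4))) / np.sqrt(2)  # 8 rx, 4 tx
    label, antennas = label_channel_matrix(H, n_select=4)
    print("label:", label, "selected receive antennas:", antennas)
    ```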

  1. CNNdel: Calling Structural Variations on Low Coverage Data Based on Convolutional Neural Networks

    Directory of Open Access Journals (Sweden)

    Jing Wang

    2017-01-01

    Full Text Available Many structural variation (SV) detection methods have been proposed due to the popularization of next-generation sequencing (NGS). These SV calling methods use different SV-property-dependent features; however, they all suffer from poor accuracy when running on low coverage sequences. The union of results from these tools achieves fairly high sensitivity but still produces low accuracy on low coverage sequence data; that is, these methods produce many false positives. In this paper, we present CNNdel, an approach for calling deletions from paired-end reads. CNNdel gathers SV candidates reported by multiple tools and then extracts features from aligned BAM files at the positions of the candidates. With labeled feature-expressed candidates as a training set, CNNdel trains convolutional neural networks (CNNs) to distinguish true unlabeled candidates from false ones. Results show that CNNdel works well with NGS reads from 26 low coverage genomes of the 1000 Genomes Project. The paper demonstrates that convolutional neural networks can automatically assign priorities to SV features and reduce false positives efficaciously.

  2. Deep Convolutional Networks for Event Reconstruction and Particle Tagging on NOvA and DUNE

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Deep Convolutional Neural Networks (CNNs) have been widely applied in computer vision to solve complex problems in image recognition and analysis. In recent years many efforts have emerged to extend the use of this technology to HEP applications, including the Convolutional Visual Network (CVN), our implementation for identification of neutrino events. In this presentation I will describe the core concepts of CNNs, the details of our particular implementation in the Caffe framework and our application to identify NOvA events. NOvA is a long baseline neutrino experiment whose main goal is the measurement of neutrino oscillations. This relies on the accurate identification and reconstruction of the neutrino flavor in the interactions we observe. In 2016 the NOvA experiment released results for the observation of oscillations in the νμ → νe channel, the first HEP result employing CNNs. I will also discuss our approach at event identification on NOvA as well as recent developments in the application of CNN...

  3. Retrieval of Sentence Sequences for an Image Stream via Coherence Recurrent Convolutional Networks.

    Science.gov (United States)

    Park, Cesc Chunseong; Kim, Youngjin; Kim, Gunhee

    2018-04-01

    We propose an approach for retrieving a sequence of natural sentences for an image stream. Since general users often take a series of pictures of their experiences, much online visual information exists in the form of image streams, for which it is better to take the whole image stream into consideration when producing natural language descriptions. While almost all previous studies have dealt with the relation between a single image and a single natural sentence, our work extends both the input and the output dimension to a sequence of images and a sequence of sentences. For retrieving a coherent flow of multiple sentences for a photo stream, we propose a multimodal neural architecture called the coherence recurrent convolutional network (CRCN), which consists of convolutional neural networks, bidirectional long short-term memory (LSTM) networks, and an entity-based local coherence model. Our approach directly learns from a vast user-generated resource of blog posts as text-image parallel training data. We collect more than 22K unique blog posts with 170K associated images for the travel topics of NYC, Disneyland, Australia, and Hawaii. We demonstrate that our approach outperforms other state-of-the-art image captioning methods for text sequence generation, using both quantitative measures and user studies via Amazon Mechanical Turk.

  4. Radar Rainfall Bias Correction based on Deep Learning Approach

    Science.gov (United States)

    Song, Yang; Han, Dawei; Rico-Ramirez, Miguel A.

    2017-04-01

    Radar rainfall measurement errors can be attributed in considerable part to various sources, including intricate synoptic regimes. Temperature, humidity and wind are typically acknowledged as critical meteorological factors inducing precipitation discrepancies aloft and on the ground. Conventional practice mainly uses radar-gauge or geostatistical techniques with direct weighted interpolation algorithms as bias correction schemes, and rarely considers atmospheric effects. This study aims to comprehensively quantify the impacts of those meteorological elements on radar-gauge rainfall bias correction based on a deep learning approach. The deep learning approach employs deep convolutional neural networks to automatically extract three-dimensional meteorological features for target recognition based on high range resolution profiles. The complex nonlinear relationships between input and target variables can be implicitly detected by such a scheme, which is validated on the test dataset. The proposed bias correction scheme is expected to be a promising improvement in systematically minimizing the synthesized atmospheric effects on rainfall discrepancies between radar and rain gauges, which can be useful in many meteorological and hydrological applications (e.g., real-time flood forecasting), especially for regions with complex atmospheric conditions.

  5. Dealiased convolutions for pseudospectral simulations

    International Nuclear Information System (INIS)

    Roberts, Malcolm; Bowman, John C

    2011-01-01

    Efficient algorithms have recently been developed for calculating dealiased linear convolution sums without the expense of conventional zero-padding or phase-shift techniques. For one-dimensional in-place convolutions, the memory requirements are identical with the zero-padding technique, with the important distinction that the additional work memory need not be contiguous with the input data. This decoupling of data and work arrays dramatically reduces the memory and computation time required to evaluate higher-dimensional in-place convolutions. The memory savings is achieved by computing the in-place Fourier transform of the data in blocks, rather than all at once. The technique also allows one to dealias the n-ary convolutions that arise on Fourier transforming cubic and higher powers. Implicitly dealiased convolutions can be built on top of state-of-the-art adaptive fast Fourier transform libraries like FFTW. Vectorized multidimensional implementations for the complex and centered Hermitian (pseudospectral) cases have already been implemented in the open-source software FFTW++. With the advent of this library, writing a high-performance dealiased pseudospectral code for solving nonlinear partial differential equations has now become a relatively straightforward exercise. New theoretical estimates of computational complexity and memory use are provided, including corrected timing results for 3D pruned convolutions and further consideration of higher-order convolutions.
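
    For contrast with the implicit dealiasing described above, the conventional explicit zero-padding approach can be written in a few lines of NumPy; it doubles the transform size so the cyclic (aliased) wrap-around of an FFT-based convolution disappears.

    ```python
    import numpy as np

    def dealiased_convolution(f, g):
        """Linear (dealiased) convolution of two length-n arrays via explicit zero padding.

        This is the conventional zero-padding technique that the implicitly
        dealiased algorithms of the paper improve upon; it is shown here only to
        make the aliasing issue concrete.
        """
        n = len(f)
        m = 2 * n                         # pad to at least 2n - 1 to avoid wrap-around
        F = np.fft.fft(f, m)
        G = np.fft.fft(g, m)
        return np.fft.ifft(F * G)[:n]     # keep the first n coefficients

    f = np.array([1.0, 2.0, 3.0, 4.0])
    g = np.array([0.5, -1.0, 0.25, 2.0])
    circular = np.fft.ifft(np.fft.fft(f) * np.fft.fft(g))   # aliased (cyclic) result
    linear = dealiased_convolution(f, g)
    print(np.allclose(linear, np.convolve(f, g)[:4]))        # True
    ```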

  6. Learning Traffic as Images: A Deep Convolutional Neural Network for Large-Scale Transportation Network Speed Prediction.

    Science.gov (United States)

    Ma, Xiaolei; Dai, Zhuang; He, Zhengbing; Ma, Jihui; Wang, Yong; Wang, Yunpeng

    2017-04-10

    This paper proposes a convolutional neural network (CNN)-based method that learns traffic as images and predicts large-scale, network-wide traffic speed with a high accuracy. Spatiotemporal traffic dynamics are converted to images describing the time and space relations of traffic flow via a two-dimensional time-space matrix. A CNN is applied to the image following two consecutive steps: abstract traffic feature extraction and network-wide traffic speed prediction. The effectiveness of the proposed method is evaluated by taking two real-world transportation networks, the second ring road and north-east transportation network in Beijing, as examples, and comparing the method with four prevailing algorithms, namely, ordinary least squares, k-nearest neighbors, artificial neural network, and random forest, and three deep learning architectures, namely, stacked autoencoder, recurrent neural network, and long-short-term memory network. The results show that the proposed method outperforms other algorithms by an average accuracy improvement of 42.91% within an acceptable execution time. The CNN can train the model in a reasonable time and, thus, is suitable for large-scale transportation networks.
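
    A minimal sketch of the "traffic as images" idea: speed measurements are arranged into a time-space matrix and fed to a small CNN that predicts the next-interval speed of every segment. The network depth, channel counts and pooling below are illustrative assumptions, not the architecture used in the paper.

    ```python
    import numpy as np
    import torch
    import torch.nn as nn

    # Convert spatiotemporal speed records into a time-space "image":
    # rows are time steps, columns are road segments (values in km/h).
    n_times, n_segments = 60, 120
    speeds = np.random.default_rng(3).uniform(10, 80, size=(n_times, n_segments))
    image = torch.tensor(speeds, dtype=torch.float32)[None, None]   # (batch, channel, H, W)

    # A small CNN regressor mapping the recent time-space image to the
    # next-interval speed of every segment.
    model = nn.Sequential(
        nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        nn.AdaptiveAvgPool2d((1, n_segments)),                       # collapse the time axis
        nn.Flatten(),
        nn.Linear(32 * n_segments, n_segments),                      # predicted speed per segment
    )
    prediction = model(image)
    print(prediction.shape)                                          # torch.Size([1, 120])
    ```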

  7. A light and faster regional convolutional neural network for object detection in optical remote sensing images

    Science.gov (United States)

    Ding, Peng; Zhang, Ye; Deng, Wei-Jian; Jia, Ping; Kuijper, Arjan

    2018-07-01

    Detection of objects from satellite optical remote sensing images is very important for many commercial and governmental applications. With the development of deep convolutional neural networks (deep CNNs), the field of object detection has seen tremendous advances. Currently, objects in satellite remote sensing images can be detected using deep CNNs. In general, optical remote sensing images contain many dense and small objects, and the use of the original Faster Regional CNN framework does not yield a suitably high precision. Therefore, after careful analysis we adopt dense convolutional networks, a multi-scale representation and various combinations of improvement schemes to enhance the structure of the base VGG16-Net and improve the precision. We propose an approach to reduce the test time (detection time) and memory requirements. To validate the effectiveness of our approach, we perform experiments using satellite remote sensing image datasets of aircraft and automobiles. The results show that the improved network structure can detect objects in satellite optical remote sensing images more accurately and efficiently.

  8. Error correcting circuit design with carbon nanotube field effect transistors

    Science.gov (United States)

    Liu, Xiaoqiang; Cai, Li; Yang, Xiaokuo; Liu, Baojun; Liu, Zhongyong

    2018-03-01

    In this work, a parallel error correcting circuit based on the (7, 4) Hamming code is designed and implemented with carbon nanotube field effect transistors (CNTFETs), and its function is validated by simulation in HSpice with the Stanford model. A grouping method which is able to correct multiple bit errors in 16-bit and 32-bit applications is proposed, and its error correction capability is analyzed. The performance of circuits implemented with CNTFETs and traditional MOSFETs, respectively, is also compared; the former shows a 34.4% reduction in layout area and a 56.9% reduction in power consumption.
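
    For reference, the (7, 4) Hamming code underlying the circuit corrects any single bit error by matching the syndrome to a column of the parity-check matrix. A short software model of the encode/correct logic (in systematic form, independent of the CNTFET implementation) is sketched below.

    ```python
    import numpy as np

    # Generator and parity-check matrices of the (7, 4) Hamming code (systematic form).
    G = np.array([[1, 0, 0, 0, 1, 1, 0],
                  [0, 1, 0, 0, 1, 0, 1],
                  [0, 0, 1, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1, 1, 1]])
    H = np.array([[1, 1, 0, 1, 1, 0, 0],
                  [1, 0, 1, 1, 0, 1, 0],
                  [0, 1, 1, 1, 0, 0, 1]])

    def encode(data4):
        return (np.array(data4) @ G) % 2

    def correct(received7):
        """Single-error correction: the syndrome equals the column of H at the error position."""
        r = np.array(received7)
        syndrome = (H @ r) % 2
        if syndrome.any():
            error_pos = next(i for i in range(7) if np.array_equal(H[:, i], syndrome))
            r[error_pos] ^= 1
        return r

    codeword = encode([1, 0, 1, 1])
    corrupted = codeword.copy()
    corrupted[2] ^= 1                       # flip one bit
    print(np.array_equal(correct(corrupted), codeword))   # True
    ```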

  9. Classifying medical relations in clinical text via convolutional neural networks.

    Science.gov (United States)

    He, Bin; Guan, Yi; Dai, Rui

    2018-05-16

    Deep learning research on relation classification has achieved solid performance in the general domain. This study proposes a convolutional neural network (CNN) architecture with a multi-pooling operation for medical relation classification on clinical records and explores a loss function with a category-level constraint matrix. Experiments using the 2010 i2b2/VA relation corpus demonstrate that these models, which do not depend on any external features, outperform previous single-model methods, and that our best model is competitive with the existing ensemble-based method. Copyright © 2018. Published by Elsevier B.V.

  10. TopologyNet: Topology based deep convolutional and multi-task neural networks for biomolecular property predictions

    Science.gov (United States)

    2017-01-01

    Although deep learning approaches have had tremendous success in image, video and audio processing, computer vision, and speech recognition, their applications to three-dimensional (3D) biomolecular structural data sets have been hindered by the geometric and biological complexity. To address this problem we introduce the element-specific persistent homology (ESPH) method. ESPH represents 3D complex geometry by one-dimensional (1D) topological invariants and retains important biological information via a multichannel image-like representation. This representation reveals hidden structure-function relationships in biomolecules. We further integrate ESPH and deep convolutional neural networks to construct a multichannel topological neural network (TopologyNet) for the predictions of protein-ligand binding affinities and protein stability changes upon mutation. To overcome the deep learning limitations from small and noisy training sets, we propose a multi-task multichannel topological convolutional neural network (MM-TCNN). We demonstrate that TopologyNet outperforms the latest methods in the prediction of protein-ligand binding affinities, mutation induced globular protein folding free energy changes, and mutation induced membrane protein folding free energy changes. Availability: weilab.math.msu.edu/TDL/ PMID:28749969

  11. Electroencephalography Based Fusion Two-Dimensional (2D)-Convolution Neural Networks (CNN) Model for Emotion Recognition System

    Directory of Open Access Journals (Sweden)

    Yea-Hoon Kwon

    2018-04-01

    Full Text Available The purpose of this study is to improve human emotion classification accuracy using a convolutional neural network (CNN) model and to suggest an overall method for classifying emotion based on multimodal data. We improved classification performance by combining electroencephalogram (EEG) and galvanic skin response (GSR) signals. GSR signals are preprocessed using the zero-crossing rate. Sufficient EEG feature extraction can be obtained through the CNN. Therefore, we propose a suitable CNN model for feature extraction by tuning hyperparameters of the convolution filters. The EEG signal is preprocessed prior to convolution by a wavelet transform, considering time and frequency simultaneously. We use a publicly available database for emotion analysis using physiological signals to verify the proposed process, achieving 73.4% accuracy and showing a significant performance improvement over the current best-practice models.

  12. Convolutive ICA for Spatio-Temporal Analysis of EEG

    DEFF Research Database (Denmark)

    Dyrholm, Mads; Makeig, Scott; Hansen, Lars Kai

    2007-01-01

    ... in the convolutive model can be correctly detected using Bayesian model selection. We demonstrate a framework for deconvolving an EEG ICA subspace. Initial results suggest that in some cases convolutive mixing may be a more realistic model for EEG signals than the instantaneous ICA model.

  13. Error-correcting pairs for a public-key cryptosystem

    International Nuclear Information System (INIS)

    Pellikaan, Ruud; Márquez-Corbella, Irene

    2017-01-01

    Code-based Cryptography (CBC) is a powerful and promising alternative for quantum-resistant cryptography. Indeed, together with lattice-based cryptography, multivariate cryptography and hash-based cryptography, it is one of the principal available techniques for post-quantum cryptography. CBC was first introduced by McEliece, who designed one of the most efficient Public-Key encryption schemes, with exceptionally strong security guarantees and other desirable properties, which still resists attacks based on the Quantum Fourier Transform and Amplitude Amplification. The original proposal, which remains unbroken, was based on binary Goppa codes. Later, several families of codes have been proposed in order to reduce the key size; some of these alternatives have already been broken. One of the main requirements of a code-based cryptosystem is having high-performance t-bounded decoding algorithms, which is achieved when the code has a t-error-correcting pair (ECP). Indeed, those McEliece schemes that use GRS, BCH, Goppa and algebraic geometry codes are in fact using an error-correcting pair as a secret key. That is, the security of these Public-Key Cryptosystems is based not only on the inherent intractability of bounded distance decoding but also on the assumption that it is difficult to retrieve an error-correcting pair efficiently. In this paper, the class of codes with a t-ECP is proposed for the McEliece cryptosystem. Moreover, we study the hardness of distinguishing arbitrary codes from those having a t-error-correcting pair. (paper)

  14. Artificial neural network implementation of a near-ideal error prediction controller

    Science.gov (United States)

    Mcvey, Eugene S.; Taylor, Lynore Denise

    1992-01-01

    A theory has been developed at the University of Virginia which explains the effects of including an ideal predictor in the forward loop of a linear error-sampled system. It has been shown that the presence of this ideal predictor tends to stabilize the class of systems considered. A prediction controller is merely a system which anticipates a signal or part of a signal before it actually occurs. It is understood that an exact prediction controller is physically unrealizable. However, in systems where the input tends to be repetitive or limited (i.e., not random), near-ideal prediction is possible. In order for the controller to act as a stability compensator, the predictor must be designed in a way that allows it to learn the expected error response of the system. In this way, an unstable system will become stable by including the predicted error in the system transfer function. Previous and current prediction controller developments include pattern recognition and fast-time simulation, which are applicable to the analysis of linear sampled-data systems. The use of pattern recognition techniques, along with a template matching scheme, has been proposed as one realizable type of near-ideal prediction. Since many, if not most, systems are repeatedly subjected to similar inputs, it was proposed that an adaptive mechanism be used to 'learn' the correct predicted error response. Once the system has learned the response of all the expected inputs, it is necessary only to recognize the type of input with a template matching mechanism and then to use the correct predicted error to drive the system. Suggested here is an alternate approach to the realization of a near-ideal error prediction controller, one designed using Neural Networks. Neural Networks are good at recognizing patterns such as system responses, and the back-propagation architecture makes use of a template matching scheme. In using this type of error prediction, it is assumed that the system error

  15. Position Error Covariance Matrix Validation and Correction

    Science.gov (United States)

    Frisbee, Joe, Jr.

    2016-01-01

    In order to calculate operationally accurate collision probabilities, the position error covariance matrices predicted at times of closest approach must be sufficiently accurate representations of the position uncertainties. This presentation will discuss why the Gaussian distribution is a reasonable expectation for the position uncertainty and how this assumed distribution type is used in the validation and correction of position error covariance matrices.

  16. Interactive Video Coding and Transmission over Heterogeneous Wired-to-Wireless IP Networks Using an Edge Proxy

    Directory of Open Access Journals (Sweden)

    Modestino James W

    2004-01-01

    Full Text Available Digital video delivered over wired-to-wireless networks is expected to suffer quality degradation from both packet loss and bit errors in the payload. In this paper, the quality degradation due to packet loss and bit errors in the payload is quantitatively evaluated and the effects are assessed. We propose the use of a concatenated forward error correction (FEC) coding scheme employing Reed-Solomon (RS) codes and rate-compatible punctured convolutional (RCPC) codes to protect the video data from packet loss and bit errors, respectively. Furthermore, the performance of a joint source-channel coding (JSCC) approach employing this concatenated FEC coding scheme for video transmission is studied. Finally, we describe an improved end-to-end architecture using an edge proxy in a mobile support station to implement differential error protection for the corresponding channel impairments expected on the two networks. Results indicate that with an appropriate JSCC approach and the use of an edge proxy, FEC-based error-control techniques together with passive error-recovery techniques can significantly improve the effective video throughput and lead to acceptable video delivery quality over time-varying heterogeneous wired-to-wireless IP networks.

  17. A convolution method for predicting mean treatment dose including organ motion at imaging

    International Nuclear Information System (INIS)

    Booth, J.T.; Zavgorodni, S.F.; Royal Adelaide Hospital, SA

    2000-01-01

    Full text: The random treatment delivery errors (organ motion and set-up error) can be incorporated into the treatment planning software using a convolution method. Mean treatment dose is computed as the convolution of a static dose distribution with a variation kernel. Typically this variation kernel is Gaussian with variance equal to the sum of the organ motion and set-up error variances. We propose a novel variation kernel for the convolution technique that additionally considers the position of the mobile organ in the planning CT image. The systematic error of organ position in the planning CT image can be considered random for each patient over a population. Thus the variance of the variation kernel will equal the sum of treatment delivery variance and organ motion variance at planning for the population of treatments. The kernel is extended to deal with multiple pre-treatment CT scans to improve tumour localisation for planning. Mean treatment doses calculated with the convolution technique are compared to benchmark Monte Carlo (MC) computations. Calculations of mean treatment dose using the convolution technique agreed with MC results for all cases to better than ± 1 Gy in the planning treatment volume for a prescribed 60 Gy treatment. Convolution provides a quick method of incorporating random organ motion (captured in the planning CT image and during treatment delivery) and random set-up errors directly into the dose distribution. Copyright (2000) Australasian College of Physical Scientists and Engineers in Medicine
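
    A minimal numerical sketch of the convolution method described above: the static dose distribution is convolved with a Gaussian kernel whose variance is the sum of the organ-motion and set-up error variances. The voxel size and standard deviations used here are placeholders, not values from the study.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def mean_treatment_dose(static_dose, organ_motion_sd, setup_error_sd, voxel_size_mm=1.0):
        """Convolve a static dose distribution with a Gaussian variation kernel whose
        variance is the sum of the organ-motion and set-up error variances (mm -> voxels)."""
        sigma_mm = np.sqrt(organ_motion_sd ** 2 + setup_error_sd ** 2)
        sigma_vox = sigma_mm / voxel_size_mm
        return gaussian_filter(static_dose, sigma=sigma_vox)

    # A toy 3-D "static" dose: 60 Gy inside a cubic target, zero elsewhere.
    dose = np.zeros((40, 40, 40))
    dose[15:25, 15:25, 15:25] = 60.0
    blurred = mean_treatment_dose(dose, organ_motion_sd=3.0, setup_error_sd=2.0, voxel_size_mm=2.0)
    print(round(float(blurred[20, 20, 20]), 1))   # dose at the target centre is largely preserved
    ```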

  18. Deep convolutional neural network for the automated detection and diagnosis of seizure using EEG signals.

    Science.gov (United States)

    Acharya, U Rajendra; Oh, Shu Lih; Hagiwara, Yuki; Tan, Jen Hong; Adeli, Hojjat

    2017-09-27

    An electroencephalogram (EEG) is a commonly used ancillary test to aid in the diagnosis of epilepsy. The EEG signal contains information about the electrical activity of the brain. Traditionally, neurologists employ direct visual inspection to identify epileptiform abnormalities. This technique can be time-consuming, is limited by technical artifacts, provides variable results depending on reader expertise level, and is limited in identifying abnormalities. Therefore, it is essential to develop a computer-aided diagnosis (CAD) system to automatically distinguish the class of these EEG signals using machine learning techniques. This is the first study to employ a convolutional neural network (CNN) for analysis of EEG signals. In this work, a 13-layer deep convolutional neural network (CNN) algorithm is implemented to detect normal, preictal, and seizure classes. The proposed technique achieved an accuracy, specificity, and sensitivity of 88.67%, 90.00% and 95.00%, respectively. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer

    OpenAIRE

    Zagoruyko, Sergey; Komodakis, Nikos

    2016-01-01

    Attention plays a critical role in human visual experience. Furthermore, it has recently been demonstrated that attention can also play an important role in the context of applying artificial neural networks to a variety of tasks from fields such as computer vision and NLP. In this work we show that, by properly defining attention for convolutional neural networks, we can actually use this type of information in order to significantly improve the performance of a student CNN network by forcin...

  20. Quantum Error Correction and Fault Tolerant Quantum Computing

    CERN Document Server

    Gaitan, Frank

    2008-01-01

    It was once widely believed that quantum computation would never become a reality. However, the discovery of quantum error correction and the proof of the accuracy threshold theorem nearly ten years ago gave rise to extensive development and research aimed at creating a working, scalable quantum computer. Over a decade has passed since this monumental accomplishment yet no book-length pedagogical presentation of this important theory exists. Quantum Error Correction and Fault Tolerant Quantum Computing offers the first full-length exposition on the realization of a theory once thought impo

  1. Pre-trained convolutional neural networks as feature extractors for tuberculosis detection.

    Science.gov (United States)

    Lopes, U K; Valiati, J F

    2017-10-01

    It is estimated that in 2015, approximately 1.8 million people infected by tuberculosis died, most of them in developing countries. Many of those deaths could have been prevented if the disease had been detected at an earlier stage, but the most advanced diagnosis methods are still cost prohibitive for mass adoption. One of the most popular tuberculosis diagnosis methods is the analysis of frontal thoracic radiographs; however, the impact of this method is diminished by the need for individual analysis of each radiograph by properly trained radiologists. Significant research can be found on automating diagnosis by applying computational techniques to medical images, thereby eliminating the need for individual image analysis and greatly diminishing overall costs. In addition, recent improvements in deep learning have accomplished excellent results classifying images in diverse domains, but its application to tuberculosis diagnosis remains limited. Thus, the focus of this work is to produce an investigation that will advance research in the area, presenting three proposals for the application of pre-trained convolutional neural networks as feature extractors to detect the disease. The proposals presented in this work are implemented and compared to the current literature. The obtained results are competitive with published works, demonstrating the potential of pre-trained convolutional networks as medical image feature extractors. Copyright © 2017 Elsevier Ltd. All rights reserved.
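
    The feature-extractor idea can be sketched with torchvision: a network pre-trained on ImageNet has its classification head removed and its outputs are used as fixed feature vectors for a conventional classifier. ResNet-18 is an arbitrary choice for the sketch; the paper evaluates its own selection of pre-trained architectures.

    ```python
    import torch
    import torch.nn as nn
    from torchvision import models

    # Load a network pre-trained on ImageNet and drop its classification head so
    # the remaining layers act as a fixed feature extractor.
    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    backbone.fc = nn.Identity()            # keep the 512-d global-pooled features
    backbone.eval()
    for p in backbone.parameters():
        p.requires_grad = False            # features are extracted, not fine-tuned

    with torch.no_grad():
        xray_batch = torch.randn(4, 3, 224, 224)     # stand-in for pre-processed radiographs
        features = backbone(xray_batch)              # (4, 512) feature vectors
    print(features.shape)

    # The extracted vectors would then be fed to a conventional classifier
    # (e.g., logistic regression or an SVM) trained to separate TB from normal cases.
    ```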

  2. Supervised local error estimation for nonlinear image registration using convolutional neural networks

    NARCIS (Netherlands)

    Eppenhof, Koen A.J.; Pluim, Josien P.W.; Styner, M.A.; Angelini, E.D.

    2017-01-01

    Error estimation in medical image registration is valuable when validating, comparing, or combining registration methods. To validate a nonlinear image registration method, ideally the registration error should be known for the entire image domain. We propose a supervised method for the estimation

  3. Opportunistic error correction for mimo-ofdm: from theory to practice

    NARCIS (Netherlands)

    Shao, X.; Slump, Cornelis H.

    Opportunistic error correction based on fountain codes is especially designed for the MIMO-OFDM system. The key point of this new method is the tradeoff between the code rate of the error-correcting codes and the number of sub-carriers in the channel vector to be discarded. By transmitting one

  4. Transfer Learning for Video Recognition with Scarce Training Data for Deep Convolutional Neural Network

    OpenAIRE

    Su, Yu-Chuan; Chiu, Tzu-Hsuan; Yeh, Chun-Yen; Huang, Hsin-Fu; Hsu, Winston H.

    2014-01-01

    Unconstrained video recognition and Deep Convolution Network (DCN) are two active topics in computer vision recently. In this work, we apply DCNs as frame-based recognizers for video recognition. Our preliminary studies, however, show that video corpora with complete ground truth are usually not large and diverse enough to learn a robust model. The networks trained directly on the video data set suffer from significant overfitting and have poor recognition rate on the test set. The same lack-...

  5. Deep Convolutional Generative Adversarial Network for Procedural 3D Landscape Generation Based on DEM

    OpenAIRE

    Wulff-Jensen, Andreas; Rant, Niclas Nerup; Møller, Tobias Nordvig; Billeskov, Jonas Aksel

    2018-01-01

    This paper proposes a novel framework for improving procedural generation of 3D landscapes using machine learning. We utilized a Deep Convolutional Generative Adversarial Network (DC-GAN) to generate heightmaps. The network was trained on a dataset consisting of Digital Elevation Maps (DEM) of the Alps. During map generation, the batch size and learning rate were optimized for the most efficient and satisfying map production. The diversity of the final output was tested against Perlin noise u...

  6. Fluid region segmentation in OCT images based on convolution neural network

    Science.gov (United States)

    Liu, Dong; Liu, Xiaoming; Fu, Tianyu; Yang, Zhou

    2017-07-01

    In retinal images, the characteristics of fluid regions have great significance for the diagnosis of eye disease. In clinical practice, the segmentation of fluid is usually conducted manually; this is time-consuming, and the accuracy depends heavily on the expert's experience. In this paper, we propose a segmentation method based on a convolutional neural network (CNN) for segmenting fluid from fundus images. The OCT B-scans are segmented into layers, and patches from specific annotated regions are used for training. After the dataset is divided into a training set and a test set, network training is performed and a good segmentation result is obtained, with a significant advantage over traditional methods such as thresholding.

  7. The Relevance of Second Language Acquisition Theory to the Written Error Correction Debate

    Science.gov (United States)

    Polio, Charlene

    2012-01-01

    The controversies surrounding written error correction can be traced to Truscott (1996) in his polemic against written error correction. He claimed that empirical studies showed that error correction was ineffective and that this was to be expected "given the nature of the correction process and the nature of language learning" (p. 328, emphasis…

  8. White blood cells identification system based on convolutional deep neural learning networks.

    Science.gov (United States)

    Shahin, A I; Guo, Yanhui; Amin, K M; Sharawi, Amr A

    2017-11-16

    White blood cell (WBC) differential counting yields valuable information about human health and disease. Currently developed automated cell morphology equipment performs differential counts based on blood smear image analysis. Previous identification systems for WBCs consist of successive dependent stages: pre-processing, segmentation, feature extraction, feature selection, and classification. There is a real need to employ deep learning methodologies so that the performance of previous WBC identification systems can be increased. Classifying small, limited datasets through deep learning systems is a major challenge and should be investigated. In this paper, we propose a novel identification system for WBCs based on deep convolutional neural networks. Two methodologies based on transfer learning are followed: transfer learning based on deep activation features, and fine-tuning of existing deep networks. Deep activation features are extracted from several pre-trained networks and employed in a traditional identification system. Moreover, a novel end-to-end convolutional deep architecture called "WBCsNet" is proposed and built from scratch. Finally, classification of a limited, balanced WBC dataset is performed using WBCsNet as a pre-trained network. During our experiments, three different public WBC datasets (2551 images) have been used, which contain five healthy WBC types. The overall system accuracy achieved by the proposed WBCsNet is 96.1%, which is higher than that of the different transfer learning approaches and the previous traditional identification system. We also present feature visualizations of the WBCsNet activations, which show a higher response than those of the pre-trained networks. A novel WBC identification system based on deep learning theory is proposed, and a high-performance WBCsNet can be employed as a pre-trained network. Copyright © 2017. Published by Elsevier B.V.

  9. Appropriateness of Dropout Layers and Allocation of Their 0.5 Rates across Convolutional Neural Networks for CIFAR-10, EEACL26, and NORB Datasets

    Directory of Open Access Journals (Sweden)

    Romanuke Vadim V.

    2017-12-01

    Full Text Available A technique of DropOut for preventing overfitting of convolutional neural networks for image classification is considered in the paper. The goal is to find a rule for rationally allocating DropOut layers of 0.5 rate so as to maximise performance. To achieve the goal, two common network architectures are used, having either 4 or 5 convolutional layers. Benchmarking is carried out on the CIFAR-10, EEACL26, and NORB datasets. Initially, a series of all admissible versions of the DropOut layer allocation is generated. After the performance over this series is evaluated, normalized and averaged, a compromise rule is found. It consists of non-compactly inserting a few DropOut layers before the last convolutional layer. It is likely that a scheme with two or more DropOut layers fits networks of many convolutional layers for image classification problems with plenty of features. Such a scheme should also fit simple datasets prone to overfitting. In fact, the rule "prefers" fewer DropOut layers. The gain from applying the rule is roughly between 10 % and 50 %.
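
    The rule found above amounts to inserting a small number of 0.5-rate DropOut layers, non-compactly, with one of them just before the last convolutional layer. A hedged Keras sketch of one such allocation for a 4-convolutional-layer network follows; filter counts, input size and the classifier head are assumptions, not the paper's exact architecture.

        # Sketch: two 0.5-rate Dropout layers allocated non-compactly, the second one
        # placed directly before the last convolutional layer.
        from tensorflow.keras import layers, models

        model = models.Sequential([
            layers.Input(shape=(32, 32, 3)),
            layers.Conv2D(32, 3, activation="relu", padding="same"),
            layers.MaxPooling2D(),
            layers.Dropout(0.5),              # first DropOut, not adjacent to the next one
            layers.Conv2D(64, 3, activation="relu", padding="same"),
            layers.MaxPooling2D(),
            layers.Conv2D(64, 3, activation="relu", padding="same"),
            layers.Dropout(0.5),              # second DropOut, just before the last conv layer
            layers.Conv2D(128, 3, activation="relu", padding="same"),
            layers.GlobalAveragePooling2D(),
            layers.Dense(10, activation="softmax"),
        ])
        model.summary()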

  10. Highly accurate fluorogenic DNA sequencing with information theory-based error correction.

    Science.gov (United States)

    Chen, Zitian; Zhou, Wenxiong; Qiao, Shuo; Kang, Li; Duan, Haifeng; Xie, X Sunney; Huang, Yanyi

    2017-12-01

    Eliminating errors in next-generation DNA sequencing has proved challenging. Here we present error-correction code (ECC) sequencing, a method to greatly improve sequencing accuracy by combining fluorogenic sequencing-by-synthesis (SBS) with an information theory-based error-correction algorithm. ECC embeds redundancy in sequencing reads by creating three orthogonal degenerate sequences, generated by alternate dual-base reactions. This is similar to encoding and decoding strategies that have proved effective in detecting and correcting errors in information communication and storage. We show that, when combined with a fluorogenic SBS chemistry with raw accuracy of 98.1%, ECC sequencing provides single-end, error-free sequences up to 200 bp. ECC approaches should enable accurate identification of extremely rare genomic variations in various applications in biology and medicine.

  11. Auto-Context Convolutional Neural Network (Auto-Net) for Brain Extraction in Magnetic Resonance Imaging.

    Science.gov (United States)

    Mohseni Salehi, Seyed Sadegh; Erdogmus, Deniz; Gholipour, Ali

    2017-11-01

    Brain extraction or whole brain segmentation is an important first step in many of the neuroimage analysis pipelines. The accuracy and the robustness of brain extraction, therefore, are crucial for the accuracy of the entire brain analysis process. The state-of-the-art brain extraction techniques rely heavily on the accuracy of alignment or registration between brain atlases and query brain anatomy, and/or make assumptions about the image geometry, and therefore have limited success when these assumptions do not hold or image registration fails. With the aim of designing an accurate, learning-based, geometry-independent, and registration-free brain extraction tool, in this paper, we present a technique based on an auto-context convolutional neural network (CNN), in which intrinsic local and global image features are learned through 2-D patches of different window sizes. We consider two different architectures: 1) a voxelwise approach based on three parallel 2-D convolutional pathways for three different directions (axial, coronal, and sagittal) that implicitly learn 3-D image information without the need for computationally expensive 3-D convolutions and 2) a fully convolutional network based on the U-net architecture. Posterior probability maps generated by the networks are used iteratively as context information along with the original image patches to learn the local shape and connectedness of the brain to extract it from non-brain tissue. The brain extraction results we have obtained from our CNNs are superior to the recently reported results in the literature on two publicly available benchmark data sets, namely, LPBA40 and OASIS, in which we obtained the Dice overlap coefficients of 97.73% and 97.62%, respectively. Significant improvement was achieved via our auto-context algorithm. Furthermore, we evaluated the performance of our algorithm in the challenging problem of extracting arbitrarily oriented fetal brains in reconstructed fetal brain magnetic

  12. Linear-regression convolutional neural network for fully automated coronary lumen segmentation in intravascular optical coherence tomography.

    Science.gov (United States)

    Yong, Yan Ling; Tan, Li Kuo; McLaughlin, Robert A; Chee, Kok Han; Liew, Yih Miin

    2017-12-01

    Intravascular optical coherence tomography (OCT) is an optical imaging modality commonly used in the assessment of coronary artery diseases during percutaneous coronary intervention. Manual segmentation to assess luminal stenosis from OCT pullback scans is challenging and time consuming. We propose a linear-regression convolutional neural network to automatically perform vessel lumen segmentation, parameterized in terms of radial distances from the catheter centroid in polar space. Benchmarked against gold-standard manual segmentation, our proposed algorithm achieves average locational accuracy of the vessel wall of 22 microns, and 0.985 and 0.970 in Dice coefficient and Jaccard similarity index, respectively. The average absolute error of luminal area estimation is 1.38%. The processing rate is 40.6 ms per image, suggesting the potential to be incorporated into a clinical workflow and to provide quantitative assessment of vessel lumen in an intraoperative time frame. (2017) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
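
    The key design choice in this record is to regress the lumen boundary as a vector of radial distances from the catheter centroid in polar space, rather than classifying pixels. A minimal sketch of such a regression network is given below; the input size, the number of angular bins and the layer widths are assumptions, not the authors' architecture.

        # Sketch: CNN regressing one radial distance per angular bin of a polar OCT image.
        from tensorflow.keras import layers, models

        N_ANGLES = 360       # assumed angular sampling of the polar image
        model = models.Sequential([
            layers.Input(shape=(256, N_ANGLES, 1)),   # (radius, angle) polar image, assumed size
            layers.Conv2D(16, 3, activation="relu", padding="same"),
            layers.MaxPooling2D(),
            layers.Conv2D(32, 3, activation="relu", padding="same"),
            layers.MaxPooling2D(),
            layers.Conv2D(32, 3, activation="relu", padding="same"),
            layers.MaxPooling2D(),
            layers.Flatten(),
            layers.Dense(256, activation="relu"),
            layers.Dense(N_ANGLES, activation="linear"),   # radial distance for each angle
        ])
        model.compile(optimizer="adam", loss="mse")        # linear-regression objective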

  13. Quantum error-correcting code for ternary logic

    Science.gov (United States)

    Majumdar, Ritajit; Basu, Saikat; Ghosh, Shibashis; Sur-Kolay, Susmita

    2018-05-01

    Ternary quantum systems are being studied because they provide more computational state space per unit of information, known as a qutrit. A qutrit has three basis states, thus a qubit may be considered as a special case of a qutrit where the coefficient of one of the basis states is zero. Hence both (2×2)-dimensional and (3×3)-dimensional Pauli errors can occur on qutrits. In this paper, we (i) explore the possible (2×2)-dimensional as well as (3×3)-dimensional Pauli errors in qutrits and show that any pairwise bit swap error can be expressed as a linear combination of shift errors and phase errors, (ii) propose a special type of error called a quantum superposition error and show its equivalence to arbitrary rotation, (iii) formulate a nine-qutrit code which can correct a single error in a qutrit, and (iv) provide its stabilizer and circuit realization.
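
    For readers unfamiliar with qutrit error operators, the (3×3)-dimensional analogues of the Pauli matrices referred to above are the standard generalized shift and phase operators. As a reminder, these are the textbook definitions, not anything specific to this paper:

        X_3\,|j\rangle = |(j+1) \bmod 3\rangle, \qquad
        Z_3\,|j\rangle = \omega^{\,j}\,|j\rangle, \qquad
        \omega = e^{2\pi i / 3}, \quad j \in \{0, 1, 2\}

    Any single-qutrit Pauli-type error is then of the form X_3^a Z_3^b with a, b in {0, 1, 2}; the shift and phase errors mentioned in the abstract correspond to the pure X_3 and pure Z_3 cases.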

  14. Effects and Correction of Closed Orbit Magnet Errors in the SNS Ring

    Energy Technology Data Exchange (ETDEWEB)

    Bunch, S.C.; Holmes, J.

    2004-01-01

    We consider the effect and correction of three types of orbit errors in SNS: quadrupole displacement errors, dipole displacement errors, and dipole field errors. Using the ORBIT beam dynamics code, we focus on orbit deflection of a standard pencil beam and on beam losses in a high intensity injection simulation. We study the correction of these orbit errors using the proposed system of 88 (44 horizontal and 44 vertical) ring beam position monitors (BPMs) and 52 (24 horizontal and 28 vertical) dipole corrector magnets. Correction is carried out numerically by adjusting the kick strengths of the dipole corrector magnets to minimize the sum of the squares of the BPM signals for the pencil beam. In addition to using the exact BPM signals as input to the correction algorithm, we also consider the effect of random BPM signal errors. For all three types of error and for perturbations of individual magnets, the correction algorithm always chooses the three-bump method to localize the orbit displacement to the region between the magnet and its adjacent correctors. The values of the BPM signals resulting from specified settings of the dipole corrector kick strengths can be used to set up the orbit response matrix, which can then be applied to the correction in the limit that the signals from the separate errors add linearly. When high intensity calculations are carried out to study beam losses, it is seen that the SNS orbit correction system, even with BPM uncertainties, is sufficient to correct losses to less than 10^-4 in nearly all cases, even those for which uncorrected losses constitute a large portion of the beam.
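
    In the linear limit described above (BPM signal changes adding linearly through an orbit response matrix), choosing corrector kicks that minimize the sum of squared BPM signals is an ordinary least-squares problem. The sketch below illustrates that step only; the response matrix and BPM readings are random stand-ins, not SNS lattice data.

        # Sketch: least-squares orbit correction with an orbit response matrix.
        # Corrected orbit ~ x_measured + R @ theta; choose kicks theta minimizing ||x + R theta||^2.
        import numpy as np

        n_bpm, n_corr = 44, 24                    # e.g. horizontal plane: 44 BPMs, 24 correctors
        rng = np.random.default_rng(0)
        R = rng.normal(size=(n_bpm, n_corr))      # orbit response matrix (stand-in values)
        x = rng.normal(size=n_bpm)                # measured BPM signals of the pencil beam (stand-in)

        theta, *_ = np.linalg.lstsq(R, -x, rcond=None)   # corrector kick strengths
        residual = x + R @ theta
        print("rms orbit before/after:", np.sqrt(np.mean(x**2)), np.sqrt(np.mean(residual**2)))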

  15. Black Holes, Holography, and Quantum Error Correction

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    How can it be that a local quantum field theory in some number of spacetime dimensions can "fake" a local gravitational theory in a higher number of dimensions?  How can the Ryu-Takayanagi Formula say that an entropy is equal to the expectation value of a local operator?  Why do such things happen only in gravitational theories?  In this talk I will explain how a new interpretation of the AdS/CFT correspondence as a quantum error correcting code provides satisfying answers to these questions, and more generally gives a natural way of generating simple models of the correspondence.  No familiarity with AdS/CFT or quantum error correction is assumed, but the former would still be helpful.  

  16. Metaheuristic Algorithms for Convolution Neural Network.

    Science.gov (United States)

    Rere, L M Rasdi; Fanany, Mohamad Ivan; Arymurthy, Aniati Murni

    2016-01-01

    A typical modern optimization technique is usually either heuristic or metaheuristic. This technique has managed to solve some optimization problems in the research area of science, engineering, and industry. However, implementation strategy of metaheuristic for accuracy improvement on convolution neural networks (CNN), a famous deep learning method, is still rarely investigated. Deep learning relates to a type of machine learning technique, where its aim is to move closer to the goal of artificial intelligence of creating a machine that could successfully perform any intellectual tasks that can be carried out by a human. In this paper, we propose the implementation strategy of three popular metaheuristic approaches, that is, simulated annealing, differential evolution, and harmony search, to optimize CNN. The performances of these metaheuristic methods in optimizing CNN on classifying MNIST and CIFAR dataset were evaluated and compared. Furthermore, the proposed methods are also compared with the original CNN. Although the proposed methods show an increase in the computation time, their accuracy has also been improved (up to 7.14 percent).

  17. Metaheuristic Algorithms for Convolution Neural Network

    Directory of Open Access Journals (Sweden)

    L. M. Rasdi Rere

    2016-01-01

    Full Text Available A typical modern optimization technique is usually either heuristic or metaheuristic. This technique has managed to solve some optimization problems in the research area of science, engineering, and industry. However, implementation strategy of metaheuristic for accuracy improvement on convolution neural networks (CNN), a famous deep learning method, is still rarely investigated. Deep learning relates to a type of machine learning technique, where its aim is to move closer to the goal of artificial intelligence of creating a machine that could successfully perform any intellectual tasks that can be carried out by a human. In this paper, we propose the implementation strategy of three popular metaheuristic approaches, that is, simulated annealing, differential evolution, and harmony search, to optimize CNN. The performances of these metaheuristic methods in optimizing CNN on classifying MNIST and CIFAR dataset were evaluated and compared. Furthermore, the proposed methods are also compared with the original CNN. Although the proposed methods show an increase in the computation time, their accuracy has also been improved (up to 7.14 percent).
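
    To make one of the three metaheuristics concrete, the sketch below shows a bare-bones simulated-annealing loop over a two-parameter CNN configuration. The objective function and the (learning rate, dropout rate) search space are synthetic stand-ins for training a CNN and measuring validation error; the cooling schedule and neighbourhood are assumptions, not the settings used in the paper.

        # Sketch: simulated annealing over a (learning-rate, dropout-rate) pair.
        # val_error() is a synthetic stand-in for training a CNN and reading its validation error.
        import math, random

        def val_error(params):
            lr, drop = params
            # Synthetic bowl-shaped objective pretending lr = 1e-3 and drop = 0.5 are optimal.
            return (math.log10(lr) + 3.0) ** 2 + (drop - 0.5) ** 2

        def neighbour(params):
            lr, drop = params
            lr = min(1e-1, max(1e-5, lr * 10 ** random.uniform(-0.3, 0.3)))
            drop = min(0.9, max(0.1, drop + random.uniform(-0.05, 0.05)))
            return (lr, drop)

        current = best = (1e-2, 0.2)
        T = 1.0
        for step in range(200):
            cand = neighbour(current)
            delta = val_error(cand) - val_error(current)
            if delta < 0 or random.random() < math.exp(-delta / T):
                current = cand
                if val_error(current) < val_error(best):
                    best = current
            T *= 0.98                      # geometric cooling schedule (assumption)
        print("best hyperparameters:", best)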

  18. Exploring the effects of transducer models when training convolutional neural networks to eliminate reflection artifacts in experimental photoacoustic images

    Science.gov (United States)

    Allman, Derek; Reiter, Austin; Bell, Muyinatu

    2018-02-01

    We previously proposed a method of removing reflection artifacts in photoacoustic images that uses deep learning. Our approach generally relies on using simulated photoacoustic channel data to train a convolutional neural network (CNN) that is capable of distinguishing sources from artifacts based on unique differences in their spatial impulse responses (manifested as depth-based differences in wavefront shapes). In this paper, we directly compare a CNN trained with our previous continuous transducer model to a CNN trained with an updated discrete acoustic receiver model that more closely matches an experimental ultrasound transducer. These two CNNs were trained with simulated data and tested on experimental data. The CNN trained using the continuous receiver model correctly classified 100% of sources and 70.3% of artifacts in the experimental data. In contrast, the CNN trained using the discrete receiver model correctly classified 100% of sources and 89.7% of artifacts in the experimental images. The 19.4% increase in artifact classification accuracy indicates that an acoustic receiver model that closely mimics the experimental transducer plays an important role in improving the classification of artifacts in experimental photoacoustic data. Results are promising for developing a method to display CNN-based images that remove artifacts in addition to only displaying network-identified sources as previously proposed.

  19. ecco: An error correcting comparator theory.

    Science.gov (United States)

    Ghirlanda, Stefano

    2018-03-08

    Building on the work of Ralph Miller and coworkers (Miller and Matzel, 1988; Denniston et al., 2001; Stout and Miller, 2007), I propose a new formalization of the comparator hypothesis that seeks to overcome some shortcomings of existing formalizations. The new model, dubbed ecco for "Error-Correcting COmparisons," retains the comparator process and the learning of CS-CS associations based on contingency. ecco assumes, however, that learning of CS-US associations is driven by total error correction, as first introduced by Rescorla and Wagner (1972). I explore ecco's behavior in acquisition, compound conditioning, blocking, backward blocking, and unovershadowing. In these paradigms, ecco appears capable of avoiding the problems of current comparator models, such as the inability to solve some discriminations and some paradoxical effects of stimulus salience. At the same time, ecco exhibits the retrospective revaluation phenomena that are characteristic of comparator theory. Copyright © 2018 Elsevier B.V. All rights reserved.
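
    The "total error correction" that ecco borrows for CS-US learning is the Rescorla-Wagner rule; for reference, its standard textbook form (not ecco's full specification) is:

        \Delta V_X = \alpha_X \, \beta \left( \lambda - \sum_{i \in \text{trial}} V_i \right)

    where V_X is the associative strength of CS X, alpha_X and beta are salience/learning-rate parameters, lambda is the asymptote supported by the US, and the sum runs over all CSs present on the trial, which is what makes the error term "total" rather than stimulus-specific.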

  20. Using Convolutional Neural Network Filters to Measure Left-Right Mirror Symmetry in Images

    Directory of Open Access Journals (Sweden)

    Anselm Brachmann

    2016-12-01

    Full Text Available We propose a method for measuring symmetry in images by using filter responses from Convolutional Neural Networks (CNNs). The aim of the method is to model human perception of left/right symmetry as closely as possible. Using the Convolutional Neural Network (CNN) approach has two main advantages: First, CNN filter responses closely match the responses of neurons in the human visual system; they take information on color, edges and texture into account simultaneously. Second, we can measure higher-order symmetry, which relies not only on color, edges and texture, but also on the shapes and objects that are depicted in images. We validated our algorithm on a dataset of 300 music album covers, which were rated according to their symmetry by 20 human observers, and compared results with those from a previously proposed method. With our method, human perception of symmetry can be predicted with high accuracy. Moreover, we demonstrate that the inclusion of features from higher CNN layers, which encode more abstract image content, increases the performance further. In conclusion, we introduce a model of left/right symmetry that closely models human perception of symmetry in CD album covers.
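
    A rough sketch of the underlying idea, comparing convolutional filter responses of an image with those of its left-right mirrored version, is given below. VGG16, the choice of feature layer and the cosine-similarity score are illustrative stand-ins, not the authors' exact measure.

        # Sketch: score left/right symmetry by comparing CNN feature maps of an image
        # with those of its horizontally mirrored version (assumed layer and score).
        import numpy as np
        from tensorflow.keras.applications import VGG16
        from tensorflow.keras.applications.vgg16 import preprocess_input
        from tensorflow.keras.models import Model

        base = VGG16(weights="imagenet", include_top=False)
        feat = Model(base.input, base.get_layer("block3_conv3").output)

        def symmetry_score(image):
            """image: float array of shape (224, 224, 3) with values in [0, 255]."""
            f_orig = feat.predict(preprocess_input(image[None].copy()), verbose=0)
            mirrored = image[:, ::-1, :]                              # left-right flip
            f_mirr = feat.predict(preprocess_input(mirrored[None].copy()), verbose=0)
            f_mirr = f_mirr[:, :, ::-1, :]                            # flip feature maps back to align
            a, b = f_orig.ravel(), f_mirr.ravel()
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

        print(symmetry_score(np.random.rand(224, 224, 3) * 255.0))    # dummy image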

  1. Squeeze-SegNet: a new fast deep convolutional neural network for semantic segmentation

    Science.gov (United States)

    Nanfack, Geraldin; Elhassouny, Azeddine; Oulad Haj Thami, Rachid

    2018-04-01

    Recent research on deep convolutional neural networks has focused on improving accuracy, producing significant advances. While they were initially limited to classification tasks, with contributions from the scientific communities embarking on this field they have become very useful in higher-level tasks such as object detection and pixel-wise semantic segmentation. Thus, ideas from deep learning have advanced the state of the art in semantic segmentation accuracy; however, these architectures are very difficult to apply in embedded systems, as is the case for autonomous driving. We present a new deep fully convolutional neural network for pixel-wise semantic segmentation, which we call Squeeze-SegNet. The architecture is based on an encoder-decoder style. We use a SqueezeNet-like encoder and a decoder formed by our proposed squeeze-decoder module and an upsample layer using downsample indices as in SegNet, and we add a deconvolution layer to provide the final multi-channel feature map. On datasets such as CamVid or Cityscapes, our network achieves SegNet-level accuracy with about 10 times fewer parameters than SegNet.

  2. Automatic sleep stage classification of single-channel EEG by using complex-valued convolutional neural network.

    Science.gov (United States)

    Zhang, Junming; Wu, Yan

    2018-03-28

    Many systems have been developed for automatic sleep stage classification. However, nearly all models are based on handcrafted features. Because the feature space is large, feature selection must be used. Meanwhile, designing handcrafted features is a difficult and time-consuming task, because it requires the domain knowledge of experienced experts. Results vary when different sets of features are chosen to identify sleep stages. Additionally, there may exist informative features that we are unaware of, yet these features may be important for sleep stage classification. Therefore, a new sleep stage classification system, based on a complex-valued convolutional neural network (CCNN), is proposed in this study. Unlike existing sleep stage methods, our method can automatically extract features from raw electroencephalography data and then classify the sleep stage based on the learned features. Additionally, we also prove that the decision boundaries for the real and imaginary parts of a complex-valued convolutional neuron intersect orthogonally. The classification performance of handcrafted features is compared with that of features learned via the CCNN. Experimental results show that the proposed method is comparable to existing methods. The CCNN obtains better classification performance and considerably faster convergence than a conventional convolutional neural network. Experimental results also show that the proposed method is a useful decision-support tool for automatic sleep stage classification.

  3. Scalable error correction in distributed ion trap computers

    International Nuclear Information System (INIS)

    Oi, Daniel K. L.; Devitt, Simon J.; Hollenberg, Lloyd C. L.

    2006-01-01

    A major challenge for quantum computation in ion trap systems is scalable integration of error correction and fault tolerance. We analyze a distributed architecture with rapid high-fidelity local control within nodes and entangled links between nodes alleviating long-distance transport. We demonstrate fault-tolerant operator measurements which are used for error correction and nonlocal gates. This scheme is readily applied to linear ion traps which cannot be scaled up beyond a few ions per individual trap but which have access to a probabilistic entanglement mechanism. A proof-of-concept system is presented which is within the reach of current experiment

  4. A Data-Driven Response Virtual Sensor Technique with Partial Vibration Measurements Using Convolutional Neural Network

    Science.gov (United States)

    Sun, Shan-Bin; He, Yuan-Yuan; Zhou, Si-Da; Yue, Zhen-Jiang

    2017-01-01

    Measurement of dynamic responses plays an important role in structural health monitoring, damage detection and other fields of research. However, in aerospace engineering, the physical sensors are limited in the operational conditions of spacecraft, due to the severe environment in outer space. This paper proposes a virtual sensor model with partial vibration measurements using a convolutional neural network. The transmissibility function is employed as prior knowledge. A four-layer neural network with two convolutional layers, one fully connected layer, and an output layer is proposed as the predicting model. Numerical examples of two different structural dynamic systems demonstrate the performance of the proposed approach. The excellence of the novel technique is further indicated using a simply supported beam experiment comparing to a modal-model-based virtual sensor, which uses modal parameters, such as mode shapes, for estimating the responses of the faulty sensors. The results show that the presented data-driven response virtual sensor technique can predict structural response with high accuracy. PMID:29231868

  5. A Data-Driven Response Virtual Sensor Technique with Partial Vibration Measurements Using Convolutional Neural Network.

    Science.gov (United States)

    Sun, Shan-Bin; He, Yuan-Yuan; Zhou, Si-Da; Yue, Zhen-Jiang

    2017-12-12

    Measurement of dynamic responses plays an important role in structural health monitoring, damage detection and other fields of research. However, in aerospace engineering, the physical sensors are limited in the operational conditions of spacecraft, due to the severe environment in outer space. This paper proposes a virtual sensor model with partial vibration measurements using a convolutional neural network. The transmissibility function is employed as prior knowledge. A four-layer neural network with two convolutional layers, one fully connected layer, and an output layer is proposed as the predicting model. Numerical examples of two different structural dynamic systems demonstrate the performance of the proposed approach. The excellence of the novel technique is further indicated using a simply supported beam experiment comparing to a modal-model-based virtual sensor, which uses modal parameters, such as mode shapes, for estimating the responses of the faulty sensors. The results show that the presented data-driven response virtual sensor technique can predict structural response with high accuracy.

  6. Features of an Error Correction Memory to Enhance Technical Texts Authoring in LELIE

    Directory of Open Access Journals (Sweden)

    Patrick SAINT-DIZIER

    2015-12-01

    Full Text Available In this paper, we investigate the notion of an error correction memory applied to technical texts. The main purpose is to introduce flexibility and context sensitivity into the detection and correction of errors related to Constrained Natural Language (CNL) principles. This is realized by enhancing error detection paired with relatively generic correction patterns and contextual correction recommendations. Patterns are induced from previous corrections made by technical writers for a given type of text. The impact of such an error correction memory is also investigated from the point of view of the technical writer's cognitive activity. The notion of error correction memory is developed within the framework of the LELIE project; an experiment is carried out on the case of fuzzy lexical items and negation, which are both major problems in technical writing. Language processing and knowledge representation aspects are developed together with evaluation directions.

  7. Automatic QRS complex detection using two-level convolutional neural network.

    Science.gov (United States)

    Xiang, Yande; Lin, Zhitao; Meng, Jianyi

    2018-01-29

    The QRS complex is the most noticeable feature in the electrocardiogram (ECG) signal; therefore, its detection is critical for ECG signal analysis. Existing detection methods largely depend on hand-crafted manual features and parameters, which may introduce significant computational complexity, especially in the transform domains. In addition, fixed features and parameters are not suitable for detecting various kinds of QRS complexes under different circumstances. In this study, an accurate method for QRS complex detection based on a 1-D convolutional neural network (CNN) is proposed. The CNN consists of object-level and part-level CNNs for automatically extracting ECG morphological features at different granularities. All the extracted morphological features are used by a multi-layer perceptron (MLP) for QRS complex detection. Additionally, a simple ECG signal preprocessing technique, which contains only a difference operation in the temporal domain, is adopted. On the MIT-BIH arrhythmia (MIT-BIH-AR) database, the proposed detection method achieves an overall sensitivity Sen = 99.77%, positive predictivity rate PPR = 99.91%, and detection error rate DER = 0.32%. In addition, the performance is evaluated under different signal-to-noise ratio (SNR) values. An automatic QRS detection method using a two-level 1-D CNN and a simple signal preprocessing technique is proposed for QRS complex detection. Compared with state-of-the-art QRS complex detection approaches, experimental results show that the proposed method achieves comparable accuracy.
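
    The overall shape of the approach, temporal-difference preprocessing followed by two 1-D CNN branches of different receptive fields ("object-level" / "part-level") and an MLP head, is sketched below. All sizes, filter counts and the padding choice are assumptions, not the authors' architecture.

        # Sketch: difference preprocessing + two-level 1-D CNN + MLP for QRS detection.
        import numpy as np
        from tensorflow.keras import layers, models

        SEG = 360   # assumed window length around a candidate sample

        def preprocess(ecg):
            """Temporal difference of a raw ECG window, zero-padded to keep its length."""
            return np.concatenate([[0.0], np.diff(ecg, n=1)])

        inp = layers.Input(shape=(SEG, 1))
        coarse = layers.Conv1D(8, 31, activation="relu", padding="same")(inp)   # object-level
        coarse = layers.GlobalMaxPooling1D()(coarse)
        fine = layers.Conv1D(8, 7, activation="relu", padding="same")(inp)      # part-level
        fine = layers.GlobalMaxPooling1D()(fine)
        h = layers.Concatenate()([coarse, fine])
        h = layers.Dense(32, activation="relu")(h)                              # MLP head
        out = layers.Dense(1, activation="sigmoid")(h)                          # QRS vs. non-QRS
        model = models.Model(inp, out)
        model.compile(optimizer="adam", loss="binary_crossentropy")

        ecg_window = np.sin(np.linspace(0.0, 6.0 * np.pi, SEG))   # dummy segment, not MIT-BIH data
        x = preprocess(ecg_window)[None, :, None]                 # shape (1, SEG, 1)
        print(model.predict(x, verbose=0).shape)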

  8. Multiscale Rotation-Invariant Convolutional Neural Networks for Lung Texture Classification.

    Science.gov (United States)

    Wang, Qiangchang; Zheng, Yuanjie; Yang, Gongping; Jin, Weidong; Chen, Xinjian; Yin, Yilong

    2018-01-01

    We propose a new multiscale rotation-invariant convolutional neural network (MRCNN) model for classifying various lung tissue types on high-resolution computed tomography. MRCNN employs the Gabor local binary pattern, which introduces a useful property for image analysis: invariance to image scale and rotation. In addition, we offer an approach to deal with the problem, present in most existing works, of the imbalanced number of samples between different classes, accomplished by changing the overlap size between adjacent patches. Experimental results on a public interstitial lung disease database show the superior performance of the proposed method compared to the state of the art.

  9. Traffic sign classification with dataset augmentation and convolutional neural network

    Science.gov (United States)

    Tang, Qing; Kurnianggoro, Laksono; Jo, Kang-Hyun

    2018-04-01

    This paper presents a method for traffic sign classification using a convolutional neural network (CNN). In this method, we first convert a color image to grayscale and then normalize it to the range (-1,1) as the preprocessing step. To increase the robustness of the classification model, we apply a dataset augmentation algorithm and create new images to train the model. To avoid overfitting, we utilize a dropout module before the last fully connected layer. To assess the performance of the proposed method, the German traffic sign recognition benchmark (GTSRB) dataset is utilized. Experimental results show that the method is effective in classifying traffic signs.
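
    The preprocessing pipeline above (grayscale conversion followed by normalization into the range (-1, 1)) is straightforward; a sketch in NumPy is given below, with the luminance weights and uint8 input being assumptions rather than the paper's stated choices.

        # Sketch: grayscale conversion + normalization to (-1, 1) for traffic sign images.
        import numpy as np

        def preprocess(rgb):
            """rgb: uint8 array of shape (H, W, 3); returns float32 array in (-1, 1)."""
            gray = rgb.astype(np.float32) @ np.array([0.299, 0.587, 0.114], dtype=np.float32)
            return (gray / 127.5) - 1.0          # map [0, 255] -> [-1, 1]

        img = np.random.randint(0, 256, size=(32, 32, 3), dtype=np.uint8)   # GTSRB-sized dummy
        x = preprocess(img)
        assert x.min() >= -1.0 and x.max() <= 1.0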

  10. Tooth labeling in cone-beam CT using deep convolutional neural network for forensic identification

    Science.gov (United States)

    Miki, Yuma; Muramatsu, Chisako; Hayashi, Tatsuro; Zhou, Xiangrong; Hara, Takeshi; Katsumata, Akitoshi; Fujita, Hiroshi

    2017-03-01

    In large disasters, dental records play an important role in forensic identification. However, filing dental charts for corpses is not an easy task for general dentists. Moreover, it is laborious and time-consuming work in cases of large-scale disasters. We have been investigating a tooth labeling method on dental cone-beam CT images for the purpose of automatic filing of dental charts. In our method, individual teeth in CT images are detected and classified into seven tooth types using a deep convolutional neural network. We employed a fully convolutional network using the AlexNet architecture for detecting each tooth and applied our previous method using regular AlexNet for classifying the detected teeth into 7 tooth types. From 52 CT volumes obtained by two imaging systems, five images each were randomly selected as test data, and the remaining 42 cases were used as training data. The result showed a tooth detection accuracy of 77.4% with an average of 5.8 false detections per image. The result indicates the potential utility of the proposed method for automatic recording of dental information.

  11. APPLICATION OF CONVOLUTIONAL NEURAL NETWORK IN CLASSIFICATION OF HIGH RESOLUTION AGRICULTURAL REMOTE SENSING IMAGES

    Directory of Open Access Journals (Sweden)

    C. Yao

    2017-09-01

    Full Text Available With the rapid development of Precision Agriculture (PA) promoted by high-resolution remote sensing, crop classification of high-resolution remote sensing images has become significant for the management and estimation of agriculture. Due to the complexity and fragmentation of the features and the surroundings at high resolution, the accuracy of traditional classification methods has not been able to meet the standard of agricultural problems. In this case, this paper proposes a classification method for high-resolution agricultural remote sensing images based on convolutional neural networks (CNN). For training, a large number of training samples were produced from panchromatic images of the GF-1 high-resolution satellite of China. In the experiment, through training and testing the CNN with the MATLAB deep learning toolbox, the crop classification finally reached a correct rate of 99.66 % after gradual parameter tuning during training. By improving the accuracy of image classification and image recognition, the application of CNN provides a reference value for the field of remote sensing in PA.

  12. tf_unet: Generic convolutional neural network U-Net implementation in Tensorflow

    Science.gov (United States)

    Akeret, Joel; Chang, Chihway; Lucchi, Aurelien; Refregier, Alexandre

    2016-11-01

    tf_unet mitigates radio frequency interference (RFI) signals in radio data using a special type of Convolutional Neural Network, the U-Net, that enables the classification of clean signal and RFI signatures in 2D time-ordered data acquired from a radio telescope. The code is not tied to a specific segmentation and can be used, for example, to detect radio frequency interference (RFI) in radio astronomy or galaxies and stars in widefield imaging data. This U-Net implementation can outperform classical RFI mitigation algorithms.

  13. Spatially coupled low-density parity-check error correction for holographic data storage

    Science.gov (United States)

    Ishii, Norihiko; Katano, Yutaro; Muroi, Tetsuhiko; Kinoshita, Nobuhiro

    2017-09-01

    A spatially coupled low-density parity-check (SC-LDPC) code was considered for holographic data storage. The superiority of SC-LDPC was studied by simulation. The simulations show that the performance of SC-LDPC depends on the lifting number, and when the lifting number is over 100, SC-LDPC shows better error correctability compared with irregular LDPC. SC-LDPC is applied to the 5:9 modulation code, which is one of the differential codes. In simulation, the error-free point is near 2.8 dB and error rates above 10^-1 can be corrected. From these simulation results, this error correction code can be applied to actual holographic data storage test equipment. Results showed that an error rate of 8 × 10^-2 can be corrected; furthermore, it works effectively and shows good error correctability.

  14. Correcting for particle counting bias error in turbulent flow

    Science.gov (United States)

    Edwards, R. V.; Baratuci, W.

    1985-01-01

    Even with an ideal seeding device generating particles that exactly follow the flow, particle statistics are still a major source of error, i.e., there is a particle counting bias wherein the probability of measuring a velocity is a function of that velocity. The error in the measured mean can be as much as 25%. Many schemes have been put forward to correct for this error, but there is no universal agreement as to the acceptability of any one method. In particular, it is sometimes difficult to know whether the assumptions required in the analysis are fulfilled by any particular flow measurement system. To check various correction mechanisms in an ideal way, and to gain some insight into how to correct with the fewest initial assumptions, a computer simulation is constructed to simulate laser anemometer measurements in a turbulent flow. That simulator and the results of its use are discussed.
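
    One classical remedy for this counting bias (not necessarily the scheme favoured by this study) is to weight each velocity realization by the inverse of its magnitude, so that fast particles, which arrive more often, do not dominate the mean. The sketch below simulates the bias and its correction with synthetic data; the velocity distribution and sample counts are arbitrary choices for illustration.

        # Sketch: inverse-velocity-magnitude weighting to correct counting bias in the mean.
        # (Classical McLaughlin-Tiederman-style correction; shown for illustration only.)
        import numpy as np

        rng = np.random.default_rng(1)
        u_true = rng.normal(10.0, 2.0, size=100_000)        # "true" velocities of fluid elements
        # Particle arrivals are proportional to |u|: sample with probability ~ |u| (counting bias).
        p = np.abs(u_true) / np.abs(u_true).sum()
        u_meas = rng.choice(u_true, size=20_000, p=p)

        naive_mean = u_meas.mean()
        w = 1.0 / np.abs(u_meas)                            # inverse-velocity weights
        corrected_mean = np.sum(w * u_meas) / np.sum(w)
        print(f"true {u_true.mean():.2f}  biased {naive_mean:.2f}  corrected {corrected_mean:.2f}")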

  15. Multi-scale Fully Convolutional Network for Face Detection in the Wild

    KAUST Repository

    Bai, Yancheng

    2017-08-24

    Face detection is a classical problem in computer vision. It is still a difficult task due to many nuisances that naturally occur in the wild. In this paper, we propose a multi-scale fully convolutional network for face detection. To reduce computation, the intermediate convolutional feature maps (conv) are shared by every scale model. We up-sample and down-sample the final conv map to approximate K levels of a feature pyramid, leading to a wide range of face scales that can be detected. At each feature pyramid level, a FCN is trained end-to-end to deal with faces in a small range of scale change. Because of the up-sampling, our method can detect very small faces (10×10 pixels). We test our MS-FCN detector on four public face detection datasets, including FDDB, WIDER FACE, AFW and PASCAL FACE. Extensive experiments show that it outperforms state-of-the-art methods. Also, MS-FCN runs at 23 FPS on a GPU for images of size 640×480 with no assumption on the minimum detectable face size.

  16. Deep convolutional neural networks for annotating gene expression patterns in the mouse brain.

    Science.gov (United States)

    Zeng, Tao; Li, Rongjian; Mukkamala, Ravi; Ye, Jieping; Ji, Shuiwang

    2015-05-07

    Profiling gene expression in brain structures at various spatial and temporal scales is essential to understanding how genes regulate the development of brain structures. The Allen Developing Mouse Brain Atlas provides high-resolution 3-D in situ hybridization (ISH) gene expression patterns in multiple developing stages of the mouse brain. Currently, the ISH images are annotated with anatomical terms manually. In this paper, we propose a computational approach to annotate gene expression pattern images in the mouse brain at various structural levels over the course of development. We applied deep convolutional neural network that was trained on a large set of natural images to extract features from the ISH images of developing mouse brain. As a baseline representation, we applied invariant image feature descriptors to capture local statistics from ISH images and used the bag-of-words approach to build image-level representations. Both types of features from multiple ISH image sections of the entire brain were then combined to build 3-D, brain-wide gene expression representations. We employed regularized learning methods for discriminating gene expression patterns in different brain structures. Results show that our approach of using convolutional model as feature extractors achieved superior performance in annotating gene expression patterns at multiple levels of brain structures throughout four developing ages. Overall, we achieved average AUC of 0.894 ± 0.014, as compared with 0.820 ± 0.046 yielded by the bag-of-words approach. Deep convolutional neural network model trained on natural image sets and applied to gene expression pattern annotation tasks yielded superior performance, demonstrating its transfer learning property is applicable to such biological image sets.

  17. Accuracy Improvement of Multi-Axis Systems Based on Laser Correction of Volumetric Geometric Errors

    Science.gov (United States)

    Teleshevsky, V. I.; Sokolov, V. A.; Pimushkin, Ya I.

    2018-04-01

    The article describes a volumetric geometric error correction method for CNC-controlled multi-axis systems (machine tools, CMMs, etc.). Kalman's concept of "Control and Observation" is used. A versatile multi-function laser interferometer is used as the Observer in order to measure the machine's error functions. A systematic error map of the machine's workspace is produced based on the error function measurements. The error map then yields the error correction strategy. The article proposes a new method of forming the error correction strategy. The method is based on the error distribution within the machine's workspace and a CNC-program postprocessor. The postprocessor provides minimal error values within the maximal workspace zone. The results are confirmed by error correction of precision CNC machine tools.

  18. Developing convolutional neural networks for measuring climate change opinions from social media data

    Science.gov (United States)

    Mao, H.; Bhaduri, B. L.

    2016-12-01

    Understanding public opinion on climate change is important for policy making. Public opinion, however, is typically measured with national surveys, which are often too expensive and are thus updated at a low frequency. Twitter has become a major platform for people to express their opinions on social and political issues. Our work attempts to understand whether Twitter data can provide complementary insights about climate change perceptions. Since social media is real-time in nature, this data source can especially help us understand how public opinion changes over time in response to climate events and hazards, which is very difficult to capture with manual surveys. We use the Twitter Streaming API to collect tweets that contain the keywords "climate change" or "#climatechange". Traditional machine-learning based opinion mining algorithms require a significant amount of labeled data. Data labeling is notoriously time consuming. To address this problem, we use hashtags (a significant feature used to mark the topics of tweets) to annotate tweets automatically. For example, the hashtags #climatedenial and #climatescam are negative opinion labels, while #actonclimate and #climateaction are positive. Following this method, we can obtain a large amount of training data without human labor. This labeled dataset is used to train a deep convolutional neural network that classifies tweets into positive (i.e., believe in climate change) and negative (i.e., do not believe). Based on the positive/negative tweets obtained, we will further analyze risk perceptions and opinions toward policy support. In addition, we analyze Twitter user profiles to understand the demographics of proponents and opponents of climate change. Deep learning techniques, especially convolutional deep neural networks, have achieved much success in computer vision. In this work, we propose a convolutional neural network architecture for understanding opinions within text. This method is compared with
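
    The automatic labelling step described in the record above, using opinion-bearing hashtags as weak labels, can be sketched in a few lines. The hashtag lists below repeat only the examples given in the record; a real lexicon would be larger.

        # Sketch: weak labelling of climate-change tweets by opinion-bearing hashtags.
        NEGATIVE = {"#climatedenial", "#climatescam"}        # do not believe in climate change
        POSITIVE = {"#actonclimate", "#climateaction"}       # believe in climate change

        def weak_label(tweet_text):
            """Return 1 (positive), 0 (negative) or None (unlabelled) from hashtags alone."""
            tags = {tok.lower() for tok in tweet_text.split() if tok.startswith("#")}
            if tags & POSITIVE and not tags & NEGATIVE:
                return 1
            if tags & NEGATIVE and not tags & POSITIVE:
                return 0
            return None                                      # ambiguous or no opinion hashtag

        print(weak_label("Wildfires again this summer #actonclimate"))   # -> 1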

  19. Error-finding and error-correcting methods for the start-up of the SLC

    International Nuclear Information System (INIS)

    Lee, M.J.; Clearwater, S.H.; Kleban, S.D.; Selig, L.J.

    1987-02-01

    During the commissioning of an accelerator, storage ring, or beam transfer line, one of the important tasks of an accelerator physicist is to check the first-order optics of the beam line and to look for errors in the system. Conceptually, it is important to distinguish between techniques for finding the machine errors that are the cause of the problem and techniques for correcting the beam errors that are the result of the machine errors. In this paper we will limit our presentation to certain applications of these two methods for finding or correcting beam-focus errors and beam-kick errors that affect the profile and trajectory of the beam, respectively. Many of these methods have been used successfully in the commissioning of SLC systems. In order not to waste expensive beam time we have developed and used a beam-line simulator to test the ideas that have not been tested experimentally. To save valuable physicists' time we have further automated the beam-kick error-finding procedures by adopting methods from the field of artificial intelligence to develop a prototype expert system. Our experience with this prototype has demonstrated the usefulness of expert systems in solving accelerator control problems. The expert system is able to find the same solutions as an expert physicist but in a more systematic fashion. The methods used in these procedures and some of the recent applications will be described in this paper.

  20. A deep convolutional neural network model to classify heartbeats.

    Science.gov (United States)

    Acharya, U Rajendra; Oh, Shu Lih; Hagiwara, Yuki; Tan, Jen Hong; Adam, Muhammad; Gertych, Arkadiusz; Tan, Ru San

    2017-10-01

    The electrocardiogram (ECG) is a standard test used to monitor the activity of the heart. Many cardiac abnormalities will be manifested in the ECG, including arrhythmia, which is a general term that refers to an abnormal heart rhythm. The basis of arrhythmia diagnosis is the identification of normal versus abnormal individual heart beats, and their correct classification into different diagnoses, based on ECG morphology. Heartbeats can be sub-divided into five categories, namely non-ectopic, supraventricular ectopic, ventricular ectopic, fusion, and unknown beats. It is challenging and time-consuming to distinguish these heartbeats on ECG as these signals are typically corrupted by noise. We developed a 9-layer deep convolutional neural network (CNN) to automatically identify 5 different categories of heartbeats in ECG signals. Our experiment was conducted on original and noise-attenuated sets of ECG signals derived from a publicly available database. This set was artificially augmented to even out the number of instances of the 5 classes of heartbeats and filtered to remove high-frequency noise. The CNN was trained using the augmented data and achieved an accuracy of 94.03% and 93.47% in the diagnostic classification of heartbeats in original and noise-free ECGs, respectively. When the CNN was trained with highly imbalanced data (original dataset), the accuracy of the CNN reduced to 89.07% and 89.3% in noisy and noise-free ECGs. When properly trained, the proposed CNN model can serve as a tool for screening of ECG to quickly identify different types and frequencies of arrhythmic heartbeats. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Deep convolutional neural networks for building extraction from orthoimages and dense image matching point clouds

    Science.gov (United States)

    Maltezos, Evangelos; Doulamis, Nikolaos; Doulamis, Anastasios; Ioannidis, Charalabos

    2017-10-01

    Automatic extraction of buildings from remote sensing data is an attractive research topic, useful for several applications, such as cadastre and urban planning. It remains challenging, however, mainly due to the inherent artifacts of the data used and to differences in viewpoint, surrounding environment, and the complex shape and size of the buildings. This paper introduces an efficient deep learning framework based on convolutional neural networks (CNNs) toward building extraction from orthoimages. In contrast to conventional deep approaches in which the raw image data are fed as input to the deep neural network, in this paper the height information is exploited as an additional feature, derived from the application of a dense image matching algorithm. As test sites, several complex urban regions with various types of buildings, pixel resolutions and types of data are used, located in Vaihingen in Germany and in Perissa in Greece. Our method is evaluated using the rates of completeness, correctness, and quality, and compared with conventional and other "shallow" learning paradigms such as support vector machines. Experimental results indicate that a combination of raw image data with height information, fed as input to a deep CNN model, provides potential for building detection in terms of robustness, flexibility, and efficiency.

  2. Deep learning in breast cancer risk assessment: evaluation of convolutional neural networks on a clinical dataset of full-field digital mammograms.

    Science.gov (United States)

    Li, Hui; Giger, Maryellen L; Huynh, Benjamin Q; Antropova, Natalia O

    2017-10-01

    To evaluate deep learning in the assessment of breast cancer risk in which convolutional neural networks (CNNs) with transfer learning are used to extract parenchymal characteristics directly from full-field digital mammographic (FFDM) images instead of using computerized radiographic texture analysis (RTA), 456 clinical FFDM cases were included: a "high-risk" BRCA1/2 gene-mutation carriers dataset (53 cases), a "high-risk" unilateral cancer patients dataset (75 cases), and a "low-risk dataset" (328 cases). Deep learning was compared to the use of features from RTA, as well as to a combination of both in the task of distinguishing between high- and low-risk subjects. Similar classification performances were obtained using CNN [area under the curve [Formula: see text]; standard error [Formula: see text

  3. Testing and Inference in Nonlinear Cointegrating Vector Error Correction Models

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Rahbæk, Anders

    In this paper, we consider a general class of vector error correction models which allow for asymmetric and non-linear error correction. We provide asymptotic results for (quasi-)maximum likelihood (QML) based estimators and tests. General hypothesis testing is considered, where testing...... of non-stationary non-linear time series models. Thus the paper provides a full asymptotic theory for estimators as well as standard and non-standard test statistics. The derived asymptotic results prove to be new compared to results found elsewhere in the literature due to the impact of the estimated...... symmetric non-linear error correction considered. A simulation study shows that the finite sample properties of the bootstrapped tests are satisfactory with good size and power properties for reasonable sample sizes....
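
    For context, the linear vector error correction model that these papers generalize has the standard form shown below; the nonlinear and asymmetric variants replace the linear adjustment term by a function of the cointegrating relation. This is standard notation, not the papers' exact specification:

        \Delta y_t = \alpha \beta' y_{t-1} + \sum_{i=1}^{k-1} \Gamma_i \, \Delta y_{t-i} + \varepsilon_t
        \qquad \longrightarrow \qquad
        \Delta y_t = g\!\left( \beta' y_{t-1} \right) + \sum_{i=1}^{k-1} \Gamma_i \, \Delta y_{t-i} + \varepsilon_t

    where beta' y_{t-1} collects the cointegrating (error correction) terms, alpha is the matrix of adjustment coefficients, and g(.) may be nonlinear and asymmetric.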

  4. Testing and Inference in Nonlinear Cointegrating Vector Error Correction Models

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Rahbek, Anders

    In this paper, we consider a general class of vector error correction models which allow for asymmetric and non-linear error correction. We provide asymptotic results for (quasi-)maximum likelihood (QML) based estimators and tests. General hypothesis testing is considered, where testing...... of non-stationary non-linear time series models. Thus the paper provides a full asymptotic theory for estimators as well as standard and non-standard test statistics. The derived asymptotic results prove to be new compared to results found elsewhere in the literature due to the impact of the estimated...... symmetric non-linear error correction are considered. A simulation study shows that the finite sample properties of the bootstrapped tests are satisfactory with good size and power properties for reasonable sample sizes....

  5. Environment-assisted error correction of single-qubit phase damping

    International Nuclear Information System (INIS)

    Trendelkamp-Schroer, Benjamin; Helm, Julius; Strunz, Walter T.

    2011-01-01

    Open quantum system dynamics of random unitary type may in principle be fully undone. Closely following the scheme of environment-assisted error correction proposed by Gregoratti and Werner [J. Mod. Opt. 50, 915 (2003)], we explicitly carry out all steps needed to invert a phase-damping error on a single qubit. Furthermore, we extend the scheme to a mixed-state environment. Surprisingly, we find cases for which the uncorrected state is closer to the desired state than any of the corrected ones.

  6. Improving transcriptome assembly through error correction of high-throughput sequence reads

    Directory of Open Access Journals (Sweden)

    Matthew D. MacManes

    2013-07-01

    Full Text Available The study of functional genomics, particularly in non-model organisms, has been dramatically improved over the last few years by the use of transcriptomes and RNAseq. While these studies are potentially extremely powerful, a computationally intensive procedure, the de novo construction of a reference transcriptome, must be completed as a prerequisite to further analyses. The accurate reference is critically important, as all downstream steps, including estimating transcript abundance, depend on the construction of an accurate reference. Though a substantial amount of research has been done on assembly, only recently have the pre-assembly procedures been studied in detail. Specifically, several stand-alone error correction modules have been reported on and, while they have been shown to be effective in reducing errors at the level of sequencing reads, how error correction impacts assembly accuracy is largely unknown. Here, we show, via use of a simulated and an empirical dataset, that applying error correction to sequencing reads has significant positive effects on assembly accuracy, and should be applied to all datasets. A complete collection of commands which will allow for the production of Reptile corrected reads is available at https://github.com/macmanes/error_correction/tree/master/scripts and as File S1.

  7. A Novel Image Tag Completion Method Based on Convolutional Neural Transformation

    KAUST Repository

    Geng, Yanyan; Zhang, Guohui; Li, Weizhi; Gu, Yi; Liang, Ru-Ze; Liang, Gaoyuan; Wang, Jingbin; Wu, Yanbin; Patil, Nitin; Wang, Jing-Yan

    2017-01-01

    In the problems of image retrieval and annotation, complete textual tag lists of images play critical roles. However, in real-world applications, the image tags are usually incomplete, thus it is important to learn the complete tags for images. In this paper, we study the problem of image tag completion and propose a novel method for this problem based on a popular image representation method, the convolutional neural network (CNN). The method estimates the complete tags from the convolutional filtering outputs of images based on a linear predictor. The CNN parameters, the linear predictor, and the complete tags are learned jointly by our method. We build a minimization problem to encourage consistency between the complete tags and the available incomplete tags, reduce the estimation error, and reduce the model complexity. An iterative algorithm is developed to solve the minimization problem. Experiments over benchmark image data sets show its effectiveness.

  8. A Novel Image Tag Completion Method Based on Convolutional Neural Transformation

    KAUST Repository

    Geng, Yanyan

    2017-10-24

    In the problems of image retrieval and annotation, complete textual tag lists of images play critical roles. However, in real-world applications, the image tags are usually incomplete, thus it is important to learn the complete tags for images. In this paper, we study the problem of image tag completion and propose a novel method for this problem based on a popular image representation method, the convolutional neural network (CNN). The method estimates the complete tags from the convolutional filtering outputs of images based on a linear predictor. The CNN parameters, the linear predictor, and the complete tags are learned jointly by our method. We build a minimization problem to encourage consistency between the complete tags and the available incomplete tags, reduce the estimation error, and reduce the model complexity. An iterative algorithm is developed to solve the minimization problem. Experiments over benchmark image data sets show its effectiveness.
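
    An objective of the general shape described in the abstract (consistency with the observed incomplete tags, plus estimation error, plus a complexity penalty) could be written as follows. This is only an illustrative formulation; the exact objective in the paper may differ:

        \min_{T,\, W,\, \theta} \;
        \bigl\| \Omega \odot ( T - \widehat{T} ) \bigr\|_F^2
        \;+\; \lambda \, \bigl\| T - W g_\theta(X) \bigr\|_F^2
        \;+\; \gamma \bigl( \| W \|_F^2 + \| \theta \|_2^2 \bigr)

    where T-hat holds the observed incomplete tags, Omega masks the available entries, g_theta(X) denotes the convolutional filtering outputs of the images X, W is the linear predictor, and T holds the complete tags being learned; lambda and gamma trade off the three terms.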

  9. Knowledge Based 3d Building Model Recognition Using Convolutional Neural Networks from LIDAR and Aerial Imageries

    Science.gov (United States)

    Alidoost, F.; Arefi, H.

    2016-06-01

    In recent years, with the development of high-resolution data acquisition technologies, many different approaches and algorithms have been presented to extract accurate and timely updated 3D models of buildings, a key element of city structures, for numerous applications in urban mapping. In this paper, a novel model-based approach is proposed for automatic recognition of building roof models, such as flat, gable, hip, and pyramid-hip roofs, based on deep structures for hierarchical learning of features that are extracted from both LiDAR and aerial ortho-photos. The main steps of this approach include building segmentation, feature extraction and learning, and finally building roof labeling in a supervised, pre-trained Convolutional Neural Network (CNN) framework, to obtain an automatic recognition system for various types of buildings over an urban area. In this framework, the height information provides invariant geometric features for the convolutional neural network to localize the boundary of each individual roof. A CNN is a kind of feed-forward neural network built on the multilayer perceptron concept, which consists of a number of convolutional and subsampling layers in an adaptable structure; it is widely used in pattern recognition and object detection applications. Since the training dataset is a small library of labeled models for different shapes of roofs, the computation time of learning can be decreased significantly by using pre-trained models. The experimental results highlight the effectiveness of the deep learning approach for detecting and extracting the pattern of building roofs automatically, considering the complementary nature of height and RGB information.

  10. Gold price effect on stock market: A Markov switching vector error correction approach

    Science.gov (United States)

    Wai, Phoong Seuk; Ismail, Mohd Tahir; Kun, Sek Siok

    2014-06-01

    Gold is a popular precious metal whose demand is driven not only by practical use but also by its popularity as an investment commodity. Since the stock market reflects a country's growth, the effect of the gold price on stock market behavior is of interest in this study. Markov Switching Vector Error Correction Models are applied to analyze the relationship between the gold price and stock market changes, since real financial data always exhibit regime switching, jumps or missing data through time. Besides, there are numerous specifications of Markov Switching Vector Error Correction Models, and this paper compares the intercept-adjusted Markov Switching Vector Error Correction Model and the intercept-adjusted heteroskedasticity Markov Switching Vector Error Correction Model to determine the best model representation for capturing the transitions of the time series. Results show that the gold price has a positive relationship with the Malaysian, Thai and Indonesian stock markets, and that a two-regime intercept-adjusted heteroskedasticity Markov Switching Vector Error Correction Model provides a more significant and reliable result compared to the intercept-adjusted Markov Switching Vector Error Correction Models.

  11. Detection and correction of prescription errors by an emergency department pharmacy service.

    Science.gov (United States)

    Stasiak, Philip; Afilalo, Marc; Castelino, Tanya; Xue, Xiaoqing; Colacone, Antoinette; Soucy, Nathalie; Dankoff, Jerrald

    2014-05-01

    Emergency departments (EDs) are recognized as a high-risk setting for prescription errors. Pharmacist involvement may be important in reviewing prescriptions to identify and correct errors. The objectives of this study were to describe the frequency and type of prescription errors detected by pharmacists in EDs, determine the proportion of errors that could be corrected, and identify factors associated with prescription errors. This prospective observational study was conducted in a tertiary care teaching ED on 25 consecutive weekdays. Pharmacists reviewed all documented prescriptions and flagged and corrected errors for patients in the ED. We collected information on patient demographics, details on prescription errors, and the pharmacists' recommendations. A total of 3,136 ED prescriptions were reviewed. The proportion of prescriptions in which a pharmacist identified an error was 3.2% (99 of 3,136; 95% confidence interval [CI] 2.5-3.8). The types of identified errors were wrong dose (28 of 99, 28.3%), incomplete prescription (27 of 99, 27.3%), wrong frequency (15 of 99, 15.2%), wrong drug (11 of 99, 11.1%), wrong route (1 of 99, 1.0%), and other (17 of 99, 17.2%). The pharmacy service intervened and corrected 78 (78 of 99, 78.8%) errors. Factors associated with prescription errors were patient age over 65 (odds ratio [OR] 2.34; 95% CI 1.32-4.13), prescriptions with more than one medication (OR 5.03; 95% CI 2.54-9.96), and those written by emergency medicine residents compared to attending emergency physicians (OR 2.21, 95% CI 1.18-4.14). Pharmacists in a tertiary ED are able to correct the majority of prescriptions in which they find errors. Errors are more likely to be identified in prescriptions written for older patients, those containing multiple medication orders, and those prescribed by emergency residents.

  12. Deep learning with convolutional neural networks: a resource for the control of robotic prosthetic hands via electromyography

    Directory of Open Access Journals (Sweden)

    Manfredo Atzori

    2016-09-01

    Full Text Available Motivation: Natural control methods based on surface electromyography and pattern recognition are promising for hand prosthetics. However, the control robustness offered by scientific research is still not sufficient for many real-life applications, and commercial prostheses are at best capable of offering natural control for only a few movements. Objective: In recent years deep learning has revolutionized several fields of machine learning, including computer vision and speech recognition. Our objective is to test its capabilities for the natural control of robotic hands via surface electromyography by providing a baseline on a large number of intact and amputated subjects. Methods: We tested convolutional networks for the classification of an average of 50 hand movements in 67 intact subjects and 11 hand-amputated subjects. The simple architecture of the neural network allowed us to run several tests evaluating the effect of pre-processing, layer architecture, data augmentation and optimization. The classification results are compared with a set of classical classification methods applied to the same datasets. Results: The classification accuracy obtained with convolutional neural networks using the proposed architecture is higher than the average results obtained with the classical classification methods, but lower than the results obtained with the best reference methods in our tests. Significance: The results show that convolutional neural networks with a very simple architecture can produce accuracy comparable to the average classical classification methods. They show that several factors (including pre-processing, the architecture of the net and the optimization parameters) can be fundamental for the analysis of surface electromyography data. Finally, the results suggest that deeper and more complex networks may increase dexterous control robustness, thus contributing to bridging the gap between the market and scientific research.

  13. ERRORS AND CORRECTIVE FEEDBACK IN WRITING: IMPLICATIONS TO OUR CLASSROOM PRACTICES

    Directory of Open Access Journals (Sweden)

    Maria Corazon Saturnina A Castro

    2017-10-01

    Full Text Available Error correction is one of the most contentious and misunderstood issues in both foreign and second language teaching. Despite varying positions on the effectiveness of error correction or the lack of it, corrective feedback remains an institution in writing classes. Given this context, this action research endeavors to survey prevalent attitudes of teachers and students toward corrective feedback and examine their implications for classroom practices. This paper poses the major problem: How do teachers’ perspectives on corrective feedback match the students’ views and expectations about error treatment in their writing? Professors of the University of the Philippines who teach composition classes and over a hundred students enrolled in their classes were surveyed. Results showed that there are differing perceptions of teachers and students regarding corrective feedback. These oppositions must be addressed as they have implications for current pedagogical practices, which include constructing and establishing appropriate lesson goals, using alternative corrective strategies, teaching grammar points even at the tertiary level, and further understanding the learning process.

  14. Comparing Local Descriptors and Bags of Visual Words to Deep Convolutional Neural Networks for Plant Recognition

    NARCIS (Netherlands)

    Pawara, Pornntiwa; Okafor, Emmanuel; Surinta, Olarik; Schomaker, Lambertus; Wiering, Marco

    2017-01-01

    The use of machine learning and computer vision methods for recognizing different plants from images has attracted lots of attention from the community. This paper aims at comparing local feature descriptors and bags of visual words with different classifiers to deep convolutional neural networks

  15. Lunar Circular Structure Classification from Chang 'e 2 High Resolution Lunar Images with Convolutional Neural Network

    Science.gov (United States)

    Zeng, X. G.; Liu, J. J.; Zuo, W.; Chen, W. L.; Liu, Y. X.

    2018-04-01

    Circular structures are widely distributed across the lunar surface. The most typical of these are lunar impact craters and lunar domes. In this work, we attempt to use a Convolutional Neural Network to classify lunar circular structures from lunar images.

  16. Segmentation of histological images and fibrosis identification with a convolutional neural network.

    Science.gov (United States)

    Fu, Xiaohang; Liu, Tong; Xiong, Zhaohan; Smaill, Bruce H; Stiles, Martin K; Zhao, Jichao

    2018-05-16

    Segmentation of histological images is one of the most crucial tasks for many biomedical analyses involving quantification of certain tissue types, such as fibrosis via Masson's trichrome staining. However, challenges are posed by the high variability and complexity of structural features in such images, in addition to imaging artifacts. Further, the conventional approach of manual thresholding is labor-intensive, and highly sensitive to inter- and intra-image intensity variations. An accurate and robust automated segmentation method is of high interest. We propose and evaluate an elegant convolutional neural network (CNN) designed for segmentation of histological images, particularly those with Masson's trichrome stain. The network comprises 11 successive convolutional - rectified linear unit - batch normalization layers. It outperformed state-of-the-art CNNs on a dataset of cardiac histological images (labeling fibrosis, myocytes, and background) with a Dice similarity coefficient of 0.947. With 100 times fewer (only 300,000) trainable parameters than the state-of-the-art, our CNN is less susceptible to overfitting, and is efficient. Additionally, it retains image resolution from input to output, captures fine-grained details, and can be trained end-to-end smoothly. To the best of our knowledge, this is the first deep CNN tailored to the problem of concern, and may potentially be extended to solve similar segmentation tasks to facilitate investigations into pathology and clinical treatment. Copyright © 2018. Published by Elsevier Ltd.
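
    A minimal PyTorch sketch of the layer pattern described above (11 successive convolution - ReLU - batch-normalization layers with padding chosen so the output keeps the input resolution, followed by a per-pixel classifier) is given below; the channel width, kernel size and class count are assumptions rather than the published configuration.

        import torch
        import torch.nn as nn

        class SegCNN(nn.Module):
            """Sketch of an 11-layer conv-ReLU-BN stack that keeps the input resolution."""
            def __init__(self, in_ch=3, num_classes=3, width=32, depth=11):
                super().__init__()
                layers, ch = [], in_ch
                for _ in range(depth):
                    layers += [nn.Conv2d(ch, width, kernel_size=3, padding=1),  # padding keeps H x W
                               nn.ReLU(inplace=True),
                               nn.BatchNorm2d(width)]
                    ch = width
                self.features = nn.Sequential(*layers)
                self.classifier = nn.Conv2d(width, num_classes, kernel_size=1)  # per-pixel labels

            def forward(self, x):
                return self.classifier(self.features(x))

        net = SegCNN()
        out = net(torch.randn(1, 3, 256, 256))   # -> (1, 3, 256, 256): fibrosis / myocytes / background

    Because every layer uses 3x3 kernels with padding 1 and no pooling, the output score map has the same height and width as the input image, which is what allows pixel-by-pixel labeling of fibrosis, myocytes and background.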

  17. Fully automatic acute ischemic lesion segmentation in DWI using convolutional neural networks

    Directory of Open Access Journals (Sweden)

    Liang Chen

    2017-01-01

    Full Text Available Stroke is an acute cerebral vascular disease, which is likely to cause long-term disabilities and death. Acute ischemic lesions occur in most stroke patients. These lesions are treatable under accurate diagnosis and treatments. Although diffusion-weighted MR imaging (DWI) is sensitive to these lesions, localizing and quantifying them manually is costly and challenging for clinicians. In this paper, we propose a novel framework to automatically segment stroke lesions in DWI. Our framework consists of two convolutional neural networks (CNNs): one is an ensemble of two DeconvNets (Noh et al., 2015), which is the EDD Net; the second CNN is the multi-scale convolutional label evaluation net (MUSCLE Net), which aims to evaluate the lesions detected by the EDD Net in order to remove potential false positives. To the best of our knowledge, it is the first attempt to solve this problem and using both CNNs achieves very good results. Furthermore, we study the network architectures and key configurations in detail to ensure the best performance. It is validated on a large dataset comprising clinically acquired DW images from 741 subjects. The mean Dice coefficient obtained is 0.67 overall. The mean Dice scores based on subjects with only small and large lesions are 0.61 and 0.83, respectively. The lesion detection rate achieved is 0.94.

  18. Fully automatic acute ischemic lesion segmentation in DWI using convolutional neural networks.

    Science.gov (United States)

    Chen, Liang; Bentley, Paul; Rueckert, Daniel

    2017-01-01

    Stroke is an acute cerebral vascular disease, which is likely to cause long-term disabilities and death. Acute ischemic lesions occur in most stroke patients. These lesions are treatable under accurate diagnosis and treatments. Although diffusion-weighted MR imaging (DWI) is sensitive to these lesions, localizing and quantifying them manually is costly and challenging for clinicians. In this paper, we propose a novel framework to automatically segment stroke lesions in DWI. Our framework consists of two convolutional neural networks (CNNs): one is an ensemble of two DeconvNets (Noh et al., 2015), which is the EDD Net; the second CNN is the multi-scale convolutional label evaluation net (MUSCLE Net), which aims to evaluate the lesions detected by the EDD Net in order to remove potential false positives. To the best of our knowledge, it is the first attempt to solve this problem and using both CNNs achieves very good results. Furthermore, we study the network architectures and key configurations in detail to ensure the best performance. It is validated on a large dataset comprising clinically acquired DW images from 741 subjects. The mean Dice coefficient obtained is 0.67 overall. The mean Dice scores based on subjects with only small and large lesions are 0.61 and 0.83, respectively. The lesion detection rate achieved is 0.94.
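
    The Dice similarity coefficient quoted in these two records (0.67 overall, 0.61 and 0.83 for subjects with small and large lesions) is the standard overlap measure between a predicted lesion mask and the reference mask; a minimal NumPy version of the metric:

        import numpy as np

        def dice_coefficient(pred, truth, eps=1e-8):
            """Dice = 2 |P intersect T| / (|P| + |T|) for binary lesion masks."""
            pred = pred.astype(bool)
            truth = truth.astype(bool)
            intersection = np.logical_and(pred, truth).sum()
            return 2.0 * intersection / (pred.sum() + truth.sum() + eps)

        # toy example: two overlapping "lesions"
        pred = np.zeros((64, 64), dtype=np.uint8);  pred[10:30, 10:30] = 1
        truth = np.zeros((64, 64), dtype=np.uint8); truth[15:35, 15:35] = 1
        print(dice_coefficient(pred, truth))   # approx. 0.56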

  19. Critical Neural Substrates for Correcting Unexpected Trajectory Errors and Learning from Them

    Science.gov (United States)

    Mutha, Pratik K.; Sainburg, Robert L.; Haaland, Kathleen Y.

    2011-01-01

    Our proficiency at any skill is critically dependent on the ability to monitor our performance, correct errors and adapt subsequent movements so that errors are avoided in the future. In this study, we aimed to dissociate the neural substrates critical for correcting unexpected trajectory errors and learning to adapt future movements based on…

  20. Entanglement and Quantum Error Correction with Superconducting Qubits

    Science.gov (United States)

    Reed, Matthew

    2015-03-01

    Quantum information science seeks to take advantage of the properties of quantum mechanics to manipulate information in ways that are not otherwise possible. Quantum computation, for example, promises to solve certain problems in days that would take a conventional supercomputer the age of the universe to decipher. This power does not come without a cost, however, as quantum bits are inherently more susceptible to errors than their classical counterparts. Fortunately, it is possible to redundantly encode information in several entangled qubits, making it robust to decoherence and control imprecision with quantum error correction. I studied one possible physical implementation for quantum computing, employing the ground and first excited quantum states of a superconducting electrical circuit as a quantum bit. These ``transmon'' qubits are dispersively coupled to a superconducting resonator used for readout, control, and qubit-qubit coupling in the cavity quantum electrodynamics (cQED) architecture. In this talk I will give a general introduction to quantum computation and the superconducting technology that seeks to achieve it before explaining some of the specific results reported in my thesis. One major component is the first realization of three-qubit quantum error correction in a solid state device, where we encode one logical quantum bit in three entangled physical qubits and detect and correct phase- or bit-flip errors using a three-qubit Toffoli gate. My thesis is available at arXiv:1311.6759.

  1. Automatic Seismic-Event Classification with Convolutional Neural Networks.

    Science.gov (United States)

    Bueno Rodriguez, A.; Titos Luzón, M.; Garcia Martinez, L.; Benitez, C.; Ibáñez, J. M.

    2017-12-01

    Active volcanoes exhibit a wide range of seismic signals, providing vast amounts of unlabelled volcano-seismic data that can be analyzed through the lens of artificial intelligence. However, obtaining high-quality labelled data is time-consuming and expensive. Deep neural networks can process data in their raw form, compute high-level features and provide a better representation of the input data distribution. These systems can be deployed to classify seismic data at scale, enhance current early-warning systems and build extensive seismic catalogs. In this research, we aim to classify spectrograms from seven different seismic events registered at "Volcán de Fuego" (Colima, Mexico), during four eruptive periods. Our approach is based on convolutional neural networks (CNNs), a sub-type of deep neural networks that can exploit the grid structure of the data. Volcano-seismic signals can be mapped into a grid-like structure using the spectrogram: a representation of the temporal evolution in terms of time and frequency. Spectrograms were computed from the data using Hamming windows of 4 s length, 2.5 s overlap and 128-point FFT resolution. Results are compared to deep neural networks, random forest and SVMs. Experiments show that CNNs can exploit temporal and frequency information, attaining a classification accuracy of 93%, similar to deep networks (91%) but outperforming SVM and random forest. These results empirically show that CNNs are powerful models for classifying a wide range of volcano-seismic signals, and achieve good generalization. Furthermore, volcano-seismic spectrograms contain useful discriminative information for the CNN, as higher layers of the network combine high-level features computed for each frequency band, helping to detect simultaneous events in time. Being at the intersection of deep learning and geophysics, this research enables future studies of how CNNs can be used in volcano monitoring to accurately determine the detection and
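
    As an illustration of the pre-processing stage described above, a log-power spectrogram with 4 s Hamming windows, 2.5 s overlap and a 128-point FFT can be computed with SciPy as follows; the 25 Hz sampling rate and the synthetic trace are assumptions made only so that the 4 s window fits within the 128-point FFT.

        import numpy as np
        from scipy.signal import spectrogram

        fs = 25.0                                   # assumed sampling rate (Hz)
        t = np.arange(0, 120, 1 / fs)               # two minutes of synthetic signal
        trace = np.sin(2 * np.pi * 3 * t) + 0.5 * np.random.randn(t.size)

        nperseg = int(4.0 * fs)                     # 4 s Hamming window
        noverlap = int(2.5 * fs)                    # 2.5 s overlap
        f, tt, Sxx = spectrogram(trace, fs=fs, window="hamming",
                                 nperseg=nperseg, noverlap=noverlap, nfft=128)

        log_spec = 10 * np.log10(Sxx + 1e-12)       # the grid-like input a CNN can consume
        print(log_spec.shape)                       # (frequency bins, time frames)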

  2. Drug-Drug Interaction Extraction via Convolutional Neural Networks

    Directory of Open Access Journals (Sweden)

    Shengyu Liu

    2016-01-01

    Full Text Available Drug-drug interaction (DDI) extraction, as a typical relation extraction task in natural language processing (NLP), has always attracted great attention. Most state-of-the-art DDI extraction systems are based on support vector machines (SVM) with a large number of manually defined features. Recently, convolutional neural networks (CNN), a robust machine learning method which requires almost no manually defined features, has exhibited great potential for many NLP tasks. It is therefore worth employing CNN for DDI extraction, which had never been investigated before. We proposed a CNN-based method for DDI extraction. Experiments conducted on the 2013 DDIExtraction challenge corpus demonstrate that CNN is a good choice for DDI extraction. The CNN-based DDI extraction method achieves an F-score of 69.75%, which outperforms the existing best performing method by 2.75%.

  3. Error Field Correction in DIII-D Ohmic Plasmas With Either Handedness

    International Nuclear Information System (INIS)

    Park, Jong-Kyu; Schaffer, Michael J.; La Haye, Robert J.; Scoville, Timothy J.; Menard, Jonathan E.

    2011-01-01

    Error field correction results in DIII-D plasmas are presented in various configurations. In both left-handed and right-handed plasma configurations, where the intrinsic error fields become different due to the opposite helical twist (handedness) of the magnetic field, the optimal error correction currents and the toroidal phases of internal (I)-coils are empirically established. Applications of the Ideal Perturbed Equilibrium Code to these results demonstrate that the field component to be minimized is not the resonant component of the external field, but the total field including ideal plasma responses. Consistency between experiment and theory has been greatly improved along with the understanding of ideal plasma responses, but non-ideal plasma responses still need to be understood to achieve reliable predictability in tokamak error field correction.

  4. Convolutional neural network features based change detection in satellite images

    Science.gov (United States)

    Mohammed El Amin, Arabi; Liu, Qingjie; Wang, Yunhong

    2016-07-01

    With the popular use of high resolution remote sensing (HRRS) satellite images, a huge research effort has been devoted to the change detection (CD) problem. An effective feature selection method can significantly boost the final result. While it has proven difficult to hand-design features that effectively capture high- and mid-level representations, recent developments in machine learning (deep learning) avoid this problem by learning hierarchical representations in an unsupervised manner directly from data, without human intervention. In this letter, we propose approaching the change detection problem from a feature learning perspective. A novel deep Convolutional Neural Network (CNN) features based HR satellite image change detection method is proposed. The main guideline is to produce a change detection map directly from two images using a pretrained CNN, thereby avoiding the limited performance of hand-crafted features. Firstly, CNN features are extracted through different convolutional layers. Then, a concatenation step is performed after a normalization step, resulting in a single higher-dimensional feature map. Finally, a change map is computed using the pixel-wise Euclidean distance. Our method has been validated on real bitemporal HRRS satellite images through qualitative and quantitative analyses. The results obtained confirm the interest of the proposed method.
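
    The last two steps summarized above (concatenating normalized CNN feature maps of the two dates and taking the pixel-wise Euclidean distance as the change map) reduce to a few lines of NumPy; the random arrays below merely stand in for the pre-trained CNN activations.

        import numpy as np

        # Stand-ins for the concatenated, normalized CNN feature maps of the two dates:
        # shape (height, width, channels)
        feat_t1 = np.random.rand(128, 128, 256)
        feat_t2 = np.random.rand(128, 128, 256)

        def l2_normalize(f, eps=1e-12):
            return f / (np.linalg.norm(f, axis=-1, keepdims=True) + eps)

        feat_t1, feat_t2 = l2_normalize(feat_t1), l2_normalize(feat_t2)

        # Pixel-wise Euclidean distance -> change intensity map
        change_map = np.linalg.norm(feat_t1 - feat_t2, axis=-1)

        # A simple threshold (e.g. a fixed percentile) then yields a binary change mask
        binary_change = change_map > np.percentile(change_map, 95)
        print(change_map.shape, binary_change.mean())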

  5. Convolutional Neural Networks for Medical Image Analysis: Full Training or Fine Tuning?

    OpenAIRE

    Tajbakhsh, Nima; Shin, Jae Y.; Gurudu, Suryakanth R.; Hurst, R. Todd; Kendall, Christopher B.; Gotway, Michael B.; Liang, Jianming

    2017-01-01

    Training a deep convolutional neural network (CNN) from scratch is difficult because it requires a large amount of labeled training data and a great deal of expertise to ensure proper convergence. A promising alternative is to fine-tune a CNN that has been pre-trained using, for instance, a large set of labeled natural images. However, the substantial differences between natural and medical images may advise against such knowledge transfer. In this paper, we seek to answer the following centr...

  6. Ordinal convolutional neural networks for predicting RDoC positive valence psychiatric symptom severity scores.

    Science.gov (United States)

    Rios, Anthony; Kavuluru, Ramakanth

    2017-11-01

    The CEGS N-GRID 2016 Shared Task in Clinical Natural Language Processing (NLP) provided a set of 1000 neuropsychiatric notes to participants as part of a competition to predict psychiatric symptom severity scores. This paper summarizes our methods, results, and experiences based on our participation in the second track of the shared task. Classical methods of text classification usually fall into one of three problem types: binary, multi-class, and multi-label classification. In this effort, we study ordinal regression problems with text data where misclassifications are penalized differently based on how far apart the ground truth and model predictions are on the ordinal scale. Specifically, we present our entries (methods and results) in the N-GRID shared task in predicting research domain criteria (RDoC) positive valence ordinal symptom severity scores (absent, mild, moderate, and severe) from psychiatric notes. We propose a novel convolutional neural network (CNN) model designed to handle ordinal regression tasks on psychiatric notes. Broadly speaking, our model combines an ordinal loss function, a CNN, and conventional feature engineering (wide features) into a single model which is learned end-to-end. Given interpretability is an important concern with nonlinear models, we apply a recent approach called locally interpretable model-agnostic explanation (LIME) to identify important words that lead to instance specific predictions. Our best model entered into the shared task placed third among 24 teams and scored a macro mean absolute error (MMAE) based normalized score (100·(1-MMAE)) of 83.86. Since the competition, we improved our score (using basic ensembling) to 85.55, comparable with the winning shared task entry. Applying LIME to model predictions, we demonstrate the feasibility of instance specific prediction interpretation by identifying words that led to a particular decision. In this paper, we present a method that successfully uses wide features and
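
    The exact ordinal loss combined with the CNN above is not spelled out in this record, but a common and simple surrogate for ordinal severity prediction is the expected absolute distance between the predicted class distribution and the true level, which directly targets MAE-style metrics such as the shared task's MMAE; a hedged PyTorch sketch:

        import torch
        import torch.nn.functional as F

        def expected_mae_loss(logits, targets, num_classes=4):
            """Expected ordinal distance: sum_c p(c|x) * |c - y|.
            One simple ordinal surrogate, not necessarily the loss used in the paper."""
            probs = F.softmax(logits, dim=1)                                   # (batch, C)
            classes = torch.arange(num_classes, dtype=torch.float32, device=logits.device)
            distance = (classes.unsqueeze(0) - targets.float().unsqueeze(1)).abs()  # (batch, C)
            return (probs * distance).sum(dim=1).mean()

        # toy usage: 4 severity levels -- absent, mild, moderate, severe
        logits = torch.randn(8, 4, requires_grad=True)
        targets = torch.randint(0, 4, (8,))
        loss = expected_mae_loss(logits, targets)
        loss.backward()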

  7. Correction of clock errors in seismic data using noise cross-correlations

    Science.gov (United States)

    Hable, Sarah; Sigloch, Karin; Barruol, Guilhem; Hadziioannou, Céline

    2017-04-01

    Correct and verifiable timing of seismic records is crucial for most seismological applications. For seismic land stations, frequent synchronization of the internal station clock with a GPS signal should ensure accurate timing, but loss of GPS synchronization is a common occurrence, especially for remote, temporary stations. In such cases, retrieval of clock timing has been a long-standing problem. The same timing problem applies to Ocean Bottom Seismometers (OBS), where no GPS signal can be received during deployment and only two GPS synchronizations can be attempted upon deployment and recovery. If successful, a skew correction is usually applied, where the final timing deviation is interpolated linearly across the entire operation period. If GPS synchronization upon recovery fails, then even this simple and unverified, first-order correction is not possible. In recent years, the usage of cross-correlation functions (CCFs) of ambient seismic noise has been demonstrated as a clock-correction method for certain network geometries. We demonstrate the great potential of this technique for island stations and OBS that were installed in the course of the Réunion Hotspot and Upper Mantle - Réunions Unterer Mantel (RHUM-RUM) project in the western Indian Ocean. Four stations on the island La Réunion were affected by clock errors of up to several minutes due to a missing GPS signal. CCFs are calculated for each day and compared with a reference cross-correlation function (RCF), which is usually the average of all CCFs. The clock error of each day is then determined from the measured shift between the daily CCFs and the RCF. To improve the accuracy of the method, CCFs are computed for several land stations and all three seismic components. Averaging over these station pairs and their 9 component pairs reduces the standard deviation of the clock errors by a factor of 4 (from 80 ms to 20 ms). This procedure permits a continuous monitoring of clock errors where small clock
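
    The central measurement described above, the time shift of a daily noise cross-correlation function relative to the reference CCF, can be estimated from the peak of their cross-correlation; the sketch below uses synthetic CCFs and an assumed 20 Hz sampling of the correlation functions.

        import numpy as np

        def clock_shift(daily_ccf, reference_ccf, dt):
            """Estimate the time shift (s) of daily_ccf relative to reference_ccf
            from the peak of their cross-correlation."""
            cc = np.correlate(daily_ccf, reference_ccf, mode="full")
            lag_samples = np.argmax(cc) - (len(reference_ccf) - 1)
            return lag_samples * dt

        # synthetic example: a reference CCF and a copy delayed by 0.25 s
        dt = 0.05                                    # assumed 20 Hz sampling of the CCF
        t = np.arange(-50, 50, dt)
        reference = np.exp(-((t - 10) ** 2) / 4.0)   # a single "surface-wave" arrival
        daily = np.exp(-((t - 10 - 0.25) ** 2) / 4.0)

        print(clock_shift(daily, reference, dt))     # approx. +0.25 s clock error for that day

    In practice the shift would be averaged over many station pairs and component pairs, which is what reduces the scatter of the daily clock-error estimates in the record above.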

  8. Implementation of random set-up errors in Monte Carlo calculated dynamic IMRT treatment plans

    International Nuclear Information System (INIS)

    Stapleton, S; Zavgorodni, S; Popescu, I A; Beckham, W A

    2005-01-01

    The fluence-convolution method for incorporating random set-up errors (RSE) into the Monte Carlo treatment planning dose calculations was previously proposed by Beckham et al, and it was validated for open field radiotherapy treatments. This study confirms the applicability of the fluence-convolution method for dynamic intensity modulated radiotherapy (IMRT) dose calculations and evaluates the impact of set-up uncertainties on a clinical IMRT dose distribution. BEAMnrc and DOSXYZnrc codes were used for Monte Carlo calculations. A sliding window IMRT delivery was simulated using a dynamic multi-leaf collimator (DMLC) transport model developed by Keall et al. The dose distributions were benchmarked for dynamic IMRT fields using extended dose range (EDR) film, accumulating the dose from 16 subsequent fractions shifted randomly. Agreement of calculated and measured relative dose values was well within statistical uncertainty. A clinical seven field sliding window IMRT head and neck treatment was then simulated and the effects of random set-up errors (standard deviation of 2 mm) were evaluated. The dose-volume histograms calculated in the PTV with and without corrections for RSE showed only small differences indicating a reduction of the volume of high dose region due to set-up errors. As well, it showed that adequate coverage of the PTV was maintained when RSE was incorporated. Slice-by-slice comparison of the dose distributions revealed differences of up to 5.6%. The incorporation of set-up errors altered the position of the hot spot in the plan. This work demonstrated validity of implementation of the fluence-convolution method to dynamic IMRT Monte Carlo dose calculations. It also showed that accounting for the set-up errors could be essential for correct identification of the value and position of the hot spot

  9. Implementation of random set-up errors in Monte Carlo calculated dynamic IMRT treatment plans

    Science.gov (United States)

    Stapleton, S.; Zavgorodni, S.; Popescu, I. A.; Beckham, W. A.

    2005-02-01

    The fluence-convolution method for incorporating random set-up errors (RSE) into the Monte Carlo treatment planning dose calculations was previously proposed by Beckham et al, and it was validated for open field radiotherapy treatments. This study confirms the applicability of the fluence-convolution method for dynamic intensity modulated radiotherapy (IMRT) dose calculations and evaluates the impact of set-up uncertainties on a clinical IMRT dose distribution. BEAMnrc and DOSXYZnrc codes were used for Monte Carlo calculations. A sliding window IMRT delivery was simulated using a dynamic multi-leaf collimator (DMLC) transport model developed by Keall et al. The dose distributions were benchmarked for dynamic IMRT fields using extended dose range (EDR) film, accumulating the dose from 16 subsequent fractions shifted randomly. Agreement of calculated and measured relative dose values was well within statistical uncertainty. A clinical seven field sliding window IMRT head and neck treatment was then simulated and the effects of random set-up errors (standard deviation of 2 mm) were evaluated. The dose-volume histograms calculated in the PTV with and without corrections for RSE showed only small differences indicating a reduction of the volume of high dose region due to set-up errors. As well, it showed that adequate coverage of the PTV was maintained when RSE was incorporated. Slice-by-slice comparison of the dose distributions revealed differences of up to 5.6%. The incorporation of set-up errors altered the position of the hot spot in the plan. This work demonstrated validity of implementation of the fluence-convolution method to dynamic IMRT Monte Carlo dose calculations. It also showed that accounting for the set-up errors could be essential for correct identification of the value and position of the hot spot.
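
    The fluence-convolution idea referred to in both records amounts to blurring the incident fluence map with the Gaussian distribution of the random set-up error (standard deviation 2 mm here) before the Monte Carlo dose calculation; a toy SciPy sketch, with the grid resolution and the example fluence map as assumptions:

        import numpy as np
        from scipy.ndimage import gaussian_filter

        pixel_mm = 1.0                        # assumed fluence-grid resolution (mm per pixel)
        sigma_mm = 2.0                        # standard deviation of the random set-up error
        sigma_px = sigma_mm / pixel_mm

        # toy sliding-window IMRT fluence map: a modulated rectangular field
        fluence = np.zeros((100, 100))
        fluence[30:70, 25:75] = 1.0
        fluence[45:55, 40:60] = 0.4           # a "cold" region produced by the DMLC

        # Fluence convolution: the expected fluence under Gaussian random set-up errors
        blurred_fluence = gaussian_filter(fluence, sigma=sigma_px, mode="constant")

        # blurred_fluence would then replace the static fluence in the Monte Carlo source model
        print(float(fluence.sum()), float(blurred_fluence.sum()))   # total fluence is (nearly) preserved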

  10. Appropriateness of Dropout Layers and Allocation of Their 0.5 Rates across Convolutional Neural Networks for CIFAR-10, EEACL26, and NORB Datasets

    OpenAIRE

    Romanuke Vadim V.

    2017-01-01

    A technique of DropOut for preventing overfitting of convolutional neural networks for image classification is considered in the paper. The goal is to find a rule of rationally allocating DropOut layers of 0.5 rate to maximise performance. To achieve the goal, two common network architectures are used having either 4 or 5 convolutional layers. Benchmarking is fulfilled with CIFAR-10, EEACL26, and NORB datasets. Initially, series of all admissible versions for allocation of DropOut layers are ...

  11. Finding Neutrinos in LArTPCs using Convolutional Neural Networks

    Science.gov (United States)

    Wongjirad, Taritree

    2017-09-01

    Deep learning algorithms, which have emerged over the last decade, are opening up new ways to analyze data for many particle physics experiments. MicroBooNE, which is a neutrino experiment at Fermilab, has been exploring the use of such algorithms, in particular, convolutional neural networks (CNNs). CNNs are the state-of-the-art method for a large class of problems involving the analysis of images. This makes CNNs an attractive approach for MicroBooNE, whose detector, a liquid argon time projection chamber (LArTPC), produces high-resolution images of particle interactions. In this talk, I will discuss the ways CNNs can be applied to tasks like neutrino interaction detection and particle identification in MicroBooNE and LArTPCs.

  12. Optimization and Experimentation of Dual-Mass MEMS Gyroscope Quadrature Error Correction Methods

    Directory of Open Access Journals (Sweden)

    Huiliang Cao

    2016-01-01

    Full Text Available This paper focuses on an optimal quadrature error correction method for the dual-mass MEMS gyroscope, in order to reduce the long term bias drift. It is known that the coupling stiffness and demodulation error are important elements causing bias drift. The coupling stiffness in dual-mass structures is analyzed. The experiment proves that the left and right masses’ quadrature errors are different, and the quadrature correction system should be arranged independently. The process leading to quadrature error is proposed, and the Charge Injecting Correction (CIC), Quadrature Force Correction (QFC) and Coupling Stiffness Correction (CSC) methods are introduced. The correction objects of these three methods are the quadrature error signal, force and the coupling stiffness, respectively. The three methods are investigated through control theory analysis, model simulation and circuit experiments, and the results support the theoretical analysis. The bias stability results based on CIC, QFC and CSC are 48 °/h, 9.9 °/h and 3.7 °/h, respectively, and this value is 38 °/h before quadrature error correction. The CSC method is proved to be the better method for quadrature correction, and it improves the Angle Random Walking (ARW) value, reducing it from 0.66 °/√h to 0.21 °/√h. The CSC system general test results show that it works well across the full temperature range, and the bias stabilities of the six groups’ output data are 3.8 °/h, 3.6 °/h, 3.4 °/h, 3.1 °/h, 3.0 °/h and 4.2 °/h, respectively, which proves the system has excellent repeatability.

  13. Optimization and Experimentation of Dual-Mass MEMS Gyroscope Quadrature Error Correction Methods.

    Science.gov (United States)

    Cao, Huiliang; Li, Hongsheng; Kou, Zhiwei; Shi, Yunbo; Tang, Jun; Ma, Zongmin; Shen, Chong; Liu, Jun

    2016-01-07

    This paper focuses on an optimal quadrature error correction method for the dual-mass MEMS gyroscope, in order to reduce the long term bias drift. It is known that the coupling stiffness and demodulation error are important elements causing bias drift. The coupling stiffness in dual-mass structures is analyzed. The experiment proves that the left and right masses' quadrature errors are different, and the quadrature correction system should be arranged independently. The process leading to quadrature error is proposed, and the Charge Injecting Correction (CIC), Quadrature Force Correction (QFC) and Coupling Stiffness Correction (CSC) methods are introduced. The correction objects of these three methods are the quadrature error signal, force and the coupling stiffness, respectively. The three methods are investigated through control theory analysis, model simulation and circuit experiments, and the results support the theoretical analysis. The bias stability results based on CIC, QFC and CSC are 48 °/h, 9.9 °/h and 3.7 °/h, respectively, and this value is 38 °/h before quadrature error correction. The CSC method is proved to be the better method for quadrature correction, and it improves the Angle Random Walking (ARW) value, reducing it from 0.66 °/√h to 0.21 °/√h. The CSC system general test results show that it works well across the full temperature range, and the bias stabilities of the six groups' output data are 3.8 °/h, 3.6 °/h, 3.4 °/h, 3.1 °/h, 3.0 °/h and 4.2 °/h, respectively, which proves the system has excellent repeatability.

  14. Optimization and Experimentation of Dual-Mass MEMS Gyroscope Quadrature Error Correction Methods

    Science.gov (United States)

    Cao, Huiliang; Li, Hongsheng; Kou, Zhiwei; Shi, Yunbo; Tang, Jun; Ma, Zongmin; Shen, Chong; Liu, Jun

    2016-01-01

    This paper focuses on an optimal quadrature error correction method for the dual-mass MEMS gyroscope, in order to reduce the long term bias drift. It is known that the coupling stiffness and demodulation error are important elements causing bias drift. The coupling stiffness in dual-mass structures is analyzed. The experiment proves that the left and right masses’ quadrature errors are different, and the quadrature correction system should be arranged independently. The process leading to quadrature error is proposed, and the Charge Injecting Correction (CIC), Quadrature Force Correction (QFC) and Coupling Stiffness Correction (CSC) methods are introduced. The correction objects of these three methods are the quadrature error signal, force and the coupling stiffness, respectively. The three methods are investigated through control theory analysis, model simulation and circuit experiments, and the results support the theoretical analysis. The bias stability results based on CIC, QFC and CSC are 48 °/h, 9.9 °/h and 3.7 °/h, respectively, and this value is 38 °/h before quadrature error correction. The CSC method is proved to be the better method for quadrature correction, and it improves the Angle Random Walking (ARW) value, reducing it from 0.66 °/√h to 0.21 °/√h. The CSC system general test results show that it works well across the full temperature range, and the bias stabilities of the six groups’ output data are 3.8 °/h, 3.6 °/h, 3.4 °/h, 3.1 °/h, 3.0 °/h and 4.2 °/h, respectively, which proves the system has excellent repeatability. PMID:26751455

  15. CNN-BLPred: a Convolutional neural network based predictor for β-Lactamases (BL) and their classes.

    Science.gov (United States)

    White, Clarence; Ismail, Hamid D; Saigo, Hiroto; Kc, Dukka B

    2017-12-28

    The β-Lactamase (BL) enzyme family is an important class of enzymes that plays a key role in bacterial resistance to antibiotics. As the newly identified number of BL enzymes is increasing daily, it is imperative to develop a computational tool to classify the newly identified BL enzymes into one of its classes. There are two types of classification of BL enzymes: Molecular Classification and Functional Classification. Existing computational methods only address Molecular Classification and the performance of these existing methods is unsatisfactory. We addressed the unsatisfactory performance of the existing methods by implementing a Deep Learning approach called Convolutional Neural Network (CNN). We developed CNN-BLPred, an approach for the classification of BL proteins. The CNN-BLPred uses Gradient Boosted Feature Selection (GBFS) in order to select the ideal feature set for each BL classification. Based on the rigorous benchmarking of CNN-BLPred using both leave-one-out cross-validation and independent test sets, CNN-BLPred performed better than the other existing algorithms. Compared with other architectures of CNN, Recurrent Neural Network, and Random Forest, the simple CNN architecture with only one convolutional layer performs the best. After feature extraction, we were able to remove ~95% of the 10,912 features using Gradient Boosted Trees. During 10-fold cross validation, we increased the accuracy of the classic BL predictions by 7%. We also increased the accuracy of Class A, Class B, Class C, and Class D performance by an average of 25.64%. The independent test results followed a similar trend. We implemented a deep learning algorithm known as Convolutional Neural Network (CNN) to develop a classifier for BL classification. Combined with feature selection on an exhaustive feature set and using balancing method such as Random Oversampling (ROS), Random Undersampling (RUS) and Synthetic Minority Oversampling Technique (SMOTE), CNN-BLPred performs

  16. Dissipative quantum error correction and application to quantum sensing with trapped ions.

    Science.gov (United States)

    Reiter, F; Sørensen, A S; Zoller, P; Muschik, C A

    2017-11-28

    Quantum-enhanced measurements hold the promise to improve high-precision sensing ranging from the definition of time standards to the determination of fundamental constants of nature. However, quantum sensors lose their sensitivity in the presence of noise. To protect them, the use of quantum error-correcting codes has been proposed. Trapped ions are an excellent technological platform for both quantum sensing and quantum error correction. Here we present a quantum error correction scheme that harnesses dissipation to stabilize a trapped-ion qubit. In our approach, always-on couplings to an engineered environment protect the qubit against spin-flips or phase-flips. Our dissipative error correction scheme operates in a continuous manner without the need to perform measurements or feedback operations. We show that the resulting enhanced coherence time translates into a significantly enhanced precision for quantum measurements. Our work constitutes a stepping stone towards the paradigm of self-correcting quantum information processing.

  17. 3D Convolutional Neural Network for Automatic Detection of Lung Nodules in Chest CT.

    Science.gov (United States)

    Hamidian, Sardar; Sahiner, Berkman; Petrick, Nicholas; Pezeshk, Aria

    2017-01-01

    Deep convolutional neural networks (CNNs) form the backbone of many state-of-the-art computer vision systems for classification and segmentation of 2D images. The same principles and architectures can be extended to three dimensions to obtain 3D CNNs that are suitable for volumetric data such as CT scans. In this work, we train a 3D CNN for automatic detection of pulmonary nodules in chest CT images using volumes of interest extracted from the LIDC dataset. We then convert the 3D CNN which has a fixed field of view to a 3D fully convolutional network (FCN) which can generate the score map for the entire volume efficiently in a single pass. Compared to the sliding window approach for applying a CNN across the entire input volume, the FCN leads to a nearly 800-fold speed-up, and thereby fast generation of output scores for a single case. This screening FCN is used to generate difficult negative examples that are used to train a new discriminant CNN. The overall system consists of the screening FCN for fast generation of candidate regions of interest, followed by the discrimination CNN.
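
    The conversion step described above, from a patch classifier with a fixed field of view to a fully convolutional network that scores a whole volume in one pass, comes down to recasting the fully connected layer as a 3D convolution whose kernel covers the final feature map; the PyTorch sketch below uses illustrative layer sizes, not the LIDC model.

        import torch
        import torch.nn as nn

        # A small 3D CNN classifier with a fixed 32^3 field of view (sizes are illustrative).
        features = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),   # 32 -> 16
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),  # 16 -> 8
        )
        fc = nn.Linear(32 * 8 * 8 * 8, 2)        # nodule vs. background

        # Equivalent fully convolutional head: the FC layer becomes an 8x8x8 convolution.
        fc_conv = nn.Conv3d(32, 2, kernel_size=8)
        with torch.no_grad():
            fc_conv.weight.copy_(fc.weight.view(2, 32, 8, 8, 8))
            fc_conv.bias.copy_(fc.bias)

        fcn = nn.Sequential(features, fc_conv)

        patch = torch.randn(1, 1, 32, 32, 32)
        volume = torch.randn(1, 1, 96, 96, 96)
        print(fcn(patch).shape)    # (1, 2, 1, 1, 1)   -- same score as the patch classifier
        print(fcn(volume).shape)   # (1, 2, 17, 17, 17) -- a dense score map in a single pass

    Because the convolutional head shares computation across overlapping positions, scoring the whole volume this way is far cheaper than sliding the original patch classifier over it, which is the source of the reported speed-up.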

  18. Atmospheric Error Correction of the Laser Beam Ranging

    Directory of Open Access Journals (Sweden)

    J. Saydi

    2014-01-01

    Full Text Available Atmospheric models based on surface measurements of pressure, temperature, and relative humidity have been used to increase laser ranging accuracy by ray tracing. Atmospheric refraction can cause significant errors in laser ranging systems. In the present research, the atmospheric effects on the laser beam were investigated using the principles of laser ranging. The atmospheric correction was calculated for the 0.532, 1.3, and 10.6 micron wavelengths under the weather conditions of Tehran, Isfahan, and Bushehr in Iran from March 2012 to March 2013, on the basis of monthly means of the meteorological data received from the meteorological stations in Tehran, Isfahan, and Bushehr. The atmospheric correction was calculated for laser beam propagation over 11, 100, and 200 kilometers at elevation angles of 30°, 60°, and 90° for each path. The results of the study showed that, for the same months and beam emission angles, the atmospheric correction was most accurate for the 10.6 micron wavelength. The laser ranging error decreased as the laser emission angle increased. The atmospheric corrections obtained with the Marini-Murray and Mendes-Pavlis models for the 0.532 micron wavelength were also compared.

  19. Diagnostic Error in Correctional Mental Health: Prevalence, Causes, and Consequences.

    Science.gov (United States)

    Martin, Michael S; Hynes, Katie; Hatcher, Simon; Colman, Ian

    2016-04-01

    While they have important implications for inmates and resourcing of correctional institutions, diagnostic errors are rarely discussed in correctional mental health research. This review seeks to estimate the prevalence of diagnostic errors in prisons and jails and explores potential causes and consequences. Diagnostic errors are defined as discrepancies in an inmate's diagnostic status depending on who is responsible for conducting the assessment and/or the methods used. It is estimated that at least 10% to 15% of all inmates may be incorrectly classified in terms of the presence or absence of a mental illness. Inmate characteristics, relationships with staff, and cognitive errors stemming from the use of heuristics when faced with time constraints are discussed as possible sources of error. A policy example of screening for mental illness at intake to prison is used to illustrate when the risk of diagnostic error might be increased and to explore strategies to mitigate this risk. © The Author(s) 2016.

  20. Optical transmission testing based on asynchronous sampling techniques: images analysis containing chromatic dispersion using convolutional neural network

    Science.gov (United States)

    Mrozek, T.; Perlicki, K.; Tajmajer, T.; Wasilewski, P.

    2017-08-01

    The article presents an image analysis method, obtained from an asynchronous delay tap sampling (ADTS) technique, which is used for simultaneous monitoring of various impairments occurring in the physical layer of the optical network. The ADTS method enables the visualization of the optical signal in the form of characteristics (so called phase portraits) that change their shape under the influence of impairments such as chromatic dispersion, polarization mode dispersion and ASE noise. Using this method, a simulation model was built with OptSim 4.0. After the simulation study, data were obtained in the form of images that were further analyzed using the convolutional neural network algorithm. The main goal of the study was to train a convolutional neural network to recognize the selected impairment (distortion); then to test its accuracy and estimate the impairment for the selected set of test images. The input data consisted of processed binary images in the form of two-dimensional matrices, with the position of the pixel. This article focuses only on the analysis of images containing chromatic dispersion.

  1. Direct cointegration testing in error-correction models

    NARCIS (Netherlands)

    F.R. Kleibergen (Frank); H.K. van Dijk (Herman)

    1994-01-01

    An error correction model is specified having only exactly identified parameters, some of which reflect a possible departure from a cointegration model. Wald, likelihood ratio, and Lagrange multiplier statistics are derived to test for the significance of these parameters. The

  2. Multi-Level and Multi-Scale Feature Aggregation Using Pretrained Convolutional Neural Networks for Music Auto-Tagging

    Science.gov (United States)

    Lee, Jongpil; Nam, Juhan

    2017-08-01

    Music auto-tagging is often handled in a similar manner to image classification by regarding the 2D audio spectrogram as image data. However, music auto-tagging is distinguished from image classification in that the tags are highly diverse and have different levels of abstractions. Considering this issue, we propose a convolutional neural networks (CNN)-based architecture that embraces multi-level and multi-scaled features. The architecture is trained in three steps. First, we conduct supervised feature learning to capture local audio features using a set of CNNs with different input sizes. Second, we extract audio features from each layer of the pre-trained convolutional networks separately and aggregate them altogether given a long audio clip. Finally, we put them into fully-connected networks and make final predictions of the tags. Our experiments show that using the combination of multi-level and multi-scale features is highly effective in music auto-tagging and the proposed method outperforms previous state-of-the-arts on the MagnaTagATune dataset and the Million Song Dataset. We further show that the proposed architecture is useful in transfer learning.

  3. Aerial Images and Convolutional Neural Network for Cotton Bloom Detection.

    Science.gov (United States)

    Xu, Rui; Li, Changying; Paterson, Andrew H; Jiang, Yu; Sun, Shangpeng; Robertson, Jon S

    2017-01-01

    Monitoring flower development can provide useful information for production management, estimating yield and selecting specific genotypes of crops. The main goal of this study was to develop a methodology to detect and count cotton flowers, or blooms, using color images acquired by an unmanned aerial system. The aerial images were collected from two test fields in 4 days. A convolutional neural network (CNN) was designed and trained to detect cotton blooms in raw images, and their 3D locations were calculated using the dense point cloud constructed from the aerial images with the structure from motion method. The quality of the dense point cloud was analyzed and plots with poor quality were excluded from data analysis. A constrained clustering algorithm was developed to register the same bloom detected from different images based on the 3D location of the bloom. The accuracy and incompleteness of the dense point cloud were analyzed because they affected the accuracy of the 3D location of the blooms and thus the accuracy of the bloom registration result. The constrained clustering algorithm was validated using simulated data, showing good efficiency and accuracy. The bloom count from the proposed method was comparable with the number counted manually with an error of -4 to 3 blooms for the field with a single plant per plot. However, more plots were underestimated in the field with multiple plants per plot due to hidden blooms that were not captured by the aerial images. The proposed methodology provides a high-throughput method to continuously monitor the flowering progress of cotton.
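
    The registration step described above, merging detections of the same bloom seen in different images on the basis of their 3D locations, can be approximated by grouping detections that lie within a small distance of each other; the SciPy sketch below uses single-linkage clustering with a 5 cm cutoff as a stand-in for the authors' constrained clustering algorithm, and the coordinates are made up.

        import numpy as np
        from scipy.cluster.hierarchy import fcluster, linkage

        # Hypothetical 3D bloom detections (in metres) pooled from several overlapping images.
        detections = np.array([
            [0.00, 0.00, 0.50], [0.02, -0.01, 0.51],   # same bloom seen twice
            [1.00, 0.20, 0.45], [1.01, 0.21, 0.46],    # another bloom seen twice
            [2.50, 1.00, 0.40],                        # a bloom seen once
        ])

        # Group detections whose 3D locations lie within 5 cm of each other.
        Z = linkage(detections, method="single")
        labels = fcluster(Z, t=0.05, criterion="distance")

        bloom_count = len(np.unique(labels))
        print(labels, bloom_count)    # e.g. [1 1 2 2 3] -> 3 blooms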

  4. Equation-Method for correcting clipping errors in OFDM signals.

    Science.gov (United States)

    Bibi, Nargis; Kleerekoper, Anthony; Muhammad, Nazeer; Cheetham, Barry

    2016-01-01

    Orthogonal frequency division multiplexing (OFDM) is the digital modulation technique used by 4G and many other wireless communication systems. OFDM signals have significant amplitude fluctuations resulting in high peak to average power ratios which can make an OFDM transmitter susceptible to non-linear distortion produced by its high power amplifiers (HPA). A simple and popular solution to this problem is to clip the peaks before an OFDM signal is applied to the HPA, but this causes in-band distortion and introduces bit-errors at the receiver. In this paper we discuss a novel technique, which we call the Equation-Method, for correcting these errors. The Equation-Method uses the Fast Fourier Transform to create a set of simultaneous equations which, when solved, return the amplitudes of the peaks before they were clipped. We show analytically and through simulations that this method can correct all clipping errors over a wide range of clipping thresholds. We show that numerical instability can be avoided and new techniques are needed to enable the receiver to differentiate between correctly and incorrectly received frequency-domain constellation symbols.
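
    The sketch below reproduces only the problem setup motivated above (an OFDM symbol, its peak-to-average power ratio, and the clipping applied before the high power amplifier); it does not implement the Equation-Method itself, whose FFT-based simultaneous equations recover the clipped peak amplitudes at the receiver. The subcarrier count and clipping threshold are assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        N = 64                                           # subcarriers
        bits = rng.integers(0, 2, (N, 2))
        symbols = (2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)   # QPSK
        tx = np.fft.ifft(symbols)                        # time-domain OFDM symbol

        papr_db = 10 * np.log10(np.max(np.abs(tx) ** 2) / np.mean(np.abs(tx) ** 2))

        # Clip peaks above a threshold before the HPA; this clipping is what introduces
        # the in-band distortion the Equation-Method later tries to undo.
        threshold = 1.2 * np.sqrt(np.mean(np.abs(tx) ** 2))
        mag = np.abs(tx)
        clipped = np.where(mag > threshold, tx * (threshold / np.maximum(mag, 1e-12)), tx)
        clipped_idx = np.flatnonzero(mag > threshold)

        rx_freq = np.fft.fft(clipped)                    # distorted constellation at the receiver
        print(f"PAPR = {papr_db:.1f} dB, clipped samples = {clipped_idx.size}")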

  5. Correcting systematic errors in high-sensitivity deuteron polarization measurements

    Science.gov (United States)

    Brantjes, N. P. M.; Dzordzhadze, V.; Gebel, R.; Gonnella, F.; Gray, F. E.; van der Hoek, D. J.; Imig, A.; Kruithof, W. L.; Lazarus, D. M.; Lehrach, A.; Lorentz, B.; Messi, R.; Moricciani, D.; Morse, W. M.; Noid, G. A.; Onderwater, C. J. G.; Özben, C. S.; Prasuhn, D.; Levi Sandri, P.; Semertzidis, Y. K.; da Silva e Silva, M.; Stephenson, E. J.; Stockhorst, H.; Venanzoni, G.; Versolato, O. O.

    2012-02-01

    This paper reports deuteron vector and tensor beam polarization measurements taken to investigate the systematic variations due to geometric beam misalignments and high data rates. The experiments used the In-Beam Polarimeter at the KVI-Groningen and the EDDA detector at the Cooler Synchrotron COSY at Jülich. By measuring with very high statistical precision, the contributions that are second-order in the systematic errors become apparent. By calibrating the sensitivity of the polarimeter to such errors, it becomes possible to obtain information from the raw count rate values on the size of the errors and to use this information to correct the polarization measurements. During the experiment, it was possible to demonstrate that corrections were satisfactory at the level of 10 -5 for deliberately large errors. This may facilitate the real time observation of vector polarization changes smaller than 10 -6 in a search for an electric dipole moment using a storage ring.

  6. Correcting systematic errors in high-sensitivity deuteron polarization measurements

    Energy Technology Data Exchange (ETDEWEB)

    Brantjes, N.P.M. [Kernfysisch Versneller Instituut, University of Groningen, NL-9747AA Groningen (Netherlands); Dzordzhadze, V. [Brookhaven National Laboratory, Upton, NY 11973 (United States); Gebel, R. [Institut fuer Kernphysik, Juelich Center for Hadron Physics, Forschungszentrum Juelich, D-52425 Juelich (Germany); Gonnella, F. [Physica Department of ' Tor Vergata' University, Rome (Italy); INFN-Sez. ' Roma tor Vergata,' Rome (Italy); Gray, F.E. [Regis University, Denver, CO 80221 (United States); Hoek, D.J. van der [Kernfysisch Versneller Instituut, University of Groningen, NL-9747AA Groningen (Netherlands); Imig, A. [Brookhaven National Laboratory, Upton, NY 11973 (United States); Kruithof, W.L. [Kernfysisch Versneller Instituut, University of Groningen, NL-9747AA Groningen (Netherlands); Lazarus, D.M. [Brookhaven National Laboratory, Upton, NY 11973 (United States); Lehrach, A.; Lorentz, B. [Institut fuer Kernphysik, Juelich Center for Hadron Physics, Forschungszentrum Juelich, D-52425 Juelich (Germany); Messi, R. [Physica Department of ' Tor Vergata' University, Rome (Italy); INFN-Sez. ' Roma tor Vergata,' Rome (Italy); Moricciani, D. [INFN-Sez. ' Roma tor Vergata,' Rome (Italy); Morse, W.M. [Brookhaven National Laboratory, Upton, NY 11973 (United States); Noid, G.A. [Indiana University Cyclotron Facility, Bloomington, IN 47408 (United States); and others

    2012-02-01

    This paper reports deuteron vector and tensor beam polarization measurements taken to investigate the systematic variations due to geometric beam misalignments and high data rates. The experiments used the In-Beam Polarimeter at the KVI-Groningen and the EDDA detector at the Cooler Synchrotron COSY at Juelich. By measuring with very high statistical precision, the contributions that are second-order in the systematic errors become apparent. By calibrating the sensitivity of the polarimeter to such errors, it becomes possible to obtain information from the raw count rate values on the size of the errors and to use this information to correct the polarization measurements. During the experiment, it was possible to demonstrate that corrections were satisfactory at the level of 10{sup -5} for deliberately large errors. This may facilitate the real time observation of vector polarization changes smaller than 10{sup -6} in a search for an electric dipole moment using a storage ring.

  7. SU-C-207B-07: Deep Convolutional Neural Network Image Matching for Ultrasound Guidance in Radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, N; Najafi, M; Hancock, S; Hristov, D [Stanford University Cancer Center, Palo Alto, CA (United States)

    2016-06-15

    Purpose: Robust matching of ultrasound images is a challenging problem as images of the same anatomy often present non-trivial differences. This poses an obstacle for ultrasound guidance in radiotherapy. Thus our objective is to overcome this obstacle by designing and evaluating an image blocks matching framework based on a two channel deep convolutional neural network. Methods: We extend to 3D an algorithmic structure previously introduced for 2D image feature learning [1]. To obtain the similarity between two 3D image blocks A and B, the 3D image blocks are divided into 2D patches Ai and Bi. The similarity is then calculated as the average similarity score of Ai and Bi. The neural network was then trained with public non-medical image pairs, and subsequently evaluated on ultrasound image blocks for the following scenarios: (S1) same image blocks with/without shifts (A and A-shift-x); (S2) non-related random block pairs; (S3) ground truth registration matched pairs of different ultrasound images with/without shifts (A-i and A-reg-i-shift-x). Results: For S1 the similarity scores of A and A-shift-x were 32.63, 18.38, 12.95, 9.23, 2.15 and 0.43 for x ranging from 0 mm to 10 mm in 2 mm increments. For S2 the average similarity score for non-related block pairs was −1.15. For S3 the average similarity score of ground truth registration matched blocks A-i and A-reg-i-shift-0 (1≤i≤5) was 12.37. After translating A-reg-i-shift-0 by 0 mm, 2 mm, 4 mm, 6 mm, 8 mm, and 10 mm, the average similarity scores of A-i and A-reg-i-shift-x were 11.04, 8.42, 4.56, 2.27, and 0.29 respectively. Conclusion: The proposed method correctly assigns highest similarity to corresponding 3D ultrasound image blocks despite differences in image content and thus can form the basis for ultrasound image registration and tracking. [1] Zagoruyko, Komodakis, “Learning to compare image patches via convolutional neural networks”, IEEE CVPR 2015, pp. 4353–4361.

  8. SU-C-207B-07: Deep Convolutional Neural Network Image Matching for Ultrasound Guidance in Radiotherapy

    International Nuclear Information System (INIS)

    Zhu, N; Najafi, M; Hancock, S; Hristov, D

    2016-01-01

    Purpose: Robust matching of ultrasound images is a challenging problem as images of the same anatomy often present non-trivial differences. This poses an obstacle for ultrasound guidance in radiotherapy. Thus our objective is to overcome this obstacle by designing and evaluating an image blocks matching framework based on a two channel deep convolutional neural network. Methods: We extend to 3D an algorithmic structure previously introduced for 2D image feature learning [1]. To obtain the similarity between two 3D image blocks A and B, the 3D image blocks are divided into 2D patches Ai and Bi. The similarity is then calculated as the average similarity score of Ai and Bi. The neural network was then trained with public non-medical image pairs, and subsequently evaluated on ultrasound image blocks for the following scenarios: (S1) same image blocks with/without shifts (A and A-shift-x); (S2) non-related random block pairs; (S3) ground truth registration matched pairs of different ultrasound images with/without shifts (A-i and A-reg-i-shift-x). Results: For S1 the similarity scores of A and A-shift-x were 32.63, 18.38, 12.95, 9.23, 2.15 and 0.43 for x ranging from 0 mm to 10 mm in 2 mm increments. For S2 the average similarity score for non-related block pairs was −1.15. For S3 the average similarity score of ground truth registration matched blocks A-i and A-reg-i-shift-0 (1≤i≤5) was 12.37. After translating A-reg-i-shift-0 by 0 mm, 2 mm, 4 mm, 6 mm, 8 mm, and 10 mm, the average similarity scores of A-i and A-reg-i-shift-x were 11.04, 8.42, 4.56, 2.27, and 0.29 respectively. Conclusion: The proposed method correctly assigns highest similarity to corresponding 3D ultrasound image blocks despite differences in image content and thus can form the basis for ultrasound image registration and tracking. [1] Zagoruyko, Komodakis, “Learning to compare image patches via convolutional neural networks”, IEEE CVPR 2015, pp. 4353–4361.
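
    The two-channel design described in these two records (after Zagoruyko and Komodakis) feeds the two patches to be compared as the two input channels of a single small CNN that outputs a similarity score, and the score of a pair of 3D blocks is the average over their 2D patch pairs; the PyTorch layout below is an illustrative sketch, not the trained network.

        import torch
        import torch.nn as nn

        class TwoChannelNet(nn.Module):
            """2-channel patch-similarity CNN: the two patches are stacked as input channels."""
            def __init__(self):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(2, 32, 5, stride=2, padding=2), nn.ReLU(),
                    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1),
                )
                self.score = nn.Linear(64, 1)      # higher = more similar

            def forward(self, patch_a, patch_b):
                x = torch.cat([patch_a, patch_b], dim=1)          # (B, 2, H, W)
                return self.score(self.features(x).flatten(1)).squeeze(1)

        def block_similarity(net, block_a, block_b):
            """Average the 2D slice-pair scores of two 3D ultrasound blocks (D, H, W)."""
            a = block_a.unsqueeze(1)               # (D, 1, H, W): each slice as a patch
            b = block_b.unsqueeze(1)
            return net(a, b).mean()

        net = TwoChannelNet()
        A = torch.randn(16, 64, 64)                # two 3D image blocks, 16 slices each
        B = torch.randn(16, 64, 64)
        print(float(block_similarity(net, A, B)))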

  9. Experimental quantum error correction with high fidelity

    International Nuclear Information System (INIS)

    Zhang Jingfu; Gangloff, Dorian; Moussa, Osama; Laflamme, Raymond

    2011-01-01

    More than ten years ago a first step toward quantum error correction (QEC) was implemented [Phys. Rev. Lett. 81, 2152 (1998)]. The work showed there was sufficient control in nuclear magnetic resonance to implement QEC, and demonstrated that the error rate changed from ε to ∼ε². In the current work we reproduce a similar experiment using control techniques that have since been developed, such as pulses generated by the gradient ascent pulse engineering algorithm. We show that the fidelity of the QEC gate sequence and the comparative advantage of QEC are appreciably improved. This advantage is maintained despite the errors introduced by the additional operations needed to protect the quantum states.
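
    The ε to ∼ε² scaling can be made concrete with the three-qubit repetition code and majority-vote recovery, used here purely as an illustrative stand-in for the code and error model of the experiment:

      # Uncorrectable events need at least two single-qubit errors, so the
      # logical error rate falls from eps to roughly 3*eps^2.
      def logical_error_rate(eps):
          return 3 * eps**2 * (1 - eps) + eps**3

      for eps in (0.10, 0.05, 0.01):
          print(f"eps={eps:.2f}  uncorrected={eps:.4f}  corrected={logical_error_rate(eps):.6f}")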

  10. Karect: accurate correction of substitution, insertion and deletion errors for next-generation sequencing data

    KAUST Repository

    Allam, Amin

    2015-07-14

    Motivation: Next-generation sequencing generates large amounts of data affected by errors in the form of substitutions, insertions or deletions of bases. Error correction based on high-coverage information typically improves de novo assembly. Most existing tools can correct substitution errors only; some support insertions and deletions, but accuracy in many cases is low. Results: We present Karect, a novel error correction technique based on multiple alignment. Our approach supports substitution, insertion and deletion errors. It can handle non-uniform coverage as well as moderately covered areas of the sequenced genome. Experiments with data from Illumina, 454 FLX and Ion Torrent sequencing machines demonstrate that Karect is more accurate than previous methods, both in terms of correcting individual-base errors (up to 10% increase in accuracy gain) and post de novo assembly quality (up to 10% increase in NGA50). We also introduce an improved framework for evaluating the quality of error correction.
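
    The core idea behind alignment-based correction can be illustrated with a column-wise majority vote over reads that have already been placed in a multiple alignment; this toy sketch is not Karect itself and ignores its weighting, partitioning and coverage handling.

      from collections import Counter

      def consensus(aligned_reads):
          """aligned_reads: equal-length strings over A, C, G, T and '-' (gap)."""
          voted = (Counter(col).most_common(1)[0][0] for col in zip(*aligned_reads))
          return "".join(c for c in voted if c != "-")   # gap columns are dropped

      reads = ["ACGT-ACGT",
               "ACGTTACGT",    # insertion error (extra T)
               "ACGT-ACGA"]    # substitution error at the last base
      print(consensus(reads))  # -> ACGTACGT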

  11. Error Correction using Quantum Quasi-Cyclic Low-Density Parity-Check (LDPC) Codes

    Science.gov (United States)

    Jing, Lin; Brun, Todd; Quantum Research Team

    Quasi-cyclic LDPC codes can approach the Shannon capacity and have efficient decoders. Hagiwara et al. (2007) presented a method to calculate parity check matrices with high girth. Two distinct, orthogonal matrices Hc and Hd are used. Using submatrices obtained from Hc and Hd by deleting rows, we can alter the code rate. The submatrix of Hc is used to correct Pauli X errors, and the submatrix of Hd to correct Pauli Z errors. We simulated this system for depolarizing noise on USC's High Performance Computing Cluster, and obtained the block error rate (BER) as a function of the error weight and code rate. From the rates of uncorrectable errors under different error weights we can extrapolate the BER to any small error probability. Our results show that this code family can perform reasonably well even at high code rates, thus considerably reducing the overhead compared to concatenated and surface codes. This makes these codes promising as storage blocks in fault-tolerant quantum computation.

  12. Automated Detection of Obstructive Sleep Apnea Events from a Single-Lead Electrocardiogram Using a Convolutional Neural Network.

    Science.gov (United States)

    Urtnasan, Erdenebayar; Park, Jong-Uk; Joo, Eun-Yeon; Lee, Kyoung-Joung

    2018-04-23

    In this study, we propose a method for the automated detection of obstructive sleep apnea (OSA) from a single-lead electrocardiogram (ECG) using a convolutional neural network (CNN). A CNN model was designed with six optimized convolution layers including activation, pooling, and dropout layers. One-dimensional (1D) convolution, rectified linear units (ReLU), and max pooling were applied to the convolution, activation, and pooling layers, respectively. For training and evaluation of the CNN model, a single-lead ECG dataset was collected from 82 subjects with OSA and was divided into training (data from 63 patients with 34,281 events) and testing (data from 19 patients with 8,571 events) datasets. Using this CNN model, a precision of 0.99, a recall of 0.99, and an F1-score of 0.99 were attained with the training dataset; these values were all 0.96 when the CNN was applied to the testing dataset. These results show that the proposed CNN model can be used to detect OSA accurately on the basis of a single-lead ECG. Ultimately, this CNN model may be used as a screening tool for those suspected to suffer from OSA.
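
    A hedged sketch of the kind of network described above: six 1D convolution blocks, each with ReLU activation, max pooling and dropout, followed by a small classification head. The filter counts, kernel sizes, dropout rate and the length of the ECG segment are placeholders rather than the published settings.

      import torch
      import torch.nn as nn

      def conv_block(c_in, c_out):
          return nn.Sequential(nn.Conv1d(c_in, c_out, kernel_size=5, padding=2),
                               nn.ReLU(), nn.MaxPool1d(2), nn.Dropout(0.2))

      class OsaCnn(nn.Module):
          def __init__(self, n_classes=2):
              super().__init__()
              chans = [1, 16, 32, 32, 64, 64, 128]            # six conv blocks
              self.body = nn.Sequential(*[conv_block(a, b) for a, b in zip(chans, chans[1:])])
              self.head = nn.Sequential(nn.AdaptiveAvgPool1d(1), nn.Flatten(),
                                        nn.Linear(128, n_classes))

          def forward(self, x):                               # x: (N, 1, samples)
              return self.head(self.body(x))

      ecg = torch.randn(4, 1, 6000)          # e.g. one-minute segments at 100 Hz (assumed)
      print(OsaCnn()(ecg).shape)             # -> torch.Size([4, 2])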

  13. An Ensemble Deep Convolutional Neural Network Model with Improved D-S Evidence Fusion for Bearing Fault Diagnosis.

    Science.gov (United States)

    Li, Shaobo; Liu, Guokai; Tang, Xianghong; Lu, Jianguang; Hu, Jianjun

    2017-07-28

    Intelligent machine health monitoring and fault diagnosis are becoming increasingly important for modern manufacturing industries. Current fault diagnosis approaches mostly depend on expert-designed features for building prediction models. In this paper, we propose IDSCNN, a novel bearing fault diagnosis algorithm based on ensemble deep convolutional neural networks and an improved Dempster-Shafer theory based evidence fusion. The convolutional neural networks take the root mean square (RMS) maps from the FFT (Fast Fourier Transformation) features of the vibration signals from two sensors as inputs. The improved D-S evidence theory is implemented via a distance matrix computed from the evidence and a modified Gini index. Extensive evaluations of the IDSCNN on the Case Western Reserve dataset showed that our IDSCNN algorithm can achieve better fault diagnosis performance than existing machine learning methods by fusing complementary or conflicting evidence from different models and sensors and adapting to different load conditions.
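
    To make the fusion step concrete, here is the classical (unmodified) Dempster rule of combination for two evidence sources over the same set of fault classes; the paper replaces this with a distance- and Gini-index-weighted variant, which is not reproduced here.

      from itertools import product

      def dempster(m1, m2):
          """m1, m2: dicts mapping frozenset hypotheses to mass; returns fused masses."""
          combined, conflict = {}, 0.0
          for (a, wa), (b, wb) in product(m1.items(), m2.items()):
              inter = a & b
              if inter:
                  combined[inter] = combined.get(inter, 0.0) + wa * wb
              else:
                  conflict += wa * wb                      # mass assigned to disjoint sets
          return {h: m / (1.0 - conflict) for h, m in combined.items()}

      m_sensor1 = {frozenset({"inner_race"}): 0.7, frozenset({"inner_race", "outer_race"}): 0.3}
      m_sensor2 = {frozenset({"inner_race"}): 0.6, frozenset({"outer_race"}): 0.4}
      print(dempster(m_sensor1, m_sensor2))   # most mass ends up on "inner_race"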

  14. Pattern Recognition of Momentary Mental Workload Based on Multi-Channel Electrophysiological Data and Ensemble Convolutional Neural Networks.

    Science.gov (United States)

    Zhang, Jianhua; Li, Sunan; Wang, Rubin

    2017-01-01

    In this paper, we deal with the Mental Workload (MWL) classification problem based on the measured physiological data. First we discussed the optimal depth (i.e., the number of hidden layers) and parameter optimization algorithms for the Convolutional Neural Networks (CNN). The base CNNs designed were tested according to five classification performance indices, namely Accuracy, Precision, F-measure, G-mean, and required training time. Then we developed an Ensemble Convolutional Neural Network (ECNN) to enhance the accuracy and robustness of the individual CNN model. For the ECNN design, three model aggregation approaches (weighted averaging, majority voting and stacking) were examined and a resampling strategy was used to enhance the diversity of individual CNN models. The results of MWL classification performance comparison indicated that the proposed ECNN framework can effectively improve MWL classification performance and is featured by entirely automatic feature extraction and MWL classification, when compared with traditional machine learning methods.
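
    Two of the aggregation rules mentioned above, applied to the per-class probabilities of three hypothetical base CNNs (all numbers are toy values; the stacking variant, which trains a meta-classifier on these outputs, is omitted):

      import numpy as np

      probs = np.array([[0.60, 0.40],     # base CNN 1: P(low MWL), P(high MWL)
                        [0.30, 0.70],     # base CNN 2
                        [0.65, 0.35]])    # base CNN 3
      weights = np.array([0.5, 0.3, 0.2]) # e.g. validation-accuracy-derived weights

      weighted_avg = weights @ probs                               # weighted averaging
      votes = np.bincount(probs.argmax(axis=1), minlength=2)       # majority voting

      print("weighted averaging ->", weighted_avg.argmax())        # class 0
      print("majority voting    ->", votes.argmax())               # class 0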

  15. Phase Diagrams of Three-Dimensional Anderson and Quantum Percolation Models Using Deep Three-Dimensional Convolutional Neural Network

    Science.gov (United States)

    Mano, Tomohiro; Ohtsuki, Tomi

    2017-11-01

    The three-dimensional Anderson model is a well-studied model of disordered electron systems that shows the delocalization-localization transition. As in our previous papers on two- and three-dimensional (2D, 3D) quantum phase transitions [J. Phys. Soc. Jpn. 85, 123706 (2016), 86, 044708 (2017)], we used an image recognition algorithm based on a multilayered convolutional neural network. However, in contrast to previous papers in which 2D image recognition was used, we applied 3D image recognition to analyze entire 3D wave functions. We show that a full phase diagram of the disorder-energy plane is obtained once the 3D convolutional neural network has been trained at the band center. We further demonstrate that the full phase diagram for 3D quantum bond and site percolations can be drawn by training the 3D Anderson model at the band center.

  16. Fully Convolutional Network Based Shadow Extraction from GF-2 Imagery

    Science.gov (United States)

    Li, Z.; Cai, G.; Ren, H.

    2018-04-01

    There are many shadows on high spatial resolution satellite images, especially in urban areas. Although shadows on imagery severely affect the information extraction of land cover or land use, they provide auxiliary information for building extraction, which is hard to achieve with satisfactory accuracy through image classification alone. This paper focused on building shadow extraction by designing a fully convolutional network and training it with samples collected from GF-2 satellite imagery in the urban region of Changchun city. By means of spatial filtering and calculation of the adjacency relationship along the sunlight direction, small patches from vegetation or bridges were eliminated from the preliminarily extracted shadows. Finally, the building shadows were separated. The building shadow information extracted by the proposed method was compared with the results from traditional object-oriented supervised classification algorithms. The comparison showed that the deep learning network approach can improve the accuracy to a large extent.

  17. Knowledge-guided golf course detection using a convolutional neural network fine-tuned on temporally augmented data

    Science.gov (United States)

    Chen, Jingbo; Wang, Chengyi; Yue, Anzhi; Chen, Jiansheng; He, Dongxu; Zhang, Xiuyan

    2017-10-01

    The tremendous success of deep learning models such as convolutional neural networks (CNNs) in computer vision provides a method for similar problems in the field of remote sensing. Although research on repurposing pretrained CNN to remote sensing tasks is emerging, the scarcity of labeled samples and the complexity of remote sensing imagery still pose challenges. We developed a knowledge-guided golf course detection approach using a CNN fine-tuned on temporally augmented data. The proposed approach is a combination of knowledge-driven region proposal, data-driven detection based on CNN, and knowledge-driven postprocessing. To confront data complexity, knowledge-derived cooccurrence, composition, and area-based rules are applied sequentially to propose candidate golf regions. To confront sample scarcity, we employed data augmentation in the temporal domain, which extracts samples from multitemporal images. The augmented samples were then used to fine-tune a pretrained CNN for golf detection. Finally, commission error was further suppressed by postprocessing. Experiments conducted on GF-1 imagery prove the effectiveness of the proposed approach.
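
    The fine-tuning step can be sketched generically as follows: take a CNN pretrained on natural images, replace its classification head with a two-class (golf / non-golf) layer, optionally freeze the early layers, and train at a small learning rate on the temporally augmented samples. ResNet-18 is used here only as a stand-in backbone; the paper's network, hyperparameters and weight initialization may differ.

      import torch
      import torch.nn as nn
      from torchvision import models

      model = models.resnet18()                     # in practice, load ImageNet weights here
      model.fc = nn.Linear(model.fc.in_features, 2) # golf / non-golf head

      for name, p in model.named_parameters():      # fine-tune only the last stage and head
          p.requires_grad = name.startswith(("layer4", "fc"))

      optimizer = torch.optim.SGD((p for p in model.parameters() if p.requires_grad),
                                  lr=1e-3, momentum=0.9)
      criterion = nn.CrossEntropyLoss()

      x = torch.randn(8, 3, 224, 224)               # a toy batch of (augmented) image patches
      y = torch.randint(0, 2, (8,))
      loss = criterion(model(x), y)
      loss.backward()
      optimizer.step()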

  18. Processing of chromatic information in a deep convolutional neural network.

    Science.gov (United States)

    Flachot, Alban; Gegenfurtner, Karl R

    2018-04-01

    Deep convolutional neural networks are a class of machine-learning algorithms capable of solving non-trivial tasks, such as object recognition, with human-like performance. Little is known about the exact computations that deep neural networks learn, and to what extent these computations are similar to the ones performed by the primate brain. Here, we investigate how color information is processed in the different layers of the AlexNet deep neural network, originally trained on object classification of over 1.2M images of objects in their natural contexts. We found that the color-responsive units in the first layer of AlexNet learned linear features and were broadly tuned to two directions in color space, analogously to what is known of color responsive cells in the primate thalamus. Moreover, these directions are decorrelated and lead to statistically efficient representations, similar to the cardinal directions of the second-stage color mechanisms in primates. We also found, in analogy to the early stages of the primate visual system, that chromatic and achromatic information were segregated in the early layers of the network. Units in the higher layers of AlexNet exhibit on average a lower responsivity for color than units at earlier stages.

  19. A Hybrid Unequal Error Protection / Unequal Error Resilience ...

    African Journals Online (AJOL)

    The quality layers are then assigned an Unequal Error Resilience to synchronization loss by unequally allocating the number of headers available for synchronization to them. Following that, Unequal Error Protection against channel noise is provided to the layers by the use of Rate Compatible Punctured Convolutional ...

  20. Change detection in multitemporal synthetic aperture radar images using dual-channel convolutional neural network

    Science.gov (United States)

    Liu, Tao; Li, Ying; Cao, Ying; Shen, Qiang

    2017-10-01

    This paper proposes a model of dual-channel convolutional neural network (CNN) that is designed for change detection in SAR images, in an effort to acquire higher detection accuracy and lower misclassification rate. This network model contains two parallel CNN channels, which can extract deep features from two multitemporal SAR images. For comparison and validation, the proposed method is tested along with other change detection algorithms on both simulated SAR images and real-world SAR images captured by different sensors. The experimental results demonstrate that the presented method outperforms the state-of-the-art techniques by a considerable margin.

  1. Ear Detection under Uncontrolled Conditions with Multiple Scale Faster Region-Based Convolutional Neural Networks

    Directory of Open Access Journals (Sweden)

    Yi Zhang

    2017-04-01

    Full Text Available Ear detection is an important step in ear recognition approaches. Most existing ear detection techniques are based on manually designed features or shallow learning algorithms. However, researchers have found that pose variation, occlusion, and imaging conditions pose a great challenge to traditional ear detection methods under uncontrolled conditions. This paper proposes an efficient technique involving Multiple Scale Faster Region-based Convolutional Neural Networks (Faster R-CNN) to detect ears automatically from 2D profile images in natural settings. Firstly, three regions of different scales are detected to infer information about the ear location context within the image. Then an ear region filtering approach is proposed to extract the correct ear region and eliminate the false positives automatically. In an experiment with a test set of 200 web images (with variable photographic conditions), 98% of ears were accurately detected. Experiments were likewise conducted on the Collection J2 of the University of Notre Dame Biometrics Database (UND-J2) and the University of Beira Interior Ear dataset (UBEAR), which contain large occlusion, scale, and pose variations. Detection rates of 100% and 98.22%, respectively, demonstrate the effectiveness of the proposed approach.

  2. Error-correction coding and decoding bounds, codes, decoders, analysis and applications

    CERN Document Server

    Tomlinson, Martin; Ambroze, Marcel A; Ahmed, Mohammed; Jibril, Mubarak

    2017-01-01

    This book discusses both the theory and practical applications of self-correcting data, commonly known as error-correcting codes. The applications included demonstrate the importance of these codes in a wide range of everyday technologies, from smartphones to secure communications and transactions. Written in a readily understandable style, the book presents the authors’ twenty-five years of research organized into five parts: Part I is concerned with the theoretical performance attainable by using error correcting codes to achieve communications efficiency in digital communications systems. Part II explores the construction of error-correcting codes and explains the different families of codes and how they are designed. Techniques are described for producing the very best codes. Part III addresses the analysis of low-density parity-check (LDPC) codes, primarily to calculate their stopping sets and low-weight codeword spectrum which determines the performance of these codes. Part IV deals with decoders desi...

  3. Cerebral vessels segmentation for light-sheet microscopy image using convolutional neural networks

    Science.gov (United States)

    Hu, Chaoen; Hui, Hui; Wang, Shuo; Dong, Di; Liu, Xia; Yang, Xin; Tian, Jie

    2017-03-01

    Cerebral vessel segmentation is an important step in image analysis for brain function and brain disease studies. To extract all the cerebrovascular patterns, including arteries and capillaries, some filter-based methods are used to segment vessels. However, the design of accurate and robust vessel segmentation algorithms is still challenging, due to the variety and complexity of images, especially in cerebral blood vessel segmentation. In this work, we address the problem of automatic and robust segmentation of cerebral micro-vessel structures in cerebrovascular images of mouse brain acquired with a light-sheet microscope. To segment micro-vessels in large-scale image data, we propose a convolutional neural network (CNN) architecture trained on 1.58 million manually labelled pixels. Three convolutional layers and one fully connected layer were used in the CNN model. We extracted a patch of size 32×32 pixels from each acquired brain vessel image as the training data set to feed into the CNN for classification. This network was trained to output the probability that the center pixel of the input patch belongs to vessel structures. To build the CNN architecture, a series of mouse brain vascular images acquired from a commercial light sheet fluorescence microscopy (LSFM) system were used for training the model. The experimental results demonstrate that our approach is a promising method for effectively segmenting micro-vessel structures in cerebrovascular images with vessel-dense, nonuniform gray-level and long-scale contrast regions.

  4. StegNet: Mega Image Steganography Capacity with Deep Convolutional Network

    Directory of Open Access Journals (Sweden)

    Pin Wu

    2018-06-01

    Full Text Available Traditional image steganography has often focused on safely embedding hidden information into cover images, with payload capacity almost neglected. This paper combines recent deep convolutional neural network methods with image-into-image steganography. It successfully hides same-size images with a decoding rate of 98.2%, or 23.57 bits per pixel (bpp), by changing only 0.76% of the cover image on average. Our method directly learns end-to-end mappings between the cover image and the embedded image and between the hidden image and the decoded image. We further show that our embedded image, while carrying a mega payload capacity, is still robust to statistical analysis.

  5. FMLRC: Hybrid long read error correction using an FM-index.

    Science.gov (United States)

    Wang, Jeremy R; Holt, James; McMillan, Leonard; Jones, Corbin D

    2018-02-09

    Long read sequencing is changing the landscape of genomic research, especially de novo assembly. Despite the high error rate inherent to long read technologies, increased read lengths dramatically improve the continuity and accuracy of genome assemblies. However, the cost and throughput of these technologies limits their application to complex genomes. One solution is to decrease the cost and time to assemble novel genomes by leveraging "hybrid" assemblies that use long reads for scaffolding and short reads for accuracy. We describe a novel method leveraging a multi-string Burrows-Wheeler Transform with auxiliary FM-index to correct errors in long read sequences using a set of complementary short reads. We demonstrate that our method efficiently produces significantly more high quality corrected sequence than existing hybrid error-correction methods. We also show that our method produces more contiguous assemblies, in many cases, than existing state-of-the-art hybrid and long-read only de novo assembly methods. Our method accurately corrects long read sequence data using complementary short reads. We demonstrate higher total throughput of corrected long reads and a corresponding increase in contiguity of the resulting de novo assemblies. Improved throughput and computational efficiency than existing methods will help better economically utilize emerging long read sequencing technologies.

  6. Microaneurysm detection using fully convolutional neural networks.

    Science.gov (United States)

    Chudzik, Piotr; Majumdar, Somshubra; Calivá, Francesco; Al-Diri, Bashir; Hunter, Andrew

    2018-05-01

    Diabetic retinopathy is a microvascular complication of diabetes that can lead to sight loss if not treated early enough. Microaneurysms are the earliest clinical signs of diabetic retinopathy. This paper presents an automatic method for detecting microaneurysms in fundus photographs. A novel patch-based fully convolutional neural network with batch normalization layers and Dice loss function is proposed. Compared to other methods that require up to five processing stages, it requires only three. Furthermore, to the best of the authors' knowledge, this is the first paper that shows how to successfully transfer knowledge between datasets in the microaneurysm detection domain. The proposed method was evaluated using three publicly available and widely used datasets: E-Ophtha, DIARETDB1, and ROC. It achieved better results than state-of-the-art methods using the FROC metric. The proposed algorithm accomplished the highest sensitivities for low false positive rates, which is particularly important for screening purposes. The performance, simplicity, and robustness of the proposed method demonstrate its suitability for diabetic retinopathy screening applications. Copyright © 2018 Elsevier B.V. All rights reserved.
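
    The Dice loss mentioned above, in its usual soft form for binary segmentation maps; the smoothing constant and exact formulation may differ from the paper's implementation.

      import torch

      def dice_loss(pred, target, eps=1.0):
          """pred: sigmoid probabilities, target: {0,1} mask, both shaped (N, H, W)."""
          pred, target = pred.flatten(1), target.flatten(1)
          inter = (pred * target).sum(dim=1)
          dice = (2 * inter + eps) / (pred.sum(dim=1) + target.sum(dim=1) + eps)
          return 1 - dice.mean()

      pred = torch.rand(2, 128, 128)
      mask = (torch.rand(2, 128, 128) > 0.95).float()   # sparse targets, as microaneurysms are
      print(dice_loss(pred, mask).item())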

  7. Real Time Eye Detector with Cascaded Convolutional Neural Networks

    Directory of Open Access Journals (Sweden)

    Bin Li

    2018-01-01

    Full Text Available An accurate and efficient eye detector is essential for many computer vision applications. In this paper, we present an efficient method to estimate the eye location from facial images. First, a group of candidate regions with regional extreme points is quickly proposed; then, a set of convolutional neural networks (CNNs) is adopted to determine the most likely eye region and classify the region as left or right eye; finally, the center of the eye is located with other CNNs. In experiments using GI4E, BioID, and our own datasets, our method attained a detection accuracy comparable to existing state-of-the-art methods; meanwhile, our method was faster and adaptable to variations of the images, including external light changes, facial occlusion, and changes in image modality.

  8. Corpus-Based Websites to Promote Learner Autonomy in Correcting Writing Collocation Errors

    Directory of Open Access Journals (Sweden)

    Pham Thuy Dung

    2016-12-01

    Full Text Available The recent yet powerful emergence of e-learning and the use of online resources in learning EFL (English as a Foreign Language) has helped promote learner autonomy in language acquisition, including self-correcting one's mistakes. This pilot study, despite being conducted on a modest sample of 25 second-year students majoring in Business English at Hanoi Foreign Trade University, is an initial attempt to investigate the feasibility of using corpus-based websites to promote learner autonomy in correcting collocation errors in EFL writing. The data were collected using a pre-questionnaire and a post-interview aiming to find out the participants' change in belief and attitude toward learner autonomy in correcting collocation errors in writing, the extent of their success in using the corpus-based websites to self-correct the errors, and the change in their confidence in self-correcting the errors using the websites. The findings show that a significant majority of students shifted their belief and attitude toward a more autonomous mode of learning, enjoyed fair success in using the websites to self-correct the errors, and became more confident. The study also yields the implication that face-to-face training in how to use these online tools is vital to the later confidence and success of the learners.

  9. Deep multi-scale location-aware 3D convolutional neural networks for automated detection of lacunes of presumed vascular origin

    Directory of Open Access Journals (Sweden)

    Mohsen Ghafoorian

    2017-01-01

    In this paper, we propose an automated two-stage method using deep convolutional neural networks (CNNs). We show that this method has good performance and can considerably benefit readers. We first use a fully convolutional neural network to detect initial candidates. In the second step, we employ a 3D CNN as a false positive reduction tool. As location information is important to the analysis of candidate structures, we further equip the network with contextual information using multi-scale analysis and integration of explicit location features. We trained, validated and tested our networks on a large dataset of 1075 cases obtained from two different studies. Subsequently, we conducted an observer study with four trained observers and compared our method with them using a free-response operating characteristic analysis. On a test set of 111 cases, the resulting CAD system exhibits performance similar to the trained human observers and achieves a sensitivity of 0.974 with 0.13 false positives per slice. A feasibility study also showed that a trained human observer would considerably benefit once aided by the CAD system.

  10. Vision-Based Fall Detection with Convolutional Neural Networks

    Directory of Open Access Journals (Sweden)

    Adrián Núñez-Marcos

    2017-01-01

    Full Text Available One of the biggest challenges in modern societies is the improvement of healthy aging and the support to older persons in their daily activities. In particular, given its social and economic impact, the automatic detection of falls has attracted considerable attention in the computer vision and pattern recognition communities. Although the approaches based on wearable sensors have provided high detection rates, some of the potential users are reluctant to wear them and thus their use is not yet normalized. As a consequence, alternative approaches such as vision-based methods have emerged. We firmly believe that the irruption of the Smart Environments and the Internet of Things paradigms, together with the increasing number of cameras in our daily environment, forms an optimal context for vision-based systems. Consequently, here we propose a vision-based solution using Convolutional Neural Networks to decide if a sequence of frames contains a person falling. To model the video motion and make the system scenario independent, we use optical flow images as input to the networks followed by a novel three-step training phase. Furthermore, our method is evaluated in three public datasets achieving the state-of-the-art results in all three of them.

  11. PARTICLE SWARM OPTIMIZATION (PSO) FOR TRAINING OPTIMIZATION ON CONVOLUTIONAL NEURAL NETWORK (CNN)

    Directory of Open Access Journals (Sweden)

    Arie Rachmad Syulistyo

    2016-02-01

    Full Text Available Neural networks have attracted plenty of researchers lately. A substantial number of renowned universities have developed neural networks for various academic and industrial applications, and neural networks show considerable performance on various tasks. Nevertheless, for complex applications, a neural network's accuracy significantly deteriorates. To tackle this drawback, much research has been undertaken on improving the standard neural network. One of the most promising modifications of the standard neural network for complex applications is the deep learning method. In this paper, we propose the use of Particle Swarm Optimization (PSO) in Convolutional Neural Networks (CNNs), one of the basic methods in deep learning. The use of PSO in the training process aims to optimize the solution vectors of the CNN in order to improve the recognition accuracy. The data used in this research are handwritten digits from MNIST. The experiments showed that an accuracy of 95.08% can be attained in 4 epochs. This result was better than those of the conventional CNN and DBN, and the execution time was almost the same as that of the conventional CNN. Therefore, the proposed method is a promising one.
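
    Plain particle swarm optimization on a toy objective, to make the optimization loop explicit. In the paper PSO adjusts CNN solution vectors; here the solution is just a low-dimensional vector, the objective is a sphere function, and the inertia and acceleration constants are generic defaults rather than the study's settings.

      import numpy as np

      def pso(objective, dim=10, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
          rng = np.random.default_rng(seed)
          x = rng.uniform(-5, 5, (n_particles, dim))          # particle positions
          v = np.zeros_like(x)                                # velocities
          pbest = x.copy()                                    # personal bests
          pbest_val = np.apply_along_axis(objective, 1, x)
          gbest = pbest[pbest_val.argmin()].copy()            # global best
          for _ in range(iters):
              r1, r2 = rng.random(x.shape), rng.random(x.shape)
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
              x = x + v
              val = np.apply_along_axis(objective, 1, x)
              improved = val < pbest_val
              pbest[improved], pbest_val[improved] = x[improved], val[improved]
              gbest = pbest[pbest_val.argmin()].copy()
          return gbest, pbest_val.min()

      best, best_val = pso(lambda z: float(np.sum(z**2)))     # minimize a sphere function
      print(best_val)                                         # close to 0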

  12. A Convolution-LSTM-Based Deep Neural Network for Cross-Domain MOOC Forum Post Classification

    Directory of Open Access Journals (Sweden)

    Xiaocong Wei

    2017-07-01

    Full Text Available Learners in a massive open online course often express feelings, exchange ideas and seek help by posting questions in discussion forums. Due to the very high learner-to-instructor ratios, it is unrealistic to expect instructors to adequately track the forums, find all of the issues that need resolution and understand their urgency and sentiment. In this paper, considering the biases among different courses, we propose a transfer learning framework based on a convolutional neural network and a long short-term memory model, called ConvL, to automatically identify whether a post expresses confusion, determine the urgency and classify the polarity of the sentiment. First, we learn the feature representation for each word by considering the local contextual feature via the convolution operation. Second, we learn the post representation from the features extracted through the convolution operation via the LSTM model, which considers the long-term temporal semantic relationships of features. Third, we investigate the possibility of transferring parameters from a model trained on one course to another course and the subsequent fine-tuning. Experiments on three real-world MOOC courses confirm the effectiveness of our framework. This work suggests that our model can potentially significantly increase the effectiveness of monitoring MOOC forums in real time.
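
    A generic convolution-then-LSTM classifier in the spirit of the ConvL model described above. Vocabulary size, embedding dimension, filter width and the number of classes are placeholders, and the cross-course transfer and fine-tuning steps are not shown.

      import torch
      import torch.nn as nn

      class ConvLstmClassifier(nn.Module):
          def __init__(self, vocab_size=10000, emb=128, n_filters=64, hidden=64, n_classes=2):
              super().__init__()
              self.emb = nn.Embedding(vocab_size, emb)
              self.conv = nn.Conv1d(emb, n_filters, kernel_size=3, padding=1)
              self.lstm = nn.LSTM(n_filters, hidden, batch_first=True)
              self.out = nn.Linear(hidden, n_classes)

          def forward(self, tokens):                          # tokens: (N, T) word indices
              x = self.emb(tokens).transpose(1, 2)            # (N, emb, T) for Conv1d
              x = torch.relu(self.conv(x)).transpose(1, 2)    # local contextual features
              _, (h, _) = self.lstm(x)                        # long-range dependencies
              return self.out(h[-1])                          # (N, n_classes)

      posts = torch.randint(0, 10000, (4, 50))                # a toy batch of tokenized posts
      print(ConvLstmClassifier()(posts).shape)                # -> torch.Size([4, 2])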

  13. Psoriasis skin biopsy image segmentation using Deep Convolutional Neural Network.

    Science.gov (United States)

    Pal, Anabik; Garain, Utpal; Chandra, Aditi; Chatterjee, Raghunath; Senapati, Swapan

    2018-06-01

    Development of machine-assisted tools for automatic analysis of psoriasis skin biopsy images plays an important role in clinical assistance. Development of an automatic approach for accurate segmentation of psoriasis skin biopsy images is the initial prerequisite for developing such a system. However, the complex cellular structure, the presence of imaging artifacts and uneven staining variation make the task challenging. This paper presents a pioneering attempt at automatic segmentation of psoriasis skin biopsy images. Several deep neural architectures are tried for segmenting psoriasis skin biopsy images. Deep models are used for classifying the super-pixels generated by Simple Linear Iterative Clustering (SLIC), and the segmentation performance of these architectures is compared with that of traditional hand-crafted feature based classifiers built on popularly used classifiers such as K-Nearest Neighbor (KNN), Support Vector Machine (SVM) and Random Forest (RF). A U-shaped Fully Convolutional Neural Network (FCN) is also used in an end-to-end learning fashion, where the input is the original color image and the output is the segmentation class map for the skin layers. An annotated real psoriasis skin biopsy image data set of ninety (90) images was developed and used for this research. The segmentation performance is evaluated with two metrics, namely Jaccard's Coefficient (JC) and the Ratio of Correct Pixel Classification (RCPC) accuracy. The experimental results show that the CNN-based approaches outperform the traditional hand-crafted feature based classification approaches. The present research shows that a practical system can be developed for machine-assisted analysis of psoriasis disease. Copyright © 2018 Elsevier B.V. All rights reserved.

  14. Bias correction of bounded location errors in presence-only data

    Science.gov (United States)

    Hefley, Trevor J.; Brost, Brian M.; Hooten, Mevin B.

    2017-01-01

    Location error occurs when the true location is different than the reported location. Because habitat characteristics at the true location may be different than those at the reported location, ignoring location error may lead to unreliable inference concerning species–habitat relationships. We explain how a transformation known in the spatial statistics literature as a change of support (COS) can be used to correct for location errors when the true locations are points with unknown coordinates contained within arbitrarily shaped polygons. We illustrate the flexibility of the COS by modelling the resource selection of Whooping Cranes (Grus americana) using citizen-contributed records with locations that were reported with error. We also illustrate the COS with a simulation experiment. In our analysis of Whooping Crane resource selection, we found that location error can result in up to a five-fold change in coefficient estimates. Our simulation study shows that location error can result in coefficient estimates that have the wrong sign, but a COS can efficiently correct for the bias.

  15. Deep Learning with Convolutional Neural Networks Applied to Electromyography Data: A Resource for the Classification of Movements for Prosthetic Hands.

    Science.gov (United States)

    Atzori, Manfredo; Cognolato, Matteo; Müller, Henning

    2016-01-01

    Natural control methods based on surface electromyography (sEMG) and pattern recognition are promising for hand prosthetics. However, the control robustness offered by scientific research is still not sufficient for many real life applications, and commercial prostheses are capable of offering natural control for only a few movements. In recent years deep learning revolutionized several fields of machine learning, including computer vision and speech recognition. Our objective is to test its methods for natural control of robotic hands via sEMG using a large number of intact subjects and amputees. We tested convolutional networks for the classification of an average of 50 hand movements in 67 intact subjects and 11 transradial amputees. The simple architecture of the neural network allowed to make several tests in order to evaluate the effect of pre-processing, layer architecture, data augmentation and optimization. The classification results are compared with a set of classical classification methods applied on the same datasets. The classification accuracy obtained with convolutional neural networks using the proposed architecture is higher than the average results obtained with the classical classification methods, but lower than the results obtained with the best reference methods in our tests. The results show that convolutional neural networks with a very simple architecture can produce accurate results comparable to the average classical classification methods. They show that several factors (including pre-processing, the architecture of the net and the optimization parameters) can be fundamental for the analysis of sEMG data. Larger networks can achieve higher accuracy on computer vision and object recognition tasks. This fact suggests that it may be interesting to evaluate if larger networks can increase sEMG classification accuracy too.

  16. Deep Learning with Convolutional Neural Networks Applied to Electromyography Data: A Resource for the Classification of Movements for Prosthetic Hands

    Science.gov (United States)

    Atzori, Manfredo; Cognolato, Matteo; Müller, Henning

    2016-01-01

    Natural control methods based on surface electromyography (sEMG) and pattern recognition are promising for hand prosthetics. However, the control robustness offered by scientific research is still not sufficient for many real life applications, and commercial prostheses are capable of offering natural control for only a few movements. In recent years deep learning revolutionized several fields of machine learning, including computer vision and speech recognition. Our objective is to test its methods for natural control of robotic hands via sEMG using a large number of intact subjects and amputees. We tested convolutional networks for the classification of an average of 50 hand movements in 67 intact subjects and 11 transradial amputees. The simple architecture of the neural network allowed to make several tests in order to evaluate the effect of pre-processing, layer architecture, data augmentation and optimization. The classification results are compared with a set of classical classification methods applied on the same datasets. The classification accuracy obtained with convolutional neural networks using the proposed architecture is higher than the average results obtained with the classical classification methods, but lower than the results obtained with the best reference methods in our tests. The results show that convolutional neural networks with a very simple architecture can produce accurate results comparable to the average classical classification methods. They show that several factors (including pre-processing, the architecture of the net and the optimization parameters) can be fundamental for the analysis of sEMG data. Larger networks can achieve higher accuracy on computer vision and object recognition tasks. This fact suggests that it may be interesting to evaluate if larger networks can increase sEMG classification accuracy too. PMID:27656140

  17. Single-shot T2 mapping using overlapping-echo detachment planar imaging and a deep convolutional neural network.

    Science.gov (United States)

    Cai, Congbo; Wang, Chao; Zeng, Yiqing; Cai, Shuhui; Liang, Dong; Wu, Yawen; Chen, Zhong; Ding, Xinghao; Zhong, Jianhui

    2018-04-24

    An end-to-end deep convolutional neural network (CNN) based on a deep residual network (ResNet) was proposed to efficiently reconstruct reliable T2 mapping from single-shot overlapping-echo detachment (OLED) planar imaging. The training dataset was obtained from simulations carried out with the SPROM (Simulation with PRoduct Operator Matrix) software developed by our group. The relationship between the original OLED image containing two echo signals and the corresponding T2 mapping was learned by ResNet training. After the ResNet was trained, it was applied to reconstruct the T2 mapping from simulation and in vivo human brain data. Although the ResNet was trained entirely on simulated data, the trained network generalized well to real human brain data. The results from simulation and in vivo human brain experiments show that the proposed method significantly outperforms the echo-detachment-based method. Reliable T2 mapping with higher accuracy is achieved within 30 ms after the network has been trained, while the echo-detachment-based OLED reconstruction method takes approximately 2 min. The proposed method will facilitate real-time dynamic and quantitative MR imaging via the OLED sequence, and deep convolutional neural networks have the potential to reconstruct maps from complex MRI sequences efficiently. © 2018 International Society for Magnetic Resonance in Medicine.

  18. Is a genome a codeword of an error-correcting code?

    Directory of Open Access Journals (Sweden)

    Luzinete C B Faria

    Full Text Available Since a genome is a discrete sequence, the elements of which belong to a set of four letters, the question as to whether or not there is an error-correcting code underlying DNA sequences is unavoidable. The most common approach to answering this question is to propose a methodology to verify the existence of such a code. However, none of the methodologies proposed so far, although quite clever, has achieved that goal. In a recent work, we showed that DNA sequences can be identified as codewords in a class of cyclic error-correcting codes known as Hamming codes. In this paper, we show that a complete intron-exon gene, and even a plasmid genome, can be identified as a Hamming code codeword as well. Although this does not constitute a definitive proof that there is an error-correcting code underlying DNA sequences, it is the first evidence in this direction.
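
    The membership test itself is a simple syndrome computation. The sketch below checks a binary word against a Hamming(7,4) parity-check matrix; how nucleotides are mapped to bits is part of the paper's methodology and is not reproduced here.

      import numpy as np

      H = np.array([[1, 0, 1, 0, 1, 0, 1],     # parity-check matrix of Hamming(7,4):
                    [0, 1, 1, 0, 0, 1, 1],     # column j is the binary expansion of j+1
                    [0, 0, 0, 1, 1, 1, 1]])

      def syndrome(word):
          return tuple(int(s) for s in H @ np.asarray(word) % 2)

      codeword = [1, 0, 1, 1, 0, 1, 0]          # a valid Hamming(7,4) codeword
      corrupted = codeword.copy()
      corrupted[4] ^= 1                         # flip one bit

      print(syndrome(codeword))                 # (0, 0, 0): the word is in the code
      print(syndrome(corrupted))                # (1, 0, 1): syndrome 5 points at position 5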

  19. Estimating pole/zero errors in GSN-IRIS/USGS network calibration metadata

    Science.gov (United States)

    Ringler, A.T.; Hutt, C.R.; Aster, R.; Bolton, H.; Gee, L.S.; Storm, T.

    2012-01-01

    Mapping the digital record of a seismograph into true ground motion requires the correction of the data by some description of the instrument's response. For the Global Seismographic Network (Butler et al., 2004), as well as many other networks, this instrument response is represented as a Laplace domain pole–zero model and published in the Standard for the Exchange of Earthquake Data (SEED) format. This Laplace representation assumes that the seismometer behaves as a linear system, with any abrupt changes described adequately via multiple time-invariant epochs. The SEED format allows for published instrument response errors as well, but these typically have not been estimated or provided to users. We present an iterative three-step method to estimate the instrument response parameters (poles and zeros) and their associated errors using random calibration signals. First, we solve a coarse nonlinear inverse problem using a least-squares grid search to yield a first approximation to the solution. This approach reduces the likelihood of poorly estimated parameters (a local-minimum solution) caused by noise in the calibration records and enhances algorithm convergence. Second, we iteratively solve a nonlinear parameter estimation problem to obtain the least-squares best-fit Laplace pole–zero–gain model. Third, by applying the central limit theorem, we estimate the errors in this pole–zero model by solving the inverse problem at each frequency in a two-thirds octave band centered at each best-fit pole–zero frequency. This procedure yields error estimates of the 99% confidence interval. We demonstrate the method by applying it to a number of recent Incorporated Research Institutions in Seismology/United States Geological Survey (IRIS/USGS) network calibrations (network code IU).

  20. Multiobjective optimization framework for landmark measurement error correction in three-dimensional cephalometric tomography.

    Science.gov (United States)

    DeCesare, A; Secanell, M; Lagravère, M O; Carey, J

    2013-01-01

    The purpose of this study is to minimize errors that occur when using a four vs six landmark superimpositioning method in the cranial base to define the co-ordinate system. Cone beam CT volumetric data from ten patients were used for this study. Co-ordinate system transformations were performed. A co-ordinate system was constructed using two planes defined by four anatomical landmarks located by an orthodontist. A second co-ordinate system was constructed using four anatomical landmarks that are corrected using a numerical optimization algorithm for any landmark location operator error using information from six landmarks. The optimization algorithm minimizes the relative distance and angle between the known fixed points in the two images to find the correction. Measurement errors and co-ordinates in all axes were obtained for each co-ordinate system. Significant improvement is observed after using the landmark correction algorithm to position the final co-ordinate system. The errors found in a previous study are significantly reduced. Errors found were between 1 mm and 2 mm. When analysing real patient data, it was found that the 6-point correction algorithm reduced errors between images and increased intrapoint reliability. A novel method of optimizing the overlay of three-dimensional images using a 6-point correction algorithm was introduced and examined. This method demonstrated greater reliability and reproducibility than the previous 4-point correction algorithm.

  1. Classification of teeth in cone-beam CT using deep convolutional neural network.

    Science.gov (United States)

    Miki, Yuma; Muramatsu, Chisako; Hayashi, Tatsuro; Zhou, Xiangrong; Hara, Takeshi; Katsumata, Akitoshi; Fujita, Hiroshi

    2017-01-01

    Dental records play an important role in forensic identification. To this end, postmortem dental findings and teeth conditions are recorded in a dental chart and compared with those of antemortem records. However, most dentists are inexperienced at recording the dental chart for corpses, and it is a physically and mentally laborious task, especially in large-scale disasters. Our goal is to automate the dental filing process by using dental x-ray images. In this study, we investigated the application of a deep convolutional neural network (DCNN) for classifying tooth types on dental cone-beam computed tomography (CT) images. Regions of interest (ROIs) including single teeth were extracted from CT slices. Fifty-two CT volumes were randomly divided into 42 training and 10 test cases, and the ROIs obtained from the training cases were used for training the DCNN. To examine the sampling effect, random sampling was performed 3 times, and training and testing were repeated. We used the AlexNet network architecture provided in the Caffe framework, which consists of 5 convolution layers, 3 pooling layers, and 2 fully connected layers. To reduce the overtraining effect, we augmented the data by image rotation and intensity transformation. The test ROIs were classified into 7 tooth types by the trained network. The average classification accuracy using the training data augmented by image rotation and intensity transformation was 88.8%. Compared with the result without data augmentation, data augmentation resulted in an approximately 5% improvement in classification accuracy. This indicates that further improvement can be expected by expanding the CT dataset. Unlike conventional methods, the proposed method is advantageous in obtaining high classification accuracy without the need for precise tooth segmentation. The proposed tooth classification method can be useful in automatic filing of dental charts for forensic identification. Copyright © 2016 Elsevier Ltd
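
    The two augmentations mentioned above, rotation and intensity transformation, applied to a tooth region of interest in minimal form; the angle range and the linear intensity mapping are placeholders rather than the study's exact settings.

      import numpy as np
      from scipy.ndimage import rotate

      def augment(roi, rng):
          angle = rng.uniform(-15, 15)                          # small random rotation
          rotated = rotate(roi, angle, reshape=False, mode="nearest")
          gain, offset = rng.uniform(0.9, 1.1), rng.uniform(-20, 20)
          return np.clip(gain * rotated + offset, 0, 255)       # linear intensity transform

      rng = np.random.default_rng(0)
      roi = rng.integers(0, 256, (64, 64)).astype(np.float32)   # toy CT region of interest
      augmented = [augment(roi, rng) for _ in range(5)]
      print(len(augmented), augmented[0].shape)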

  2. The Use of Three-Dimensional Convolutional Neural Networks to Interpret LiDAR for Forest Inventory

    Directory of Open Access Journals (Sweden)

    Elias Ayrey

    2018-04-01

    Full Text Available As light detection and ranging (LiDAR) technology becomes more available, it has become common to use these datasets to generate remotely sensed forest inventories across landscapes. Traditional methods for generating these inventories employ height and proportion metrics to summarize LiDAR returns and relate these back to field data using predictive models. Here, we employ a three-dimensional convolutional neural network (CNN), a deep learning technique that scans the LiDAR data and automatically generates useful features for predicting forest attributes. We test the accuracy in estimating forest attributes using three-dimensional implementations of different CNN models commonly used in the field of image recognition. Using the best-performing model architecture, we compared CNN performance to models developed using traditional height metrics. The results of this comparison show that CNNs produced 12% less prediction error when estimating biomass, 6% less when estimating tree count, and 2% less when estimating the percentage of needleleaf trees. We conclude that using CNNs can be a more accurate means of interpreting LiDAR data for forest inventories compared to standard approaches.

  3. Quantum Information Processing and Quantum Error Correction An Engineering Approach

    CERN Document Server

    Djordjevic, Ivan

    2012-01-01

    Quantum Information Processing and Quantum Error Correction is a self-contained, tutorial-based introduction to quantum information, quantum computation, and quantum error-correction. Assuming no knowledge of quantum mechanics and written at an intuitive level suitable for the engineer, the book gives all the essential principles needed to design and implement quantum electronic and photonic circuits. Numerous examples from a wide area of application are given to show how the principles can be implemented in practice. This book is ideal for the electronics, photonics and computer engineer

  4. Convolutional neural network regression for short-axis left ventricle segmentation in cardiac cine MR sequences.

    Science.gov (United States)

    Tan, Li Kuo; Liew, Yih Miin; Lim, Einly; McLaughlin, Robert A

    2017-07-01

    Automated left ventricular (LV) segmentation is crucial for efficient quantification of cardiac function and morphology to aid subsequent management of cardiac pathologies. In this paper, we parameterize the complete (all short axis slices and phases) LV segmentation task in terms of the radial distances between the LV centerpoint and the endo- and epicardial contours in polar space. We then utilize convolutional neural network regression to infer these parameters. Utilizing parameter regression, as opposed to conventional pixel classification, allows the network to inherently reflect domain-specific physical constraints. We have benchmarked our approach primarily against the publicly-available left ventricle segmentation challenge (LVSC) dataset, which consists of 100 training and 100 validation cardiac MRI cases representing a heterogeneous mix of cardiac pathologies and imaging parameters across multiple centers. Our approach attained a .77 Jaccard index, which is the highest published overall result in comparison to other automated algorithms. To test general applicability, we also evaluated against the Kaggle Second Annual Data Science Bowl, where the evaluation metric was the indirect clinical measures of LV volume rather than direct myocardial contours. Our approach attained a Continuous Ranked Probability Score (CRPS) of .0124, which would have ranked tenth in the original challenge. With this we demonstrate the effectiveness of convolutional neural network regression paired with domain-specific features in clinical segmentation. Copyright © 2017 Elsevier B.V. All rights reserved.
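
    Once the radial distances have been regressed, converting them back into a contour is a direct polar-to-Cartesian mapping. The centerpoint and radii below are toy values standing in for network outputs.

      import numpy as np

      center = np.array([112.0, 96.0])                  # (x, y) of the predicted LV centerpoint
      angles = np.linspace(0, 2 * np.pi, 60, endpoint=False)
      radii = 30 + 3 * np.sin(3 * angles)               # stand-in for regressed endocardial radii

      contour = center + np.column_stack([radii * np.cos(angles),
                                          radii * np.sin(angles)])
      print(contour.shape)                              # (60, 2) contour points in image space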

  5. Residual Shuffling Convolutional Neural Networks for Deep Semantic Image Segmentation Using Multi-Modal Data

    Science.gov (United States)

    Chen, K.; Weinmann, M.; Gao, X.; Yan, M.; Hinz, S.; Jutzi, B.; Weinmann, M.

    2018-05-01

    In this paper, we address the deep semantic segmentation of aerial imagery based on multi-modal data. Given multi-modal data composed of true orthophotos and the corresponding Digital Surface Models (DSMs), we extract a variety of hand-crafted radiometric and geometric features which are provided separately and in different combinations as input to a modern deep learning framework. The latter is represented by a Residual Shuffling Convolutional Neural Network (RSCNN) combining the characteristics of a Residual Network with the advantages of atrous convolution and a shuffling operator to achieve a dense semantic labeling. Via performance evaluation on a benchmark dataset, we analyze the value of different feature sets for the semantic segmentation task. The derived results reveal that the use of radiometric features yields better classification results than the use of geometric features for the considered dataset. Furthermore, the consideration of data on both modalities leads to an improvement of the classification results. However, the derived results also indicate that the use of all defined features is less favorable than the use of selected features. Consequently, data representations derived via feature extraction and feature selection techniques still provide a gain if used as the basis for deep semantic segmentation.

  6. Spot detection in microscopy images using Convolutional Neural Network with sliding-window approach

    CSIR Research Space (South Africa)

    Mabaso, Matsilele A

    2018-01-01

  7. A deep convolutional neural network for recognizing foods

    Science.gov (United States)

    Jahani Heravi, Elnaz; Habibi Aghdam, Hamed; Puig, Domenec

    2015-12-01

    Controlling food intake is an efficient way for each person to tackle the obesity problem in countries worldwide. This is achievable by developing a smartphone application that is able to recognize foods and compute their calories. State-of-the-art methods are chiefly based on hand-crafted feature extraction methods such as HOG and Gabor. Recent advances in large-scale object recognition datasets such as ImageNet have revealed that deep Convolutional Neural Networks (CNNs) possess more representation power than hand-crafted features. The main challenge with CNNs is to find the appropriate architecture for each problem. In this paper, we propose a deep CNN which consists of 769,988 parameters. Our experiments show that the proposed CNN outperforms the state-of-the-art methods and improves the best result of traditional methods by 17%. Moreover, using an ensemble of two CNNs that have been trained two different times, we are able to improve the classification performance by 21.5%.

  8. Finger vein recognition based on convolutional neural network

    Directory of Open Access Journals (Sweden)

    Meng Gesi

    2017-01-01

    Full Text Available Biometric authentication technology has been widely used in this information age. As one of the most important authentication technologies, finger vein recognition attracts attention because of its high security, reliable accuracy and excellent performance. However, current finger vein recognition systems are difficult to apply widely because of their complicated image pre-processing and unrepresentative feature vectors. To solve this problem, a finger vein recognition method based on a convolutional neural network (CNN) is proposed in this paper. The image samples are directly input into the CNN model to extract their feature vectors, so that authentication can be performed by comparing the Euclidean distance between these vectors. Finally, the deep learning framework Caffe is adopted to verify this method. The results show great improvements in both speed and accuracy compared to previous research, and the model has good robustness to illumination and rotation.

  9. Deep learning with convolutional neural network in radiology.

    Science.gov (United States)

    Yasaka, Koichiro; Akai, Hiroyuki; Kunimatsu, Akira; Kiryu, Shigeru; Abe, Osamu

    2018-04-01

    Deep learning with a convolutional neural network (CNN) is gaining attention recently for its high performance in image recognition. Images themselves can be utilized in a learning process with this technique, and feature extraction in advance of the learning process is not required. Important features can be automatically learned. Thanks to the development of hardware and software in addition to techniques regarding deep learning, application of this technique to radiological images for predicting clinically useful information, such as the detection and the evaluation of lesions, etc., are beginning to be investigated. This article illustrates basic technical knowledge regarding deep learning with CNNs along the actual course (collecting data, implementing CNNs, and training and testing phases). Pitfalls regarding this technique and how to manage them are also illustrated. We also described some advanced topics of deep learning, results of recent clinical studies, and the future directions of clinical application of deep learning techniques.

  10. Fully automatic detection and segmentation of abdominal aortic thrombus in post-operative CTA images using Deep Convolutional Neural Networks.

    Science.gov (United States)

    López-Linares, Karen; Aranjuelo, Nerea; Kabongo, Luis; Maclair, Gregory; Lete, Nerea; Ceresa, Mario; García-Familiar, Ainhoa; Macía, Iván; González Ballester, Miguel A

    2018-05-01

    Computerized Tomography Angiography (CTA) based follow-up of Abdominal Aortic Aneurysms (AAA) treated with Endovascular Aneurysm Repair (EVAR) is essential to evaluate the progress of the patient and detect complications. In this context, accurate quantification of post-operative thrombus volume is required. However, a proper evaluation is hindered by the lack of automatic, robust and reproducible thrombus segmentation algorithms. We propose a new fully automatic approach based on Deep Convolutional Neural Networks (DCNN) for robust and reproducible thrombus region of interest detection and subsequent fine thrombus segmentation. The DetecNet detection network is adapted to perform region of interest extraction from a complete CTA, and a new segmentation network architecture, based on Fully Convolutional Networks and a Holistically-Nested Edge Detection Network, is presented. These networks are trained, validated and tested on 13 post-operative CTA volumes of different patients using a 4-fold cross-validation approach to provide more robustness to the results. Our pipeline achieves a Dice score of more than 82% for post-operative thrombus segmentation and provides a mean relative volume difference between ground truth and automatic segmentation that lies within the experienced human observer variance, without the need for human intervention in most common cases. Copyright © 2018 Elsevier B.V. All rights reserved.
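
    The Dice score used as the evaluation metric above is simple to compute; a minimal sketch on toy 2D masks (real use would be on 3D CTA segmentation volumes):

```python
import numpy as np

def dice_score(pred, truth):
    """Dice coefficient between a binary predicted mask and the ground
    truth: 2*|P & T| / (|P| + |T|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

# Toy masks: 2 of 3 predicted voxels overlap the 3 true voxels.
p = np.array([[1, 1, 0], [0, 1, 0]])
t = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice_score(p, t), 3))  # -> 0.667
```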

  11. Convolutional Neural Network Based on Extreme Learning Machine for Maritime Ships Recognition in Infrared Images.

    Science.gov (United States)

    Khellal, Atmane; Ma, Hongbin; Fei, Qing

    2018-05-09

    The success of deep learning models, notably convolutional neural networks (CNNs), makes them the favored solution for object recognition systems in both the visible and infrared domains. However, the lack of training data in the case of maritime ship research leads to poor performance due to overfitting. In addition, the back-propagation algorithm used to train CNNs is very slow and requires tuning many hyperparameters. To overcome these weaknesses, we introduce a new approach fully based on the Extreme Learning Machine (ELM) to learn useful CNN features and perform fast and accurate classification, which is suitable for infrared-based recognition systems. The proposed approach combines an ELM-based learning algorithm to train the CNN for discriminative feature extraction and an ELM-based ensemble for classification. The experimental results on the VAIS dataset, the largest dataset of maritime ships, confirm that the proposed approach outperforms state-of-the-art models in terms of generalization performance and training speed. For instance, the proposed model is up to 950 times faster than traditional back-propagation based training of convolutional neural networks, particularly for low-level feature extraction.
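
    The key ELM ingredient referenced here is a fixed random hidden layer whose output weights are solved in closed form, avoiding back-propagation. A minimal sketch of that ingredient on random stand-in data (not the authors' ELM-based CNN training scheme):

```python
import numpy as np

class ELMClassifier:
    """Minimal Extreme Learning Machine: a random, fixed hidden layer
    followed by output weights obtained by a closed-form least-squares
    solve, so no back-propagation is required."""

    def __init__(self, n_hidden=256, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        n_classes = int(y.max()) + 1
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)     # random feature map
        T = np.eye(n_classes)[y]             # one-hot targets
        self.beta = np.linalg.pinv(H) @ T    # closed-form output weights
        return self

    def predict(self, X):
        H = np.tanh(X @ self.W + self.b)
        return (H @ self.beta).argmax(axis=1)

# Toy usage on random vectors standing in for CNN features of IR ship images.
rng = np.random.default_rng(1)
X, y = rng.normal(size=(100, 32)), rng.integers(0, 3, size=100)
print(ELMClassifier().fit(X, y).predict(X[:5]))
```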

  12. Ciliates learn to diagnose and correct classical error syndromes in mating strategies.

    Science.gov (United States)

    Clark, Kevin B

    2013-01-01

    Preconjugal ciliates learn classical repetition error-correction codes to safeguard mating messages and replies from corruption by "rivals" and local ambient noise. Because individual cells behave as memory channels with Szilárd engine attributes, these coding schemes also might be used to limit, diagnose, and correct mating-signal errors due to noisy intracellular information processing. The present study, therefore, assessed whether heterotrich ciliates effect fault-tolerant signal planning and execution by modifying engine performance, and consequently entropy content of codes, during mock cell-cell communication. Socially meaningful serial vibrations emitted from an ambiguous artificial source initiated ciliate behavioral signaling performances known to advertise mating fitness with varying courtship strategies. Microbes, employing calcium-dependent Hebbian-like decision making, learned to diagnose then correct error syndromes by recursively matching Boltzmann entropies between signal planning and execution stages via "power" or "refrigeration" cycles. All eight serial contraction and reversal strategies incurred errors in entropy magnitude by the execution stage of processing. Absolute errors, however, subtended expected threshold values for single bit-flip errors in three-bit replies, indicating coding schemes protected information content throughout signal production. Ciliate preparedness for vibrations selectively and significantly affected the magnitude and valence of Szilárd engine performance during modal and non-modal strategy corrective cycles. But entropy fidelity for all replies mainly improved across learning trials as refinements in engine efficiency. Fidelity neared maximum levels for only modal signals coded in resilient three-bit repetition error-correction sequences. Together, these findings demonstrate microbes can elevate survival/reproductive success by learning to implement classical fault-tolerant information processing in social

  13. Ciliates learn to diagnose and correct classical error syndromes in mating strategies

    Directory of Open Access Journals (Sweden)

    Kevin Bradley Clark

    2013-08-01

    Full Text Available Preconjugal ciliates learn classical repetition error-correction codes to safeguard mating messages and replies from corruption by rivals and local ambient noise. Because individual cells behave as memory channels with Szilárd engine attributes, these coding schemes also might be used to limit, diagnose, and correct mating-signal errors due to noisy intracellular information processing. The present study, therefore, assessed whether heterotrich ciliates effect fault-tolerant signal planning and execution by modifying engine performance, and consequently entropy content of codes, during mock cell-cell communication. Socially meaningful serial vibrations emitted from an ambiguous artificial source initiated ciliate behavioral signaling performances known to advertise mating fitness with varying courtship strategies. Microbes, employing calcium-dependent Hebbian-like decision making, learned to diagnose then correct error syndromes by recursively matching Boltzmann entropies between signal planning and execution stages via power or refrigeration cycles. All eight serial contraction and reversal strategies incurred errors in entropy magnitude by the execution stage of processing. Absolute errors, however, subtended expected threshold values for single bit-flip errors in three-bit replies, indicating coding schemes protected information content throughout signal production. Ciliate preparedness for vibrations selectively and significantly affected the magnitude and valence of Szilárd engine performance during modal and nonmodal strategy corrective cycles. But entropy fidelity for all replies mainly improved across learning trials as refinements in engine efficiency. Fidelity neared maximum levels for only modal signals coded in resilient three-bit repetition error-correction sequences. Together, these findings demonstrate microbes can elevate survival/reproductive success by learning to implement classical fault-tolerant information processing in
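
    The three-bit repetition code referred to in these two records is the simplest classical error-correcting code: each bit is transmitted three times and decoded by majority vote, which corrects any single bit flip. A minimal sketch:

```python
def encode_repetition(bit, n=3):
    """Repeat a single bit n times (the three-bit repetition code when n=3)."""
    return [bit] * n

def decode_repetition(codeword):
    """Majority vote: corrects any single bit flip in a 3-bit codeword."""
    return int(sum(codeword) > len(codeword) // 2)

sent = encode_repetition(1)          # [1, 1, 1]
received = [1, 0, 1]                 # one flipped bit
print(decode_repetition(received))   # -> 1, the error is corrected
```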

  14. A Case for Soft Error Detection and Correction in Computational Chemistry.

    Science.gov (United States)

    van Dam, Hubertus J J; Vishnu, Abhinav; de Jong, Wibe A

    2013-09-10

    High performance computing platforms are expected to deliver 10^18 floating-point operations per second by the year 2022 through the deployment of millions of cores. Even if every core is highly reliable, the sheer number of them means that the mean time between failures will become so short that most application runs will suffer at least one fault. In particular, soft errors caused by intermittent incorrect behavior of the hardware are a concern, as they lead to silent data corruption. In this paper we investigate the impact of soft errors on optimization algorithms using Hartree-Fock as a particular example. Optimization algorithms iteratively reduce the error in the initial guess to reach the intended solution. Therefore they may intuitively appear to be resilient to soft errors. Our results show that this is true for soft errors of small magnitude but not for large errors. We suggest error detection and correction mechanisms for different classes of data structures. The results obtained with these mechanisms indicate that we can correct more than 95% of the soft errors at a moderate increase in computational cost.
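
    The abstract does not spell out the protection mechanisms; as one hedged illustration of checksum-style soft-error detection on a matrix-shaped data structure (an assumption for illustration, not the paper's actual scheme), consider:

```python
import numpy as np

def protect(matrix):
    """Store per-column checksums alongside a matrix so that silent bit
    flips can later be detected (and the affected column recomputed or
    rolled back). Illustrative only."""
    return matrix.copy(), matrix.sum(axis=0)

def check(matrix, checksums, tol=1e-10):
    """Return indices of columns whose contents no longer match their
    stored checksum, i.e. columns suspected of soft-error corruption."""
    return np.flatnonzero(np.abs(matrix.sum(axis=0) - checksums) > tol)

rng = np.random.default_rng(0)
M, sums = protect(rng.normal(size=(4, 4)))
M[2, 1] += 1.0          # simulate a silent data corruption
print(check(M, sums))   # -> [1]
```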

  15. Bilinear Convolutional Neural Networks for Fine-grained Visual Recognition.

    Science.gov (United States)

    Lin, Tsung-Yu; RoyChowdhury, Aruni; Maji, Subhransu

    2017-07-04

    We present a simple and effective architecture for fine-grained recognition called Bilinear Convolutional Neural Networks (B-CNNs). These networks represent an image as a pooled outer product of features derived from two CNNs and capture localized feature interactions in a translationally invariant manner. B-CNNs are related to orderless texture representations built on deep features but can be trained in an end-to-end manner. Our most accurate model obtains 84.1%, 79.4%, 84.5% and 91.3% per-image accuracy on the Caltech-UCSD birds [66], NABirds [63], FGVC aircraft [42], and Stanford cars [33] datasets respectively, and runs at 30 frames per second on an NVIDIA Titan X GPU. We then present a systematic analysis of these networks and show that (1) the bilinear features are highly redundant and can be reduced by an order of magnitude in size without significant loss in accuracy, (2) they are also effective for other image classification tasks such as texture and scene recognition, and (3) they can be trained from scratch on the ImageNet dataset, offering consistent improvements over the baseline architecture. Finally, we present visualizations of these models on various datasets using top activations of neural units and gradient-based inversion techniques. The source code for the complete system is available at http://vis-www.cs.umass.edu/bcnn.
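
    The descriptor at the heart of a B-CNN is the sum-pooled outer product of two feature maps. A minimal NumPy sketch with toy dimensions (the signed square root and L2 normalization are common post-processing steps for such descriptors, included here as assumptions rather than taken from this abstract):

```python
import numpy as np

def bilinear_pool(feat_a, feat_b):
    """Bilinear pooling: take the outer product of the two CNNs' feature
    vectors at every spatial location and sum-pool over locations.
    feat_a: (H*W, Da), feat_b: (H*W, Db) -> flattened (Da*Db,) descriptor."""
    pooled = feat_a.T @ feat_b                    # sum of per-location outer products
    vec = pooled.ravel()
    vec = np.sign(vec) * np.sqrt(np.abs(vec))     # signed square root
    return vec / (np.linalg.norm(vec) + 1e-12)    # L2 normalization

# Toy feature maps standing in for the outputs of two CNN streams.
rng = np.random.default_rng(0)
fa, fb = rng.normal(size=(49, 8)), rng.normal(size=(49, 16))
print(bilinear_pool(fa, fb).shape)  # -> (128,)
```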

  16. Super-resolution using a light inception layer in convolutional neural network

    Science.gov (United States)

    Mou, Qinyang; Guo, Jun

    2018-04-01

    Recently, several models based on CNN architectures have achieved great results on the Single Image Super-Resolution (SISR) problem. In this paper, we propose an image super-resolution (SR) method using a light inception layer in a convolutional network (LICN). Owing to the strong representation ability of our well-designed inception layer, which can learn richer representations with fewer parameters, we can build our model with a shallow architecture that reduces the effect of the vanishing gradient problem and saves computational cost. Our model strikes a balance between computational speed and the quality of the result. Compared with state-of-the-art results, we produce comparable or better results at a faster computational speed.

  17. Reconstruction of Micropattern Detector Signals using Convolutional Neural Networks

    Science.gov (United States)

    Flekova, L.; Schott, M.

    2017-10-01

    Micropattern gaseous detector (MPGD) technologies, such as GEMs or MicroMegas, are particularly suitable for precision tracking and triggering in high-rate environments. Given their relatively low production costs, MPGDs are an exemplary candidate for the next generation of particle detectors. Having acknowledged these advantages, both the ATLAS and CMS collaborations at the LHC are exploiting these new technologies for their detector upgrade programs in the coming years. When MPGDs are utilized for triggering purposes, the measured signals need to be precisely reconstructed within less than 200 ns, which can be achieved by the use of FPGAs. In this work, we present a novel approach to identify reconstructed signals, their timing and the corresponding spatial position on the detector. In particular, we study the effect of noise and dead readout strips on the reconstruction performance. Our approach leverages the potential of convolutional neural networks (CNNs), which have recently demonstrated outstanding performance in a range of modeling tasks. The proposed CNN architecture is designed to be simple enough that it can be implemented directly on an FPGA and thus provide precise information on reconstructed signals already at trigger level.

  18. An adjoint-based scheme for eigenvalue error improvement

    International Nuclear Information System (INIS)

    Merton, S.R.; Smedley-Stevenson, R.P.; Pain, C.C.; El-Sheikh, A.H.; Buchan, A.G.

    2011-01-01

    A scheme for improving the accuracy and reducing the error in eigenvalue calculations is presented. Using a first-order Taylor series expansion of both the eigenvalue solution and the residual of the governing equation, an approximation to the error in the eigenvalue is derived. This is done using a convolution of the equation residual and the adjoint solution, which is calculated in-line with the primal solution. A defect correction on the solution is then performed, in which the approximation to the error is used to apply a correction to the eigenvalue. The method is shown to dramatically improve convergence of the eigenvalue. The equation for the eigenvalue is shown to simplify when certain normalisations are applied to the eigenvector. Two such normalisations are considered; the first of these is a fission-source type of normalisation and the second is an eigenvector normalisation. Results are demonstrated on a number of demanding elliptic problems using continuous Galerkin weighted finite elements. Moreover, the correction scheme may also be applied to hyperbolic problems and arbitrary discretizations. It is not limited to spatial corrections and may be used throughout the phase space of the discrete equation. The applied correction not only improves the fidelity of the calculation, it allows the reliability of numerical schemes to be assessed and could be used to guide mesh adaptation algorithms or to automate mesh generation schemes. (author)
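
    The abstract describes the estimate only in words; a first-order adjoint-based defect correction of this kind is commonly written as follows (an illustrative form under standard assumptions for a generalized eigenproblem, not the paper's exact equation; the operators A and B, the discrete pair lambda_h, phi_h, the adjoint eigenvector psi and the residual R_h are notation introduced here):

```latex
% Residual of the discrete solution, adjoint-weighted eigenvalue error
% estimate, and the resulting defect-corrected eigenvalue.
\[
  R_h = A\phi_h - \lambda_h B\phi_h ,
  \qquad
  \delta\lambda \approx -\,
    \frac{\langle \psi^{\dagger},\, R_h \rangle}
         {\langle \psi^{\dagger},\, B\,\phi_h \rangle},
  \qquad
  \lambda \approx \lambda_h + \delta\lambda .
\]
```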

  19. DNCON2: improved protein contact prediction using two-level deep convolutional neural networks.

    Science.gov (United States)

    Adhikari, Badri; Hou, Jie; Cheng, Jianlin

    2018-05-01

    Significant improvements in the prediction of protein residue-residue contacts have been observed in recent years. These contacts, predicted using a variety of coevolution-based and machine learning methods, are the key contributors to the recent progress in ab initio protein structure prediction, as demonstrated in the recent CASP experiments. Continuing the development of new methods to reliably predict contact maps is essential to further improve ab initio structure prediction. In this paper we discuss DNCON2, an improved protein contact map predictor based on two-level deep convolutional neural networks. It consists of six convolutional neural networks: the first five predict contacts at 6, 7.5, 8, 8.5 and 10 Å distance thresholds, and the last one uses these five predictions as additional features to predict the final contact maps. On the free-modeling datasets in the CASP10, 11 and 12 experiments, DNCON2 achieves mean precisions of 35, 50 and 53.4%, respectively, higher than the 30.6% of MetaPSICOV on the CASP10 dataset, 34% of MetaPSICOV on the CASP11 dataset and 46.3% of Raptor-X on the CASP12 dataset, when the top L/5 long-range contacts are evaluated. We attribute the improved performance of DNCON2 to the inclusion of short- and medium-range contacts in training, the two-level approach to prediction, the use of state-of-the-art optimization and activation functions, and a novel deep learning architecture that allows each filter in a convolutional layer to access all the input features of a protein of arbitrary length. The web server of DNCON2 is at http://sysbio.rnet.missouri.edu/dncon2/ where training and testing datasets as well as the predictions for the CASP10, 11 and 12 free-modeling datasets can also be downloaded. Its source code is available at https://github.com/multicom-toolbox/DNCON2/. chengji@missouri.edu. Supplementary data are available at Bioinformatics online.
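
    The two-level design can be pictured as stacked prediction: five first-level contact maps are appended to the input features of a final network. A minimal sketch with placeholder models (shapes and channel counts are illustrative, not DNCON2's actual architecture):

```python
import numpy as np

def two_level_predict(features, level1_models, level2_model):
    """Two-level prediction in the spirit of the abstract: each first-level
    model outputs a preliminary L x L contact-probability map (one per
    distance threshold); these maps are stacked onto the original input
    features and fed to a single second-level model that produces the
    final contact map. All models here are placeholders."""
    prelim = [m(features) for m in level1_models]                 # five L x L maps
    augmented = np.concatenate([features] + [p[..., None] for p in prelim], axis=-1)
    return level2_model(augmented)                                 # final L x L map

# Toy usage: L = 10 residues, 4 input feature channels, stand-in models.
L, C = 10, 4
rng = np.random.default_rng(0)
feats = rng.normal(size=(L, L, C))
lvl1 = [lambda f: rng.random(size=f.shape[:2]) for _ in range(5)]
lvl2 = lambda f: f.mean(axis=-1)
print(two_level_predict(feats, lvl1, lvl2).shape)  # -> (10, 10)
```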

  20. Using convolutional neural networks to estimate time-of-flight from PET detector waveforms

    Science.gov (United States)

    Berg, Eric; Cherry, Simon R.

    2018-01-01

    Although there have been impressive strides in detector development for time-of-flight positron emission tomography, most detectors still make use of simple signal processing methods to extract the time-of-flight information from the detector signals. In most cases, the timing pick-off for each waveform is computed using leading edge discrimination or constant fraction discrimination, as these were historically easily implemented with analog pulse processing electronics. However, now with the availability of fast waveform digitizers, there is an opportunity to make use of more of the timing information contained in the coincident detector waveforms with advanced signal processing techniques. Here we describe the application of deep convolutional neural networks (CNNs), a type of machine learning, to estimate time-of-flight directly from the pair of digitized detector waveforms for a coincident event. One of the key features of this approach is the simplicity of obtaining the ground-truth-labeled data needed to train the CNN: the true time-of-flight is determined from the difference in path length between the positron emission and each of the coincident detectors, which can be easily controlled experimentally. The experimental setup used here made use of two photomultiplier tube-based scintillation detectors and a point source, stepped in 5 mm increments over a 15 cm range between the two detectors. The detector waveforms were digitized at 10 GS/s using a bench-top oscilloscope. The results shown here demonstrate that CNN-based time-of-flight estimation improves timing resolution by 20% compared to leading edge discrimination (231 ps versus 185 ps), and by 23% compared to constant fraction discrimination (242 ps versus 185 ps). By comparing several different CNN architectures, we also showed that CNN depth (number of convolutional and fully connected layers) had the largest impact on timing resolution, while the exact network parameters, such as convolutional