WorldWideScience

Sample records for convolution model vliyanie ("vliyanie": Russian for "influence")

  1. Model structure selection in convolutive mixtures

    DEFF Research Database (Denmark)

    Dyrholm, Mads; Makeig, Scott; Hansen, Lars Kai

    2006-01-01

    The CICAAR algorithm (convolutive independent component analysis with an auto-regressive inverse model) allows separation of white (i.i.d.) source signals from convolutive mixtures. We introduce a source color model as a simple extension to the CICAAR which allows for a more parsimonious representation in many practical mixtures. The new filter-CICAAR allows Bayesian model selection and can help answer questions like: 'Are we actually dealing with a convolutive mixture?'. We try to answer this question for EEG data.

  2. Model structure selection in convolutive mixtures

    DEFF Research Database (Denmark)

    Dyrholm, Mads; Makeig, S.; Hansen, Lars Kai

    2006-01-01

    The CICAAR algorithm (convolutive independent component analysis with an auto-regressive inverse model) allows separation of white (i.i.d.) source signals from convolutive mixtures. We introduce a source color model as a simple extension to the CICAAR which allows for a more parsimonious representation in many practical mixtures. The new filter-CICAAR allows Bayesian model selection and can help answer questions like: 'Are we actually dealing with a convolutive mixture?'. We try to answer this question for EEG data.

  3. A Generative Model for Deep Convolutional Learning

    OpenAIRE

    Pu, Yunchen; Yuan, Xin; Carin, Lawrence

    2015-01-01

    A generative model is developed for deep (multi-layered) convolutional dictionary learning. A novel probabilistic pooling operation is integrated into the deep model, yielding efficient bottom-up (pretraining) and top-down (refinement) probabilistic learning. Experimental results demonstrate powerful capabilities of the model to learn multi-layer features from images, and excellent classification results are obtained on the MNIST and Caltech 101 datasets.

  4. Incomplete convolutions in production and inventory models

    NARCIS (Netherlands)

    Houtum, van G.J.; Zijm, W.H.M.

    1997-01-01

    In this paper, we study incomplete convolutions of continuous distribution functions, as they appear in the analysis of (multi-stage) production and inventory systems. Three example systems are discussed where these incomplete convolutions naturally arise. We derive explicit, nonrecursive formulae for...

  5. Model Convolution: A Computational Approach to Digital Image Interpretation

    Science.gov (United States)

    Gardner, Melissa K.; Sprague, Brian L.; Pearson, Chad G.; Cosgrove, Benjamin D.; Bicek, Andrew D.; Bloom, Kerry; Salmon, E. D.

    2010-01-01

    Digital fluorescence microscopy is commonly used to track individual proteins and their dynamics in living cells. However, extracting molecule-specific information from fluorescence images is often limited by the noise and blur intrinsic to the cell and the imaging system. Here we discuss a method called “model-convolution,” which uses experimentally measured noise and blur to simulate the process of imaging fluorescent proteins whose spatial distribution cannot be resolved. We then compare model-convolution to the more standard approach of experimental deconvolution. In some circumstances, standard experimental deconvolution approaches fail to yield the correct underlying fluorophore distribution. In these situations, model-convolution removes the uncertainty associated with deconvolution and therefore allows direct statistical comparison of experimental and theoretical data. Thus, if there are structural constraints on molecular organization, the model-convolution method better utilizes information gathered via fluorescence microscopy, and naturally integrates experiment and theory. PMID:20461132
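
    As a rough illustration of the model-convolution idea (not the authors' code), the sketch below blurs a hypothesized fluorophore arrangement with an assumed Gaussian approximation of the measured point spread function and adds shot and camera noise, producing a simulated image that can be compared statistically with the experimental one; all names and parameter values are placeholders.

```python
# Minimal model-convolution sketch: blur a hypothesized fluorophore map with an
# assumed PSF, then degrade it with assumed noise levels (all values illustrative).
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

# Model: point-like fluorophores placed according to a structural hypothesis.
model = np.zeros((128, 128))
model[64, 40:90:5] = 1.0           # e.g. fluorophores spaced along a filament

psf_sigma_px = 2.5                 # Gaussian approximation of the PSF width (assumed)
background, read_noise_sd = 100.0, 8.0

def model_convolve(model, photons_per_fluor=500.0):
    """Simulate the microscope: blur the model, then add shot and camera noise."""
    blurred = gaussian_filter(model * photons_per_fluor, psf_sigma_px)
    return rng.poisson(blurred + background) + rng.normal(0, read_noise_sd, model.shape)

simulated = model_convolve(model)
# 'simulated' can now be compared pixel-wise or statistically with the
# experimental image, instead of deconvolving the experimental data.
```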

  6. CICAAR - Convolutive ICA with an Auto-Regressive Inverse Model

    DEFF Research Database (Denmark)

    Dyrholm, Mads; Hansen, Lars Kai

    2004-01-01

    We invoke an auto-regressive IIR inverse model for convolutive ICA and derive expressions for the likelihood and its gradient. We argue that optimization will give a stable inverse. When there are more sensors than sources the mixing model parameters are estimated in a second step by least squares...

  7. A model of traffic signs recognition with convolutional neural network

    Science.gov (United States)

    Hu, Haihe; Li, Yujian; Zhang, Ting; Huo, Yi; Kuang, Wenqing

    2016-10-01

    In real traffic scenes, the quality of captured images is generally low due to factors such as lighting conditions and occlusion. All of these factors are challenging for automated traffic sign recognition algorithms. Deep learning has recently provided a new way to solve this kind of problem. A deep network can automatically learn features from a large number of data samples and obtain excellent recognition performance. We therefore approach the task of traffic sign recognition as a general vision problem, with few assumptions specific to road signs. We propose a Convolutional Neural Network (CNN) model and apply it to the task of traffic sign recognition. The proposed model adopts a deep CNN as the supervised learning model, directly takes the collected traffic sign images as input, alternates convolutional and subsampling layers, and automatically extracts the features for recognition of the traffic sign images. The proposed model includes an input layer, three convolutional layers, three subsampling layers, a fully connected layer, and an output layer. To validate the proposed model, experiments are carried out on the public dataset of the China competition of fuzzy image processing. Experimental results show that the proposed model achieves a recognition accuracy of 99.01% on the training dataset and 92% in the preliminary contest, placing fourth.
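
    For illustration only, the sketch below assembles a CNN with the layer layout the abstract describes (three convolutional layers, three subsampling layers, a fully connected layer, and an output layer); the input size, channel counts, and the 43-class default are assumptions, not the authors' configuration.

```python
# Illustrative CNN with the described layer layout (assumed 48x48 RGB input).
import torch
import torch.nn as nn

class TrafficSignCNN(nn.Module):
    def __init__(self, num_classes: int = 43):        # class count is an assumption
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),                            # 48 -> 24 (subsampling 1)
            nn.Conv2d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),                            # 24 -> 12 (subsampling 2)
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                            # 12 -> 6  (subsampling 3)
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 6 * 6, 256), nn.ReLU(),     # fully connected layer
            nn.Linear(256, num_classes),                # output layer
        )

    def forward(self, x):
        return self.classifier(self.features(x))

logits = TrafficSignCNN()(torch.randn(1, 3, 48, 48))    # -> shape (1, 43)
```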

  8. A convolution model of rock bed thermal storage units

    Science.gov (United States)

    Sowell, E. F.; Curry, R. L.

    1980-01-01

    A method is presented whereby a packed-bed thermal storage unit is dynamically modeled for bi-directional flow and arbitrary input flow stream temperature variations. The method is based on the principle of calculating the output temperature as the sum of earlier input temperatures, each multiplied by a predetermined 'response factor', i.e., discrete convolution. A computer implementation of the scheme, in the form of a subroutine for a widely used solar simulation program (TRNSYS) is described and numerical results compared with other models. Also, a method for efficient computation of the required response factors is described; this solution is for a triangular input pulse, previously unreported, although the solution method is also applicable for other input functions. This solution requires a single integration of a known function which is easily carried out numerically to the required precision.
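
    A minimal sketch of the response-factor formulation described above, with made-up response factors and inlet temperatures: the outlet temperature is the discrete convolution of the inlet temperature history with precomputed response factors.

```python
# Discrete convolution with precomputed response factors (placeholder values):
# T_out[n] = sum_k response[k] * T_in[n - k]
import numpy as np

response = np.array([0.05, 0.20, 0.35, 0.25, 0.10, 0.05])   # illustrative factors
t_in = 20.0 + 30.0 * (np.arange(48) % 24 < 8)                # toy inlet temperature history

t_out = np.convolve(t_in, response)[: t_in.size]             # outlet temperature series
```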

  9. Modeling Task fMRI Data via Deep Convolutional Autoencoder.

    Science.gov (United States)

    Huang, Heng; Hu, Xintao; Zhao, Yu; Makkie, Milad; Dong, Qinglin; Zhao, Shijie; Guo, Lei; Liu, Tianming

    2017-06-15

    Task-based fMRI (tfMRI) has been widely used to study functional brain networks under task performance. Modeling tfMRI data is challenging due to at least two problems: the lack of the ground truth of underlying neural activity and the highly complex intrinsic structure of tfMRI data. To better understand brain networks based on fMRI data, data-driven approaches have been proposed, for instance, Independent Component Analysis (ICA) and Sparse Dictionary Learning (SDL). However, both ICA and SDL only build shallow models, and they rest on the strong assumption that the original fMRI signal can be linearly decomposed into time series components with their corresponding spatial maps. As growing evidence shows that human brain function is hierarchically organized, new approaches that can infer and model the hierarchical structure of brain networks are widely called for. Recently, the deep convolutional neural network (CNN) has drawn much attention, in that deep CNN has proven to be a powerful method for learning high-level and mid-level abstractions from low-level raw data. Inspired by the power of deep CNN, in this study we developed a new neural network structure based on CNN, called Deep Convolutional Auto-Encoder (DCAE), in order to take advantage of both the data-driven approach and CNN's hierarchical feature abstraction ability for the purpose of learning mid-level and high-level features from complex, large-scale tfMRI time series in an unsupervised manner. The DCAE has been applied and tested on the publicly available Human Connectome Project (HCP) tfMRI datasets, and promising results are achieved.

  10. Pre-trained Convolutional Networks and generative statistical models: a study in semi-supervised learning

    OpenAIRE

    John Michael Salgado Cebola

    2016-01-01

    Comparative study between the performance of Convolutional Networks using pretrained models and statistical generative models on tasks of image classification in semi-supervised environments. Study of multiple ensembles using these techniques and data generated from estimated pdfs. Pretrained ConvNets, LDA, pLSA, Fisher Vectors, sparse-coded SPMs, and TSVMs are the key models worked upon.

  11. Discretization of continuous convolution operators for accurate modeling of wave propagation in digital holography.

    Science.gov (United States)

    Chacko, Nikhil; Liebling, Michael; Blu, Thierry

    2013-10-01

    Discretization of continuous (analog) convolution operators by direct sampling of the convolution kernel and use of fast Fourier transforms is highly efficient. However, it assumes the input and output signals are band-limited, a condition rarely met in practice, where signals have finite support or abrupt edges and sampling is nonideal. Here, we propose to approximate signals in analog, shift-invariant function spaces, which do not need to be band-limited, resulting in discrete coefficients for which we derive discrete convolution kernels that accurately model the analog convolution operator while taking into account nonideal sampling devices (such as finite fill-factor cameras). This approach retains the efficiency of direct sampling but not its limiting assumption. We propose fast forward and inverse algorithms that handle finite-length, periodic, and mirror-symmetric signals with rational sampling rates. We provide explicit convolution kernels for computing coherent wave propagation in the context of digital holography. When compared to band-limited methods in simulations, our method leads to fewer reconstruction artifacts when signals have sharp edges or when using nonideal sampling devices.

  12. Assessing the Firing Properties of the Electrically Stimulated Auditory Nerve Using a Convolution Model

    NARCIS (Netherlands)

    Strahl, Stefan B; Ramekers, Dyan; Nagelkerke, Marjolijn M B; Schwarz, Konrad E; Spitzer, Philipp; Klis, Sjaak F L; Grolman, Wilko; Versnel, Huib

    2016-01-01

    The electrically evoked compound action potential (eCAP) is a routinely performed measure of the auditory nerve in cochlear implant users. Using a convolution model of the eCAP, additional information about the neural firing properties can be obtained, which may provide relevant information about the health of the auditory nerve.

  13. The beta-binomial convolution model for 2 × 2 tables with missing cell counts

    NARCIS (Netherlands)

    Eisinga, Rob

    2009-01-01

    This paper considers the beta-binomial convolution model for the analysis of 2×2 tables with missing cell counts. We discuss maximum likelihood (ML) parameter estimation using the expectation-maximization algorithm and study information loss relative to complete data estimators. We also examine bias of...

  14. A comparison of Gamma and Gaussian dynamic convolution models of the fMRI BOLD response.

    Science.gov (United States)

    Chen, Huafu; Yao, Dezhong; Liu, Zuxiang

    2005-01-01

    Blood oxygenation level-dependent (BOLD) contrast-based functional magnetic resonance imaging (fMRI) has been widely utilized to detect brain neural activities, and much effort is now focused on the hemodynamic processes of different brain regions activated by a stimulus. The focus of this paper is the comparison of Gamma and Gaussian dynamic convolution models of the fMRI BOLD response. The convolutions are between the perfusion function of the neural response to a stimulus and a Gaussian or Gamma function. The parameters of the two models are estimated by a nonlinear least-squares optimization algorithm for the fMRI data of eight subjects collected in a visual stimulus experiment. The results show that the Gaussian model is better than the Gamma model in fitting the data. The model parameters differ in the left and right occipital regions, which indicates that the dynamic processes seem to differ across cerebral functional regions.
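
    The sketch below illustrates the kind of comparison described above, assuming a block-design perfusion function, synthetic BOLD data, and illustrative starting parameters: each candidate kernel (Gamma or Gaussian) is convolved with the perfusion function and fitted by nonlinear least squares, and the residual sums of squares are compared.

```python
# Compare Gamma vs Gaussian convolution models by nonlinear least squares
# (synthetic data and assumed parameter values; not the authors' dataset).
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import gamma, norm

rng = np.random.default_rng(0)
t = np.arange(0, 120, 2.0)                      # seconds, TR = 2 s (assumed)
stimulus = ((t % 40) < 20).astype(float)        # block-design perfusion function

def make_model(kernel):
    def model(t, amp, p1, p2):
        h = np.nan_to_num(kernel(t, p1, p2), posinf=0.0)   # guard against pdf(0)=inf
        return amp * np.convolve(stimulus, h)[: t.size]
    return model

gamma_model = make_model(lambda t, shape, scale: gamma.pdf(t, shape, scale=scale))
gauss_model = make_model(lambda t, mu, sd: norm.pdf(t, loc=mu, scale=sd))

bold = gauss_model(t, 1.0, 6.0, 3.0) + 0.05 * rng.standard_normal(t.size)  # toy data

for name, model, p0 in [("gamma", gamma_model, (1.0, 3.0, 2.0)),
                        ("gaussian", gauss_model, (1.0, 6.0, 2.0))]:
    popt, _ = curve_fit(model, t, bold, p0=p0, maxfev=10000)
    rss = float(np.sum((bold - model(t, *popt)) ** 2))
    print(f"{name}: params={popt}, residual sum of squares={rss:.3f}")
```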

  15. Vehicle Detection Based on Visual Saliency and Deep Sparse Convolution Hierarchical Model

    Institute of Scientific and Technical Information of China (English)

    CAI Yingfeng; WANG Hai; CHEN Xiaobo; GAO Li; CHEN Long

    2016-01-01

    Traditional vehicle detection algorithms use traverse-search-based vehicle candidate generation and hand-crafted feature-based classifier training for vehicle candidate verification. These types of methods generally have high processing times and low vehicle detection performance. To address this issue, a vehicle detection algorithm based on visual saliency and a deep sparse convolution hierarchical model is proposed. A visual saliency calculation is first used to generate a small vehicle candidate area. The vehicle candidate sub-images are then loaded into a sparse deep convolution hierarchical model with an SVM-based classifier to perform the final detection. The experimental results demonstrate that the proposed method achieves a 94.81% correct rate and a 0.78% false detection rate on existing datasets and on real road pictures captured by our group, which outperforms the existing state-of-the-art algorithms. More importantly, highly discriminative multi-scale features are generated by the deep sparse convolution network, which has broad application prospects for target recognition in the field of intelligent vehicles.

  16. Vehicle detection based on visual saliency and deep sparse convolution hierarchical model

    Science.gov (United States)

    Cai, Yingfeng; Wang, Hai; Chen, Xiaobo; Gao, Li; Chen, Long

    2016-07-01

    Traditional vehicle detection algorithms use traverse-search-based vehicle candidate generation and hand-crafted feature-based classifier training for vehicle candidate verification. These types of methods generally have high processing times and low vehicle detection performance. To address this issue, a vehicle detection algorithm based on visual saliency and a deep sparse convolution hierarchical model is proposed. A visual saliency calculation is first used to generate a small vehicle candidate area. The vehicle candidate sub-images are then loaded into a sparse deep convolution hierarchical model with an SVM-based classifier to perform the final detection. The experimental results demonstrate that the proposed method achieves a 94.81% correct rate and a 0.78% false detection rate on existing datasets and on real road pictures captured by our group, which outperforms the existing state-of-the-art algorithms. More importantly, highly discriminative multi-scale features are generated by the deep sparse convolution network, which has broad application prospects for target recognition in the field of intelligent vehicles.

  17. On Network-Error Correcting Convolutional Codes under the BSC Edge Error Model

    CERN Document Server

    Prasad, K

    2010-01-01

    Convolutional network-error correcting codes (CNECCs) are known to provide error correcting capability in acyclic instantaneous networks within the network coding paradigm under small field size conditions. In this work, we investigate the performance of CNECCs under the error model of the network where the edges are assumed to be statistically independent binary symmetric channels, each with the same probability of error $p_e$ ($0 \leq p_e < 0.5$). We obtain bounds on the performance of such CNECCs based on a modified generating function (the transfer function) of the CNECCs. For a given network, we derive a mathematical condition on how small $p_e$ should be so that only single edge network-errors need to be accounted for, thus reducing the complexity of evaluating the probability of error of any CNECC. Simulations indicate that convolutional codes are required to possess different properties to achieve good performance in low $p_e$ and high $p_e$ regimes. For the low $p_e$ regime, convolutional codes with g...

  18. Maximal monotone model with delay term of convolution

    Directory of Open Access Journals (Sweden)

    Claude-Henri Lamarque

    2005-01-01

    Mechanical models are governed either by partial differential equations with boundary conditions and initial conditions (e.g., in the frame of continuum mechanics) or by ordinary differential equations (e.g., after discretization via a Galerkin procedure or directly from the model description) with the initial conditions. In order to study the dynamical behavior of mechanical systems with a finite number of degrees of freedom including nonsmooth terms (e.g., friction), we consider here problems governed by differential inclusions. To describe effects of particular constitutive laws, we add a delay term. In contrast to previous papers, we introduce delay via a Volterra kernel. We provide existence and uniqueness results by using an Euler implicit numerical scheme; then convergence with its order is established. A few numerical examples are given.

  19. KNOWLEDGE BASED 3D BUILDING MODEL RECOGNITION USING CONVOLUTIONAL NEURAL NETWORKS FROM LIDAR AND AERIAL IMAGERIES

    Directory of Open Access Journals (Sweden)

    F. Alidoost

    2016-06-01

    In recent years, with the development of high resolution data acquisition technologies, many different approaches and algorithms have been presented to extract accurate and timely updated 3D models of buildings as a key element of city structures for numerous applications in urban mapping. In this paper, a novel and model-based approach is proposed for automatic recognition of buildings' roof models such as flat, gable, hip, and pyramid hip roof models based on deep structures for hierarchical learning of features that are extracted from both LiDAR and aerial ortho-photos. The main steps of this approach include building segmentation, feature extraction and learning, and finally building roof labeling in a supervised pre-trained Convolutional Neural Network (CNN) framework to have an automatic recognition system for various types of buildings over an urban area. In this framework, the height information provides invariant geometric features for the convolutional neural network to localize the boundary of each individual roof. A CNN is a kind of feed-forward neural network based on the multilayer perceptron concept which consists of a number of convolutional and subsampling layers in an adaptable structure, and it is widely used in pattern recognition and object detection applications. Since the training dataset is a small library of labeled models for different shapes of roofs, the computation time of learning can be decreased significantly using the pre-trained models. The experimental results highlight the effectiveness of the deep learning approach to detect and extract the pattern of buildings' roofs automatically, considering the complementary nature of height and RGB information.

  20. Knowledge Based 3d Building Model Recognition Using Convolutional Neural Networks from LIDAR and Aerial Imageries

    Science.gov (United States)

    Alidoost, F.; Arefi, H.

    2016-06-01

    In recent years, with the development of high resolution data acquisition technologies, many different approaches and algorithms have been presented to extract accurate and timely updated 3D models of buildings as a key element of city structures for numerous applications in urban mapping. In this paper, a novel and model-based approach is proposed for automatic recognition of buildings' roof models such as flat, gable, hip, and pyramid hip roof models based on deep structures for hierarchical learning of features that are extracted from both LiDAR and aerial ortho-photos. The main steps of this approach include building segmentation, feature extraction and learning, and finally building roof labeling in a supervised pre-trained Convolutional Neural Network (CNN) framework to have an automatic recognition system for various types of buildings over an urban area. In this framework, the height information provides invariant geometric features for the convolutional neural network to localize the boundary of each individual roof. A CNN is a kind of feed-forward neural network based on the multilayer perceptron concept which consists of a number of convolutional and subsampling layers in an adaptable structure, and it is widely used in pattern recognition and object detection applications. Since the training dataset is a small library of labeled models for different shapes of roofs, the computation time of learning can be decreased significantly using the pre-trained models. The experimental results highlight the effectiveness of the deep learning approach to detect and extract the pattern of buildings' roofs automatically, considering the complementary nature of height and RGB information.

  1. The Gaussian streaming model and convolution Lagrangian effective field theory

    Science.gov (United States)

    Vlah, Zvonimir; Castorina, Emanuele; White, Martin

    2016-12-01

    We update the ingredients of the Gaussian streaming model (GSM) for the redshift-space clustering of biased tracers using the techniques of Lagrangian perturbation theory, effective field theory (EFT) and a generalized Lagrangian bias expansion. After relating the GSM to the cumulant expansion, we present new results for the real-space correlation function, mean pairwise velocity and pairwise velocity dispersion including counter terms from EFT and bias terms through third order in the linear density, its leading derivatives and its shear up to second order. We discuss the connection to the Gaussian peaks formalism. We compare the ingredients of the GSM to a suite of large N-body simulations, and show the performance of the theory on the low order multipoles of the redshift-space correlation function and power spectrum. We highlight the importance of a general biasing scheme, which we find to be as important as higher-order corrections due to non-linear evolution for the halos we consider on the scales of interest to us.

  2. Cross-model convolutional neural network for multiple modality data representation

    OpenAIRE

    Wu, Yanbin; Wang, Li; Cui, Fan; Zhai, Hongbin; Dong, Baoming; Wang, Jim Jing-Yan

    2016-01-01

    A novel data representation method based on the convolutional neural network (CNN) is proposed in this paper to represent data of different modalities. We learn a CNN model for the data of each modality to map the data of different modalities to a common space, and regularize the new representations in the common space by a cross-model relevance matrix. We further impose that the class label of data points can also be predicted from the CNN representations in the common space. The learning proble...

  3. Implementation of FFT convolution and multigrid superposition models in the FOCUS RTP system

    Science.gov (United States)

    Miften, Moyed; Wiesmeyer, Mark; Monthofer, Suzanne; Krippner, Ken

    2000-04-01

    In radiotherapy treatment planning, convolution/superposition algorithms currently represent the best practical approach for accurate photon dose calculation in heterogeneous tissues. In this work, the implementation, accuracy and performance of the FFT convolution (FFTC) and multigrid superposition (MGS) algorithms are presented. The FFTC and MGS models use the same `TERMA' calculation and are commissioned using the same parameters. Both models use the same spectra, incorporate the same off-axis softening and base incident lateral fluence on the same measurements. In addition, corrections are explicitly applied to the polyenergetic and parallel kernel approximations, and electron contamination is modelled. Spectra generated by Monte Carlo (MC) modelling of treatment heads are used. Calculations using the MC spectra were in excellent agreement with measurements for many linear accelerator types. To speed up the calculations, a number of calculation techniques were implemented, including separate primary and scatter dose calculation, the FFT technique which assumes kernel invariance for the convolution calculation and a multigrid (MG) acceleration technique for the superposition calculation. Timing results show that the FFTC model is faster than MGS by a factor of 4 and 8 for small and large field sizes, respectively. Comparisons with measured data and BEAM MC results for a wide range of clinical beam setups show that (a) FFTC and MGS doses match measurements to better than 2% or 2 mm in homogeneous media; (b) MGS is more accurate than FFTC in lung phantoms where MGS doses are within 3% or 3 mm of BEAM results and (c) FFTC overestimates the dose in lung by a maximum of 9% compared to BEAM.
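
    A bare-bones sketch of the FFT-convolution step (not the FOCUS implementation): dose is approximated by convolving a TERMA distribution with a spatially invariant point kernel via FFT; the array sizes and kernel shape are placeholders.

```python
# FFT convolution of a toy TERMA volume with a crude, spatially invariant kernel.
import numpy as np
from scipy.signal import fftconvolve

terma = np.zeros((64, 64, 64))
terma[20:44, 20:44, :40] = 1.0              # toy field irradiating a phantom

z, y, x = np.mgrid[-8:9, -8:9, -8:9]
kernel = np.exp(-np.sqrt(x**2 + y**2 + z**2) / 2.0)   # illustrative point kernel
kernel /= kernel.sum()

# FFT convolution assumes the kernel does not change with position
# (no heterogeneity scaling, unlike the superposition model).
dose = fftconvolve(terma, kernel, mode="same")
```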

  4. Convolution Model of a Queueing System with the cFIFO Service Discipline

    Directory of Open Access Journals (Sweden)

    Sławomir Hanczewski

    2016-01-01

    Full Text Available This article presents an approximate convolution model of a multiservice queueing system with the continuous FIFO (cFIFO service discipline. The model makes it possible to service calls sequentially with variable bit rate, determined by unoccupied (free resources of the multiservice server. As compared to the FIFO discipline, the cFIFO queue utilizes the resources of a multiservice server more effectively. The assumption in the model is that the queueing system is offered a mixture of independent multiservice Bernoulli-Poisson-Pascal (BPP call streams. The article also discusses the results of modelling a number of queueing systems to which different, non-Poissonian, call streams are offered. To verify the accuracy of the model, the results of the analytical calculations are compared with the results of simulation experiments for a number of selected queueing systems. The study has confirmed the accuracy of all adopted theoretical assumptions for the proposed analytical model.

  5. Recurrent Convolutional Neural Networks: A Better Model of Biological Object Recognition.

    Science.gov (United States)

    Spoerer, Courtney J; McClure, Patrick; Kriegeskorte, Nikolaus

    2017-01-01

    Feedforward neural networks provide the dominant model of how the brain performs visual object recognition. However, these networks lack the lateral and feedback connections, and the resulting recurrent neuronal dynamics, of the ventral visual pathway in the human and non-human primate brain. Here we investigate recurrent convolutional neural networks with bottom-up (B), lateral (L), and top-down (T) connections. Combining these types of connections yields four architectures (B, BT, BL, and BLT), which we systematically test and compare. We hypothesized that recurrent dynamics might improve recognition performance in the challenging scenario of partial occlusion. We introduce two novel occluded object recognition tasks to test the efficacy of the models, digit clutter (where multiple target digits occlude one another) and digit debris (where target digits are occluded by digit fragments). We find that recurrent neural networks outperform feedforward control models (approximately matched in parametric complexity) at recognizing objects, both in the absence of occlusion and in all occlusion conditions. Recurrent networks were also found to be more robust to the inclusion of additive Gaussian noise. Recurrent neural networks are better in two respects: (1) they are more neurobiologically realistic than their feedforward counterparts; (2) they are better in terms of their ability to recognize objects, especially under challenging conditions. This work shows that computer vision can benefit from using recurrent convolutional architectures and suggests that the ubiquitous recurrent connections in biological brains are essential for task performance.

  6. Compressing Convolutional Neural Networks

    OpenAIRE

    Chen, Wenlin; Wilson, James T.; Tyree, Stephen; Weinberger, Kilian Q.; Chen, Yixin

    2015-01-01

    Convolutional neural networks (CNN) are increasingly used in many areas of computer vision. They are particularly attractive because of their ability to "absorb" great quantities of labeled data through millions of parameters. However, as model sizes increase, so do the storage and memory requirements of the classifiers. We present a novel network architecture, Frequency-Sensitive Hashed Nets (FreshNets), which exploits inherent redundancy in both convolutional layers and fully-connected laye...

  7. Assessing the Firing Properties of the Electrically Stimulated Auditory Nerve Using a Convolution Model.

    Science.gov (United States)

    Strahl, Stefan B; Ramekers, Dyan; Nagelkerke, Marjolijn M B; Schwarz, Konrad E; Spitzer, Philipp; Klis, Sjaak F L; Grolman, Wilko; Versnel, Huib

    2016-01-01

    The electrically evoked compound action potential (eCAP) is a routinely performed measure of the auditory nerve in cochlear implant users. Using a convolution model of the eCAP, additional information about the neural firing properties can be obtained, which may provide relevant information about the health of the auditory nerve. In this study, guinea pigs with various degrees of nerve degeneration were used to directly relate firing properties to nerve histology. The same convolution model was applied on human eCAPs to examine similarities and ultimately to examine its clinical applicability. For most eCAPs, the estimated nerve firing probability was bimodal and could be parameterised by two Gaussian distributions with an average latency difference of 0.4 ms. The ratio of the scaling factors of the late and early component increased with neural degeneration in the guinea pig. This ratio decreased with stimulation intensity in humans. The latency of the early component decreased with neural degeneration in the guinea pig. Indirectly, this was observed in humans as well, assuming that the cochlear base exhibits more neural degeneration than the apex. Differences between guinea pigs and humans were observed, among other parameters, in the width of the early component: very robust in guinea pig, and dependent on stimulation intensity and cochlear region in humans. We conclude that the deconvolution of the eCAP is a valuable addition to existing analyses, in particular as it reveals two separate firing components in the auditory nerve.
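
    As a hedged illustration of the parameterisation described above, the sketch below fits a synthetic firing-probability curve with two Gaussian components and reads off the latency difference and the late/early scaling ratio; the data and starting values are placeholders.

```python
# Fit a bimodal firing-probability estimate with two Gaussians (synthetic data).
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
t = np.arange(0.0, 2.0, 0.01)                     # ms after stimulus (assumed grid)

def two_gauss(t, a1, mu1, sd1, a2, mu2, sd2):
    g = lambda a, mu, sd: a * np.exp(-0.5 * ((t - mu) / sd) ** 2)
    return g(a1, mu1, sd1) + g(a2, mu2, sd2)

firing = two_gauss(t, 1.0, 0.5, 0.10, 0.4, 0.9, 0.15) + 0.01 * rng.standard_normal(t.size)

popt, _ = curve_fit(two_gauss, t, firing, p0=(1, 0.5, 0.1, 0.5, 0.9, 0.1))
a1, mu1, _, a2, mu2, _ = popt
print("latency difference:", mu2 - mu1, "ms;  late/early scaling ratio:", a2 / a1)
```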

  8. Revision of the theory of tracer transport and the convolution model of dynamic contrast enhanced magnetic resonance imaging.

    Science.gov (United States)

    Keeling, Stephen L; Bammer, Roland; Stollberger, Rudolf

    2007-09-01

    Counterexamples are used to motivate the revision of the established theory of tracer transport. Then dynamic contrast enhanced magnetic resonance imaging in particular is conceptualized in terms of a fully distributed convection-diffusion model from which a widely used convolution model is derived using, alternatively, compartmental discretizations or semigroup theory. On this basis, applications and limitations of the convolution model are identified. For instance, it is proved that perfusion and tissue exchange states cannot be identified on the basis of a single convolution equation alone. Yet under certain assumptions, particularly that flux is purely convective at the boundary of a tissue region, physiological parameters such as mean transit time, effective volume fraction, and volumetric flow rate per unit tissue volume can be deduced from the kernel.
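
    The sketch below shows, in numerical form and with assumed parameter values, the convolution that the cited work evaluates in closed form: the tissue concentration as the intravascular concentration convolved with a mono-exponential interstitial impulse response.

```python
# Numerical form of the dispersion-model convolution (illustrative parameters only).
import numpy as np

t = np.arange(0, 300, 1.0)                       # time, s
dt = t[1] - t[0]

c_iv = (t / 30.0) * np.exp(-t / 30.0)            # toy intravascular (dispersed bolus) curve

k_ep = 0.01                                      # 1/s, assumed leakage rate constant
h_interstitium = np.exp(-k_ep * t)               # mono-exponential impulse response

c_tissue = dt * np.convolve(c_iv, h_interstitium)[: t.size]
# The cited work replaces this numerical convolution with an analytical expression,
# which removes discretization error and reduces computation time.
```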

  9. The Luminous Convolution Model as an alternative to dark matter in spiral galaxies

    CERN Document Server

    Cisneros, S; Formaggio, J A; Ott, R A; Chester, D; Battaglia, D J; Ashley, A; Robinson, R; Rodriguez, A

    2014-01-01

    The Luminous Convolution Model (LCM) demonstrates that it is possible to predict the rotation curves of spiral galaxies directly from estimates of the luminous matter. We consider two frame-dependent effects on the light observed from other galaxies: relative velocity and relative curvature. With one free parameter, we predict the rotation curves of twenty-three (23) galaxies represented in forty-two (42) data sets. Relative curvature effects rely upon knowledge of the gravitational potential from luminous mass of both the emitting galaxy and the receiving galaxy, and so each emitter galaxy is compared to four (4) different Milky Way luminous mass models. On average in this sample, the LCM is more successful than either dark matter or modified gravity models in fitting the observed rotation curve data. Implications of LCM constraints on population synthesis modeling are discussed in this paper. This paper substantially expands the results in arXiv:1309.7370.

  10. A New General Linear Convolution Model for fMRI Data Process

    Institute of Scientific and Technical Information of China (English)

    YUAN Hong; CHEN Hua-fu; YAO De-zhong

    2005-01-01

    The general linear model (GLM) is the most popular method for functional magnetic resonance imaging (fMRI) data analysis. However, its theory is imperfect. The key of this model is how to construct the design matrix to model the interesting effects better and separate noises better. For the purpose of detecting brain function activation, according to the principle of the GLM, a new convolution model is presented in which a new dynamic function is convolved with the design matrix; combined with a t-test, it can be used to detect brain activation signals. The fMRI result of a visual stimulus experiment indicates that brain activity mainly concentrates in the V1 and V2 areas of the visual cortex, and also verifies the validity of this technique.
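
    A minimal sketch of the convolution-based GLM idea, assuming a block design, a simple double-gamma response function, and one synthetic voxel: the regressor is formed by convolving the stimulus with the response function, the GLM is fitted by least squares, and a t-statistic is computed for the task effect.

```python
# Convolution-based GLM regressor plus a t-test on one synthetic voxel
# (assumed double-gamma response; all values illustrative).
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(0)
n, tr = 120, 2.0
t = np.arange(n) * tr
stim = ((t % 60) < 30).astype(float)                       # block design

hrf_t = np.arange(0, 32, tr)
hrf = gamma.pdf(hrf_t, 6) - 0.35 * gamma.pdf(hrf_t, 16)    # assumed response shape
regressor = np.convolve(stim, hrf)[:n]

X = np.column_stack([regressor, np.ones(n)])               # design matrix
y = 2.0 * regressor + 5.0 + rng.standard_normal(n)         # one synthetic voxel

beta, rss, *_ = np.linalg.lstsq(X, y, rcond=None)
dof = n - X.shape[1]
sigma2 = rss[0] / dof
c = np.array([1.0, 0.0])                                   # contrast: task effect
t_stat = c @ beta / np.sqrt(sigma2 * c @ np.linalg.inv(X.T @ X) @ c)
```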

  11. Renormalization plus convolution method for atomic-scale modeling of electrical and thermal transport in nanowires.

    Science.gov (United States)

    Wang, Chumin; Salazar, Fernando; Sánchez, Vicenta

    2008-12-01

    Based on the Kubo-Greenwood formula, the transport of electrons and phonons in nanowires is studied by means of a real-space renormalization plus convolution method. This method has the advantage of being efficient, without introducing additional approximations, and capable of analyzing nanowires over a wide range of lengths, even with defects. The Born and tight-binding models are used to investigate the lattice thermal and electrical conductivities, respectively. The results show a quantized electrical dc conductance, which is attenuated when an oscillating electric field is applied. Effects of single and multiple planar defects, such as a quasi-periodic modulation, on the conductance of nanowires are also investigated. For the low temperature region, the lattice thermal conductance reveals a power-law temperature dependence, in agreement with experimental data.

  12. Convolution symmetries of integrable hierarchies, matrix models and $\tau$-functions

    CERN Document Server

    Harnad, J

    2009-01-01

    Generalized convolution symmetries of integrable hierarchies of KP-Toda and 2KP-Toda type have the effect of multiplying the Fourier coefficients of the Baker-Akhiezer function by a specified sequence of constants. The induced action on the associated fermionic Fock space is diagonal in the standard orthonormal base determined by occupation sites and labeled by partitions. The coefficients in the single and double Schur function expansions of the associated $\tau$-functions, which are the Plücker coordinates of a decomposable element, are multiplied by the corresponding diagonal factors. Applying such transformations to matrix integrals, we obtain new matrix models of externally coupled type which are also KP-Toda or 2KP-Toda $\tau$-functions. More general multiple integral representations of tau functions are similarly obtained, as well as finite determinantal expressions for them.

  13. Nonstationary, Nonparametric, Nonseparable Bayesian Spatio-Temporal Modeling using Kernel Convolution of Order Based Dependent Dirichlet Process

    OpenAIRE

    Das, Moumita; Bhattacharya, Sourabh

    2014-01-01

    In this paper, using kernel convolution of order based dependent Dirichlet process (Griffin & Steel (2006)) we construct a nonstationary, nonseparable, nonparametric space-time process, which, as we show, satisfies desirable properties, and includes the stationary, separable, parametric processes as special cases. We also investigate the smoothness properties of our proposed model. Since our model entails an infinite random series, for Bayesian model fitting purpose we must either truncate th...

  14. ICA of fMRI based on a convolutive mixture model

    DEFF Research Database (Denmark)

    Hansen, Lars Kai

    2003-01-01

    A challenge with previous independent component analyses is the convolutive nature of the mixing process in fMRI. In temporal ICA we assume that the measured fMRI response is an instantaneous, spatially varying mixture of independent time functions; however, the convolutive structure of the hemodynamic response makes the mixing relevant for spatial ICA as well. Convolutive ICA has many computational problems and no standard solution is available. In this study a new predictive estimation method is used for finding the mixing coefficients and the source signals of a convolutive mixture, and it is applied in temporal mode. The mixing is represented by "mixture coefficient images" quantifying the local response to a given source at a certain time lag. This is the first communication to address this important issue in the context of fMRI ICA. Data: a single slice holding 128x128 pixels and passing through primary visual cortex...

  15. Convolution copula econometrics

    CERN Document Server

    Cherubini, Umberto; Mulinacci, Sabrina

    2016-01-01

    This book presents a novel approach to time series econometrics, which studies the behavior of nonlinear stochastic processes. This approach allows for an arbitrary dependence structure in the increments and provides a generalization with respect to the standard linear independent increments assumption of classical time series models. The book offers a solution to the problem of a general semiparametric approach, which is given by a concept called C-convolution (convolution of dependent variables), and the corresponding theory of convolution-based copulas. Intended for econometrics and statistics scholars with a special interest in time series analysis and copula functions (or other nonparametric approaches), the book is also useful for doctoral students with a basic knowledge of copula functions wanting to learn about the latest research developments in the field.

  16. Objects Classification by Learning-Based Visual Saliency Model and Convolutional Neural Network

    Science.gov (United States)

    Li, Na; Yang, Yongjia

    2016-01-01

    Humans can easily classify different kinds of objects, whereas it is quite difficult for computers. As a hot and difficult problem, object classification has been receiving extensive interest with broad prospects. Inspired by neuroscience, the deep learning concept has been proposed. The convolutional neural network (CNN), as one deep learning method, can be used to solve classification problems. But most deep learning methods, including CNN, ignore the human visual information processing mechanism when a person is classifying objects. Therefore, in this paper, inspired by the complete process by which humans classify different kinds of objects, we bring forth a new classification method which combines a visual attention model and a CNN. Firstly, we use the visual attention model to simulate the processing of the human visual selection mechanism. Secondly, we use the CNN to simulate how humans select features, and extract the local features of those selected areas. Finally, our classification method not only depends on those local features but also adds human semantic features to classify objects. Our classification method has apparent advantages in terms of biological plausibility. Experimental results demonstrate that our method improves classification efficiency significantly. PMID:27803711

  17. Fundamentals of convolutional coding

    CERN Document Server

    Johannesson, Rolf

    2015-01-01

    Fundamentals of Convolutional Coding, Second Edition, regarded as a bible of convolutional coding, brings you a clear and comprehensive discussion of the basic principles of this field. It includes two new chapters on low-density parity-check (LDPC) convolutional codes and iterative coding; Viterbi, BCJR, BEAST, list, and sequential decoding of convolutional codes; distance properties of convolutional codes; and a downloadable solutions manual.
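
    For readers unfamiliar with the basic operation, here is a tiny, generic rate-1/2 convolutional encoder with the classic (7, 5) octal generators; it is a textbook-style example, not code from the book.

```python
# Rate-1/2, constraint-length-3 convolutional encoder: each output bit is the
# mod-2 convolution of the input with one of the generator polynomials (7, 5) octal.
def conv_encode(bits, g1=(1, 1, 1), g2=(1, 0, 1)):
    state = [0, 0]                       # shift register (K - 1 = 2 memory bits)
    out = []
    for b in bits:
        window = [b] + state
        out.append(sum(x * y for x, y in zip(g1, window)) % 2)
        out.append(sum(x * y for x, y in zip(g2, window)) % 2)
        state = [b] + state[:-1]         # shift the register
    return out

print(conv_encode([1, 0, 1, 1]))         # -> [1, 1, 1, 0, 0, 0, 0, 1]
```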

  18. Lumped convolution integral models revisited: on the meaningfulness of inter catchment comparisons

    Directory of Open Access Journals (Sweden)

    S. Seeger

    2014-06-01

    The transit time distribution of a catchment is linked to the water storage potential and affects the susceptibility of a catchment to pollution. However, this characteristic is still problematic to determine within a catchment and to predict among catchments based on physiographic or geological properties. In this study, lumped response and transit time convolution models coupled with a distributed physically based snow model were applied to simulate the stable water isotope compositions in stream discharge measured fortnightly in 24 meso-scale catchments in Switzerland. Three different types of transfer function (exponential, gamma distribution and two parallel linear reservoirs) in two different implementation variants (strictly mathematical and normalised) were optimised and compared. The derived mean transit times varied widely for one and the same catchment depending on the chosen transfer function, even when the model simulations led to very similar predictions of the tracer signal. Upon closer inspection of the transit time distributions, it appeared that two transfer functions mainly have to agree on an intermediate time scale of around three months to reach similarly good prediction results with respect to fortnightly discharge samples, while their short-term and long-term behaviour seem to be of minor importance for the evaluation of the models. A couple of topographic indices showed significant correlations with the derived mean transit times. However, the collinearity of those indices, which were also correlated to mean annual precipitation sums, and the differing results among the different transfer functions did not allow for the clear identification of one predictive topographical index. As a by-product of this study, a spatial interpolation method for monthly isotope concentrations in precipitation with modest input data requirements was developed and tested.
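
    A compact sketch of a lumped convolution (transit-time) model of the kind used above, with a gamma transfer function and synthetic input data: the stream isotope signal is the precipitation isotope input convolved with a normalised transit time distribution.

```python
# Lumped convolution model with a gamma transfer function (synthetic input series).
import numpy as np
from scipy.stats import gamma

days = np.arange(0, 3 * 365)
precip_d18o = -10.0 + 3.0 * np.sin(2 * np.pi * days / 365.25)     # seasonal input signal

alpha, beta = 0.8, 120.0                        # assumed gamma shape and scale (days)
ttd = gamma.pdf(days + 0.5, alpha, scale=beta)  # transit time distribution (bin midpoints)
ttd /= ttd.sum()                                # normalise to unit mass

stream_d18o = np.convolve(precip_d18o, ttd)[: days.size]
# With a mean transit time of alpha*beta ~ 96 days, the simulated stream signal
# is strongly damped relative to the precipitation input.
```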

  19. Embedded Analytical Solutions Improve Accuracy in Convolution-Based Particle Tracking Models using Python

    Science.gov (United States)

    Starn, J. J.

    2013-12-01

    Particle tracking often is used to generate particle-age distributions that are used as impulse-response functions in convolution. A typical application is to produce groundwater solute breakthrough curves (BTC) at endpoint receptors such as pumping wells or streams. The commonly used semi-analytical particle-tracking algorithm based on the assumption of linear velocity gradients between opposing cell faces is computationally very fast when used in combination with finite-difference models. However, large gradients near pumping wells in regional-scale groundwater-flow models often are not well represented because of cell-size limitations. This leads to inaccurate velocity fields, especially at weak sinks. Accurate analytical solutions for velocity near a pumping well are available, and various boundary conditions can be imposed using image-well theory. Python can be used to embed these solutions into existing semi-analytical particle-tracking codes, thereby maintaining the integrity and quality-assurance of the existing code. Python (and associated scientific computational packages NumPy, SciPy, and Matplotlib) is an effective tool because of its wide ranging capability. Python text processing allows complex and database-like manipulation of model input and output files, including binary and HDF5 files. High-level functions in the language include ODE solvers to solve first-order particle-location ODEs, Gaussian kernel density estimation to compute smooth particle-age distributions, and convolution. The highly vectorized nature of NumPy arrays and functions minimizes the need for computationally expensive loops. A modular Python code base has been developed to compute BTCs using embedded analytical solutions at pumping wells based on an existing well-documented finite-difference groundwater-flow simulation code (MODFLOW) and a semi-analytical particle-tracking code (MODPATH). The Python code base is tested by comparing BTCs with highly discretized synthetic steady
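
    The sketch below mirrors the workflow outlined in the abstract, using synthetic particle ages in place of MODPATH output: travel times are smoothed into an age distribution with a Gaussian kernel density estimate and then convolved with a solute input history to produce a breakthrough curve.

```python
# Particle ages -> kernel-density age distribution -> convolution with input loading.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
ages_yr = rng.lognormal(mean=2.5, sigma=0.6, size=5000)   # stand-in for particle-tracking output

years = np.arange(0, 100, 1.0)
age_pdf = gaussian_kde(ages_yr)(years)                    # smooth impulse-response function
age_pdf /= age_pdf.sum()                                  # normalise on the yearly grid

solute_in = np.where(years < 40, 1.0, 0.2)                # e.g. a historical input loading
btc = np.convolve(solute_in, age_pdf)[: years.size]       # breakthrough curve at the receptor
```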

  20. Modelling the semivariograms and cross-semivariograms required in downscaling cokriging by numerical convolution deconvolution

    Science.gov (United States)

    Pardo-Igúzquiza, Eulogio; Atkinson, Peter M.

    2007-10-01

    ... close to the origin presented in regularized semivariograms and cross-semivariograms. The solution proposed is to find by numerical deconvolution a positive-definite set of point covariances and cross-covariances and then any required model may be obtained by numerical convolution of the corresponding point model. The first step implies several numerical deconvolutions where some model parameters are fixed, while others are estimated using the available experimental semivariograms and cross-semivariograms, and some goodness-of-fit measure. The details of the proposed procedure are presented and illustrated with an example from remote sensing.

  1. Age-distribution estimation for karst groundwater: Issues of parameterization and complexity in inverse modeling by convolution

    Science.gov (United States)

    Long, A.J.; Putnam, L.D.

    2009-01-01

    Convolution modeling is useful for investigating the temporal distribution of groundwater age based on environmental tracers. The framework of a quasi-transient convolution model that is applicable to two-domain flow in karst aquifers is presented. The model was designed to provide an acceptable level of statistical confidence in parameter estimates when only chlorofluorocarbon (CFC) and tritium (3H) data are available. We show how inverse modeling and uncertainty assessment can be used to constrain model parameterization to a level warranted by available data while allowing major aspects of the flow system to be examined. As an example, the model was applied to water from a pumped well open to the Madison aquifer in central USA with input functions of CFC-11, CFC-12, CFC-113, and 3H, and was calibrated to several samples collected during a 16-year period. A bimodal age distribution was modeled to represent quick and slow flow less than 50 years old. The effects of pumping and hydraulic head on the relative volumetric fractions of these domains were found to be influential factors for transient flow. Quick flow and slow flow were estimated to be distributed mainly within the age ranges of 0-2 and 26-41 years, respectively. The fraction of long-term flow (>50 years) was estimated but was not dateable. The different tracers had different degrees of influence on parameter estimation and uncertainty assessments, where 3H was the most critical, and CFC-113 was least influential.

  2. Modelling of nonlinear bridge aerodynamics and aeroelasticity: a convolution based approach

    Directory of Open Access Journals (Sweden)

    Wu T.

    2012-07-01

    Innovative bridge decks exhibit nonlinear behaviour in wind tunnel studies, which has placed increasing importance on nonlinear bridge aerodynamics/aeroelasticity considerations for long-span bridges. The convolution scheme concerning the first-order kernels for linear analysis is reviewed, which is followed by an introduction to higher-order kernels for nonlinear analysis. A numerical example of a long-span suspension bridge is presented that demonstrates the efficacy of the proposed scheme.

  3. Steady-state modeling of current loss in a post-hole convolute driven by high power magnetically insulated transmission lines

    Science.gov (United States)

    Madrid, E. A.; Rose, D. V.; Welch, D. R.; Clark, R. E.; Mostrom, C. B.; Stygar, W. A.; Cuneo, M. E.; Gomez, M. R.; Hughes, T. P.; Pointon, T. D.; Seidel, D. B.

    2013-12-01

    Quasiequilibrium power flow in two radial magnetically insulated transmission lines (MITLs) coupled to a vacuum post-hole convolute is studied at 50TW-200TW using three-dimensional particle-in-cell simulations. The key physical dimensions in the model are based on the ZR accelerator [D. H. McDaniel, et al., Proceedings of 5th International Conference on Dense Z-Pinches, edited by J. Davis (AIP, New York, 2002), p. 23]. The voltages assumed for this study result in electron emission from all cathode surfaces. Electrons emitted from the MITL cathodes upstream of the convolute cause a portion of the MITL current to be carried by an electron sheath. Under the simplifying assumptions made by the simulations, it is found that the transition from the two MITLs to the convolute results in the loss of most of the sheath current to anode structures. The loss is quantified as a function of radius and correlated with Poynting vector stream lines which would be followed by individual electrons. For a fixed MITL-convolute geometry, the current loss, defined to be the difference between the total (i.e. anode) current in the system upstream of the convolute and the current delivered to the load, increases with both operating voltage and load impedance. It is also found that in the absence of ion emission, the convolute is efficient when the load impedance is much less than the impedance of the two parallel MITLs. The effects of space-charge-limited (SCL) ion emission from anode surfaces are considered for several specific cases. Ion emission from anode surfaces in the convolute is found to increase the current loss by a factor of 2-3. When SCL ion emission is allowed from anode surfaces in the MITLs upstream of the convolute, substantially higher current losses are obtained. Note that the results reported here are valid given the spatial resolution used for the simulations.

  4. Closed-form solution of the convolution integral in the magnetic resonance dispersion model for quantitative assessment of angiogenesis.

    Science.gov (United States)

    Turco, S; Janssen, A J E M; Lavini, C; de la Rosette, J J; Wijkstra, H; Mischi, M

    2014-01-01

    Prostate cancer (PCa) diagnosis and treatment is still limited due to the lack of reliable imaging methods for cancer localization. Based on the fundamental role played by angiogenesis in cancer growth and development, several dynamic contrast enhanced (DCE) imaging methods have been developed to probe tumor angiogenic vasculature. In DCE magnetic resonance imaging (MRI), pharmacokinetic modeling allows estimating quantitative parameters related to the physiology underlying tumor angiogenesis. In particular, novel magnetic resonance dispersion imaging (MRDI) enables quantitative assessment of the microvascular architecture and leakage, by describing the intravascular dispersion kinetics of an extravascular contrast agent with a dispersion model. According to this model, the tissue contrast concentration at each voxel is given by the convolution between the intravascular concentration, described as a Brownian motion process according to the convective-dispersion equation, with the interstitium impulse response, represented by a mono-exponential decay, and describing the contrast leakage in the extravascular space. In this work, an improved formulation of the MRDI method is obtained by providing an analytical solution for the convolution integral present in the dispersion model. The performance of the proposed method was evaluated by means of dedicated simulations in terms of estimation accuracy, precision, and computation time. Moreover, a preliminary clinical validation was carried out in five patients with proven PCa. The proposed method allows for a reduction by about 40% of computation time without any significant change in estimation accuracy and precision, and in the clinical performance.

  5. Experimental validation of a convolution- based ultrasound image formation model using a planar arrangement of micrometer-scale scatterers.

    Science.gov (United States)

    Gyöngy, Miklós; Makra, Ákos

    2015-06-01

    The shift-invariant convolution model of ultrasound is widely used in the literature, for instance to generate fast simulations of ultrasound images. However, comparison of the resulting simulations with experiments is either qualitative or based on aggregate descriptors such as envelope statistics or spectral components. In the current work, a planar arrangement of 49-μm polystyrene microspheres was imaged using macrophotography and a 4.7-MHz ultrasound linear array. The macrophotograph allowed estimation of the scattering function (SF) necessary for simulations. Using the coefficient of determination R² between real and simulated ultrasound images, different estimates of the SF and point spread function (PSF) were tested. All estimates of the SF performed similarly, whereas the best estimate of the PSF was obtained by Hanning-windowing the deconvolution of the real ultrasound image with the SF: this yielded R² = 0.43 for the raw simulated image and R² = 0.65 for the envelope-detected ultrasound image. R² was highly dependent on microsphere concentration, with values of up to 0.99 for regions with scatterers. The results validate the use of the shift-invariant convolution model for the realistic simulation of ultrasound images. However, care needs to be taken in experiments to reduce the relative effects of other sources of scattering such as from multiple reflections, either by increasing the concentration of imaged scatterers or by more careful experimental design.
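
    A small sketch of the shift-invariant convolution model being validated above, with an illustrative scatterer map and point spread function: the RF image is the scattering function convolved with the PSF, the envelope is detected, and agreement with a reference image can be scored with R².

```python
# Shift-invariant convolution model of an ultrasound image (illustrative SF and PSF).
import numpy as np
from scipy.signal import fftconvolve, hilbert

rng = np.random.default_rng(0)
scat = (rng.random((256, 128)) < 0.01) * rng.standard_normal((256, 128))  # sparse scatterers

ax = np.arange(-32, 33)[:, None]            # axial samples
lat = np.arange(-8, 9)[None, :]             # lateral samples
psf = np.exp(-(ax / 12.0) ** 2 - (lat / 4.0) ** 2) * np.cos(2 * np.pi * ax / 6.0)

rf = fftconvolve(scat, psf, mode="same")    # convolution model of the RF image
envelope = np.abs(hilbert(rf, axis=0))      # envelope detection along the axial axis

def r_squared(sim, ref):
    """Coefficient of determination as the squared correlation of pixel values."""
    return np.corrcoef(sim.ravel(), ref.ravel())[0, 1] ** 2
```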

  6. A comparison study between MLP and convolutional neural network models for character recognition

    Science.gov (United States)

    Ben Driss, S.; Soua, M.; Kachouri, R.; Akil, M.

    2017-05-01

    Optical Character Recognition (OCR) systems have been designed to operate on text contained in scanned documents and images. They include text detection and character recognition, in which characters are described and then classified. In the classification step, characters are identified according to their features or template descriptions, and a given classifier is employed to identify them. In this context, we have proposed the unified character descriptor (UCD) to represent characters based on their features, with matching employed to perform the classification. This recognition scheme achieves good OCR accuracy on homogeneous scanned documents; however, it cannot discriminate characters with high font variation and distortion. To improve recognition, classifiers based on neural networks can be used. The multilayer perceptron (MLP) ensures high recognition accuracy when a robust training is performed. Moreover, the convolutional neural network (CNN) is nowadays gaining a lot of popularity for its high performance. Furthermore, both CNN and MLP may suffer from the large amount of computation in the training phase. In this paper, we establish a comparison between MLP and CNN. We provide MLP with the UCD descriptor and the appropriate network configuration. For CNN, we employ the convolutional network designed for handwritten and machine-printed character recognition (LeNet-5) and adapt it to support 62 classes, including both digits and characters. In addition, GPU parallelization is studied to speed up both the MLP and CNN classifiers. Based on our experiments, we demonstrate that the real-time CNN used is about twice as effective as the MLP when classifying characters.

  7. Independent Component Analysis in a convoluted world

    DEFF Research Database (Denmark)

    Dyrholm, Mads

    2006-01-01

    instantaneous ICA, then select a physiologically interesting subspace, then remove the delayed temporal dependencies among the instantaneous ICA components by using convolutive ICA. By Bayesian model selection, in a real world EEG data set, it is shown that convolutive ICA is a better model for EEG than...

  8. Convolution-variation separation method for efficient modeling of optical lithography.

    Science.gov (United States)

    Liu, Shiyuan; Zhou, Xinjiang; Lv, Wen; Xu, Shuang; Wei, Haiqing

    2013-07-01

    We propose a general method called convolution-variation separation (CVS) to enable efficient optical imaging calculations without sacrificing accuracy when simulating images for a wide range of process variations. The CVS method is derived from first principles using a series expansion, which consists of a set of predetermined basis functions weighted by a set of predetermined expansion coefficients. The basis functions are independent of the process variations and thus may be computed and stored in advance, while the expansion coefficients depend only on the process variations. Optical image simulations for defocus and aberration variations with applications in robust inverse lithography technology and lens aberration metrology have demonstrated the main concept of the CVS method.

  9. A novel convolution-based approach to address ionization chamber volume averaging effect in model-based treatment planning systems

    Science.gov (United States)

    Barraclough, Brendan; Li, Jonathan G.; Lebron, Sharon; Fan, Qiyong; Liu, Chihray; Yan, Guanghua

    2015-08-01

    The ionization chamber volume averaging effect is a well-known issue without an elegant solution. The purpose of this study is to propose a novel convolution-based approach to address the volume averaging effect in model-based treatment planning systems (TPSs). Ionization chamber-measured beam profiles can be regarded as the convolution between the detector response function and the implicit real profiles. Existing approaches address the issue by trying to remove the volume averaging effect from the measurement. In contrast, our proposed method imports the measured profiles directly into the TPS and addresses the problem by reoptimizing pertinent parameters of the TPS beam model. In the iterative beam modeling process, the TPS-calculated beam profiles are convolved with the same detector response function. Beam model parameters responsible for the penumbra are optimized to drive the convolved profiles to match the measured profiles. Since the convolved and the measured profiles are subject to identical volume averaging effect, the calculated profiles match the real profiles when the optimization converges. The method was applied to reoptimize a CC13 beam model commissioned with profiles measured with a standard ionization chamber (Scanditronix Wellhofer, Bartlett, TN). The reoptimized beam model was validated by comparing the TPS-calculated profiles with diode-measured profiles. Its performance in intensity-modulated radiation therapy (IMRT) quality assurance (QA) for ten head-and-neck patients was compared with the CC13 beam model and a clinical beam model (manually optimized, clinically proven) using standard Gamma comparisons. The beam profiles calculated with the reoptimized beam model showed excellent agreement with diode measurement at all measured geometries. Performance of the reoptimized beam model was comparable with that of the clinical beam model in IMRT QA. The average passing rates using the reoptimized beam model increased substantially from 92.1% to
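
    A one-dimensional Python sketch of the reoptimization idea described above is given below: rather than deconvolving the chamber response from the measurement, the model-calculated profile is convolved with the same detector response function and the penumbra parameter is refitted so that the convolved profile matches the measurement. The error-function edge profile, the 6 mm averaging window and the parameter names are assumptions made for illustration, not the CC13 or TPS specifics.

      import numpy as np
      from scipy.optimize import minimize_scalar
      from scipy.special import erf

      x = np.linspace(-30.0, 30.0, 601)                   # off-axis distance (mm)

      def profile(sigma):
          # Idealised field edge: error-function penumbra of width sigma (mm), 20 mm field.
          return 0.5 * (erf((x + 10.0) / sigma) - erf((x - 10.0) / sigma))

      # Detector response function: uniform averaging over an assumed 6 mm chamber diameter.
      response = np.where(np.abs(x) <= 3.0, 1.0, 0.0)
      response /= response.sum()

      true_sigma = 2.5
      measured = np.convolve(profile(true_sigma), response, mode="same")   # chamber reading

      def objective(sigma):
          calculated = np.convolve(profile(sigma), response, mode="same")
          return np.sum((calculated - measured) ** 2)

      fit = minimize_scalar(objective, bounds=(0.5, 10.0), method="bounded")
      print("recovered penumbra parameter:", fit.x)        # ~2.5 despite volume averaging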

  10. A convolution model for obtaining the response of an ionization chamber in static non standard fields

    Energy Technology Data Exchange (ETDEWEB)

    Gonzalez-Castano, D. M.; Gonzalez, L. Brualla; Gago-Arias, M. A.; Pardo-Montero, J.; Gomez, F.; Luna-Vega, V.; Sanchez, M.; Lobato, R. [Radiation Physics Laboratory, Universidad de Santiago de Compostela, 15782 (Spain) and Dpto de Fisica de Particulas, Universidad de Santiago de Compostela, 15782 (Spain); Servicio de Radiofisica ERESA, Consorcio Hospital General Universitario de Valencia, 46014 (Spain); Dpto de Fisica de Particulas, Universidad de Santiago de Compostela, 15782 (Spain); Radiation Physics Laboratory, Universidad de Santiago de Compostela, 15782 Spain and Dpto de Fisica de Particulas, Universidad de Santiago de Compostela, 15782 (Spain); Servicio de Radiofisica y Proteccion Radiologica, Hospital Clinico Universitario de Santiago, Santiago de Compostela, 15782 (Spain)

    2012-01-15

    Purpose: This work contains an alternative methodology for obtaining correction factors for ionization chamber (IC) dosimetry of small fields and composite fields such as IMRT. The method is based on the convolution/superposition (C/S) of an IC response function (RF) with the dose distribution in a certain plane which includes the chamber position. This method is an alternative to the full Monte Carlo (MC) approach that has been used previously by many authors for the same objective. Methods: The readout of an IC at a point inside a phantom irradiated by a certain beam can be obtained as the convolution of the dose spatial distribution caused by the beam and the IC two-dimensional RF. The proposed methodology has been applied successfully to predict the response of a PTW 30013 IC when measuring different nonreference fields, namely: output factors of 6 MV small fields, beam profiles of cobalt-60 narrow fields and 6 MV radiosurgery segments. The two-dimensional RF of a PTW 30013 IC was obtained by MC simulation of the absorbed dose to cavity air when the IC was scanned by a 0.6 x 0.6 mm² cross section parallel pencil beam at low depth in a water phantom. For each of the cases studied, the results of the IC direct measurement were compared with the corresponding results obtained by the C/S method. Results: For all of the cases studied, the agreement between the IC direct measurement and the IC calculated response was excellent (better than 1.5%). Conclusions: This method could be implemented in TPSs in order to calculate dosimetry correction factors when an experimental IMRT treatment verification with an in-phantom ionization chamber is performed. The mis-response of the IC due to the nonreference conditions could be quickly corrected by this method rather than employing MC-derived correction factors. This method can be considered as an alternative to the plan-class associated correction factors proposed recently as part of an IAEA work group on nonstandard field dosimetry.

  11. Steady-state modeling of current loss in a post-hole convolute driven by high power magnetically insulated transmission lines

    Directory of Open Access Journals (Sweden)

    E. A. Madrid

    2013-12-01

    Full Text Available Quasiequilibrium power flow in two radial magnetically insulated transmission lines (MITLs) coupled to a vacuum post-hole convolute is studied at 50 TW–200 TW using three-dimensional particle-in-cell simulations. The key physical dimensions in the model are based on the ZR accelerator [D. H. McDaniel et al., Proceedings of the 5th International Conference on Dense Z-Pinches, edited by J. Davis (AIP, New York, 2002), p. 23]. The voltages assumed for this study result in electron emission from all cathode surfaces. Electrons emitted from the MITL cathodes upstream of the convolute cause a portion of the MITL current to be carried by an electron sheath. Under the simplifying assumptions made by the simulations, it is found that the transition from the two MITLs to the convolute results in the loss of most of the sheath current to anode structures. The loss is quantified as a function of radius and correlated with Poynting vector stream lines which would be followed by individual electrons. For a fixed MITL-convolute geometry, the current loss, defined to be the difference between the total (i.e., anode) current in the system upstream of the convolute and the current delivered to the load, increases with both operating voltage and load impedance. It is also found that in the absence of ion emission, the convolute is efficient when the load impedance is much less than the impedance of the two parallel MITLs. The effects of space-charge-limited (SCL) ion emission from anode surfaces are considered for several specific cases. Ion emission from anode surfaces in the convolute is found to increase the current loss by a factor of 2–3. When SCL ion emission is allowed from anode surfaces in the MITLs upstream of the convolute, substantially higher current losses are obtained. Note that the results reported here are valid given the spatial resolution used for the simulations.

  12. General logarithmic image processing convolution.

    Science.gov (United States)

    Palomares, Jose M; González, Jesús; Ros, Eduardo; Prieto, Alberto

    2006-11-01

    The logarithmic image processing model (LIP) is a robust mathematical framework which, among other benefits, behaves invariantly to illumination changes. This paper presents, for the first time, two general formulations of the 2-D convolution of separable kernels under the LIP paradigm. Although both formulations are mathematically equivalent, one of them has been designed to avoid the operations which are computationally expensive on current computers. Therefore, this fast LIP convolution method allows significant speedups to be obtained and is better suited to real-time processing. In order to support these statements, some experimental results are shown in Section V.
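
    A compact Python sketch of LIP-domain filtering is given below, carried out through the logarithmic isomorphism of the LIP model (phi(f) = -M ln(1 - f/M)), under which LIP addition and scalar multiplication become ordinary addition and multiplication. This illustrates the paradigm only; it is not the fast formulation derived in the paper.

      import numpy as np
      from scipy.signal import convolve2d

      M = 256.0                                   # upper bound of the gray-tone range

      def phi(f):                                 # LIP isomorphism
          return -M * np.log(1.0 - f / M)

      def phi_inv(g):                             # inverse isomorphism
          return M * (1.0 - np.exp(-g / M))

      rng = np.random.default_rng(1)
      image = rng.uniform(0.0, 255.0, size=(64, 64))       # gray tones in [0, M)
      kernel = np.ones((3, 3)) / 9.0                        # separable averaging kernel

      # LIP convolution computed through the isomorphism: transform, filter, transform back.
      lip_filtered = phi_inv(convolve2d(phi(image), kernel, mode="same", boundary="symm"))
      print(lip_filtered.shape, float(lip_filtered.min()), float(lip_filtered.max()))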

  13. Application of Convolution Perfectly Matched Layer in MRTD scattering model for non-spherical aerosol particles and its performance analysis

    Science.gov (United States)

    Hu, Shuai; Gao, Taichang; Li, Hao; Yang, Bo; Jiang, Zidong; Liu, Lei; Chen, Ming

    2017-10-01

    The performance of the absorbing boundary condition (ABC) is an important factor influencing the simulation accuracy of the MRTD (Multi-Resolution Time-Domain) scattering model for non-spherical aerosol particles. To this end, the Convolution Perfectly Matched Layer (CPML), an excellent ABC in the FDTD scheme, is generalized and applied to the MRTD scattering model developed by our team. In this model, the time domain is discretized by an exponential differential scheme, and the discretization of the space domain is implemented by the Galerkin principle. To evaluate the performance of CPML, its simulation results are compared with those of BPML (Berenger's Perfectly Matched Layer) and ADE-PML (Perfectly Matched Layer with Auxiliary Differential Equation) for spherical and non-spherical particles, and their simulation errors are analyzed as well. The simulation results show that, for scattering phase matrices, the performance of CPML is better than that of BPML; the computational accuracy of CPML is comparable to that of ADE-PML on the whole, but at scattering angles where the phase matrix elements fluctuate sharply, the performance of CPML is slightly better than that of ADE-PML. After the orientation averaging process, the differences among the results of the different ABCs are reduced to some extent. It can also be found that ABCs have a much weaker influence on integral scattering parameters (such as extinction and absorption efficiencies) than on scattering phase matrices; this phenomenon can be explained by the error averaging process in the numerical volume integration.

  14. The influence of noise exposure on the parameters of a convolution model of the compound action potential.

    Science.gov (United States)

    Chertoff, M E; Lichtenhan, J T; Tourtillott, B M; Esau, K S

    2008-10-01

    The influence of noise exposure on the parameters of a convolution model of the compound action potential (CAP) was examined. CAPs were recorded in normal-hearing gerbils and in gerbils exposed to a 117 dB SPL 8 kHz band of noise for various durations. The CAPs were fitted with an analytic CAP to obtain the parameters representing the number of nerve fibers (N), the probability density function [P(t)] from a population of nerve fibers, and the single-unit waveform [U(t)]. The results showed that the analytic CAP fitted the physiologic CAPs well with correlations of approximately 0.90. A subsequent analysis using hierarchical linear modeling quantified the change in the parameters as a function of both signal level and hearing threshold. The results showed that noise exposure caused some of the parameter-level functions to simply shift along the signal level axis in proportion to the amount of hearing loss, whereas others shifted along the signal level axis and steepened. Significant changes occurred in the U(t) parameters, but they were not related to hearing threshold. These results suggest that noise exposure alters the physiology underlying the CAP, some of which can be explained by a simple lack of gain, whereas others may not.
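
    The convolution model referred to above can be written as CAP(t) = N · (P ∗ U)(t). The short Python sketch below generates an analytic CAP from assumed components; the Gaussian latency density and damped-sine unit waveform are illustrative stand-ins, not the fitted forms from the study.

      import numpy as np

      fs = 100_000.0                                  # sampling rate (Hz)
      t = np.arange(0.0, 0.01, 1.0 / fs)              # 10 ms window

      N = 5000.0                                      # assumed number of contributing fibers
      P = np.exp(-(t - 0.002) ** 2 / (2 * 0.0004 ** 2))
      P /= np.sum(P) / fs                             # latency density P(t), integrates to 1

      U = np.sin(2 * np.pi * 1000.0 * t) * np.exp(-t / 0.001) * 1e-6   # unit waveform U(t), volts

      cap = N * np.convolve(P, U)[: t.size] / fs      # discrete convolution approximates N * (P conv U)(t)
      print("peak-to-peak CAP amplitude (V):", cap.max() - cap.min())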

  15. ICA of fMRI based on a convolutive mixture model

    DEFF Research Database (Denmark)

    Hansen, Lars Kai

    2003-01-01

    Modeling & Analysis. The fMRI signal has many sources: stimulus induced activation, other brain activations, and confounds including several physiological signal components, the most prominent being the cardiac pulsation at about 1 Hz and breathing induced motion (0.2-1 Hz). Most fMRI data ... processing strategies. Global linear dependencies can be probed by independent component analysis (ICA) based on higher order statistics or spatio-temporal properties. With ICA we separate the different sources of the fMRI signal. ICA can be performed assuming either spatial or temporal independency. A major ... The mixing is represented by “mixture coefficient images” quantifying the local response to a given source at a certain time lag. This is the first communication to address this important issue in the context of fMRI ICA. Data: a single slice holding 128x128 pixels and passing through primary visual cortex ...

  16. Convolutive ICA for Spatio-Temporal Analysis of EEG

    DEFF Research Database (Denmark)

    Dyrholm, Mads; Makeig, Scott; Hansen, Lars Kai

    2007-01-01

    in the convolutive model can be correctly detected using Bayesian model selection. We demonstrate a framework for deconvolving an EEG ICA subspace. Initial results suggest that in some cases convolutive mixing may be a more realistic model for EEG signals than the instantaneous ICA model....

  17. Efficient convolutional sparse coding

    Energy Technology Data Exchange (ETDEWEB)

    Wohlberg, Brendt

    2017-06-20

    Computationally efficient algorithms may be applied for fast dictionary learning solving the convolutional sparse coding problem in the Fourier domain. More specifically, efficient convolutional sparse coding may be derived within an alternating direction method of multipliers (ADMM) framework that utilizes fast Fourier transforms (FFT) to solve the main linear system in the frequency domain. Such algorithms may enable a significant reduction in computational cost over conventional approaches by implementing a linear solver for the most critical and computationally expensive component of the conventional iterative algorithm. The theoretical computational cost of the algorithm may be reduced from O(M^3 N) to O(MN log N), where N is the dimensionality of the data and M is the number of elements in the dictionary. This significant improvement in efficiency may greatly increase the range of problems that can practically be addressed via convolutional sparse representations.

  18. CONVOLUTION-BASED DETECTION MODELS ACCELERATION BASED ON GPU

    Institute of Scientific and Technical Information of China (English)

    刘琦; 黄咨; 陈璐艳; 胡福乔

    2016-01-01

    In recent years, convolution-based detection models (CDM), such as the deformable part-based models (DPM) and convolutional neural networks (CNN), have achieved tremendous success in the computer vision field. These models allow for large-scale machine learning training to achieve higher robustness and recognition performance. However, the huge computational cost of the convolution operation in the training and evaluation processes also restricts their further application in many practical scenes. In this paper, we accelerate convolution-based detection models at both the algorithm and hardware levels, using mathematical theory and parallelisation techniques. At the algorithm level, we reduce the computational complexity by converting the convolution operation in the spatial domain into a point-wise multiplication in the frequency domain. At the hardware level, the use of graphics processing unit (GPU) parallelisation further reduces the computation time. Experimental results on the public Pascal VOC dataset demonstrate that, compared with a multi-core CPU, the proposed algorithm can speed up the convolution process by 2.13 to 4.31 times on a single commodity GPU.
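
    The core algorithmic step, replacing a spatial-domain convolution by a point-wise multiplication in the frequency domain, can be checked with a few lines of Python (NumPy/SciPy); a GPU implementation would substitute a GPU FFT library for the calls below. The sizes used here are arbitrary.

      import numpy as np
      from scipy.signal import convolve2d

      rng = np.random.default_rng(0)
      image = rng.standard_normal((128, 128))
      filt = rng.standard_normal((11, 11))

      # Spatial-domain (direct) full linear convolution.
      direct = convolve2d(image, filt, mode="full")

      # Frequency-domain route: zero-pad to the full output size, multiply spectra, invert.
      shape = (image.shape[0] + filt.shape[0] - 1, image.shape[1] + filt.shape[1] - 1)
      spectrum = np.fft.rfft2(image, shape) * np.fft.rfft2(filt, shape)
      via_fft = np.fft.irfft2(spectrum, shape)

      print(np.allclose(direct, via_fft, atol=1e-10))      # True: the two routes agree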

  19. ESTIMATING LOSS SEVERITY DISTRIBUTION: CONVOLUTION APPROACH

    Directory of Open Access Journals (Sweden)

    Ro J. Pak

    2014-01-01

    Full Text Available Financial loss can be classified into two types: expected loss and unexpected loss. A current definition seeks to separate the two losses from a total loss. In this article, however, we redefine a total loss as the sum of expected and unexpected losses; then the distribution of loss can be considered as the convolution of the distributions of both expected and unexpected losses. We propose to use a convolution of normal and exponential distributions for modelling a loss distribution. Subsequently, we compare its performance with other commonly used loss distributions. Examples of property insurance claim data are analyzed to show the applicability of this normal-exponential convolution model. Overall, we claim that the proposed model provides further useful information with regard to losses compared to existing models. We are able to provide new statistical quantities which are very critical and useful.
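
    A short Python/SciPy sketch of the proposed normal-exponential convolution is given below: the density of the total loss is computed by numerically convolving a normal density (expected loss) with an exponential density (unexpected loss) and compared against the closed-form exponentially modified Gaussian. The parameter values are arbitrary illustrations.

      import numpy as np
      from scipy import stats

      mu, sigma, lam = 100.0, 15.0, 1.0 / 40.0        # normal mean/sd, exponential rate (arbitrary)

      x = np.linspace(0.0, 400.0, 4001)
      dx = x[1] - x[0]
      f_expected = stats.norm.pdf(x, mu, sigma)            # expected-loss density
      f_unexpected = stats.expon.pdf(x, scale=1.0 / lam)   # unexpected-loss density

      # Density of the total loss: numerical convolution of the two component densities.
      f_total = np.convolve(f_expected, f_unexpected)[: x.size] * dx

      # Closed form for comparison: exponentially modified Gaussian (scipy's exponnorm).
      f_closed = stats.exponnorm.pdf(x, K=1.0 / (lam * sigma), loc=mu, scale=sigma)
      print(np.max(np.abs(f_total - f_closed)))            # small discretisation error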

  20. Convolution Operators on Groups

    CERN Document Server

    Derighetti, Antoine

    2011-01-01

    This volume is devoted to a systematic study of the Banach algebra of the convolution operators of a locally compact group. Inspired by classical Fourier analysis we consider operators on Lp spaces, arriving at a description of these operators and Lp versions of the theorems of Wiener and Kaplansky-Helson.

  1. Univalence for convolutions

    Directory of Open Access Journals (Sweden)

    Herb Silverman

    1996-01-01

    Full Text Available The radius of univalence is found for the convolution f∗g of functions f∈S (normalized univalent functions) and g∈C (close-to-convex functions). A lower bound for the radius of univalence is also determined when f and g range over all of S. Finally, a characterization of C provides an inclusion relationship.

  2. Interpolation by two-dimensional cubic convolution

    Science.gov (United States)

    Shi, Jiazheng; Reichenbach, Stephen E.

    2003-08-01

    This paper presents results of image interpolation with an improved method for two-dimensional cubic convolution. Convolution with a piecewise cubic is one of the most popular methods for image reconstruction, but the traditional approach uses a separable two-dimensional convolution kernel that is based on a one-dimensional derivation. The traditional, separable method is sub-optimal for the usual case of non-separable images. The improved method in this paper implements the most general non-separable, two-dimensional, piecewise-cubic interpolator with constraints for symmetry, continuity, and smoothness. The improved method of two-dimensional cubic convolution has three parameters that can be tuned to yield maximal fidelity for specific scene ensembles characterized by autocorrelation or power-spectrum. This paper illustrates examples for several scene models (a circular disk of parametric size, a square pulse with parametric rotation, and a Markov random field with parametric spatial detail) and actual images -- presenting the optimal parameters and the resulting fidelity for each model. In these examples, improved two-dimensional cubic convolution is superior to several other popular small-kernel interpolation methods.
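
    For reference, the traditional separable baseline that the improved method generalizes is the one-parameter piecewise-cubic (Keys) convolution kernel applied along each axis. The Python sketch below implements that baseline in one dimension; it is not the non-separable two-dimensional kernel derived in the paper.

      import numpy as np

      def cubic_kernel(x, a=-0.5):
          # Keys piecewise-cubic convolution kernel with parameter a, support [-2, 2].
          x = np.abs(x)
          w = np.zeros_like(x)
          near = x <= 1
          far = (x > 1) & (x < 2)
          w[near] = (a + 2) * x[near] ** 3 - (a + 3) * x[near] ** 2 + 1
          w[far] = a * (x[far] ** 3 - 5 * x[far] ** 2 + 8 * x[far] - 4)
          return w

      def interp_cubic_1d(samples, positions, a=-0.5):
          # Interpolate uniformly spaced samples at fractional positions (border replication).
          base = np.floor(positions).astype(int)
          out = np.zeros_like(positions, dtype=float)
          for k in range(-1, 3):                        # the four nearest samples
              n = base + k
              weight = cubic_kernel(positions - n, a)
              out += samples[np.clip(n, 0, samples.size - 1)] * weight
          return out

      samples = np.sin(np.linspace(0.0, 2.0 * np.pi, 16))
      query = np.linspace(0.0, 15.0, 61)
      print(interp_cubic_1d(samples, query)[:5])        # applied along each axis for 2-D images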

  3. Correction for scatter and septal penetration using convolution subtraction methods and model-based compensation in 123I brain SPECT imaging - a Monte Carlo study

    Energy Technology Data Exchange (ETDEWEB)

    Larsson, Anne [Department of Radiation Sciences, Radiation Physics, Umeaa University, SE-901 87 Umeaa (Sweden); Ljungberg, Michael [Medical Radiation Physics, Department of Clinical Sciences, Lund, Lund University, SE-221 85 Lund (Sweden); Mo, Susanna Jakobson [Department of Radiation Sciences, Diagnostic Radiology, Umeaa University, SE-901 87 Umeaa (Sweden); Riklund, Katrine [Department of Radiation Sciences, Diagnostic Radiology, Umeaa University, SE-901 87 Umeaa (Sweden); Johansson, Lennart [Department of Radiation Sciences, Radiation Physics, Umeaa University, SE-901 87 Umeaa (Sweden)

    2006-11-21

    Scatter and septal penetration deteriorate contrast and quantitative accuracy in single photon emission computed tomography (SPECT). In this study four different correction techniques for scatter and septal penetration are evaluated for 123I brain SPECT. One of the methods is a form of model-based compensation which uses the effective source scatter estimation (ESSE) for modelling scatter, and collimator-detector response (CDR) including both geometric and penetration components. The other methods, which operate on the 2D projection images, are convolution scatter subtraction (CSS) and two versions of transmission dependent convolution subtraction (TDCS), one of them proposed by us. This method uses CSS for correction for septal penetration, with a separate kernel, and TDCS for scatter correction. The corrections are evaluated for a dopamine transporter (DAT) study and a study of the regional cerebral blood flow (rCBF), performed with 123I. The images are produced using a recently developed Monte Carlo collimator routine added to the program SIMIND which can include interactions in the collimator. The results show that the method included in the iterative reconstruction is preferable to the other methods and that the new TDCS version gives better results compared with the other 2D methods.

  4. Correction for scatter and septal penetration using convolution subtraction methods and model-based compensation in 123I brain SPECT imaging-a Monte Carlo study.

    Science.gov (United States)

    Larsson, Anne; Ljungberg, Michael; Mo, Susanna Jakobson; Riklund, Katrine; Johansson, Lennart

    2006-11-21

    Scatter and septal penetration deteriorate contrast and quantitative accuracy in single photon emission computed tomography (SPECT). In this study four different correction techniques for scatter and septal penetration are evaluated for 123I brain SPECT. One of the methods is a form of model-based compensation which uses the effective source scatter estimation (ESSE) for modelling scatter, and collimator-detector response (CDR) including both geometric and penetration components. The other methods, which operate on the 2D projection images, are convolution scatter subtraction (CSS) and two versions of transmission dependent convolution subtraction (TDCS), one of them proposed by us. This method uses CSS for correction for septal penetration, with a separate kernel, and TDCS for scatter correction. The corrections are evaluated for a dopamine transporter (DAT) study and a study of the regional cerebral blood flow (rCBF), performed with 123I. The images are produced using a recently developed Monte Carlo collimator routine added to the program SIMIND which can include interactions in the collimator. The results show that the method included in the iterative reconstruction is preferable to the other methods and that the new TDCS version gives better results compared with the other 2D methods.

  5. Correction for scatter and septal penetration using convolution subtraction methods and model-based compensation in 123I brain SPECT imaging—a Monte Carlo study

    Science.gov (United States)

    Larsson, Anne; Ljungberg, Michael; Jakobson Mo, Susanna; Riklund, Katrine; Johansson, Lennart

    2006-11-01

    Scatter and septal penetration deteriorate contrast and quantitative accuracy in single photon emission computed tomography (SPECT). In this study four different correction techniques for scatter and septal penetration are evaluated for 123I brain SPECT. One of the methods is a form of model-based compensation which uses the effective source scatter estimation (ESSE) for modelling scatter, and collimator-detector response (CDR) including both geometric and penetration components. The other methods, which operate on the 2D projection images, are convolution scatter subtraction (CSS) and two versions of transmission dependent convolution subtraction (TDCS), one of them proposed by us. This method uses CSS for correction for septal penetration, with a separate kernel, and TDCS for scatter correction. The corrections are evaluated for a dopamine transporter (DAT) study and a study of the regional cerebral blood flow (rCBF), performed with 123I. The images are produced using a recently developed Monte Carlo collimator routine added to the program SIMIND which can include interactions in the collimator. The results show that the method included in the iterative reconstruction is preferable to the other methods and that the new TDCS version gives better results compared with the other 2D methods.

  6. Invariant Scattering Convolution Networks

    CERN Document Server

    Bruna, Joan

    2012-01-01

    A wavelet scattering network computes a translation invariant image representation, which is stable to deformations and preserves high frequency information for classification. It cascades wavelet transform convolutions with non-linear modulus and averaging operators. The first network layer outputs SIFT-type descriptors whereas the next layers provide complementary invariant information which improves classification. The mathematical analysis of wavelet scattering networks explains important properties of deep convolution networks for classification. A scattering representation of stationary processes incorporates higher order moments and can thus discriminate textures having the same Fourier power spectrum. State of the art classification results are obtained for handwritten digits and texture discrimination, using a Gaussian kernel SVM and a generative PCA classifier.

  7. The convolution transform

    CERN Document Server

    Hirschman, Isidore Isaac

    2005-01-01

    In studies of general operators of the same nature, general convolution transforms are immediately encountered as the objects of inversion. The relation between differential operators and integral transforms is the basic theme of this work, which is geared toward upper-level undergraduates and graduate students. It may be read easily by anyone with a working knowledge of real and complex variable theory. Topics include the finite and non-finite kernels, variation diminishing transforms, asymptotic behavior of kernels, real inversion theory, representation theory, the Weierstrass transform, and

  8. Convolutional Goppa codes defined on fibrations

    CERN Document Server

    Curto, J I Iglesias; Martín, F J Plaza; Sotelo, G Serrano

    2010-01-01

    We define a new class of Convolutional Codes in terms of fibrations of algebraic varieties generalizing our previous constructions of Convolutional Goppa Codes. Using this general construction we can give several examples of Maximum Distance Separable (MDS) Convolutional Codes.

  9. Convolutional coding techniques for data protection

    Science.gov (United States)

    Massey, J. L.

    1975-01-01

    Results of research on the use of convolutional codes in data communications are presented. Convolutional coding fundamentals are discussed along with modulation and coding interaction. Concatenated coding systems and data compression with convolutional codes are described.

  10. Consensus Convolutional Sparse Coding

    KAUST Repository

    Choudhury, Biswarup

    2017-04-11

    Convolutional sparse coding (CSC) is a promising direction for unsupervised learning in computer vision. In contrast to recent supervised methods, CSC allows for convolutional image representations to be learned that are equally useful for high-level vision tasks and low-level image reconstruction and can be applied to a wide range of tasks without problem-specific retraining. Due to their extreme memory requirements, however, existing CSC solvers have so far been limited to low-dimensional problems and datasets using a handful of low-resolution example images at a time. In this paper, we propose a new approach to solving CSC as a consensus optimization problem, which lifts these limitations. By learning CSC features from large-scale image datasets for the first time, we achieve significant quality improvements in a number of imaging tasks. Moreover, the proposed method enables new applications in high dimensional feature learning that has been intractable using existing CSC methods. This is demonstrated for a variety of reconstruction problems across diverse problem domains, including 3D multispectral demosaicking and 4D light field view synthesis.

  11. Strongly-MDS convolutional codes

    NARCIS (Netherlands)

    Gluesing-Luerssen, H; Rosenthal, J; Smarandache, R

    2006-01-01

    Maximum-distance separable (MDS) convolutional codes have the property that their free distance is maximal among all codes of the same rate and the same degree. In this paper, a class of MDS convolutional codes is introduced whose column distances reach the generalized Singleton bound at the earlies

  12. Separating Underdetermined Convolutive Speech Mixtures

    DEFF Research Database (Denmark)

    Pedersen, Michael Syskind; Wang, DeLiang; Larsen, Jan

    2006-01-01

    a method for underdetermined blind source separation of convolutive mixtures. The proposed framework is applicable for separation of instantaneous as well as convolutive speech mixtures. It is possible to iteratively extract each speech signal from the mixture by combining blind source separation...

  13. Two-dimensional cubic convolution.

    Science.gov (United States)

    Reichenbach, Stephen E; Geng, Frank

    2003-01-01

    The paper develops two-dimensional (2D), nonseparable, piecewise cubic convolution (PCC) for image interpolation. Traditionally, PCC has been implemented based on a one-dimensional (1D) derivation with a separable generalization to two dimensions. However, typical scenes and imaging systems are not separable, so the traditional approach is suboptimal. We develop a closed-form derivation for a two-parameter, 2D PCC kernel with support [-2,2] x [-2,2] that is constrained for continuity, smoothness, symmetry, and flat-field response. Our analyses, using several image models, including Markov random fields, demonstrate that the 2D PCC yields small improvements in interpolation fidelity over the traditional, separable approach. The constraints on the derivation can be relaxed to provide greater flexibility and performance.

  14. Topological convolution algebras

    CERN Document Server

    Alpay, Daniel

    2012-01-01

    In this paper we introduce a new family of topological convolution algebras of the form $\bigcup_{p\in\mathbb{N}} L_2(S,\mu_p)$, where $S$ is a Borel semi-group in a locally compact group $G$, which carries an inequality of the type $\|f*g\|_p\le A_{p,q}\|f\|_q\|g\|_p$ for $p > q+d$, where $d$ is pre-assigned and $A_{p,q}$ is a constant. We give a sufficient condition on the measures $\mu_p$ for such an inequality to hold. We study the functional calculus and the spectrum of the elements of these algebras, and present two examples, one in the setting of non commutative stochastic distributions, and the other related to Dirichlet series.

  15. Multipath Convolutional-Recursive Neural Networks for Object Recognition

    OpenAIRE

    2014-01-01

    Extracting good representations from images is essential for many computer vision tasks. While progress in deep learning shows the importance of learning hierarchical features, it is also important to learn features through multiple paths. This paper presents Multipath Convolutional-Recursive Neural Networks (M-CRNNs), a novel scheme which aims to learn image features from multiple paths using models based on combination of convolutional and...

  16. Blind recognition of punctured convolutional codes

    Institute of Scientific and Technical Information of China (English)

    LU Peizhong; LI Shen; ZOU Yan; LUO Xiangyang

    2005-01-01

    This paper presents an algorithm for blind recognition of punctured convolutional codes, which is an important problem in adaptive modulation and coding. For a given finite sequence of a convolutional code, the parity check matrix of the convolutional code is first computed by solving a linear system with adequate error tolerance. Then a minimal basic encoding matrix of the original convolutional code and its puncturing pattern are determined according to the known parity check matrix of the punctured convolutional code.

  17. Convolutional Neural Network Based dem Super Resolution

    Science.gov (United States)

    Chen, Zixuan; Wang, Xuewen; Xu, Zekai; Hou, Wenguang

    2016-06-01

    DEM super resolution was proposed in our previous publication to improve the resolution of a DEM on the basis of some learning examples. There, a nonlocal algorithm was introduced to deal with it, and many experiments showed that the strategy is feasible. In that publication, the learning examples are defined as parts of the original DEM and their related high-resolution measurements, since this avoids incompatibility between the data to be processed and the learning examples. To further extend the applications of this new strategy, the learning examples should be diverse and easy to obtain; yet this may cause problems of incompatibility and lack of robustness. To overcome this, we investigate a convolutional neural network based method. The input of the convolutional neural network is a low-resolution DEM and the output is expected to be its high-resolution counterpart. A three-layer model is adopted: the first layer detects features from the input, the second integrates the detected features into compressed ones, and the final layer transforms the compressed features into a new DEM. According to this designed structure, a set of learning DEMs is used to train the network; specifically, the network is optimized by minimizing the error between its output and the expected high-resolution DEM. In practical applications, a test DEM is input to the convolutional neural network and a super-resolution DEM is obtained. Many experiments show that the CNN-based method obtains better reconstructions than many classic interpolation methods.
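
    A Python/PyTorch sketch of the three-layer structure described above (feature detection, feature integration/compression, reconstruction) is given below. The channel counts and kernel sizes are illustrative assumptions in the spirit of SRCNN, not the configuration used by the authors.

      import torch
      import torch.nn as nn

      class DEMSuperResolution(nn.Module):
          def __init__(self):
              super().__init__()
              self.detect = nn.Conv2d(1, 64, kernel_size=9, padding=4)       # layer 1: detect features
              self.compress = nn.Conv2d(64, 32, kernel_size=1)               # layer 2: compress features
              self.reconstruct = nn.Conv2d(32, 1, kernel_size=5, padding=2)  # layer 3: rebuild the DEM
              self.act = nn.ReLU()

          def forward(self, low_res_dem):
              # The low-resolution DEM is assumed to be upsampled to the target grid beforehand.
              x = self.act(self.detect(low_res_dem))
              x = self.act(self.compress(x))
              return self.reconstruct(x)

      model = DEMSuperResolution()
      dem = torch.randn(1, 1, 128, 128)              # one single-channel DEM tile
      reference = torch.randn(1, 1, 128, 128)        # stand-in for the expected high-resolution DEM
      loss = nn.functional.mse_loss(model(dem), reference)
      loss.backward()                                # training minimises the error to the reference
      print(model(dem).shape)                        # torch.Size([1, 1, 128, 128])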

  18. A LATENT TRAINING MODEL OF CONVOLUTIONAL NEURAL NETWORKS FOR PEDESTRIAN DETECTION

    Institute of Scientific and Technical Information of China (English)

    黄咨; 刘琦; 陈致远; 赵宇明

    2016-01-01

    Pedestrian detection has become one of the hot research topics in various fields. Convolutional neural networks have excellent learning ability: the characteristics of targets learned by these networks are more natural and more conducive to distinguishing different targets. However, traditional convolutional neural network models have to process the entire target, and all the training samples need to be pre-labelled correctly; these requirements hamper the development of convolutional neural network models. In this paper, we propose a convolutional neural network-based latent training model. The model reduces the computational complexity by integrating multiple part detection modules, and learns the target classification rules from unlabelled samples by adopting a latent training method. We also propose a two-stage learning scheme to scale up the size of the network step by step. Evaluation on a public static pedestrian detection dataset, the INRIA Person Dataset [1], demonstrates that our model achieves 98% detection accuracy and 95% average precision.

  19. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs.

    Science.gov (United States)

    Chen, Liang-Chieh; Papandreou, George; Kokkinos, Iasonas; Murphy, Kevin; Yuille, Alan L

    2017-04-27

    In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First, we highlight convolution with upsampled filters, or 'atrous convolution', as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-views, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed "DeepLab" system sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7% mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.
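
    The effect of atrous (dilated) convolution on the field of view can be illustrated with a few lines of Python/PyTorch: increasing the dilation rate of a 3x3 filter enlarges its effective receptive field without adding parameters. The layer sizes below are arbitrary and unrelated to the actual DeepLab configuration.

      import torch
      import torch.nn as nn

      x = torch.randn(1, 64, 65, 65)

      standard = nn.Conv2d(64, 64, kernel_size=3, padding=1, dilation=1)
      atrous = nn.Conv2d(64, 64, kernel_size=3, padding=4, dilation=4)

      print(standard(x).shape, atrous(x).shape)            # same spatial size
      print(sum(p.numel() for p in standard.parameters()),
            sum(p.numel() for p in atrous.parameters()))   # identical parameter count

      # The effective field of view of a 3x3 kernel with dilation rate r is (2r + 1) x (2r + 1).
      for r in (1, 2, 4, 8):
          print("dilation", r, "-> field of view", 2 * r + 1)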

  20. Matrix convolution operators on groups

    CERN Document Server

    Chu, Cho-Ho

    2008-01-01

    In the last decade, convolution operators of matrix functions have received unusual attention due to their diverse applications. This monograph presents some new developments in the spectral theory of these operators. The setting is the Lp spaces of matrix-valued functions on locally compact groups. The focus is on the spectra and eigenspaces of convolution operators on these spaces, defined by matrix-valued measures. Among various spectral results, the L2-spectrum of such an operator is completely determined and as an application, the spectrum of a discrete Laplacian on a homogeneous graph is computed using this result. The contractivity properties of matrix convolution semigroups are studied and applications to harmonic functions on Lie groups and Riemannian symmetric spaces are discussed. An interesting feature is the presence of Jordan algebraic structures in matrix-harmonic functions.

  1. Convolution Models with Shift-invariant kernel based on Matlab-GPU platform for Fast Acoustic Imaging

    OpenAIRE

    Chu, Ning; Gac, Nicolas; Picheral, José; Mohammad-Djafari, Ali

    2014-01-01

    Acoustic imaging is an advanced technique for acoustic source localization and power reconstruction from limited noisy measurements at microphone sensors. This technique involves not only a forward model of acoustic propagation from sources to sensors, but also the numerical solution of an ill-posed inverse problem. Nowadays, Bayesian inference methods have been widely investigated for robust acoustic imaging, but most Bayesian methods are...

  2. A REMARK ON CERTAIN CONVOLUTION OPERATOR

    Institute of Scientific and Technical Information of China (English)

    刘金林

    1993-01-01

    A certain operator D(a+p-1) defined by convolutions (or Hadamard products) is introduced. The object of this paper is to give an application of the convolution operator D(a+p-1) to the differential inequalities.

  3. Fast Algorithms for Convolutional Neural Networks

    OpenAIRE

    Lavin, Andrew; Gray, Scott

    2015-01-01

    Deep convolutional neural networks take GPU days of compute time to train on large data sets. Pedestrian detection for self driving cars requires very low latency. Image recognition for mobile phones is constrained by limited processing resources. The success of convolutional neural networks in these situations is limited by how fast we can compute them. Conventional FFT based convolution is fast for large filters, but state of the art convolutional neural networks use small, 3x3 filters. We ...

  4. Engineering Multirate Convolutions for Radar Imaging

    NARCIS (Netherlands)

    Bierens, L.H.J.; Deprettere, E.F.

    1996-01-01

    We present a schematic design methodology for multirate convolution systems, based on combined algorithmic development and architecture design. It allows us to map the algebraic specification of a long convolution algorithm directly onto efficient fast convolution hardware based on short FFT process

  5. Reed-Solomon convolutional codes

    NARCIS (Netherlands)

    Gluesing-Luerssen, H; Schmale, W

    2005-01-01

    In this paper we will introduce a specific class of cyclic convolutional codes. The construction is based on Reed-Solomon block codes. The algebraic parameters as well as the distance of these codes are determined. This shows that some of these codes are optimal or near optimal.

  6. Zebrafish tracking using convolutional neural networks

    Science.gov (United States)

    XU, Zhiping; Cheng, Xi En

    2017-01-01

    Keeping identity for a long term after occlusion is still an open problem in the video tracking of zebrafish-like model animals, and accurate animal trajectories are the foundation of behaviour analysis. We utilize the highly accurate object recognition capability of a convolutional neural network (CNN) to distinguish fish of the same congener, even though these animals are indistinguishable to the human eye. We used data augmentation and an iterative CNN training method to optimize the accuracy for our classification task, achieving surprisingly accurate trajectories for zebrafish groups of different sizes and ages over different time spans. This work will make further behaviour analysis more reliable. PMID:28211462

  7. Zebrafish tracking using convolutional neural networks

    Science.gov (United States)

    Xu, Zhiping; Cheng, Xi En

    2017-02-01

    Keeping identity for a long term after occlusion is still an open problem in the video tracking of zebrafish-like model animals, and accurate animal trajectories are the foundation of behaviour analysis. We utilize the highly accurate object recognition capability of a convolutional neural network (CNN) to distinguish fish of the same congener, even though these animals are indistinguishable to the human eye. We used data augmentation and an iterative CNN training method to optimize the accuracy for our classification task, achieving surprisingly accurate trajectories for zebrafish groups of different sizes and ages over different time spans. This work will make further behaviour analysis more reliable.

  8. Detection of phase transition via convolutional neural network

    CERN Document Server

    Tanaka, Akinori

    2016-01-01

    We design a Convolutional Neural Network (CNN) which studies the correlation between the discretized inverse temperature and the spin configuration of the 2D Ising model, and show that it can find a feature of the phase transition without being given any a priori information about it. We also define a new order parameter via the CNN and show that it provides a well-approximated critical inverse temperature. In addition, we compare activation functions for the convolution layer and find that the Rectified Linear Unit (ReLU) is important for detecting the phase transition of the 2D Ising model.
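
    A minimal Python/PyTorch sketch of such a classifier is shown below: a small CNN with ReLU activations maps a 2D spin configuration to an ordered/disordered prediction. The architecture, and the use of random spins in place of Monte Carlo configurations, are illustrative assumptions.

      import torch
      import torch.nn as nn

      class IsingCNN(nn.Module):
          def __init__(self):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),   # ReLU activation in the conv layer
                  nn.MaxPool2d(2),
                  nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
                  nn.AdaptiveAvgPool2d(1),
              )
              self.classify = nn.Linear(16, 2)       # below / above the critical inverse temperature

          def forward(self, spins):
              return self.classify(self.features(spins).flatten(1))

      model = IsingCNN()
      spins = torch.randint(0, 2, (4, 1, 32, 32)).float() * 2 - 1   # batch of +/-1 spin configurations
      print(model(spins).shape)                                     # torch.Size([4, 2])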

  9. Brain and art: illustrations of the cerebral convolutions. A review.

    Science.gov (United States)

    Lazić, D; Marinković, S; Tomić, I; Mitrović, D; Starčević, A; Milić, I; Grujičić, M; Marković, B

    2014-08-01

    Aesthetics and functional significance of the cerebral cortical relief gave us the idea to find out how often the convolutions are presented in fine art, and in which techniques, conceptual meaning and pathophysiological aspect. We examined 27,614 art works created by 2,856 authors and presented in art literature, and in Google images search. The cerebral gyri were shown in 0.85% of the art works created by 2.35% of the authors. The concept of the brain was first mentioned in ancient Egypt some 3,700 years ago. The first artistic drawing of the convolutions was made by Leonardo da Vinci, and the first colour picture by an unknown Italian author. Rembrandt van Rijn was the first to paint the gyri. Dozens of modern authors, who are professional artists, medical experts or designers, presented the cerebral convolutions in drawings, paintings, digital works or sculptures, with various aesthetic, symbolic and metaphorical connotation. Some artistic compositions and natural forms show a gyral pattern. The convolutions, whose cortical layers enable the cognitive functions, can be affected by various disorders. Some artists suffered from those disorders, and some others presented them in their artworks. The cerebral convolutions or gyri, thanks to their extensive cortical mantle, are the specific morphological basis for the human mind, but also the structures with their own aesthetics. Contemporary authors relatively often depict or model the cerebral convolutions, either from the aesthetic or conceptual aspect. In this way, they make a connection between neuroscience and fine art.

  10. Quasi-Convolution Pyramidal Blurring

    OpenAIRE

    Kraus, Martin

    2008-01-01

    Efficient image blurring techniques based on the pyramid algorithm can be implemented on modern graphics hardware; thus, image blurring with arbitrary blur width is possible in real time even for large images. However, pyramidal blurring methods do not achieve the image quality provided by convolution filters; in particular, the shape of the corresponding filter kernel varies locally, which potentially results in objectionable rendering artifacts. In this work, a new analysis filter is designed...

  11. Keypoint Density-Based Region Proposal for Fine-Grained Object Detection and Classification Using Regions with Convolutional Neural Network Features

    Science.gov (United States)

    2015-12-15

    ... convolution, activation functions, and pooling. For a model trained on ... classes, the output from the classification layer comprises ... + 1 ... Convolutional Neural Networks (CNNs) enable them to outperform conventional techniques on standard object detection and classification tasks, their ...

  12. Unsupervised pre-training for fully convolutional neural networks

    NARCIS (Netherlands)

    Wiehman, Stiaan; Kroon, Steve; Villiers, De Hendrik

    2017-01-01

    Unsupervised pre-training of neural networks has been shown to act as a regularization technique, improving performance and reducing model variance. Recently, fully convolutional networks (FCNs) have shown state-of-the-art results on various semantic segmentation tasks. Unfortunately, there is no ef

  13. Review of the convolution algorithm for evaluating service integrated systems

    DEFF Research Database (Denmark)

    Iversen, Villy Bæk

    1997-01-01

    In this paper we give a review of the applicability of the convolution algorithm. By this we are able to evaluate communication networks end-to-end with e.g. BPP multi-rate traffic models insensitive to the holding time distribution. Rearrangement, minimum allocation, and maximum allocation are ...

  14. Compressed imaging by sparse random convolution.

    Science.gov (United States)

    Marcos, Diego; Lasser, Theo; López, Antonio; Bourquard, Aurélien

    2016-01-25

    The theory of compressed sensing (CS) shows that signals can be acquired at sub-Nyquist rates if they are sufficiently sparse or compressible. Since many images bear this property, several acquisition models have been proposed for optical CS. An interesting approach is random convolution (RC). In contrast with single-pixel CS approaches, RC allows for the parallel capture of visual information on a sensor array as in conventional imaging approaches. Unfortunately, the RC strategy is difficult to implement as is in practical settings due to important contrast-to-noise-ratio (CNR) limitations. In this paper, we introduce a modified RC model circumventing such difficulties by considering measurement matrices involving sparse non-negative entries. We then implement this model based on a slightly modified microscopy setup using incoherent light. Our experiments demonstrate the suitability of this approach for dealing with distinct CS scenarios, including 1-bit CS.
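
    The modified random-convolution measurement model described above can be sketched in a few lines of Python/NumPy: the scene is convolved with a sparse, non-negative random kernel and then subsampled on the sensor. The sparsity level and sampling ratio below are illustrative assumptions.

      import numpy as np
      from scipy.signal import fftconvolve

      rng = np.random.default_rng(0)
      scene = rng.random((128, 128))

      # Sparse, non-negative random convolution kernel (about 2% of its entries are non-zero).
      kernel = np.zeros((128, 128))
      mask = rng.random(kernel.shape) < 0.02
      kernel[mask] = rng.random(int(mask.sum()))

      mixed = fftconvolve(scene, kernel, mode="same")

      # Compressive sampling: keep a random 25% of the sensor pixels as measurements.
      keep = rng.random(mixed.shape) < 0.25
      measurements = mixed[keep]
      print(measurements.size, "measurements for", scene.size, "unknowns")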

  15. Robust Convolutional Neural Networks for Image Recognition

    Directory of Open Access Journals (Sweden)

    Hayder M. Albeahdili

    2015-11-01

    Full Text Available Image recognition has recently become a vital task addressed by several methods. One of the most widely used is the Convolutional Neural Network (CNN). However, some tasks rely on small features that are an essential part of the task, and classification using a CNN can then be inefficient because most of those features diminish before reaching the final stage of classification. In this work, we analyze and explore essential parameters that can influence model performance. Furthermore, several prior contemporary models are combined to introduce a new, improved model. Finally, a new CNN architecture is proposed which achieves state-of-the-art classification results on different challenge benchmarks. The experiments are conducted on the MNIST, CIFAR-10, and CIFAR-100 datasets. Experimental results show superior performance compared with the most recent approaches.

  16. Convolutive Blind Source Separation Methods

    DEFF Research Database (Denmark)

    Pedersen, Michael Syskind; Larsen, Jan; Kjems, Ulrik

    2008-01-01

    During the past decades, much attention has been given to the separation of mixed sources, in particular for the blind case where both the sources and the mixing process are unknown and only recordings of the mixtures are available. In several situations it is desirable to recover all sources from....... This may help practitioners and researchers new to the area of convolutive source separation obtain a complete overview of the field. Hopefully those with more experience in the field can identify useful tools, or find inspiration for new algorithms....

  17. Cantilever tilt causing amplitude related convolution in dynamic mode atomic force microscopy.

    Science.gov (United States)

    Wang, Chunmei; Sun, Jielin; Itoh, Hiroshi; Shen, Dianhong; Hu, Jun

    2011-01-01

    It is well known that the topography in atomic force microscopy (AFM) is a convolution of the tip's shape and the sample's geometry. The classical convolution model was established in contact mode assuming a static probe, but it is no longer valid in dynamic mode AFM. It is still not well understood whether or how the vibration of the probe in dynamic mode affects the convolution. Such ignorance complicates the interpretation of the topography. Here we propose a convolution model for dynamic mode by taking into account the typical design of the cantilever tilt in AFMs, which leads to a different convolution from that in contact mode. Our model indicates that the cantilever tilt results in a dynamic convolution affected by the absolute value of the amplitude, especially in the case that the corresponding contact convolution has sharp edges beyond a certain angle. The effect was experimentally demonstrated by a perpendicular SiO2/Si super-lattice structure. Our model is useful for quantitative characterizations in dynamic mode, especially in probe characterization and critical dimension measurements.

  18. Real-time rendering of optical effects using spatial convolution

    Science.gov (United States)

    Rokita, Przemyslaw

    1998-03-01

    Simulation of special effects such as the defocus effect, depth-of-field effect, raindrops or water film falling on the windshield may be very useful in visual simulators and in all computer graphics applications that need realistic images of outdoor scenery. Those effects are especially important in rendering poor visibility conditions in flight and driving simulators, but can also be applied, for example, in composing computer graphics and video sequences, i.e., in Augmented Reality systems. This paper proposes a new approach to the rendering of those optical effects by iterative adaptive filtering using spatial convolution. The advantage of this solution is that the adaptive convolution can be done in real time by existing hardware. The optical effects mentioned above can be introduced into an image computed using a conventional camera model by applying to the intensity of each pixel a convolution filter having an appropriate point spread function. The algorithms described in this paper can be easily implemented in the visualization pipeline: the final effect may be obtained by iterative filtering using a single hardware convolution filter or with a pipeline composed of identical 3 x 3 filters placed as the stages of this pipeline. Another advantage of the proposed solution is that the extension based on the proposed algorithm can be added to existing rendering systems as a final stage of the visualization pipeline.
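
    The iterative small-kernel filtering idea can be sketched in Python/SciPy: repeatedly applying the same 3x3 smoothing kernel approximates a progressively wider blur, so a pipeline of identical 3x3 stages can stand in for one large convolution filter. The kernel and the number of stages below are assumptions for illustration.

      import numpy as np
      from scipy.signal import convolve2d

      rng = np.random.default_rng(0)
      image = rng.random((256, 256))

      kernel3 = np.array([[1.0, 2.0, 1.0],
                          [2.0, 4.0, 2.0],
                          [1.0, 2.0, 1.0]]) / 16.0        # small separable smoothing kernel

      blurred = image
      for _ in range(8):                                  # eight identical 3x3 pipeline stages
          blurred = convolve2d(blurred, kernel3, mode="same", boundary="symm")

      print(float(image.std()), float(blurred.std()))     # variance shrinks as the blur widens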

  19. Numerical simulation of seismic wave propagation in complex media by convolutional differentiator

    Institute of Scientific and Technical Information of China (English)

    LI Xin-fu; LI Xiao-fan

    2008-01-01

    We apply the forward modeling algorithm based on the convolutional Forsyte polynomial differentiator proposed by previous workers to seismic wave simulation in complex heterogeneous media, and compare the efficiency and accuracy of this method with other seismic simulation methods such as the finite-difference and pseudospectral methods. Numerical experiments demonstrate that the algorithm based on the convolutional Forsyte polynomial differentiator has high efficiency and accuracy and needs fewer computational resources, so it is a numerical modeling method with much potential.

  20. Convolutional Neural Network for Image Recognition

    CERN Document Server

    Seifnashri, Sahand

    2015-01-01

    The aim of this project is to use machine learning techniques, especially Convolutional Neural Networks, for image processing. These techniques can be used for quark-gluon discrimination using calorimeter data, but unfortunately I didn't manage to get the calorimeter data and I just used the Jet data from miniaodsim (ak4 chs). The Jet data were not well suited to a Convolutional Neural Network, which is designed for 'image' recognition. This report is made of two main parts: part one is mainly about implementing a Convolutional Neural Network on non-physics data such as MNIST digits and the CIFAR-10 dataset, and part two is about the Jet data.

  1. The analysis of VERITAS muon images using convolutional neural networks

    CERN Document Server

    Feng, Qi

    2016-01-01

    Imaging atmospheric Cherenkov telescopes (IACTs) are sensitive to rare gamma-ray photons, buried in the background of charged cosmic-ray (CR) particles, the flux of which is several orders of magnitude greater. The ability to separate gamma rays from CR particles is important, as it is directly related to the sensitivity of the instrument. This gamma-ray/CR-particle classification problem in IACT data analysis can be treated with the rapidly-advancing machine learning algorithms, which have the potential to outperform the traditional box-cut methods on image parameters. We present preliminary results of a precise classification of a small set of muon events using a convolutional neural network model with the raw images as input features. We also show the possibility of using the convolutional neural network model for regression problems, such as the radius and brightness measurement of muon events, which can be used to calibrate the throughput efficiency of IACTs.

  2. The analysis of VERITAS muon images using convolutional neural networks

    Science.gov (United States)

    Feng, Qi; Lin, Tony T. Y.; VERITAS Collaboration

    2017-06-01

    Imaging atmospheric Cherenkov telescopes (IACTs) are sensitive to rare gamma-ray photons, buried in the background of charged cosmic-ray (CR) particles, the flux of which is several orders of magnitude greater. The ability to separate gamma rays from CR particles is important, as it is directly related to the sensitivity of the instrument. This gamma-ray/CR-particle classification problem in IACT data analysis can be treated with the rapidly-advancing machine learning algorithms, which have the potential to outperform the traditional box-cut methods on image parameters. We present preliminary results of a precise classification of a small set of muon events using a convolutional neural network model with the raw images as input features. We also show the possibility of using the convolutional neural network model for regression problems, such as the radius and brightness measurement of muon events, which can be used to calibrate the throughput efficiency of IACTs.

  3. Infimal Convolution Regularisation Functionals of BV and Lp Spaces

    KAUST Repository

    Burger, Martin

    2016-02-03

    We study a general class of infimal convolution type regularisation functionals suitable for applications in image processing. These functionals incorporate a combination of the total variation seminorm and Lp norms. A unified well-posedness analysis is presented and a detailed study of the one-dimensional model is performed, by computing exact solutions for the corresponding denoising problem and the case p=2. Furthermore, the dependence of the regularisation properties of this infimal convolution approach on the choice of p is studied. It turns out that in the case p=2 this regulariser is equivalent to the Huber-type variant of total variation regularisation. We provide numerical examples for image decomposition as well as for image denoising. We show that our model is capable of eliminating the staircasing effect, a well-known disadvantage of total variation regularisation. Moreover, as p increases we obtain almost piecewise affine reconstructions, leading also to a better preservation of hat-like structures.
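
    For readers unfamiliar with the operation, the LaTeX fragment below restates the generic infimal convolution of two convex functionals and the TV-Lp instance discussed above; the weights alpha and beta are placeholder symbols introduced here, not necessarily the paper's notation.

        % Generic infimal convolution of convex functionals F and G:
        \[ (F \,\Box\, G)(u) = \inf_{v} \bigl( F(u - v) + G(v) \bigr) \]
        % TV--L^p instance with placeholder weights \alpha, \beta:
        \[ J_{p}(u) = \inf_{v} \bigl( \alpha\,\mathrm{TV}(u - v) + \beta\,\lVert v \rVert_{L^{p}} \bigr) \]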

  4. Application of structured support vector machine backpropagation to a convolutional neural network for human pose estimation.

    Science.gov (United States)

    Witoonchart, Peerajak; Chongstitvatana, Prabhas

    2017-08-01

    In this study, for the first time, we show how to formulate a structured support vector machine (SSVM) as two layers in a convolutional neural network, where the top layer is a loss-augmented inference layer and the bottom layer is the normal convolutional layer. We show that a deformable part model can be learned with the proposed structured SVM neural network by backpropagating the error of the deformable part model to the convolutional neural network. The forward propagation calculates the loss-augmented inference and the backpropagation calculates the gradient from the loss-augmented inference layer to the convolutional layer. Thus, we obtain a new type of convolutional neural network, called a structured SVM convolutional neural network, which we applied to the human pose estimation problem. This new neural network can be used as the final layers in deep learning. Our method jointly learns the structural model parameters and the appearance model parameters. We implemented our method as a new layer in the existing Caffe library.

  5. PROPERTIES OF THE CONVOLUTION WITH PRESTARLIKE FUNCTIONS

    Institute of Scientific and Technical Information of China (English)

    Jacek DZIOK

    2013-01-01

    In the paper we investigate convolution properties related to the prestarlike functions and various inclusion relationships between defined classes of functions. Interesting applications involving the well-known classes of functions defined by linear operators are also considered.

  6. Inf-convolution of G-expectations

    Institute of Scientific and Technical Information of China (English)

    BUCKDAHN; Rainer

    2010-01-01

    In this paper we will discuss the optimal risk transfer problems when risk measures are generated by G-expectations, and we present the relationship between the inf-convolution of G-expectations and the inf-convolution of drivers G.

  7. Convolution kernels for multi-wavelength imaging

    National Research Council Canada - National Science Library

    Boucaud, Alexandre; Bocchio, Marco; Abergel, Alain; Orieux, François; Dole, Hervé; Hadj-Youcef, Mohamed Amine

    2016-01-01

    .... Given the knowledge of the PSF in each band, a straightforward way of processing images is to homogenise them all to a target PSF using convolution kernels, so that they appear as if they had been...

  8. Learning text representation using recurrent convolutional neural network with highway layers

    OpenAIRE

    Wen, Ying; Zhang, Weinan; Luo, Rui; Wang, Jun

    2016-01-01

    Recently, the rapid development of word embedding and neural networks has brought new inspiration to various NLP and IR tasks. In this paper, we describe a staged hybrid model combining Recurrent Convolutional Neural Networks (RCNN) with highway layers. The highway network module, incorporated in the middle, takes the output of the bi-directional Recurrent Neural Network (Bi-RNN) module in the first stage and provides the Convolutional Neural Network (CNN) module in the last stage with the i...

  9. Multiscale Convolutional Neural Networks for Hand Detection

    Directory of Open Access Journals (Sweden)

    Shiyang Yan

    2017-01-01

    Full Text Available Unconstrained hand detection in still images plays an important role in many hand-related vision problems, for example, hand tracking, gesture analysis, human action recognition, human-machine interaction, and sign language recognition. Although hand detection has been extensively studied for decades, it is still a challenging task with many problems to be tackled. The contributing factors for this complexity include heavy occlusion, low resolution, varying illumination conditions, different hand gestures, and the complex interactions between hands and objects or other hands. In this paper, we propose a multiscale deep learning model for unconstrained hand detection in still images. Deep learning models, and deep convolutional neural networks (CNNs) in particular, have achieved state-of-the-art performance on many vision benchmarks. Building on the region-based CNN (R-CNN) model, we propose a hand detection scheme based on candidate regions generated by a generic region proposal algorithm, followed by multiscale information fusion from the popular VGG16 model. Two benchmark datasets were used to validate the proposed method, namely, the Oxford Hand Detection Dataset and the VIVA Hand Detection Challenge. We achieved state-of-the-art results on the Oxford Hand Detection Dataset and satisfactory performance in the VIVA Hand Detection Challenge.

  10. Contour Detection Using Cost-Sensitive Convolutional Neural Networks

    OpenAIRE

    Hwang, Jyh-Jing; Liu, Tyng-Luh

    2014-01-01

    We address the problem of contour detection via per-pixel classification of edge points. To facilitate the process, the proposed approach leverages DenseNet, an efficient implementation of multiscale convolutional neural networks (CNNs), to extract an informative feature vector for each pixel, and uses an SVM classifier to accomplish contour detection. The main challenge lies in adapting a pre-trained per-image CNN model for yielding per-pixel image features. We propose to base on the Den...

  11. Gradient Flow Convolutive Blind Source Separation

    DEFF Research Database (Denmark)

    Pedersen, Michael Syskind; Nielsen, Chinton Møller

    2004-01-01

    Experiments have shown that the performance of instantaneous gradient flow beamforming by Cauwenberghs et al. is reduced significantly in reverberant conditions. By expanding the gradient flow principle to convolutive mixtures, separation in a reverberant environment is possible. By use of a circ...

  12. A guide to convolution arithmetic for deep learning

    OpenAIRE

    Dumoulin, Vincent; Visin, Francesco

    2016-01-01

    We introduce a guide to help deep learning practitioners understand and manipulate convolutional neural network architectures. The guide clarifies the relationship between various properties (input shape, kernel shape, zero padding, strides and output shape) of convolutional, pooling and transposed convolutional layers, as well as the relationship between convolutional and transposed convolutional layers. Relationships are derived for various cases, and are illustrated in order to make them i...
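
    The core shape relationships such a guide derives can be captured in a few lines. The helper below encodes the standard one-axis formulas for convolution and transposed convolution output sizes; the function names are mine, and the transposed formula ignores the output_padding corner case.

        import math

        def conv_output_size(i, k, s=1, p=0):
            # Output length along one axis: i = input size, k = kernel size,
            # s = stride, p = zero padding added on each side.
            return math.floor((i + 2 * p - k) / s) + 1

        def transposed_conv_output_size(o, k, s=1, p=0):
            # Size produced by the corresponding transposed convolution
            # (without output_padding, so strided cases may be off by one).
            return (o - 1) * s - 2 * p + k

        # e.g. a 5x5 kernel, stride 2, padding 2 on a 32-pixel axis:
        o = conv_output_size(32, 5, s=2, p=2)                      # -> 16
        assert transposed_conv_output_size(o, 5, s=2, p=2) == 31   # not 32: output_padding would fix the off-by-one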

  13. Nuclear norm regularized convolutional Max Pos@Top machine

    KAUST Repository

    Li, Qinfeng

    2016-11-18

    In this paper, we propose a novel classification model for multiple-instance data, which aims to maximize the number of positive instances ranked before the top-ranked negative instances. This method targets a recently emerged performance measure named Pos@Top. Our proposed classification model has a convolutional structure composed of four layers, i.e., the convolutional layer, the activation layer, the max-pooling layer and the fully connected layer. In this paper, we propose an algorithm to learn the convolutional filters and the full-connection weights to maximize the Pos@Top measure over the training set. Also, we try to minimize the rank of the filter matrix to explore the low-dimensional space of the instances in conjunction with the classification results. The rank minimization is conducted via nuclear norm minimization of the filter matrix. In addition, we develop an iterative algorithm to solve the corresponding problem. We test our method on several benchmark datasets. The experimental results show the superiority of our method compared with other state-of-the-art Pos@Top maximization methods.

  14. Deep Convolutional Neural Network for Inverse Problems in Imaging

    Science.gov (United States)

    Jin, Kyong Hwan; McCann, Michael T.; Froustey, Emmanuel; Unser, Michael

    2017-09-01

    In this paper, we propose a novel deep convolutional neural network (CNN)-based algorithm for solving ill-posed inverse problems. Regularized iterative algorithms have emerged as the standard approach to ill-posed inverse problems in the past few decades. These methods produce excellent results, but can be challenging to deploy in practice due to factors including the high computational cost of the forward and adjoint operators and the difficulty of hyperparameter selection. The starting point of our work is the observation that unrolled iterative methods have the form of a CNN (filtering followed by point-wise non-linearity) when the normal operator (H*H, the adjoint of H times H) of the forward model is a convolution. Based on this observation, we propose using direct inversion followed by a CNN to solve normal-convolutional inverse problems. The direct inversion encapsulates the physical model of the system, but leads to artifacts when the problem is ill-posed; the CNN combines multiresolution decomposition and residual learning in order to learn to remove these artifacts while preserving image structure. We demonstrate the performance of the proposed network in sparse-view reconstruction (down to 50 views) on parallel-beam X-ray computed tomography in synthetic phantoms as well as in real experimental sinograms. The proposed network outperforms total variation-regularized iterative reconstruction for the more realistic phantoms and requires less than a second to reconstruct a 512 x 512 image on GPU.
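
    The overall recipe described above, a cheap direct inversion followed by a residual CNN that removes the resulting artifacts, can be sketched in a few lines of PyTorch. The three-layer network below is a placeholder of mine, not the authors' architecture, and the direct-inversion operator (e.g. filtered back-projection) is assumed to be supplied by the caller.

        import torch
        import torch.nn as nn

        class DirectInversionPlusCNN(nn.Module):
            # Sketch: apply a physics-based direct inversion, then learn a residual
            # correction that removes its artifacts while preserving structure.
            def __init__(self, channels=32):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(1, channels, kernel_size=3, padding=1), nn.ReLU(),
                    nn.Conv2d(channels, channels, kernel_size=3, padding=1), nn.ReLU(),
                    nn.Conv2d(channels, 1, kernel_size=3, padding=1),
                )

            def forward(self, sinogram, direct_inversion):
                x = direct_inversion(sinogram)   # e.g. filtered back-projection (hypothetical callable)
                return x + self.net(x)           # residual learning of the artifacts

        # Usage sketch (my_fbp_operator is a hypothetical FBP implementation):
        # model = DirectInversionPlusCNN()
        # recon = model(sino_batch, my_fbp_operator)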

  15. A Fast Numerical Method for Max-Convolution and the Application to Efficient Max-Product Inference in Bayesian Networks.

    Science.gov (United States)

    Serang, Oliver

    2015-08-01

    Observations depending on sums of random variables are common throughout many fields; however, no efficient solution is currently known for performing max-product inference on these sums of general discrete distributions (max-product inference can be used to obtain maximum a posteriori estimates). The limiting step to max-product inference is the max-convolution problem (sometimes presented in log-transformed form and denoted as "infimal convolution," "min-convolution," or "convolution on the tropical semiring"), for which no O(k log(k)) method is currently known. Presented here is an O(k log(k)) numerical method for estimating the max-convolution of two nonnegative vectors (e.g., two probability mass functions), where k is the length of the larger vector. This numerical max-convolution method is then demonstrated by performing fast max-product inference on a convolution tree, a data structure for performing fast inference given information on the sum of n discrete random variables in O(nk log(nk)log(n)) steps (where each random variable has an arbitrary prior distribution on k contiguous possible states). The numerical max-convolution method can be applied to specialized classes of hidden Markov models to reduce the runtime of computing the Viterbi path from nk² to nk log(k), and has potential application to the all-pairs shortest paths problem.
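
    The numerical idea can be demonstrated with a toy version: for non-negative vectors, the maximum of a set of terms is approximated by their p-norm, so the max-convolution is approximated by an ordinary convolution (which can be computed via FFT) of the p-th powers followed by a p-th root. The code below is a simplified sketch of that principle; the published method uses a more refined, numerically stabilised scheme rather than a single fixed p.

        import numpy as np

        def max_convolve_naive(a, b):
            # Exact max-convolution, c[m] = max_i a[i] * b[m - i]; O(k^2).
            k = len(a) + len(b) - 1
            c = np.zeros(k)
            for m in range(k):
                lo, hi = max(0, m - len(b) + 1), min(len(a), m + 1)
                c[m] = np.max(a[lo:hi] * b[m - np.arange(lo, hi)])
            return c

        def max_convolve_pnorm(a, b, p=64):
            # Approximation for non-negative vectors: as p grows, the sum inside an
            # ordinary convolution of a**p and b**p is dominated by its largest term.
            return np.convolve(a ** p, b ** p) ** (1.0 / p)

        a, b = np.random.rand(1000), np.random.rand(800)
        err = np.max(np.abs(max_convolve_naive(a, b) - max_convolve_pnorm(a, b)))
        print("max abs error of the p-norm approximation:", err)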

  16. Event Discrimination using Convolutional Neural Networks

    Science.gov (United States)

    Menon, Hareesh; Hughes, Richard; Daling, Alec; Winer, Brian

    2017-01-01

    Convolutional Neural Networks (CNNs) are computational models that have been shown to be effective at classifying different types of images. We present a method to use CNNs to distinguish events involving the production of a top quark pair and a Higgs boson from events involving the production of a top quark pair and several quark and gluon jets. To do this, we generate and simulate data using MADGRAPH and DELPHES for a general purpose LHC detector at 13 TeV. We produce images using a particle flow algorithm by binning the particles geometrically based on their position in the detector and weighting the bins by the energy of each particle within each bin, and by defining channels based on particle types (charged track, neutral hadronic, neutral EM, lepton, heavy flavor). Our classification results are competitive with standard machine learning techniques. We have also looked into the classification of the substructure of the events, in a process known as scene labeling. In this context, we look for the presence of boosted objects (such as top quarks) with substructure encompassed within single jets. Preliminary results on substructure classification will be presented.

  17. Do Convolutional Neural Networks Learn Class Hierarchy?

    Science.gov (United States)

    Alsallakh, Bilal; Jourabloo, Amin; Ye, Mao; Liu, Xiaoming; Ren, Liu

    2017-08-29

    Convolutional Neural Networks (CNNs) currently achieve state-of-the-art accuracy in image classification. With a growing number of classes, the accuracy usually drops as the possibilities of confusion increase. Interestingly, the class confusion patterns follow a hierarchical structure over the classes. We present visual-analytics methods to reveal and analyze this hierarchy of similar classes in relation with CNN-internal data. We found that this hierarchy not only dictates the confusion patterns between the classes, it furthermore dictates the learning behavior of CNNs. In particular, the early layers in these networks develop feature detectors that can separate high-level groups of classes quite well, even after a few training epochs. In contrast, the later layers require substantially more epochs to develop specialized feature detectors that can separate individual classes. We demonstrate how these insights are key to significant improvement in accuracy by designing hierarchy-aware CNNs that accelerate model convergence and alleviate overfitting. We further demonstrate how our methods help in identifying various quality issues in the training data.

  18. Medical image fusion using the convolution of Meridian distributions.

    Science.gov (United States)

    Agrawal, Mayank; Tsakalides, Panagiotis; Achim, Alin

    2010-01-01

    The aim of this paper is to introduce a novel non-Gaussian statistical model-based approach for medical image fusion based on the Meridian distribution. The paper also includes a new approach to estimate the parameters of generalized Cauchy distribution. The input images are first decomposed using the Dual-Tree Complex Wavelet Transform (DT-CWT) with the subband coefficients modelled as Meridian random variables. Then, the convolution of Meridian distributions is applied as a probabilistic prior to model the fused coefficients, and the weights used to combine the source images are optimised via Maximum Likelihood (ML) estimation. The superior performance of the proposed method is demonstrated using medical images.

  19. Uncertainty estimation by convolution using spatial statistics.

    Science.gov (United States)

    Sanchez-Brea, Luis Miguel; Bernabeu, Eusebio

    2006-10-01

    Kriging has proven to be a useful tool in image processing since it behaves, under regular sampling, as a convolution. Convolution kernels obtained with kriging allow noise filtering and include the effects of the random fluctuations of the experimental data and the resolution of the measuring devices. The uncertainty at each location of the image can also be determined using kriging. However, this procedure is slow since, currently, only matrix methods are available. In this work, we compare the way kriging performs the uncertainty estimation with the standard statistical technique for magnitudes without spatial dependence. As a result, we propose a much faster technique, based on the variogram, to determine the uncertainty using a convolutional procedure. We check the validity of this approach by applying it to one-dimensional images obtained in diffractometry and two-dimensional images obtained by shadow moire.

  20. Astronomical Image Subtraction by Cross-Convolution

    Science.gov (United States)

    Yuan, Fang; Akerlof, Carl W.

    2008-04-01

    In recent years, there has been a proliferation of wide-field sky surveys to search for a variety of transient objects. Using relatively short focal lengths, the optics of these systems produce undersampled stellar images often marred by a variety of aberrations. As participants in such activities, we have developed a new algorithm for image subtraction that no longer requires high-quality reference images for comparison. The computational efficiency is comparable with similar procedures currently in use. The general technique is cross-convolution: two convolution kernels are generated to make a test image and a reference image separately transform to match as closely as possible. In analogy to the optimization technique for generating smoothing splines, the inclusion of an rms width penalty term constrains the diffusion of stellar images. In addition, by evaluating the convolution kernels on uniformly spaced subimages across the total area, these routines can accommodate point-spread functions that vary considerably across the focal plane.
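
    The essence of cross-convolution is symmetric: each frame is convolved with a kernel derived from the other frame's point-spread function, so both are driven toward a common effective PSF before subtraction. The sketch below simply uses the raw PSFs themselves as the two kernels; in the published method the kernels are fitted per sub-image with an rms-width penalty, which is not reproduced here.

        import numpy as np
        from scipy.signal import fftconvolve

        def cross_convolution_difference(test_img, ref_img, psf_test, psf_ref):
            # Convolve each frame with the *other* frame's PSF so both end up with
            # (approximately) the common PSF psf_test * psf_ref, then subtract.
            t = fftconvolve(test_img, psf_ref, mode="same")
            r = fftconvolve(ref_img, psf_test, mode="same")
            return t - r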

  1. The Urbanik Generalized Convolutions in the Non-Commutative Probability and a Forgotten Method of Constructing Generalized Convolution

    Indian Academy of Sciences (India)

    Barbara Jasiulis-Gołdyn; Anna Kula

    2012-08-01

    The paper deals with the notions of weak stability and weak generalized convolution with respect to a generalized convolution, introduced by Kucharczak and Urbanik. We study properties of such objects and give examples of weakly stable measures with respect to the Kendall convolution. Moreover, we show that in the context of non-commutative probability, two operations: the -convolution and the (,1)-convolution satisfy the Urbanik’s conditions for a generalized convolution, interpreted on the set of moment sequences. The weak stability reveals the relation between two operations.

  2. Colonoscopic polyp detection using convolutional neural networks

    Science.gov (United States)

    Park, Sun Young; Sargent, Dusty

    2016-03-01

    Computer-aided diagnosis (CAD) systems for medical image analysis rely on accurate and efficient feature extraction methods. Regardless of which type of classifier is used, the results will be limited if the input features are not diagnostically relevant and do not properly discriminate between the different classes of images. Thus, a large amount of research has been dedicated to creating feature sets that capture the salient features that physicians are able to observe in the images. Successful feature extraction reduces the semantic gap between the physician's interpretation and the computer representation of images, and helps to reduce the variability in diagnosis between physicians. Due to the complexity of many medical image classification tasks, feature extraction for each problem often requires domain-specific knowledge and a carefully constructed feature set for the specific type of images being classified. In this paper, we describe a method for automatic diagnostic feature extraction from colonoscopy images that may have general application and require a lower level of domain-specific knowledge. The work in this paper expands on our previous CAD algorithm for detecting polyps in colonoscopy video. In that work, we applied an eigenimage model to extract features representing polyps, normal tissue, diverticula, etc. from colonoscopy videos taken from various viewing angles and imaging conditions. Classification was performed using a conditional random field (CRF) model that accounted for the spatial and temporal adjacency relationships present in colonoscopy video. In this paper, we replace the eigenimage feature descriptor with features extracted from a convolutional neural network (CNN) trained to recognize the same image types in colonoscopy video. The CNN-derived features show greater invariance to viewing angles and image quality factors when compared to the eigenimage model. The CNN features are used as input to the CRF classifier as before. We report

  3. Human Face Recognition Using Convolutional Neural Networks

    Directory of Open Access Journals (Sweden)

    Răzvan-Daniel Albu

    2009-10-01

    Full Text Available In this paper, I present a novel hybrid face recognition approach based on a convolutional neural architecture, designed to robustly detect highly variable face patterns. The convolutional network extracts successively larger features in a hierarchical set of layers. From the weights of the trained neural networks, kernel windows are created and used for feature extraction in a 3-stage algorithm. I present experimental results illustrating the efficiency of the proposed approach. I use a database of 796 images of 159 individuals from Reims University which contains quite a high degree of variability in expression, pose, and facial details.

  4. Spectral classification using convolutional neural networks

    CERN Document Server

    Hála, Pavel

    2014-01-01

    There is a great need for accurate and autonomous spectral classification methods in astrophysics. This thesis is about training a convolutional neural network (ConvNet) to recognize an object class (quasar, star or galaxy) from one-dimensional spectra only. The author developed several scripts and C programs for dataset preparation, preprocessing and postprocessing of the data. The EBLearn library (developed by Pierre Sermanet and Yann LeCun) was used to create the ConvNets. Application to a dataset of more than 60000 spectra yielded a success rate of nearly 95%. This thesis demonstrates the great potential of convolutional neural networks and deep learning methods in astrophysics.

  5. SAR ATR Based on Convolutional Neural Network

    Directory of Open Access Journals (Sweden)

    Tian Zhuangzhuang

    2016-06-01

    Full Text Available This study presents a new method of Synthetic Aperture Radar (SAR image target recognition based on a convolutional neural network. First, we introduce a class separability measure into the cost function to improve this network’s ability to distinguish between categories. Then, we extract SAR image features using the improved convolutional neural network and classify these features using a support vector machine. Experimental results using moving and stationary target acquisition and recognition SAR datasets prove the validity of this method.

  6. On a Generalized Hankel Type Convolution of Generalized Functions

    Indian Academy of Sciences (India)

    S P Malgonde; G S Gaikawad

    2001-11-01

    The classical generalized Hankel type convolution is defined and extended to a class of generalized functions. Algebraic properties of the convolution are explained, and the existence and significance of an identity element are discussed.

  7. Semantic segmentation of bioimages using convolutional neural networks

    CSIR Research Space (South Africa)

    Wiehman, S

    2016-07-01

    Full Text Available Convolutional neural networks have shown great promise in both general image segmentation problems as well as bioimage segmentation. In this paper, the application of different convolutional network architectures is explored on the C. elegans live...

  8. A note on maximal estimates for stochastic convolutions

    NARCIS (Netherlands)

    Veraar, M.; Weis, L.

    2011-01-01

    In stochastic partial differential equations it is important to have pathwise regularity properties of stochastic convolutions. In this note we present a new sufficient condition for the pathwise continuity of stochastic convolutions in Banach spaces.

  9. Continuous speech recognition based on convolutional neural network

    Science.gov (United States)

    Zhang, Qing-qing; Liu, Yong; Pan, Jie-lin; Yan, Yong-hong

    2015-07-01

    Convolutional Neural Networks (CNNs), which showed success in achieving translation invariance for many image processing tasks, are investigated for continuous speech recognition in this paper. Compared to Deep Neural Networks (DNNs), which have been proven successful in many speech recognition tasks, CNNs can reduce the NN model size significantly and at the same time achieve even better recognition accuracy. Experiments on the standard TIMIT speech corpus showed that CNNs outperformed DNNs in terms of accuracy even though the CNNs had a smaller model size.

  10. Parallel Multi Channel Convolution using General Matrix Multiplication

    OpenAIRE

    VASUDEVAN, ARAVIND; Anderson, Andrew; Gregg, David

    2017-01-01

    Convolutional neural networks (CNNs) have emerged as one of the most successful machine learning technologies for image and video processing. The most computationally intensive parts of CNNs are the convolutional layers, which convolve multi-channel images with multiple kernels. A common approach to implementing convolutional layers is to expand the image into a column matrix (im2col) and perform Multiple Channel Multiple Kernel (MCMK) convolution using an existing parallel General Matrix Mul...
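
    The im2col lowering mentioned above turns a multi-channel convolution into a single matrix multiplication: every receptive field is copied into a column, and the kernels become the rows of the left-hand matrix. Below is a stride-1, no-padding NumPy sketch; the layout conventions and function names are mine, and, like deep-learning frameworks, it computes cross-correlation (no kernel flip).

        import numpy as np

        def im2col(x, kh, kw):
            # x: (C, H, W) single image -> (C*kh*kw, P) column matrix,
            # one column per output position (stride 1, no padding).
            C, H, W = x.shape
            oh, ow = H - kh + 1, W - kw + 1
            cols = np.empty((C * kh * kw, oh * ow), dtype=x.dtype)
            row = 0
            for c in range(C):
                for i in range(kh):
                    for j in range(kw):
                        cols[row] = x[c, i:i + oh, j:j + ow].reshape(-1)
                        row += 1
            return cols

        def conv2d_gemm(x, weights):
            # weights: (M, C, kh, kw); the whole convolution is one GEMM call.
            M, C, kh, kw = weights.shape
            _, H, W = x.shape
            cols = im2col(x, kh, kw)              # (C*kh*kw, P)
            out = weights.reshape(M, -1) @ cols   # (M, P)
            return out.reshape(M, H - kh + 1, W - kw + 1)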

  11. Convolutional Neural Networks for SAR Image Segmentation

    DEFF Research Database (Denmark)

    Malmgren-Hansen, David; Nobel-Jørgensen, Morten

    2015-01-01

    Segmentation of Synthetic Aperture Radar (SAR) images has several uses, but it is a difficult task due to a number of properties related to SAR images. In this article we show how Convolutional Neural Networks (CNNs) can easily be trained for SAR image segmentation with good results. Besides...

  12. Discrete Fresnel Transform and Its Circular Convolution

    CERN Document Server

    Ouyang, Xing; Gunning, Fatima; Zhang, Hongyu; Guan, Yong Liang

    2015-01-01

    Discrete trigonometric transformations, such as the discrete Fourier and cosine/sine transforms, are important in a variety of applications due to their useful properties. For example, one well-known property is the convolution theorem for the Fourier transform. In this letter, we derive a discrete Fresnel transform (DFnT) from infinitely periodic optical gratings, as a linear trigonometric transform. Compared to previous formulations, the DFnT in this letter has no degeneracy due to destructive interferences, a problem that hinders mathematical applications. The circular convolution property of the DFnT is studied for the first time. It is proved that the DFnT of a circular convolution of two sequences equals either sequence circularly convolved with the DFnT of the other. As circular convolution is a fundamental process in discrete systems, the DFnT not only gives the coefficients of the Talbot image, but can also be useful for optical and digital signal processing and numerical evaluation of the Fresnel ...

  13. Properties of derivations on some convolution algebras

    DEFF Research Database (Denmark)

    Pedersen, Thomas Vils

    2014-01-01

    For all convolution algebras L¹[0,1), L¹_loc and A(ω) = ⋂ₙ L¹(ωₙ), the derivations are of the form D_μ f = Xf ∗ μ for suitable measures μ, where (Xf)(t) = tf(t). We describe the (weakly) compact as well as the (weakly) Montel derivations on these algebras in terms of properties of the measure μ...

  14. Epileptiform spike detection via convolutional neural networks

    DEFF Research Database (Denmark)

    Johansen, Alexander Rosenberg; Jin, Jing; Maszczyk, Tomasz

    2016-01-01

    The EEG of epileptic patients often contains sharp waveforms called "spikes", occurring between seizures. Detecting such spikes is crucial for diagnosing epilepsy. In this paper, we develop a convolutional neural network (CNN) for detecting spikes in EEG of epileptic patients in an automated fash...

  15. Quasi-cyclic unit memory convolutional codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Paaske, Erik; Ballan, Mark

    1990-01-01

    Unit memory convolutional codes with generator matrices, which are composed of circulant submatrices, are introduced. This structure facilitates the analysis of efficient search for good codes. Equivalences among such codes and some of the basic structural properties are discussed. In particular...

  16. Convolutions with the Continuous Primitive Integral

    Directory of Open Access Journals (Sweden)

    Erik Talvila

    2009-01-01

    I⊂ℝ. When g∈L¹, the estimate is ‖f∗g‖ ≤ ‖f‖‖g‖₁. There are results on differentiation and integration of convolutions. A type of Fubini theorem is proved for the continuous primitive integral.

  17. Learning Contextual Dependence With Convolutional Hierarchical Recurrent Neural Networks

    Science.gov (United States)

    Zuo, Zhen; Shuai, Bing; Wang, Gang; Liu, Xiao; Wang, Xingxing; Wang, Bing; Chen, Yushi

    2016-07-01

    Existing deep convolutional neural networks (CNNs) have shown their great success on image classification. CNNs mainly consist of convolutional and pooling layers, both of which are performed on local image areas without considering the dependencies among different image regions. However, such dependencies are very important for generating explicit image representations. In contrast, recurrent neural networks (RNNs) are well known for their ability to encode contextual information among sequential data, and they only require a limited number of network parameters. General RNNs can hardly be directly applied to non-sequential data. Thus, we propose hierarchical RNNs (HRNNs). In HRNNs, each RNN layer focuses on modeling spatial dependencies among image regions from the same scale but different locations, while the cross-scale RNN connections target scale dependencies among regions from the same location but different scales. Specifically, we propose two recurrent neural network models: 1) the hierarchical simple recurrent network (HSRN), which is fast and has low computational cost; and 2) the hierarchical long short-term memory recurrent network (HLSTM), which performs better than HSRN at the price of more computational cost. In this manuscript, we integrate CNNs with HRNNs, and develop end-to-end convolutional hierarchical recurrent neural networks (C-HRNNs). C-HRNNs not only make use of the representation power of CNNs, but also efficiently encode spatial and scale dependencies among different image regions. On four of the most challenging object/scene image classification benchmarks, our C-HRNNs achieve state-of-the-art results on Places 205, SUN 397, and MIT Indoor, and competitive results on ILSVRC 2012.

  18. Coronary artery calcification (CAC) classification with deep convolutional neural networks

    Science.gov (United States)

    Liu, Xiuming; Wang, Shice; Deng, Yufeng; Chen, Kuan

    2017-03-01

    Coronary artery calcification (CAC) is a typical marker of coronary artery disease, which is one of the biggest causes of mortality in the U.S. This study evaluates the feasibility of using a deep convolutional neural network (DCNN) to automatically detect CAC in X-ray images. 1768 posteroanterior (PA) view chest X-ray images from Sichuan Province Peoples Hospital, China were collected retrospectively. Each image is associated with a corresponding diagnostic report written by a trained radiologist (907 normal, 861 diagnosed with CAC). One quarter of the images were randomly selected as test samples; the rest were used as training samples. DCNN models consisting of 2, 4, 6 and 8 convolutional layers were designed using blocks of pre-designed CNN layers. Each block was implemented in Theano with Graphics Processing Units (GPU). Human-in-the-loop learning was also performed on a subset of 165 images with arteries framed by trained physicians. The results from the DCNN models were compared to the diagnostic reports. The average diagnostic accuracies for models with 2, 4, 6 and 8 layers were 0.85, 0.87, 0.88, and 0.89, respectively. The areas under the curve (AUC) were 0.92, 0.95, 0.95, and 0.96. As the model grew deeper, the AUC and diagnostic accuracy did not change in a statistically significant way. The results of this study indicate that DCNN models have promising potential in the field of intelligent medical image diagnosis practice.

  19. A fast convolution-based methodology to simulate 2-D/3-D cardiac ultrasound images.

    Science.gov (United States)

    Gao, Hang; Choi, Hon Fai; Claus, Piet; Boonen, Steven; Jaecques, Siegfried; Van Lenthe, G Harry; Van der Perre, Georges; Lauriks, Walter; D'hooge, Jan

    2009-02-01

    This paper describes a fast convolution-based methodology for simulating ultrasound images in a 2-D/3-D sector format as typically used in cardiac ultrasound. The conventional convolution model is based on the assumption of a space-invariant point spread function (PSF) and typically results in linear images. These characteristics are not representative for cardiac data sets. The spatial impulse response method (IRM) has excellent accuracy in the linear domain; however, calculation time can become an issue when scatterer numbers become significant and when 3-D volumetric data sets need to be computed. As a solution to these problems, the current manuscript proposes a new convolution-based methodology in which the data sets are produced by reducing the conventional 2-D/3-D convolution model to multiple 1-D convolutions (one for each image line). As an example, simulated 2-D/3-D phantom images are presented along with their gray scale histogram statistics. In addition, the computation time is recorded and contrasted to a commonly used implementation of IRM (Field II). It is shown that COLE can produce anatomically plausible images with local Rayleigh statistics but at improved calculation time (1200 times faster than the reference method).
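
    A toy version of the line-wise idea is easy to write down: the scatterers contributing to one image line are binned onto an axial sampling grid and convolved with a 1-D pulse standing in for the axial point spread function, after which the envelope is taken. The pulse shape, sampling parameters and function name below are arbitrary choices of this sketch; COLE's lateral and elevational weighting of scatterers is not reproduced.

        import numpy as np
        from scipy.signal import hilbert

        def simulate_envelope_line(depths, amplitudes, fs=50e6, f0=3.5e6,
                                   c=1540.0, max_depth=0.08):
            # Bin the scatterers of one image line onto an axial (round-trip time)
            # grid, convolve with a Gaussian-modulated pulse, then detect the envelope.
            n = int(round(2 * max_depth / c * fs))
            line = np.zeros(n)
            idx = np.round(2 * np.asarray(depths) / c * fs).astype(int)
            keep = (idx >= 0) & (idx < n)
            np.add.at(line, idx[keep], np.asarray(amplitudes)[keep])
            t = np.arange(-2e-6, 2e-6, 1 / fs)
            pulse = np.cos(2 * np.pi * f0 * t) * np.exp(-(t / 0.5e-6) ** 2)
            rf = np.convolve(line, pulse, mode="same")
            return np.abs(hilbert(rf))

        # One line of a toy phantom: random scatterers between 1 cm and 7 cm depth.
        rng = np.random.default_rng(0)
        env = simulate_envelope_line(rng.uniform(0.01, 0.07, 2000), rng.normal(size=2000))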

  20. Facial expression recognition model based on deep spatiotemporal convolutional neural networks

    Institute of Scientific and Technical Information of China (English)

    杨格兰; 邓晓军; 刘琮

    2016-01-01

    Feature extraction is a crucial phase of facial expression recognition, but existing algorithms rely on manually designed features and adapt poorly, which hinders the development of reliable and accurate methods. To describe facial expressions in a data-driven fashion, a temporal extension of the convolutional neural network was developed that exploits the dynamics of facial expressions and improves performance, extracting dynamic and static features directly from expression videos. The model is based on multiplicative interactions between convolutional outputs: instead of summing filter responses, the responses are multiplied. The developed approach is capable of extracting features that are not only relevant to facial motion but also sensitive to the appearance and texture of the face. The hierarchical structure introduced from deep learning allows the approach to learn progressively more abstract and global features layer by layer, and the end-to-end supervised training strategy optimizes all parameters under a single objective. The results show that the approach extracts the two types of features simultaneously as a natural outcome of the developed architecture, that the learnt filters resemble Gabor filters and the receptive fields of visual cortex cells, and that the model classifies expression videos more accurately.

  1. Convolution theorems: partitioning the space of integral transforms

    Science.gov (United States)

    Lindsey, Alan R.; Suter, Bruce W.

    1999-03-01

    Investigating a number of different integral transforms uncovers distinct patterns in the type of translation convolution theorems afforded by each. It is shown that transforms based on separable kernels (e.g., Fourier, Laplace and their relatives) have a form of the convolution theorem providing for a transform-domain product of the convolved functions. However, transforms based on kernels not separable in the function and transform variables mandate a convolution theorem of a different type: in the transform domain the convolution becomes another convolution, of one function with the transform of the other.
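
    Stated in formulas (Fourier-style notation of this note, not necessarily the paper's), the two patterns described above read as follows.

        % Separable kernel (Fourier, Laplace, ...): convolution maps to a product:
        \[ \mathcal{F}\{f * g\}(\omega) = \mathcal{F}\{f\}(\omega)\,\mathcal{F}\{g\}(\omega),
           \qquad (f * g)(t) = \int_{-\infty}^{\infty} f(\tau)\, g(t - \tau)\, d\tau . \]
        % Non-separable kernel T: the transform of a convolution is again a convolution,
        % of one factor with the transform of the other:
        \[ T\{f * g\} = f * T\{g\} = T\{f\} * g . \]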

  2. Convolutional neural network architectures for predicting DNA–protein binding

    Science.gov (United States)

    Zeng, Haoyang; Edwards, Matthew D.; Liu, Ge; Gifford, David K.

    2016-01-01

    Motivation: Convolutional neural networks (CNN) have outperformed conventional methods in modeling the sequence specificity of DNA–protein binding. Yet inappropriate CNN architectures can yield poorer performance than simpler models. Thus an in-depth understanding of how to match CNN architecture to a given task is needed to fully harness the power of CNNs for computational biology applications. Results: We present a systematic exploration of CNN architectures for predicting DNA sequence binding using a large compendium of transcription factor datasets. We identify the best-performing architectures by varying CNN width, depth and pooling designs. We find that adding convolutional kernels to a network is important for motif-based tasks. We show the benefits of CNNs in learning rich higher-order sequence features, such as secondary motifs and local sequence context, by comparing network performance on multiple modeling tasks ranging in difficulty. We also demonstrate how careful construction of sequence benchmark datasets, using approaches that control potentially confounding effects like positional or motif strength bias, is critical in making fair comparisons between competing methods. We explore how to establish the sufficiency of training data for these learning tasks, and we have created a flexible cloud-based framework that permits the rapid exploration of alternative neural network architectures for problems in computational biology. Availability and Implementation: All the models analyzed are available at http://cnn.csail.mit.edu. Contact: gifford@mit.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27307608
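
    A minimal example of the kind of architecture being compared is a one-hot DNA sequence passed through a bank of motif-like convolutional kernels, global max pooling and a small dense head. The PyTorch sketch below is illustrative only; the kernel count, kernel length and head size are placeholder hyperparameters, not ones recommended by the study.

        import torch
        import torch.nn as nn

        class MotifCNN(nn.Module):
            # One-hot DNA (4 channels: A, C, G, T) -> motif-scanning convolution ->
            # global max pool over positions -> dense head -> binding logit.
            def __init__(self, n_kernels=64, kernel_len=15):
                super().__init__()
                self.conv = nn.Conv1d(4, n_kernels, kernel_size=kernel_len)
                self.head = nn.Sequential(nn.Linear(n_kernels, 32), nn.ReLU(),
                                          nn.Linear(32, 1))

            def forward(self, x):                  # x: (batch, 4, sequence_length)
                h = torch.relu(self.conv(x))
                h = h.max(dim=2).values            # strongest motif match per kernel
                return self.head(h).squeeze(1)

        # Usage: logits = MotifCNN()(torch.randn(8, 4, 200))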

  3. Decoding of Convolutional Codes over the Erasure Channel

    CERN Document Server

    Tomás, Virtudes; Smarandache, Roxana

    2010-01-01

    In this paper we study the decoding capabilities of convolutional codes over the erasure channel. Of special interest will be maximum distance profile (MDP) convolutional codes. These are codes which have a maximum possible column distance increase. We show how this strong minimum distance condition of MDP convolutional codes helps us to solve error situations that maximum distance separable (MDS) block codes fail to solve. Towards this goal, we define two subclasses of MDP codes: reverse-MDP convolutional codes and complete-MDP convolutional codes. Reverse-MDP codes have the capability to recover a maximum number of erasures using an algorithm which runs backward in time. Complete-MDP convolutional codes are both MDP and reverse-MDP codes. They are capable of recovering the state of the decoder under the mildest conditions. We show that complete-MDP convolutional codes perform in a certain sense better than MDS block codes of the same rate over the erasure channel.

  4. Precise two-dimensional D-bar reconstructions of human chest and phantom tank via sinc-convolution algorithm

    Directory of Open Access Journals (Sweden)

    Abbasi Mahdi

    2012-06-01

    Full Text Available Abstract Background: Electrical Impedance Tomography (EIT) is used as a fast clinical imaging technique for monitoring the health of human organs such as the lungs, heart, brain and breast. Each practical EIT reconstruction algorithm should be efficient enough in terms of convergence rate and accuracy. The main objective of this study is to investigate the feasibility of precise empirical conductivity imaging using a sinc-convolution algorithm in the D-bar framework. Methods: In the first step, synthetic and experimental data were used to compute an intermediate object named the scattering transform. Next, this object was used in a two-dimensional integral equation which was precisely and rapidly solved via the sinc-convolution algorithm to find the square root of the conductivity for each pixel of the image. For the purpose of comparison, the multigrid and NOSER algorithms were implemented under a similar setting. The quality of reconstructions of synthetic models was tested against GREIT-approved quality measures. To validate the simulation results, reconstructions of a phantom chest and a human lung were used. Results: Evaluation of synthetic reconstructions shows that the quality of sinc-convolution reconstructions is considerably better than that of each of its competitors in terms of amplitude response, position error, ringing, resolution and shape deformation. In addition, the results confirm near-exponential and linear convergence rates for sinc-convolution and multigrid, respectively. Moreover, the lowest relative errors and the highest degree of truth were found in sinc-convolution reconstructions from experimental phantom data. Reconstructions of clinical lung data show that the related physiological effect is well recovered by the sinc-convolution algorithm. Conclusions: Parametric evaluation demonstrates the efficiency of sinc-convolution in reconstructing accurate conductivity images from experimental data. Excellent results in phantom and clinical

  5. Deep Convolutional Neural Networks for large-scale speech tasks.

    Science.gov (United States)

    Sainath, Tara N; Kingsbury, Brian; Saon, George; Soltau, Hagen; Mohamed, Abdel-rahman; Dahl, George; Ramabhadran, Bhuvana

    2015-04-01

    Convolutional Neural Networks (CNNs) are an alternative type of neural network that can be used to reduce spectral variations and model spectral correlations which exist in signals. Since speech signals exhibit both of these properties, we hypothesize that CNNs are a more effective model for speech compared to Deep Neural Networks (DNNs). In this paper, we explore applying CNNs to large vocabulary continuous speech recognition (LVCSR) tasks. First, we determine the appropriate architecture to make CNNs effective compared to DNNs for LVCSR tasks. Specifically, we focus on how many convolutional layers are needed, what an appropriate number of hidden units is, and what the best pooling strategy is. Second, we investigate how to incorporate speaker-adapted features, which cannot directly be modeled by CNNs as they do not obey locality in frequency, into the CNN framework. Third, given the importance of sequence training for speech tasks, we introduce a strategy to use ReLU+dropout during Hessian-free sequence training of CNNs. Experiments on 3 LVCSR tasks indicate that a CNN with the proposed speaker-adapted and ReLU+dropout ideas allows for a 12%-14% relative improvement in WER over a strong DNN system, achieving state-of-the-art results on these 3 tasks.

  6. A convolutional neural network neutrino event classifier

    Science.gov (United States)

    Aurisano, A.; Radovic, A.; Rocco, D.; Himmel, A.; Messier, M. D.; Niner, E.; Pawloski, G.; Psihas, F.; Sousa, A.; Vahle, P.

    2016-09-01

    Convolutional neural networks (CNNs) have been widely applied in the computer vision community to solve complex problems in image recognition and analysis. We describe an application of the CNN technology to the problem of identifying particle interactions in sampling calorimeters used commonly in high energy physics and high energy neutrino physics in particular. Following a discussion of the core concepts of CNNs and recent innovations in CNN architectures related to the field of deep learning, we outline a specific application to the NOvA neutrino detector. This algorithm, CVN (Convolutional Visual Network) identifies neutrino interactions based on their topology without the need for detailed reconstruction and outperforms algorithms currently in use by the NOvA collaboration.

  7. Transition Mean Values of Shifted Convolution Sums

    CERN Document Server

    Petrow, Ian

    2011-01-01

    Let f be a classical holomorphic cusp form for SL_2(Z) of weight k which is a normalized eigenfunction for the Hecke algebra, and let λ(n) be its eigenvalues. In this paper we study "shifted convolution sums" of the eigenvalues λ(n) after averaging over many shifts h and obtain asymptotic estimates. The result is somewhat surprising: one encounters a transition region depending on the ratio of the square of the length of the average over h to the length of the shifted convolution sum. The phenomenon is similar to that encountered by Conrey, Farmer and Soundararajan in their 2000 paper Transition Mean Values of Real Characters, and the connection of both results to Eisenstein series and multiple Dirichlet series is discussed.

  8. A Convolutional Neural Network Neutrino Event Classifier

    CERN Document Server

    Aurisano, A; Rocco, D; Himmel, A; Messier, M D; Niner, E; Pawloski, G; Psihas, F; Sousa, A; Vahle, P

    2016-01-01

    Convolutional neural networks (CNNs) have been widely applied in the computer vision community to solve complex problems in image recognition and analysis. We describe an application of the CNN technology to the problem of identifying particle interactions in sampling calorimeters used commonly in high energy physics and high energy neutrino physics in particular. Following a discussion of the core concepts of CNNs and recent innovations in CNN architectures related to the field of deep learning, we outline a specific application to the NOvA neutrino detector. This algorithm, CVN (Convolutional Visual Network) identifies neutrino interactions based on their topology without the need for detailed reconstruction and outperforms algorithms currently in use by the NOvA collaboration.

  9. Rational Convolution Roots of Isobaric Polynomials

    OpenAIRE

    Conci, Aura; Li, Huilan; MacHenry, Trueman

    2014-01-01

    In this paper, we exhibit two matrix representations of the rational roots of generalized Fibonacci polynomials (GFPs) under convolution product, in terms of determinants and permanents, respectively. The underlying root formulas for GFPs and for weighted isobaric polynomials (WIPs), which appeared in an earlier paper by MacHenry and Tudose, make use of two types of operators. These operators are derived from the generating functions for Stirling numbers of the first kind and second kind. Hen...

  10. Multichannel Convolutional Neural Network for Biological Relation Extraction

    Science.gov (United States)

    Quan, Chanqin; Sun, Xiao; Bai, Wenjun

    2016-01-01

    The plethora of biomedical relations which are embedded in medical logs (records) demands researchers' attention. Previous theoretical and practical work was restricted to traditional machine learning techniques. However, these methods are susceptible to the issues of "vocabulary gap" and data sparseness, and their feature extraction cannot be automated. To address the aforementioned issues, in this work we propose a multichannel convolutional neural network (MCCNN) for automated biomedical relation extraction. The proposed model has the following two contributions: (1) it enables the fusion of multiple (e.g., five) versions of word embeddings; (2) the need for manual feature engineering can be obviated by automated feature learning with a convolutional neural network (CNN). We evaluated our model on two biomedical relation extraction tasks: drug-drug interaction (DDI) extraction and protein-protein interaction (PPI) extraction. For the DDI task, our system achieved an overall f-score of 70.2% compared to the standard linear SVM based system (e.g., 67.0%) on the DDIExtraction 2013 challenge dataset. For the PPI task, we evaluated our system on the AIMed and BioInfer PPI corpora; our system exceeded the state-of-the-art ensemble SVM system by 2.7% and 5.6% in f-score.

  11. Convolution approach to the piNN system

    CERN Document Server

    Blankleider, B

    1994-01-01

    The unitary NN-piNN model contains a serious theoretical flaw: unitarity is obtained at the price of having to use an effective piNN coupling constant that is smaller than the experimental one. This is but one aspect of a more general renormalization problem whose origin lies in the truncation of Hilbert space used to derive the equations. Here we present a new theoretical approach to the piNN problem where unitary equations are obtained without having to truncate Hilbert space. Indeed, the only approximation made is the neglect of connected three-body forces. As all possible dressings of one-particle propagators and vertices are retained in our model, we overcome the renormalization problems inherent in previous piNN theories. The key element of our derivation is the use of convolution integrals that have enabled us to sum all the possible disconnected time-ordered graphs. We also discuss how the convolution method can be extended to sum all the time orderings of a connected graph. This has enabled us to cal...

  12. Classification of Histology Sections via Multispectral Convolutional Sparse Coding.

    Science.gov (United States)

    Zhou, Yin; Chang, Hang; Barner, Kenneth; Spellman, Paul; Parvin, Bahram

    2014-06-01

    Image-based classification of histology sections plays an important role in predicting clinical outcomes. However this task is very challenging due to the presence of large technical variations (e.g., fixation, staining) and biological heterogeneities (e.g., cell type, cell state). In the field of biomedical imaging, for the purposes of visualization and/or quantification, different stains are typically used for different targets of interest (e.g., cellular/subcellular events), which generates multi-spectrum data (images) through various types of microscopes and, as a result, provides the possibility of learning biological-component-specific features by exploiting multispectral information. We propose a multispectral feature learning model that automatically learns a set of convolution filter banks from separate spectra to efficiently discover the intrinsic tissue morphometric signatures, based on convolutional sparse coding (CSC). The learned feature representations are then aggregated through the spatial pyramid matching framework (SPM) and finally classified using a linear SVM. The proposed system has been evaluated using two large-scale tumor cohorts, collected from The Cancer Genome Atlas (TCGA). Experimental results show that the proposed model 1) outperforms systems utilizing sparse coding for unsupervised feature learning (e.g., PSD-SPM [5]); 2) is competitive with systems built upon features with biological prior knowledge (e.g., SMLSPM [4]).

  13. Multiple deep convolutional neural networks averaging for face alignment

    Science.gov (United States)

    Zhang, Shaohua; Yang, Hua; Yin, Zhouping

    2015-05-01

    Face alignment is critical for face recognition, and deep learning-based methods show promise for solving such issues, given that competitive results are achieved on benchmarks with additional benefits, such as dispensing with handcrafted features and initial shape estimates. However, most existing deep learning-based approaches are complicated and quite time-consuming during training. We propose a compact face alignment method for fast training without decreasing its accuracy. The rectified linear unit is employed, which allows all networks to converge approximately five times faster than with tanh neurons. A deep convolutional neural network (DCNN) with eight learnable layers, based on local response normalization and a padding convolutional layer (PCL), is designed to provide reliable initial values during prediction. A model combination scheme is presented to further reduce errors, while showing that only two network architectures and hyperparameter selection procedures are required in our approach. A three-level cascaded system is ultimately built based on the DCNNs and the model combination mode. Extensive experiments validate the effectiveness of our method and demonstrate comparable accuracy with state-of-the-art methods on the BioID, Labeled Face Parts in the Wild, and Helen datasets.

  14. Convolutional Neural Network Based Fault Detection for Rotating Machinery

    Science.gov (United States)

    Janssens, Olivier; Slavkovikj, Viktor; Vervisch, Bram; Stockman, Kurt; Loccufier, Mia; Verstockt, Steven; Van de Walle, Rik; Van Hoecke, Sofie

    2016-09-01

    Vibration analysis is a well-established technique for condition monitoring of rotating machines, as the vibration patterns differ depending on the fault or machine condition. Currently, mainly manually-engineered features, such as the ball pass frequencies of the raceway, RMS, kurtosis and crest, are used for automatic fault detection. Unfortunately, engineering and interpreting such features requires a significant level of human expertise. To enable non-experts in vibration analysis to perform condition monitoring, the overhead of feature engineering for specific faults needs to be reduced as much as possible. Therefore, in this article we propose a feature learning model for condition monitoring based on convolutional neural networks. The goal of this approach is to autonomously learn useful features for bearing fault detection from the data itself. Several types of bearing faults such as outer-raceway faults and lubrication degradation are considered, but also healthy bearings and rotor imbalance are included. For each condition, several bearings are tested to ensure generalization of the fault-detection system. Furthermore, the feature-learning based approach is compared to a feature-engineering based approach using the same data to objectively quantify their performance. The results indicate that the feature-learning system, based on convolutional neural networks, significantly outperforms the classical feature-engineering based approach which uses manually engineered features and a random forest classifier. The former achieves an accuracy of 93.61 percent and the latter an accuracy of 87.25 percent.

  15. Transforming Musical Signals through a Genre Classifying Convolutional Neural Network

    Science.gov (United States)

    Geng, S.; Ren, G.; Ogihara, M.

    2017-05-01

    Convolutional neural networks (CNNs) have been successfully applied on both discriminative and generative modeling for music-related tasks. For a particular task, the trained CNN contains information representing the decision making or the abstracting process. One can hope to manipulate existing music based on this 'informed' network and create music with new features corresponding to the knowledge obtained by the network. In this paper, we propose a method to utilize the stored information from a CNN trained on musical genre classification task. The network was composed of three convolutional layers, and was trained to classify five-second song clips into five different genres. After training, randomly selected clips were modified by maximizing the sum of outputs from the network layers. In addition to the potential of such CNNs to produce interesting audio transformation, more information about the network and the original music could be obtained from the analysis of the generated features since these features indicate how the network 'understands' the music.
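
    A hedged PyTorch sketch of the "maximize the sum of layer outputs" step: gradient ascent on an input spectrogram with respect to the activations of a small, here untrained, three-convolutional-layer network; a real experiment would use the genre-classification network described above, and the spectrogram shape is made up.

      import torch
      import torch.nn as nn

      net = nn.Sequential(
          nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
          nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
          nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
      )

      spec = torch.randn(1, 1, 96, 216, requires_grad=True)   # stand-in five-second spectrogram
      opt = torch.optim.Adam([spec], lr=0.05)

      for step in range(50):
          opt.zero_grad()
          acts, total = spec, 0.0
          for layer in net:
              acts = layer(acts)
              if isinstance(layer, nn.ReLU):
                  total = total + acts.sum()        # sum of outputs from each conv block
          (-total).backward()                       # ascend by minimising the negative
          opt.step()

      print("final activation sum:", float(total))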

  16. Classifications of multispectral colorectal cancer tissues using convolution neural network

    Directory of Open Access Journals (Sweden)

    Hawraa Haj-Hassan

    2017-01-01

    Full Text Available Background: Colorectal cancer (CRC) is the third most common cancer among men and women. Its diagnosis in early stages, typically done through the analysis of colon biopsy images, can greatly improve the chances of a successful treatment. This paper proposes to use convolutional neural networks (CNNs) to predict three tissue types related to the progression of CRC: benign hyperplasia (BH), intraepithelial neoplasia (IN), and carcinoma (Ca). Methods: Multispectral biopsy images of thirty CRC patients were retrospectively analyzed. Images of tissue samples were divided into three groups, based on their type (10 BH, 10 IN, and 10 Ca). An active contour model was used to segment image regions containing pathological tissues. Tissue samples were classified using a CNN containing convolution, max-pooling, and fully-connected layers. Available tissue samples were split into a training set, for learning the CNN parameters, and a test set, for evaluating its performance. Results: An accuracy of 99.17% was obtained from segmented image regions, outperforming existing approaches based on traditional feature extraction and classification techniques. Conclusions: Experimental results demonstrate the effectiveness of CNN for the classification of CRC tissue types, in particular when using presegmented regions of interest.

  17. Long-term Recurrent Convolutional Networks for Visual Recognition and Description

    Science.gov (United States)

    2014-11-17

    The description task only requires a single convolutional network, since the input consists of a single image. A variety of deep and multimodal models [8] ... architectures for video description (see Figure 4). For each architecture, we assume we have predictions of objects, subjects, and verbs present in the video from ...

  18. The convolution theorem for two-dimensional continuous wavelet transform

    Institute of Scientific and Technical Information of China (English)

    ZHANG CHI

    2013-01-01

    In this paper, the application of the two-dimensional continuous wavelet transform to image processing is studied. We first show that the convolution and correlation of two continuous wavelets satisfy the required admissibility and regularity conditions, and then we derive the convolution and correlation theorem for the two-dimensional continuous wavelet transform. Finally, we present a numerical example showing the usefulness of applying the convolution theorem for the two-dimensional continuous wavelet transform to perform image restoration in the presence of additive noise.
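
    The discrete, circular analogue of the convolution theorem is easy to verify numerically with NumPy; this small check only illustrates the principle that the record above develops for the two-dimensional continuous wavelet transform.

      import numpy as np

      N = 32
      rng = np.random.default_rng(1)
      f = rng.standard_normal((N, N))
      g = rng.standard_normal((N, N))

      # Convolution via pointwise multiplication of 2D Fourier transforms.
      conv_fourier = np.real(np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(g)))

      # Brute-force circular convolution for comparison.
      conv_direct = np.zeros((N, N))
      for u in range(N):
          for v in range(N):
              shifted = np.roll(np.roll(g[::-1, ::-1], u + 1, axis=0), v + 1, axis=1)
              conv_direct[u, v] = np.sum(f * shifted)

      print(np.max(np.abs(conv_fourier - conv_direct)))   # ~1e-13: the two results agree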

  19. An Algorithm for the Convolution of Legendre Series

    KAUST Repository

    Hale, Nicholas

    2014-01-01

    An O(N^2) algorithm for the convolution of compactly supported Legendre series is described. The algorithm is derived from the convolution theorem for Legendre polynomials and the recurrence relation satisfied by spherical Bessel functions. Combining with previous work yields an O(N^2) algorithm for the convolution of Chebyshev series. Numerical results are presented to demonstrate the improved efficiency over the existing algorithm. © 2014 Society for Industrial and Applied Mathematics.

  20. BERNOULLI CONVOLUTIONS ASSOCIATED WITH CERTAIN NON-PISOT NUMBERS

    Institute of Scientific and Technical Information of China (English)

    Feng Dejun; Wang Yang

    2003-01-01

    The Bernoulli convolution measure νλ is shown to be absolutely continuous with L2 density for almost all 1/2 < λ < 1, and singular if λ^-1 is a Pisot number. It is an open question whether the Pisot-type Bernoulli convolutions are the only singular ones. In this paper, we construct a family of non-Pisot-type Bernoulli convolutions νλ such that their density functions, if they exist, are not L2. We also construct other Bernoulli convolutions whose density functions, if they exist, behave rather badly.

  1. Convolutions Induced Discrete Probability Distributions and a New Fibonacci Constant

    CERN Document Server

    Rajan, Arulalan; Rao, Vittal; Rao, Ashok

    2010-01-01

    This paper proposes another constant that can be associated with the Fibonacci sequence. In this work, we look at the probability distributions generated by the linear convolution of the Fibonacci sequence with itself, and by the linear convolution of the symmetrized Fibonacci sequence with itself. We observe that for a distribution generated by the linear convolution of the standard Fibonacci sequence with itself, the variance converges to 8.4721359... . Also, for a distribution generated by the linear convolution of symmetrized Fibonacci sequences, the variance converges in an average sense to 17.1942..., which is approximately twice the value obtained with the standard Fibonacci sequence.
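
    The construction is easy to reproduce numerically: the snippet below convolves the first n Fibonacci numbers with themselves, normalizes the result to a probability distribution over the index, and prints the variance, which approaches the constant quoted above as n grows.

      import numpy as np

      def fibonacci(n):
          f = [1, 1]
          while len(f) < n:
              f.append(f[-1] + f[-2])
          return np.array(f[:n], dtype=float)

      for n in (10, 20, 40, 80):
          c = np.convolve(fibonacci(n), fibonacci(n))     # linear self-convolution
          p = c / c.sum()                                  # discrete probability distribution
          k = np.arange(len(p))
          mean = np.sum(k * p)
          var = np.sum((k - mean) ** 2 * p)
          print(n, round(var, 6))                          # approaches ~8.4721 as n grows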

  2. Applications of convolution voltammetry in electroanalytical chemistry.

    Science.gov (United States)

    Bentley, Cameron L; Bond, Alan M; Hollenkamp, Anthony F; Mahon, Peter J; Zhang, Jie

    2014-02-18

    The robustness of convolution voltammetry for determining accurate values of the diffusivity (D), bulk concentration (C(b)), and stoichiometric number of electrons (n) has been demonstrated by applying the technique to a series of electrode reactions in molecular solvents and room temperature ionic liquids (RTILs). In acetonitrile, the relatively minor contribution of nonfaradaic current facilitates analysis with macrodisk electrodes, thus moderate scan rates can be used without the need to perform background subtraction to quantify the diffusivity of iodide [D = 1.75 (±0.02) × 10(-5) cm(2) s(-1)] in this solvent. In the RTIL 1-ethyl-3-methylimidazolium bis(trifluoromethanesulfonyl)imide, background subtraction is necessary at a macrodisk electrode but can be avoided at a microdisk electrode, thereby simplifying the analytical procedure and allowing the diffusivity of iodide [D = 2.70 (±0.03) × 10(-7) cm(2) s(-1)] to be quantified. Use of a convolutive procedure which simultaneously allows D and nC(b) values to be determined is also demonstrated. Three conditions under which a technique of this kind may be applied are explored and are related to electroactive species which display slow dissolution kinetics, undergo a single multielectron transfer step, or contain multiple noninteracting redox centers using ferrocene in an RTIL, 1,4-dinitro-2,3,5,6-tetramethylbenzene, and an alkynylruthenium trimer, respectively, as examples. The results highlight the advantages of convolution voltammetry over steady-state techniques such as rotating disk electrode voltammetry and microdisk electrode voltammetry, as it is not restricted by the mode of diffusion (planar or radial), hence removing limitations on solvent viscosity, electrode geometry, and voltammetric scan rate.
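
    For readers unfamiliar with the transform, the sketch below numerically applies one convolution commonly used in this technique, semi-integration, M(t) = (1/sqrt(pi)) * integral_0^t I(u) (t-u)^(-1/2) du, to a simulated Cottrell current, whose semi-integral should plateau near n*F*A*C*sqrt(D). All parameter values are made up for illustration and do not correspond to the systems studied in the paper.

      import numpy as np

      n, F, A, C, D = 1, 96485.0, 1e-2, 1e-6, 1e-5         # illustrative values only
      dt, N = 1e-3, 2000
      t_mid = (np.arange(N) + 0.5) * dt
      current = n * F * A * C * np.sqrt(D / (np.pi * t_mid))   # Cottrell current at interval midpoints

      def semi_integral(i, dt):
          """Piecewise-constant approximation of (1/sqrt(pi)) * int_0^T i(u)/sqrt(T-u) du."""
          m = np.zeros(len(i))
          for k in range(len(i)):
              T = (k + 1) * dt
              a = np.arange(k + 1) * dt                    # left edges of the sub-intervals
              b = a + dt                                    # right edges
              w = 2.0 * (np.sqrt(T - a) - np.sqrt(T - b))   # exact kernel integral per interval
              m[k] = np.dot(i[:k + 1], w) / np.sqrt(np.pi)
          return m

      M = semi_integral(current, dt)
      print(M[-1], n * F * A * C * np.sqrt(D))              # the two values should be close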

  3. Convolution neural networks for ship type recognition

    Science.gov (United States)

    Rainey, Katie; Reeder, John D.; Corelli, Alexander G.

    2016-05-01

    Algorithms to automatically recognize ship type from satellite imagery are desired for numerous maritime applications. This task is difficult, and example imagery accurately labeled with ship type is hard to obtain. Convolutional neural networks (CNNs) have shown promise in image recognition settings, but many of these applications rely on the availability of thousands of example images for training. This work attempts to understand for which types of ship recognition tasks CNNs might be well suited. We report the results of baseline experiments applying a CNN to several ship type classification tasks, and discuss many of the considerations that must be made in approaching this problem.

  4. Fourier transforms and convolutions for the experimentalist

    CERN Document Server

    Jennison, RC

    1961-01-01

    Fourier Transforms and Convolutions for the Experimentalist provides the experimentalist with a guide to the principles and practical uses of the Fourier transformation. It aims to bridge the gap between the more abstract account of a purely mathematical approach and the rule of thumb calculation and intuition of the practical worker. The monograph springs from a lecture course which the author has given in recent years and for which he has drawn upon a number of sources, including a set of notes compiled by the late Dr. I. C. Browne from a series of lectures given by Mr. J. A. Ratcliffe of t...

  5. One dimensional Convolutional Goppa Codes over the projective line

    CERN Document Server

    Pérez, J A Domínguez; Sotelo, G Serrano

    2011-01-01

    We give a general method to construct MDS one-dimensional convolutional codes. Our method generalizes previous constructions of H. Gluesing-Luerssen and B. Langfeld. Moreover we give a classification of one-dimensional Convolutional Goppa Codes and propose a characterization of MDS codes of this type.

  6. Explicit solutions of fractional diffusion equations via Generalized Gamma Convolution

    CERN Document Server

    D'Ovidio, Mirko

    2010-01-01

    In this paper we deal with the Mellin convolution of generalized Gamma densities, which leads to integrals of modified Bessel functions of the second kind. Such convolutions allow us to write explicitly the solutions of time-fractional diffusion equations involving the adjoint operators of a squared Bessel process and a Bessel process.

  7. FPGA Prototyping of RNN Decoder for Convolutional Codes

    Directory of Open Access Journals (Sweden)

    Salcic Zoran

    2006-01-01

    Full Text Available This paper presents prototyping of a recurrent-type neural network (RNN) convolutional decoder using system-level design specification and a design flow that enables easy mapping to the target FPGA architecture. Implementation and performance measurement results have shown that an RNN decoder for hard-decision decoding, coupled with a simple hard-limiting neuron activation function, results in a very low complexity which easily fits into a standard Altera FPGA. Moreover, the design methodology allowed modeling of a complete testbed for prototyping RNN decoders in simulation and in a real-time environment (same FPGA), thus enabling evaluation of the BER performance characteristics of the decoder for various communication channel conditions in real time.

  8. Learning Building Extraction in Aerial Scenes with Convolutional Networks.

    Science.gov (United States)

    Yuan, Jiangye

    2017-09-11

    Extracting buildings from aerial scene images is an important task with many applications. However, this task is highly difficult to automate due to extremely large variations of building appearances, and still heavily relies on manual work. To attack this problem, we design a deep convolutional network with a simple structure that integrates activation from multiple layers for pixel-wise prediction, and introduce the signed distance function of building boundaries as the output representation, which has an enhanced representation power. To train the network, we leverage abundant building footprint data from geographic information systems (GIS) to generate large amounts of labeled data. The trained model achieves a superior performance on datasets that are significantly larger and more complex than those used in prior work, demonstrating that the proposed method provides a promising and scalable solution for automating this labor-intensive task.

  9. Convolution of Lorentz Invariant Ultradistributions and Field Theory

    CERN Document Server

    Bollini, C G

    2003-01-01

    In this work, a general definition of the convolution between two arbitrary four-dimensional Lorentz invariant (fdLi) tempered ultradistributions is given, in both Minkowskian and Euclidean space (spherically symmetric tempered ultradistributions). The product of two arbitrary fdLi distributions of exponential type is defined via the convolution of their corresponding Fourier transforms. Several examples of the convolution of two fdLi tempered ultradistributions are given. In particular we calculate exactly the convolution of two Feynman massless propagators. An expression for the Fourier transform of a Lorentz invariant tempered ultradistribution in terms of modified Bessel distributions is obtained in this work (a generalization of Bochner's formula to Minkowskian space). At the same time, and in a previous step used for the deduction of the convolution formula, we obtain the generalization to Minkowskian space of the dimensional regularization of the perturbation theory of Green functions in the Euclidean conf...

  10. Relationships among transforms, convolutions, and first variations

    Directory of Open Access Journals (Sweden)

    Jeong Gyoo Kim

    1999-01-01

    Full Text Available In this paper, we establish several interesting relationships involving the Fourier-Feynman transform, the convolution product, and the first variation for functionals F on Wiener space of the form F(x) = f(〈α1,x〉, …, 〈αn,x〉),   (*)   where 〈αj,x〉 denotes the Paley-Wiener-Zygmund stochastic integral ∫_0^T αj(t) dx(t).

  11. An exactly solvable self-convolutive recurrence

    CERN Document Server

    Martin, Richard J

    2011-01-01

    We consider a self-convolutive recurrence whose solution is the sequence of coefficients in the asymptotic expansion of the logarithmic derivative of the confluent hypergeometric function $U(a,b,z)$. By application of the Hilbert transform we convert this expression into an explicit, non-recursive solution in which the $n$th coefficient is expressed as the $(n-1)$th moment of a measure, and also as the trace of the $(n-1)$th iterate of a linear operator. Applications of these sequences, and hence of the explicit solution provided, are found in quantum field theory as the number of Feynman diagrams of a certain type and order, in Brownian motion theory, and in combinatorics.

  12. Robust smile detection using convolutional neural networks

    Science.gov (United States)

    Bianco, Simone; Celona, Luigi; Schettini, Raimondo

    2016-11-01

    We present a fully automated approach for smile detection. Faces are detected using a multiview face detector and aligned and scaled using automatically detected eye locations. Then, we use a convolutional neural network (CNN) to determine whether it is a smiling face or not. To this end, we investigate different shallow CNN architectures that can be trained even when the amount of learning data is limited. We evaluate our complete processing pipeline on the largest publicly available image database for smile detection in an uncontrolled scenario. We investigate the robustness of the method to different kinds of geometric transformations (rotation, translation, and scaling) due to imprecise face localization, and to several kinds of distortions (compression, noise, and blur). To the best of our knowledge, this is the first time that this type of investigation has been performed for smile detection. Experimental results show that our proposal outperforms state-of-the-art methods on both high- and low-quality images.

  13. Image quality of mixed convolution kernel in thoracic computed tomography.

    Science.gov (United States)

    Neubauer, Jakob; Spira, Eva Maria; Strube, Juliane; Langer, Mathias; Voss, Christian; Kotter, Elmar

    2016-11-01

    The mixed convolution kernel alters its properties geographically according to the depicted organ structure, especially for the lung. Therefore, we compared the image quality of the mixed convolution kernel to standard soft and hard kernel reconstructions for different organ structures in thoracic computed tomography (CT) images. Our Ethics Committee approved this prospective study. In total, 31 patients who underwent contrast-enhanced thoracic CT studies were included after informed consent. Axial reconstructions were performed with hard, soft, and mixed convolution kernels. Three independent and blinded observers rated the image quality according to the European Guidelines for Quality Criteria of Thoracic CT for 13 organ structures. The observers rated the depiction of the structures in all reconstructions on a 5-point Likert scale. Statistical analysis was performed with the Friedman test and post hoc analysis with the Wilcoxon rank-sum test. Compared to the soft convolution kernel, the mixed convolution kernel was rated with a higher image quality for lung parenchyma, segmental bronchi, and the border between the pleura and the thoracic wall. Compared to the hard convolution kernel, the mixed convolution kernel was rated with a higher image quality for the aorta, anterior mediastinal structures, paratracheal soft tissue, hilar lymph nodes, esophagus, pleuromediastinal border, large and medium sized pulmonary vessels, and abdomen. Nevertheless, the mixed convolution kernel cannot fully substitute for the standard CT reconstructions; hard and soft convolution kernel reconstructions still seem to be mandatory for thoracic CT.

  14. Terminated LDPC Convolutional Codes over GF(2^p)

    CERN Document Server

    Uchikawa, Hironori; Sakaniwa, Kohichi

    2010-01-01

    In this paper, we present a construction method for terminated non-binary low-density parity-check (LDPC) convolutional codes. Our construction method is an expansion of the Felstrom and Zigangirov construction to non-binary LDPC convolutional codes. The rate-compatibility of the non-binary LDPC convolutional codes is also discussed. The proposed rate-compatible code is designed from one single mother (2,4)-regular non-binary LDPC convolutional code of rate 1/2. Higher-rate codes are produced by puncturing the mother code and lower-rate codes are produced by multiplicatively repeating the mother code. For moderate values of the syndrome former memory, simulation results show that the mother non-binary LDPC convolutional code outperforms binary LDPC convolutional codes with comparable constraint bit length, and the derived low-rate and high-rate non-binary LDPC convolutional codes exhibit good decoding performance without a large gap to the Shannon limits.

  15. The Law of Large Numbers for the Free Multiplicative Convolution

    DEFF Research Database (Denmark)

    Haagerup, Uffe; Möller, Sören

    2013-01-01

    In classical probability the law of large numbers for the multiplicative convolution follows directly from the law for the additive convolution. In free probability this is not the case. The free additive law was proved by D. Voiculescu in 1986 for probability measures with bounded support ... for the case of bounded support. In contrast to the classical multiplicative convolution case, the limit measure for the free multiplicative law of large numbers is not a Dirac measure, unless the original measure is a Dirac measure. We also show that the mean value of ln x is additive with respect to the free ...

  16. Convolution kernel design and efficient algorithm for sampling density correction.

    Science.gov (United States)

    Johnson, Kenneth O; Pipe, James G

    2009-02-01

    Sampling density compensation is an important step in non-Cartesian image reconstruction. One of the common techniques to determine weights that compensate for differences in sampling density involves a convolution. A new convolution kernel is designed for sampling density compensation, attempting to minimize the error in a fully reconstructed image. The resulting weights obtained using this new kernel are compared with various previous methods, showing a reduction in reconstruction error. A computationally efficient algorithm is also presented that facilitates the calculation of the convolution of finite kernels. Both the kernel and the algorithm are extended to 3D. Copyright 2009 Wiley-Liss, Inc.
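
    A toy 1D illustration of the iterative convolution-based density compensation that this line of work builds on: weights are repeatedly divided by their convolution with a kernel evaluated at the sample locations, so densely sampled regions end up down-weighted. The Gaussian kernel and sample layout here are only stand-ins for the kernel designed in the paper and a real k-space trajectory.

      import numpy as np

      rng = np.random.default_rng(0)
      k_locs = np.sort(np.concatenate([rng.uniform(0.0, 0.5, 400),     # densely sampled half
                                       rng.uniform(0.5, 1.0, 100)]))   # sparsely sampled half

      def kernel(d, width=0.01):
          return np.exp(-0.5 * (d / width) ** 2)

      dist = np.abs(k_locs[:, None] - k_locs[None, :])   # pairwise sample distances
      K = kernel(dist)

      w = np.ones(len(k_locs))
      for _ in range(30):
          w = w / (K @ w)                                 # w_{i+1} = w_i / (w_i convolved with C)

      print("mean weight, dense half: ", w[:400].mean())
      print("mean weight, sparse half:", w[400:].mean())  # larger weights where samples are sparse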

  17. A Parallel Strategy for Convolutional Neural Network Based on Heterogeneous Cluster for Mobile Information System

    Directory of Open Access Journals (Sweden)

    Jilin Zhang

    2017-01-01

    Full Text Available With the development of mobile systems, we gain many benefits and conveniences by leveraging mobile devices; at the same time, the information gathered by smartphones, such as location and environment, is also valuable for businesses to provide more intelligent services for customers. More and more machine learning methods have been used in the field of mobile information systems to study user behavior and classify usage patterns, especially the convolutional neural network. With the increase in model training parameters and data scale, the traditional single-machine training method cannot meet the requirements of time complexity in practical application scenarios, and current training frameworks often use simple data-parallel or model-parallel methods to speed up the training process, so heterogeneous computing resources are not fully utilized. To solve these problems, our paper proposes a delay-synchronization convolutional neural network parallel strategy, which leverages a heterogeneous system. The strategy is based on both synchronous and asynchronous parallel approaches; the model training process can reduce its dependence on the heterogeneous architecture while ensuring model convergence, so the convolutional neural network framework is more adaptive to different heterogeneous system environments. The experimental results show that the proposed delay-synchronization strategy can achieve at least three times the speedup compared to traditional data parallelism.

  18. A Convolution-LSTM-Based Deep Neural Network for Cross-Domain MOOC Forum Post Classification

    Directory of Open Access Journals (Sweden)

    Xiaocong Wei

    2017-07-01

    Full Text Available Learners in a massive open online course often express feelings, exchange ideas and seek help by posting questions in discussion forums. Due to the very high learner-to-instructor ratios, it is unrealistic to expect instructors to adequately track the forums, find all of the issues that need resolution and understand their urgency and sentiment. In this paper, considering the biases among different courses, we propose a transfer learning framework based on a convolutional neural network and a long short-term memory model, called ConvL, to automatically identify whether a post expresses confusion, determine the urgency and classify the polarity of the sentiment. First, we learn the feature representation for each word by considering the local contextual feature via the convolution operation. Second, we learn the post representation from the features extracted through the convolution operation via the LSTM model, which considers the long-term temporal semantic relationships of features. Third, we investigate the possibility of transferring parameters from a model trained on one course to another course and the subsequent fine-tuning. Experiments on three real-world MOOC courses confirm the effectiveness of our framework. This work suggests that our model can potentially significantly increase the effectiveness of monitoring MOOC forums in real time.
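
    A rough PyTorch sketch of a ConvL-style architecture as described (embedding, 1D convolution for local context, LSTM over the convolved features, linear classifier); the vocabulary size, dimensions, and binary output are placeholders rather than the authors' settings, and the transfer-learning step is not shown.

      import torch
      import torch.nn as nn

      class ConvLSTMClassifier(nn.Module):
          def __init__(self, vocab=5000, emb=100, conv_ch=64, hidden=64, n_classes=2):
              super().__init__()
              self.emb = nn.Embedding(vocab, emb)
              self.conv = nn.Conv1d(emb, conv_ch, kernel_size=3, padding=1)
              self.lstm = nn.LSTM(conv_ch, hidden, batch_first=True)
              self.fc = nn.Linear(hidden, n_classes)

          def forward(self, tokens):                    # tokens: (batch, seq_len)
              x = self.emb(tokens).transpose(1, 2)      # (batch, emb, seq_len) for Conv1d
              x = torch.relu(self.conv(x)).transpose(1, 2)
              _, (h, _) = self.lstm(x)                  # final hidden state summarises the post
              return self.fc(h[-1])

      model = ConvLSTMClassifier()
      print(model(torch.randint(0, 5000, (4, 120))).shape)   # torch.Size([4, 2])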

  19. Interpolating and filtering decoding algorithm for convolution codes

    Directory of Open Access Journals (Sweden)

    O. O. Shpylka

    2010-01-01

    Full Text Available An interpolating and filtering decoding algorithm for convolutional codes based on the maximum a posteriori probability criterion has been synthesized, in which filtering of the coder state is combined with interpolation of the information symbols over a sliding interval.

  20. FPGA-based digital convolution for wireless applications

    CERN Document Server

    Guan, Lei

    2017-01-01

    This book presents essential perspectives on digital convolutions in wireless communications systems and illustrates their corresponding efficient real-time field-programmable gate array (FPGA) implementations. Covering these digital convolutions from basic concept to vivid simulation/illustration, the book is also supplemented with MS PowerPoint presentations to aid in comprehension. FPGAs or generic all programmable devices will soon become widespread, serving as the “brains” of all types of real-time smart signal processing systems, like smart networks, smart homes and smart cities. The book examines digital convolution by bringing together the following main elements: the fundamental theory behind the mathematical formulae together with corresponding physical phenomena; virtualized algorithm simulation together with benchmark real-time FPGA implementations; and detailed, state-of-the-art case studies on wireless applications, including popular linear convolution in digital front ends (DFEs); nonlinear...

  1. A Shortest Dependency Path Based Convolutional Neural Network for Protein-Protein Relation Extraction

    OpenAIRE

    2016-01-01

    The state-of-the-art methods for protein-protein interaction (PPI) extraction are primarily based on kernel methods, and their performances strongly depend on handcrafted features. In this paper, we tackle PPI extraction by using convolutional neural networks (CNN) and propose a shortest dependency path based CNN (sdpCNN) model. The proposed method (1) only takes the sdp and word embedding as input and (2) could avoid bias from feature selection by using CNN. We performed experiments on sta...

  2. Learning Depth from Single Monocular Images Using Deep Convolutional Neural Fields.

    Science.gov (United States)

    Liu, Fayao; Shen, Chunhua; Lin, Guosheng; Reid, Ian

    2016-10-01

    In this article, we tackle the problem of depth estimation from single monocular images. Compared with depth estimation using multiple images such as stereo depth perception, depth from monocular images is much more challenging. Prior work typically focuses on exploiting geometric priors or additional sources of information, most using hand-crafted features. Recently, there is mounting evidence that features from deep convolutional neural networks (CNN) set new records for various vision applications. On the other hand, considering the continuous characteristic of the depth values, depth estimation can be naturally formulated as a continuous conditional random field (CRF) learning problem. Therefore, here we present a deep convolutional neural field model for estimating depths from single monocular images, aiming to jointly explore the capacity of deep CNN and continuous CRF. In particular, we propose a deep structured learning scheme which learns the unary and pairwise potentials of continuous CRF in a unified deep CNN framework. We then further propose an equally effective model based on fully convolutional networks and a novel superpixel pooling method, which is about 10 times faster, to speedup the patch-wise convolutions in the deep model. With this more efficient model, we are able to design deeper networks to pursue better performance. Our proposed method can be used for depth estimation of general scenes with no geometric priors nor any extra information injected. In our case, the integral of the partition function can be calculated in a closed form such that we can exactly solve the log-likelihood maximization. Moreover, solving the inference problem for predicting depths of a test image is highly efficient as closed-form solutions exist. Experiments on both indoor and outdoor scene datasets demonstrate that the proposed method outperforms state-of-the-art depth estimation approaches.

  3. A Deep 3D Convolutional Neural Network Based Design for Manufacturability Framework

    OpenAIRE

    Balu, Aditya; Lore, Kin Gwn; Young, Gavin; Krishnamurthy, Adarsh; Sarkar, Soumik

    2016-01-01

    Deep 3D Convolutional Neural Networks (3D-CNN) are traditionally used for object recognition, video data analytics and human gesture recognition. In this paper, we present a novel application of 3D-CNNs in understanding difficult-to-manufacture features from computer-aided design (CAD) models to develop a decision support tool for cyber-enabled manufacturing. Traditionally, design for manufacturability (DFM) rules are hand-crafted and used to accelerate the engineering product design cycle by...

  4. Fuzzy Logic Module of Convolutional Neural Network for Handwritten Digits Recognition

    Science.gov (United States)

    Popko, E. A.; Weinstein, I. A.

    2016-08-01

    Optical character recognition is one of the important issues in the field of pattern recognition. This paper presents a method for recognizing handwritten digits based on a convolutional neural network model. An integrated fuzzy logic module based on a structural approach was developed. The system architecture used adjusts the output of the neural network to improve the quality of symbol identification. It was shown that the proposed algorithm is flexible, and a high recognition rate of 99.23% was achieved.

  5. Approximation of integral operators using product-convolution expansions

    OpenAIRE

    Escande, Paul; Weiss, Pierre

    2016-01-01

    We consider a class of linear integral operators with impulse responses varying regularly in time or space. These operators appear in a large number of applications ranging from signal/image processing to biology. Evaluating their action on functions is a computationally intensive problem necessary for many practical problems. We analyze a technique called product-convolution expansion: the operator is locally approximated by a convolution, allowing to design fast numerical algorithms ba...
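
    A small 1D numerical illustration of the idea shared by this record and the next: a spatially varying blur Hu(x) = integral of h(x, x-y) u(y) dy is approximated by a sum over k of w_k(x) * (h_k convolved with u)(x), with "frozen" kernels h_k and windows w_k forming a partition of unity. All choices below (signal, kernels, windows) are illustrative; with only two frozen kernels the approximation is rough, and adding kernels reduces the reported error.

      import numpy as np

      N = 400
      x = np.linspace(0, 1, N)
      u = (np.sin(12 * np.pi * x) > 0).astype(float)        # test signal

      def gauss(t, s):
          return np.exp(-0.5 * (t / s) ** 2) / (s * np.sqrt(2 * np.pi))

      # Exact spatially varying operator: blur width grows with position.
      sigma = 0.005 + 0.02 * x
      offsets = (np.arange(N) - np.arange(N)[:, None]) / N
      H = gauss(offsets, sigma[:, None]) / N
      exact = H @ u

      # Product-convolution approximation: two frozen kernels blended by a partition of unity.
      centers = [0.25, 0.75]
      w = np.stack([np.clip(1 - np.abs(x - c) / 0.5, 0, 1) for c in centers])
      w /= w.sum(axis=0)
      approx = np.zeros(N)
      for wk, c in zip(w, centers):
          hk = gauss((np.arange(N) - (N - 1) // 2) / N, 0.005 + 0.02 * c)
          approx += wk * np.convolve(u, hk / N, mode="same")

      print("relative error:", np.linalg.norm(approx - exact) / np.linalg.norm(exact))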

  6. Approximation of integral operators using convolution-product expansions

    OpenAIRE

    Escande, Paul; Weiss, Pierre

    2016-01-01

    We consider a class of linear integral operators with impulse responses varying regularly in time or space. These operators appear in a large number of applications ranging from signal/image processing to biology. Evaluating their action on functions is a computationally intensive problem necessary for many practical problems. We analyze a technique called convolution-product expansion: the operator is locally approximated by a convolution, allowing to design fast numerical algorithms based ...

  7. Two-dimensional Block of Spatial Convolution Algorithm and Simulation

    OpenAIRE

    Mussa Mohamed Ahmed

    2012-01-01

    This paper proposes an algorithm based on a sub-image segmentation strategy. The proposed scheme divides a grayscale image into overlapped 6×6 blocks, each of which is segmented into four small 3×3 non-overlapped sub-images. A new spatial approach for efficiently computing the 2-dimensional linear convolution or cross-correlation between suitably flipped and fixed filter coefficients (a sub-image for cross-correlation) and the corresponding input sub-image is presented. Computation of convolution is itera...

  8. Traffic sign recognition with deep convolutional neural networks

    OpenAIRE

    KARAMATIĆ, BORIS

    2016-01-01

    The problem of detection and recognition of traffic signs is becoming an important problem when it comes to the development of self driving cars and advanced driver assistance systems. In this thesis we will develop a system for detection and recognition of traffic signs. For the problem of detection we will use aggregate channel features and for the problem of recognition we will use a deep convolutional neural network. We will describe how convolutional neural networks work, how they are co...

  9. Metaheuristic Algorithms for Convolution Neural Network

    Science.gov (United States)

    Fanany, Mohamad Ivan; Arymurthy, Aniati Murni

    2016-01-01

    A typical modern optimization technique is usually either heuristic or metaheuristic. This technique has managed to solve some optimization problems in the research area of science, engineering, and industry. However, implementation strategy of metaheuristic for accuracy improvement on convolution neural networks (CNN), a famous deep learning method, is still rarely investigated. Deep learning relates to a type of machine learning technique, where its aim is to move closer to the goal of artificial intelligence of creating a machine that could successfully perform any intellectual tasks that can be carried out by a human. In this paper, we propose the implementation strategy of three popular metaheuristic approaches, that is, simulated annealing, differential evolution, and harmony search, to optimize CNN. The performances of these metaheuristic methods in optimizing CNN on classifying MNIST and CIFAR dataset were evaluated and compared. Furthermore, the proposed methods are also compared with the original CNN. Although the proposed methods show an increase in the computation time, their accuracy has also been improved (up to 7.14 percent). PMID:27375738

  10. Convolution kernels for multi-wavelength imaging

    CERN Document Server

    Boucaud, Alexandre; Abergel, Alain; Orieux, François; Dole, Hervé; Hadj-Youcef, Mohamed Amine

    2016-01-01

    Astrophysical images issued from different instruments and/or spectral bands often require to be processed together, either for fitting or comparison purposes. However each image is affected by an instrumental response, also known as PSF, that depends on the characteristics of the instrument as well as the wavelength and the observing strategy. Given the knowledge of the PSF in each band, a straightforward way of processing images is to homogenise them all to a target PSF using convolution kernels, so that they appear as if they had been acquired by the same instrument. We propose an algorithm that generates such PSF-matching kernels, based on Wiener filtering with a tunable regularisation parameter. This method ensures all anisotropic features in the PSFs to be taken into account. We compare our method to existing procedures using measured Herschel/PACS and SPIRE PSFs and simulated JWST/MIRI PSFs. Significant gains up to two orders of magnitude are obtained with respect to the use of kernels computed assumin...

  11. Convolution kernels for multi-wavelength imaging

    Science.gov (United States)

    Boucaud, A.; Bocchio, M.; Abergel, A.; Orieux, F.; Dole, H.; Hadj-Youcef, M. A.

    2016-12-01

    Astrophysical images issued from different instruments and/or spectral bands often need to be processed together, either for fitting or comparison purposes. However each image is affected by an instrumental response, also known as the point-spread function (PSF), that depends on the characteristics of the instrument as well as the wavelength and the observing strategy. Given the knowledge of the PSF in each band, a straightforward way of processing images is to homogenise them all to a target PSF using convolution kernels, so that they appear as if they had been acquired by the same instrument. We propose an algorithm that generates such PSF-matching kernels, based on Wiener filtering with a tunable regularisation parameter. This method ensures all anisotropic features in the PSFs to be taken into account. We compare our method to existing procedures using measured Herschel/PACS and SPIRE PSFs and simulated JWST/MIRI PSFs. Significant gains up to two orders of magnitude are obtained with respect to the use of kernels computed assuming Gaussian or circularised PSFs. A software package to compute these kernels is available at https://github.com/aboucaud/pypher
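
    A minimal NumPy sketch of the regularised Fourier-domain construction of a PSF-matching kernel, a simplified version of what the pypher code linked above implements; the Gaussian PSFs, sizes, and regularisation value are illustrative only.

      import numpy as np

      def gaussian_psf(size, fwhm):
          sigma = fwhm / 2.3548
          y, x = np.mgrid[:size, :size] - size // 2
          psf = np.exp(-(x**2 + y**2) / (2 * sigma**2))
          return psf / psf.sum()

      size = 65
      psf_src = gaussian_psf(size, fwhm=4.0)        # sharper instrument PSF
      psf_tgt = gaussian_psf(size, fwhm=8.0)        # broader target PSF

      lam = 1e-4                                     # tunable regularisation parameter
      S = np.fft.fft2(np.fft.ifftshift(psf_src))
      T = np.fft.fft2(np.fft.ifftshift(psf_tgt))
      K = np.real(np.fft.fftshift(np.fft.ifft2(T * np.conj(S) / (np.abs(S) ** 2 + lam))))

      # Check: convolving the source PSF with the kernel should reproduce the target PSF.
      recon_hat = S * np.fft.fft2(np.fft.ifftshift(K))
      recon = np.real(np.fft.fftshift(np.fft.ifft2(recon_hat)))
      print("max abs deviation from target PSF:", np.abs(recon - psf_tgt).max())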

  12. Image reconstruction from incomplete convolution data via total variation regularization

    Directory of Open Access Journals (Sweden)

    Zhida Shen

    2015-02-01

    Full Text Available Variational models with Total Variation (TV) regularization have long been known to preserve image edges and produce high quality reconstructions. On the other hand, recent theory on compressive sensing has shown that it is feasible to accurately reconstruct images from a few linear measurements via TV regularization. However, in general TV models are difficult to solve due to the nondifferentiability and the universal coupling of variables. In this paper, we propose the use of an alternating direction method for image reconstruction from highly incomplete convolution data, where an image is reconstructed as a minimizer of an energy function that sums a TV term for image regularity and a least squares term for data fitting. Our algorithm, called RecPK, takes advantage of problem structures and has an extremely low per-iteration cost. To demonstrate the efficiency of RecPK, we compare it with TwIST, a state-of-the-art algorithm for minimizing TV models. Moreover, we also demonstrate the usefulness of RecPK in image zooming.

  13. Synthesising Primary Reflections by Marchenko Redatuming and Convolutional Interferometry

    Science.gov (United States)

    Curtis, A.

    2015-12-01

    Standard active-source seismic processing and imaging steps such as velocity analysis and reverse time migration usually provide best results when all reflected waves in the input data are primaries (waves that reflect only once). Multiples (recorded waves that reflect multiple times) represent a source of coherent noise in data that must be suppressed to avoid imaging artefacts. Consequently, multiple-removal methods have been a principal direction of active-source seismic research for decades. We describe a new method to estimate primaries directly, which obviates the need for multiple removal. Primaries are constructed within convolutional interferometry by combining first arriving events of up-going and direct wave down-going Green's functions to virtual receivers in the subsurface. The required up-going wavefields to virtual receivers along discrete subsurface boundaries can be constructed using Marchenko redatuming. Crucially, this is possible without detailed models of the Earth's subsurface velocity structure: similarly to most migration techniques, the method only requires surface reflection data and estimates of direct (non-reflected) arrivals between subsurface sources and the acquisition surface. The method is demonstrated on a stratified synclinal model. It is shown both to improve reverse time migration compared to standard methods, and to be particularly robust against errors in the reference velocity model used.

  14. Convolution-based estimation of organ dose in tube current modulated CT

    Science.gov (United States)

    Tian, Xiaoyu; Segars, W. Paul; Dixon, Robert L.; Samei, Ehsan

    2016-05-01

    Estimating organ dose for clinical patients requires accurate modeling of the patient anatomy and the dose field of the CT exam. The modeling of patient anatomy can be achieved using a library of representative computational phantoms (Samei et al 2014 Pediatr. Radiol. 44 460-7). The modeling of the dose field can be challenging for CT exams performed with a tube current modulation (TCM) technique. The purpose of this work was to effectively model the dose field for TCM exams using a convolution-based method. A framework was further proposed for prospective and retrospective organ dose estimation in clinical practice. The study included 60 adult patients (age range: 18-70 years, weight range: 60-180 kg). Patient-specific computational phantoms were generated based on patient CT image datasets. A previously validated Monte Carlo simulation program was used to model a clinical CT scanner (SOMATOM Definition Flash, Siemens Healthcare, Forchheim, Germany). A practical strategy was developed to achieve real-time organ dose estimation for a given clinical patient. CTDIvol-normalized organ dose coefficients (h_Organ) under constant tube current were estimated and modeled as a function of patient size. Each clinical patient in the library was optimally matched to another computational phantom to obtain a representation of organ location/distribution. The patient organ distribution was convolved with a dose distribution profile to generate (CTDIvol)_organ,convolution values that quantified the regional dose field for each organ. The organ dose was estimated by multiplying (CTDIvol)_organ,convolution with the organ dose coefficients (h_Organ). To validate the accuracy of this dose estimation technique, the organ dose of the original clinical patient was estimated using Monte Carlo program with TCM profiles explicitly modeled. ...
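
    A deliberately crude toy of the convolution step described above: a CTDIvol-normalised dose field along z, derived here from a fabricated tube-current profile, is convolved with a simple spread kernel and integrated over an organ's z-extent, then scaled by a hypothetical organ dose coefficient h_organ. None of the numbers correspond to the study's phantoms, scanner, or coefficients.

      import numpy as np

      z = np.arange(0, 600)                            # scan range in mm
      tube_current = 200 + 100 * np.sin(z / 60.0)      # fabricated TCM profile (mA)
      ctdi_per_z = tube_current / tube_current.mean()  # CTDIvol-normalised dose field along z

      scatter = np.exp(-np.abs(np.arange(-50, 51)) / 20.0)   # crude dose-spread kernel
      scatter /= scatter.sum()
      dose_field = np.convolve(ctdi_per_z, scatter, mode="same")

      organ = np.zeros_like(z, dtype=float)            # organ occupies 250-350 mm
      organ[250:350] = 1.0
      organ /= organ.sum()

      ctdi_organ_conv = np.sum(organ * dose_field)     # regional dose field seen by the organ
      h_organ = 1.2                                    # hypothetical organ dose coefficient
      print("estimated organ dose (normalised units):", h_organ * ctdi_organ_conv)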

  15. ARKCoS: Artifact-Suppressed Accelerated Radial Kernel Convolution on the Sphere

    CERN Document Server

    Elsner, Franz

    2011-01-01

    We describe a hybrid Fourier/direct space convolution algorithm for compact radial (azimuthally symmetric) kernels on the sphere. For high resolution maps covering a large fraction of the sky, our implementation takes advantage of the inexpensive massive parallelism afforded by consumer graphics processing units (GPUs). Applications involve modeling of instrumental beam shapes in terms of compact kernels, computation of fine-scale wavelet transformations, and optimal filtering for the detection of point sources. Our algorithm works for any pixelization where pixels are grouped into isolatitude rings. Even for kernels that are not bandwidth limited, ringing features are completely absent on an ECP grid. We demonstrate that they can be highly suppressed on the popular HEALPix pixelization, for which we develop a freely available implementation of the algorithm. As an example application, we show that running on a high-end consumer graphics card our method speeds up beam convolution for simulations of a characte...

  16. Visual and Textual Sentiment Analysis of a Microblog Using Deep Convolutional Neural Networks

    Directory of Open Access Journals (Sweden)

    Yuhai Yu

    2016-06-01

    Full Text Available Sentiment analysis of online social media has attracted significant interest recently. Many studies have been performed, but most existing methods focus on either only textual content or only visual content. In this paper, we utilize deep learning models in a convolutional neural network (CNN) to analyze the sentiment in Chinese microblogs from both textual and visual content. We first train a CNN on top of pre-trained word vectors for textual sentiment analysis and employ a deep convolutional neural network (DNN) with generalized dropout for visual sentiment analysis. We then evaluate our sentiment prediction framework on a dataset collected from a famous Chinese social media network (Sina Weibo) that includes text and related images and demonstrate state-of-the-art results on this Chinese sentiment analysis benchmark.

  17. Using hybrid GPU/CPU kernel splitting to accelerate spherical convolutions

    Science.gov (United States)

    Sutter, P. M.; Wandelt, B. D.; Elsner, F.

    2015-06-01

    We present a general method for accelerating by more than an order of magnitude the convolution of pixelated functions on the sphere with a radially-symmetric kernel. Our method splits the kernel into a compact real-space component and a compact spherical harmonic space component. These components can then be convolved in parallel using an inexpensive commodity GPU and a CPU. We provide models for the computational cost of both real-space and Fourier space convolutions and an estimate for the approximation error. Using these models we can determine the optimum split that minimizes the wall clock time for the convolution while satisfying the desired error bounds. We apply this technique to the problem of simulating a cosmic microwave background (CMB) anisotropy sky map at the resolution typical of the high resolution maps produced by the Planck mission. For the main Planck CMB science channels we achieve a speedup of over a factor of ten, assuming an acceptable fractional rms error of order 10^-5 in the power spectrum of the output map.
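
    The algebraic identity behind the split is easy to demonstrate in 1D with NumPy: write the kernel as a compact near part plus a smooth far part, convolve the near part directly (as the paper does in real space) and the far part via the FFT, and check that the two pieces sum to the full convolution. The spherical geometry and GPU/CPU scheduling of the paper are not modelled here.

      import numpy as np

      rng = np.random.default_rng(2)
      N = 1024
      signal = rng.standard_normal(N)
      x = np.arange(N) - N // 2
      kernel = np.exp(-np.abs(x) / 40.0)              # radially symmetric 1D stand-in kernel
      kernel /= kernel.sum()

      near = np.where(np.abs(x) <= 16, kernel, 0.0)   # compact real-space component
      far = kernel - near                              # remainder, handled in Fourier space

      def circ_conv_fft(a, b_centered):
          return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(np.fft.ifftshift(b_centered))))

      full = circ_conv_fft(signal, kernel)
      far_part = circ_conv_fft(signal, far)            # smooth part via FFT
      near_part = np.zeros(N)
      for idx in np.nonzero(near)[0]:                  # compact part by direct summation
          near_part += near[idx] * np.roll(signal, idx - N // 2)

      print("max split error:", np.abs(full - (near_part + far_part)).max())   # ~1e-15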

  18. Convolution power spectrum analysis for FMRI data based on prior image signal.

    Science.gov (United States)

    Zhang, Jiang; Chen, Huafu; Fang, Fang; Liao, Wei

    2010-02-01

    Functional MRI (fMRI) data-processing methods based on changes in the time domain involve, among other things, correlation analysis and use of the general linear model with statistical parametric mapping (SPM). Unlike conventional fMRI data analysis methods, which aim to model the blood-oxygen-level-dependent (BOLD) response of voxels as a function of time, the theory of power spectrum (PS) analysis focuses completely on understanding the dynamic energy change of interacting systems. We propose a new convolution PS (CPS) analysis of fMRI data, based on the theory of matched filtering, to detect brain functional activation in fMRI data. First, convolution signals are computed between the measured fMRI signals and the image signal of the prior experimental pattern to suppress noise in the fMRI data. Then, the PS density of the convolution signal is used as a quantitative energy index of BOLD signal change. The data from simulation studies and in vivo fMRI studies, including block-design experiments, reveal that the CPS method enables a more effective detection of some aspects of brain functional activation, as compared with the canonical PS SPM and the support vector machine methods. Our results demonstrate that the CPS method is useful as a complementary analysis in revealing brain functional information regarding the complex nature of fMRI time series.
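
    A stripped-down NumPy sketch of the two processing steps as described (convolve each voxel time series with the prior experimental pattern, then examine the power spectral density of the convolved signal); the statistic, thresholds, and SPM comparison are omitted, and the block design and noise levels are fabricated.

      import numpy as np

      rng = np.random.default_rng(3)
      T = 200
      block = np.tile(np.concatenate([np.zeros(10), np.ones(10)]), T // 20)   # prior block design

      active = 0.8 * block + rng.standard_normal(T)      # voxel following the paradigm
      inactive = rng.standard_normal(T)                   # pure-noise voxel

      def cps_energy(ts, pattern):
          conv = np.convolve(ts - ts.mean(), pattern - pattern.mean(), mode="full")
          psd = np.abs(np.fft.rfft(conv)) ** 2 / len(conv)
          f = np.fft.rfftfreq(len(conv))
          band = (f > 1 / 25) & (f < 1 / 15)               # around the paradigm frequency (1/20)
          return psd[band].sum()

      print("active voxel energy:  ", cps_energy(active, block))
      print("inactive voxel energy:", cps_energy(inactive, block))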

  19. Noise-enhanced convolutional neural networks.

    Science.gov (United States)

    Audhkhasi, Kartik; Osoba, Osonde; Kosko, Bart

    2016-06-01

    Injecting carefully chosen noise can speed convergence in the backpropagation training of a convolutional neural network (CNN). The Noisy CNN algorithm speeds training on average because the backpropagation algorithm is a special case of the generalized expectation-maximization (EM) algorithm and because such carefully chosen noise always speeds up the EM algorithm on average. The CNN framework gives a practical way to learn and recognize images because backpropagation scales with training data. It has only linear time complexity in the number of training samples. The Noisy CNN algorithm finds a special separating hyperplane in the network's noise space. The hyperplane arises from the likelihood-based positivity condition that noise-boosts the EM algorithm. The hyperplane cuts through a uniform-noise hypercube or Gaussian ball in the noise space depending on the type of noise used. Noise chosen from above the hyperplane speeds training on average. Noise chosen from below slows it on average. The algorithm can inject noise anywhere in the multilayered network. Adding noise to the output neurons reduced the average per-iteration training-set cross entropy by 39% on a standard MNIST image test set of handwritten digits. It also reduced the average per-iteration training-set classification error by 47%. Adding noise to the hidden layers can also reduce these performance measures. The noise benefit is most pronounced for smaller data sets because the largest EM hill-climbing gains tend to occur in the first few iterations. This noise effect can assist random sampling from large data sets because it allows a smaller random sample to give the same or better performance than a noiseless sample gives.

  20. Output-sensitive 3D line integral convolution.

    Science.gov (United States)

    Falk, Martin; Weiskopf, Daniel

    2008-01-01

    We propose an output-sensitive visualization method for 3D line integral convolution (LIC) whose rendering speed is largely independent of the data set size and mostly governed by the complexity of the output on the image plane. Our approach of view-dependent visualization tightly links the LIC generation with the volume rendering of the LIC result in order to avoid the computation of unnecessary LIC points: early-ray termination and empty-space leaping techniques are used to skip the computation of the LIC integral in a lazy-evaluation approach; both ray casting and texture slicing can be used as volume-rendering techniques. The input noise is modeled in object space to allow for temporal coherence under object and camera motion. Different noise models are discussed, covering dense representations based on filtered white noise all the way to sparse representations similar to oriented LIC. Aliasing artifacts are avoided by frequency control over the 3D noise and by employing a 3D variant of MIPmapping. A range of illumination models is applied to the LIC streamlines: different codimension-2 lighting models and a novel gradient-based illumination model that relies on precomputed gradients and does not require any direct calculation of gradients after the LIC integral is evaluated. We discuss the issue of proper sampling of the LIC and volume-rendering integrals by employing a frequency-space analysis of the noise model and the precomputed gradients. Finally, we demonstrate that our visualization approach lends itself to a fast graphics processing unit (GPU) implementation that supports both steady and unsteady flow. Therefore, this 3D LIC method allows users to interactively explore 3D flow by means of high-quality, view-dependent, and adaptive LIC volume visualization. Applications to flow visualization in combination with feature extraction and focus-and-context visualization are described, a comparison to previous methods is provided, and a detailed performance
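
    For orientation, a compact 2D line integral convolution in NumPy (the record above concerns the much harder 3D, view-dependent, GPU-accelerated case): each pixel traces a short streamline through a noise texture and averages the noise values it passes. Field, step size, and streamline length are arbitrary choices.

      import numpy as np

      rng = np.random.default_rng(4)
      N = 128
      noise = rng.random((N, N))
      yy, xx = np.mgrid[0:N, 0:N]
      vx, vy = -(yy - N / 2), (xx - N / 2)            # circular vector field
      mag = np.hypot(vx, vy) + 1e-9
      vx, vy = vx / mag, vy / mag                     # unit-speed streamlines

      def lic(noise, vx, vy, length=15, h=0.8):
          out = np.zeros_like(noise)
          for direction in (+1.0, -1.0):              # integrate forwards and backwards
              px, py = xx.astype(float), yy.astype(float)
              acc = np.zeros_like(noise)
              for _ in range(length):
                  ix = np.clip(px.round().astype(int), 0, N - 1)
                  iy = np.clip(py.round().astype(int), 0, N - 1)
                  acc += noise[iy, ix]
                  px += direction * h * vx[iy, ix]    # Euler step along the field
                  py += direction * h * vy[iy, ix]
              out += acc / length
          return out / 2.0

      image = lic(noise, vx, vy)
      print(image.shape, image.min(), image.max())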

  1. Joint multiple fully connected convolutional neural network with extreme learning machine for hepatocellular carcinoma nuclei grading.

    Science.gov (United States)

    Li, Siqi; Jiang, Huiyan; Pang, Wenbo

    2017-05-01

    Accurate cell grading of cancerous tissue pathological image is of great importance in medical diagnosis and treatment. This paper proposes a joint multiple fully connected convolutional neural network with extreme learning machine (MFC-CNN-ELM) architecture for hepatocellular carcinoma (HCC) nuclei grading. First, in preprocessing stage, each grayscale image patch with the fixed size is obtained using center-proliferation segmentation (CPS) method and the corresponding labels are marked under the guidance of three pathologists. Next, a multiple fully connected convolutional neural network (MFC-CNN) is designed to extract the multi-form feature vectors of each input image automatically, which considers multi-scale contextual information of deep layer maps sufficiently. After that, a convolutional neural network extreme learning machine (CNN-ELM) model is proposed to grade HCC nuclei. Finally, a back propagation (BP) algorithm, which contains a new up-sample method, is utilized to train MFC-CNN-ELM architecture. The experiment comparison results demonstrate that our proposed MFC-CNN-ELM has superior performance compared with related works for HCC nuclei grading. Meanwhile, external validation using ICPR 2014 HEp-2 cell dataset shows the good generalization of our MFC-CNN-ELM architecture. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. A quantum algorithm for Viterbi decoding of classical convolutional codes

    Science.gov (United States)

    Grice, Jon R.; Meyer, David A.

    2015-07-01

    We present a quantum Viterbi algorithm (QVA) with better than classical performance under certain conditions. In this paper, the proposed algorithm is applied to decoding classical convolutional codes, for instance, codes with large constraint length and short decode frames. Other applications of the classical Viterbi algorithm where the number of states is large (e.g., speech processing) could experience significant speedup with the QVA. The QVA exploits the fact that the decoding trellis is similar to the butterfly diagram of the fast Fourier transform, with its corresponding fast quantum algorithm. The tensor-product structure of the butterfly diagram corresponds to a quantum superposition that we show can be efficiently prepared. The quantum speedup is possible because the performance of the QVA depends on the fanout (the number of possible transitions from any given state in the hidden Markov model), which is in general much smaller than the number of states. The QVA constructs a superposition of states which correspond to all legal paths through the decoding lattice, with phase as a function of the probability of the path being taken given received data. A specialized amplitude amplification procedure is applied one or more times to recover a superposition where the most probable path has a high probability of being measured.
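
    For reference, a plain-Python hard-decision Viterbi decoder for the classical rate-1/2, constraint-length-3 convolutional code with generators 7 and 5 (octal); this is the classical baseline whose trellis structure the quantum algorithm exploits, not the quantum algorithm itself.

      G = [(1, 1, 1), (1, 0, 1)]                      # generator polynomials 7 and 5 (octal)

      def encode(bits):
          state = (0, 0)                              # last two input bits, most recent first
          out = []
          for b in bits:
              window = (b,) + state
              out += [sum(w * g for w, g in zip(window, gen)) % 2 for gen in G]
              state = (b, state[0])
          return out

      def viterbi(received):
          n_states = 4
          metric = [0.0] + [float("inf")] * (n_states - 1)
          paths = [[] for _ in range(n_states)]
          for i in range(0, len(received), 2):
              r = received[i:i + 2]
              new_metric = [float("inf")] * n_states
              new_paths = [[] for _ in range(n_states)]
              for s in range(n_states):               # s encodes the two memory bits
                  m1, m2 = s >> 1, s & 1
                  for b in (0, 1):
                      out = [(b * g[0] + m1 * g[1] + m2 * g[2]) % 2 for g in G]
                      ns = (b << 1) | m1
                      cand = metric[s] + sum(o != rb for o, rb in zip(out, r))
                      if cand < new_metric[ns]:
                          new_metric[ns] = cand
                          new_paths[ns] = paths[s] + [b]
              metric, paths = new_metric, new_paths
          return paths[min(range(n_states), key=lambda s: metric[s])]

      message = [1, 0, 1, 1, 0, 0, 1, 0]
      coded = encode(message)
      coded[3] ^= 1                                    # flip one bit to simulate channel noise
      print(viterbi(coded) == message)                 # True: the single error is corrected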

  3. Very Deep Convolutional Neural Networks for Morphologic Classification of Erythrocytes.

    Science.gov (United States)

    Durant, Thomas J S; Olson, Eben M; Schulz, Wade L; Torres, Richard

    2017-09-06

    Morphologic profiling of the erythrocyte population is a widely used and clinically valuable diagnostic modality, but one that relies on a slow manual process associated with significant labor cost and limited reproducibility. Automated profiling of erythrocytes from digital images by capable machine learning approaches would augment the throughput and value of morphologic analysis. To this end, we sought to evaluate the performance of leading implementation strategies for convolutional neural networks (CNNs) when applied to classification of erythrocytes based on morphology. Erythrocytes were manually classified into 1 of 10 classes using a custom-developed Web application. Using recent literature to guide architectural considerations for neural network design, we implemented a "very deep" CNN, consisting of >150 layers, with dense shortcut connections. The final database comprised 3737 labeled cells. Ensemble model predictions on unseen data demonstrated a harmonic mean of recall and precision metrics of 92.70% and 89.39%, respectively. Of the 748 cells in the test set, 23 misclassification errors were made, with a correct classification frequency of 90.60%, represented as a harmonic mean across the 10 morphologic classes. These findings indicate that erythrocyte morphology profiles could be measured with a high degree of accuracy with "very deep" CNNs. Further, these data support future efforts to expand classes and optimize practical performance in a clinical environment as a prelude to full implementation as a clinical tool. © 2017 American Association for Clinical Chemistry.

  4. Predicting Semantic Descriptions from Medical Images with Convolutional Neural Networks.

    Science.gov (United States)

    Schlegl, Thomas; Waldstein, Sebastian M; Vogl, Wolf-Dieter; Schmidt-Erfurth, Ursula; Langs, Georg

    2015-01-01

    Learning representative computational models from medical imaging data requires large training data sets. Often, voxel-level annotation is unfeasible for sufficient amounts of data. An alternative to manual annotation, is to use the enormous amount of knowledge encoded in imaging data and corresponding reports generated during clinical routine. Weakly supervised learning approaches can link volume-level labels to image content but suffer from the typical label distributions in medical imaging data where only a small part consists of clinically relevant abnormal structures. In this paper we propose to use a semantic representation of clinical reports as a learning target that is predicted from imaging data by a convolutional neural network. We demonstrate how we can learn accurate voxel-level classifiers based on weak volume-level semantic descriptions on a set of 157 optical coherence tomography (OCT) volumes. We specifically show how semantic information increases classification accuracy for intraretinal cystoid fluid (IRC), subretinal fluid (SRF) and normal retinal tissue, and how the learning algorithm links semantic concepts to image content and geometry.

  5. Classification and Segmentation of Satellite Orthoimagery Using Convolutional Neural Networks

    Directory of Open Access Journals (Sweden)

    Martin Längkvist

    2016-04-01

    Full Text Available The availability of high-resolution remote sensing (HRRS) data has opened up the possibility for new interesting applications, such as per-pixel classification of individual objects in greater detail. This paper shows how a convolutional neural network (CNN) can be applied to multispectral orthoimagery and a digital surface model (DSM) of a small city for a full, fast and accurate per-pixel classification. The predicted low-level pixel classes are then used to improve the high-level segmentation. Various design choices of the CNN architecture are evaluated and analyzed. The investigated land area is fully manually labeled into five categories (vegetation, ground, roads, buildings and water), and the classification accuracy is compared to other per-pixel classification works on other land areas that have a similar choice of categories. The results of the full classification and segmentation on selected segments of the map show that CNNs are a viable tool for solving both the segmentation and object recognition task for remote sensing data.

  6. Protein Secondary Structure Prediction Using Deep Convolutional Neural Fields

    Science.gov (United States)

    Wang, Sheng; Peng, Jian; Ma, Jianzhu; Xu, Jinbo

    2016-01-01

    Protein secondary structure (SS) prediction is important for studying protein structure and function. When only the sequence (profile) information is used as input feature, currently the best predictors can obtain ~80% Q3 accuracy, which has not been improved in the past decade. Here we present DeepCNF (Deep Convolutional Neural Fields) for protein SS prediction. DeepCNF is a Deep Learning extension of Conditional Neural Fields (CNF), which is an integration of Conditional Random Fields (CRF) and shallow neural networks. DeepCNF can model not only complex sequence-structure relationship by a deep hierarchical architecture, but also interdependency between adjacent SS labels, so it is much more powerful than CNF. Experimental results show that DeepCNF can obtain ~84% Q3 accuracy, ~85% SOV score, and ~72% Q8 accuracy, respectively, on the CASP and CAMEO test proteins, greatly outperforming currently popular predictors. As a general framework, DeepCNF can be used to predict other protein structure properties such as contact number, disorder regions, and solvent accessibility.

  7. Building Extraction from Remote Sensing Data Using Fully Convolutional Networks

    Science.gov (United States)

    Bittner, K.; Cui, S.; Reinartz, P.

    2017-05-01

    Building detection and footprint extraction are highly demanded for many remote sensing applications. Though most previous works have shown promising results, the automatic extraction of building footprints still remains a nontrivial topic, especially in complex urban areas. Recently developed extensions of the CNN framework made it possible to perform dense pixel-wise classification of input images. Based on these abilities we propose a methodology, which automatically generates a full resolution binary building mask out of a Digital Surface Model (DSM) using a Fully Convolutional Network (FCN) architecture. The advantage of using the depth information is that it provides geometrical silhouettes and allows a better separation of buildings from background as well as through its invariance to illumination and color variations. The proposed framework has mainly two steps. Firstly, the FCN is trained on a large set of patches consisting of normalized DSM (nDSM) as inputs and available ground truth building mask as target outputs. Secondly, the generated predictions from FCN are viewed as unary terms for a Fully connected Conditional Random Field (FCRF), which enables us to create a final binary building mask. A series of experiments demonstrate that our methodology is able to extract accurate building footprints which are close to the buildings' original shapes to a high degree. The quantitative and qualitative analyses show the significant improvements of the results in contrast to the multi-layer fully connected network from our previous work.

  8. BUILDING EXTRACTION FROM REMOTE SENSING DATA USING FULLY CONVOLUTIONAL NETWORKS

    Directory of Open Access Journals (Sweden)

    K. Bittner

    2017-05-01

    Full Text Available Building detection and footprint extraction are highly demanded for many remote sensing applications. Though most previous works have shown promising results, the automatic extraction of building footprints still remains a nontrivial topic, especially in complex urban areas. Recently developed extensions of the CNN framework made it possible to perform dense pixel-wise classification of input images. Based on these abilities we propose a methodology, which automatically generates a full resolution binary building mask out of a Digital Surface Model (DSM) using a Fully Convolutional Network (FCN) architecture. The advantage of using the depth information is that it provides geometrical silhouettes and allows a better separation of buildings from background as well as through its invariance to illumination and color variations. The proposed framework has mainly two steps. Firstly, the FCN is trained on a large set of patches consisting of normalized DSM (nDSM) as inputs and available ground truth building mask as target outputs. Secondly, the generated predictions from FCN are viewed as unary terms for a Fully connected Conditional Random Field (FCRF), which enables us to create a final binary building mask. A series of experiments demonstrate that our methodology is able to extract accurate building footprints which are close to the buildings' original shapes to a high degree. The quantitative and qualitative analyses show the significant improvements of the results in contrast to the multi-layer fully connected network from our previous work.

  9. On the Relationship between Visual Attributes and Convolutional Networks

    KAUST Repository

    Castillo, Victor

    2015-06-02

    One of the cornerstone principles of deep models is their abstraction capacity, i.e. their ability to learn abstract concepts from ‘simpler’ ones. Through extensive experiments, we characterize the nature of the relationship between abstract concepts (specifically objects in images) learned by popular and high performing convolutional networks (conv-nets) and established mid-level representations used in computer vision (specifically semantic visual attributes). We focus on attributes due to their impact on several applications, such as object description, retrieval and mining, and active (and zero-shot) learning. Among the findings we uncover, we show empirical evidence of the existence of Attribute Centric Nodes (ACNs) within a conv-net, which is trained to recognize objects (not attributes) in images. These special conv-net nodes (1) collectively encode information pertinent to visual attribute representation and discrimination, (2) are unevenly and sparsely distributed across all layers of the conv-net, and (3) play an important role in conv-net based object recognition.

  10. Innervation of the renal proximal convoluted tubule of the rat

    Energy Technology Data Exchange (ETDEWEB)

    Barajas, L.; Powers, K. (Harbor-UCLA Medical Center, Torrance (USA))

    1989-12-01

    Experimental data suggest the proximal tubule as a major site of neurogenic influence on tubular function. The functional and anatomical axial heterogeneity of the proximal tubule prompted this study of the distribution of innervation sites along the early, mid, and late proximal convoluted tubule (PCT) of the rat. Serial section autoradiograms, with tritiated norepinephrine serving as a marker for monoaminergic nerves, were used in this study. Freehand clay models and graphic reconstructions of proximal tubules permitted a rough estimation of the location of the innervation sites along the PCT. In the subcapsular nephrons, the early PCT (first third) was devoid of innervation sites with most of the innervation occurring in the mid (middle third) and in the late (last third) PCT. Innervation sites were found in the early PCT in nephrons located deeper in the cortex. In juxtamedullary nephrons, innervation sites could be observed on the PCT as it left the glomerulus. This gradient of PCT innervation can be explained by the different tubulovascular relationships of nephrons at different levels of the cortex. The absence of innervation sites in the early PCT of subcapsular nephrons suggests that any influence of the renal nerves on the early PCT might be due to an effect of neurotransmitter released from renal nerves reaching the early PCT via the interstitium and/or capillaries.

  11. Classification of breast cancer cytological specimen using convolutional neural network

    Science.gov (United States)

    Żejmo, Michał; Kowal, Marek; Korbicz, Józef; Monczak, Roman

    2017-01-01

    The paper presents a deep learning approach for automatic classification of breast tumors based on fine needle cytology. The main aim of the system is to distinguish benign from malignant cases based on microscopic images. The experiment was carried out on cytological samples derived from 50 patients (25 benign cases + 25 malignant cases) diagnosed in Regional Hospital in Zielona Góra. To classify microscopic images, we used convolutional neural networks (CNN) of two types: GoogLeNet and AlexNet. Due to the very large size of images of cytological specimen (on average 200000 × 100000 pixels), they were divided into smaller patches of size 256 × 256 pixels. Breast cancer classification is usually based on morphometric features of nuclei. Therefore, training and validation patches were selected using Support Vector Machine (SVM) so that a suitable amount of cell material was depicted. Neural classifiers were tuned using a GPU-accelerated implementation of the gradient descent algorithm. Training error was defined as a cross-entropy classification loss. Classification accuracy was defined as the percentage ratio of successfully classified validation patches to the total number of validation patches. The best accuracy rate of 83% was obtained by the GoogLeNet model. We observed that more misclassified patches belong to malignant cases.

  12. Infimal convolution of total generalized variation functionals for dynamic MRI.

    Science.gov (United States)

    Schloegl, Matthias; Holler, Martin; Schwarzl, Andreas; Bredies, Kristian; Stollberger, Rudolf

    2017-07-01

    To accelerate dynamic MR applications using infimal convolution of total generalized variation functionals (ICTGV) as spatio-temporal regularization for image reconstruction. ICTGV comprises a new image prior tailored to dynamic data that achieves regularization via optimal local balancing between spatial and temporal regularity. Here it is applied for the first time to the reconstruction of dynamic MRI data. CINE and perfusion scans were investigated to study the influence of time dependent morphology and temporal contrast changes. ICTGV regularized reconstruction from subsampled MR data is formulated as a convex optimization problem. Global solutions are obtained by employing a duality based non-smooth optimization algorithm. The reconstruction error remains on a low level with acceleration factors up to 16 for both CINE and dynamic contrast-enhanced MRI data. The GPU implementation of the algorithm suits clinical demands by reducing reconstruction times of one dataset to less than 4 min. ICTGV based dynamic magnetic resonance imaging reconstruction allows for vast undersampling and therefore enables very high spatial and temporal resolutions, spatial coverage and reduced scan time. With the proposed distinction of model and regularization parameters it offers a new and robust method of flexible decomposition into components with different degrees of temporal regularity. Magn Reson Med 78:142-155, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  13. Evaluation of convolutional neural networks for visual recognition.

    Science.gov (United States)

    Nebauer, C

    1998-01-01

    Convolutional neural networks provide an efficient method to constrain the complexity of feedforward neural networks by weight sharing and restriction to local connections. This network topology has been applied in particular to image classification when sophisticated preprocessing is to be avoided and raw images are to be classified directly. In this paper two variations of convolutional networks--neocognitron and a modification of neocognitron--are compared with classifiers based on fully connected feedforward layers (i.e., multilayer perceptron, nearest neighbor classifier, auto-encoding network) with respect to their visual recognition performance. Besides the original neocognitron, a modification of the neocognitron is proposed which combines perceptron neurons with the localized network structure of the neocognitron. Instead of training convolutional networks by time-consuming error backpropagation, in this work a modular procedure is applied whereby layers are trained sequentially from the input to the output layer in order to recognize features of increasing complexity. For a quantitative experimental comparison with standard classifiers two very different recognition tasks have been chosen: handwritten digit recognition and face recognition. In the first example on handwritten digit recognition the generalization of convolutional networks is compared to fully connected networks. In several experiments the influence of variations of position, size, and orientation of digits is determined and the relation between training sample size and validation error is observed. In the second example recognition of human faces is investigated under constrained and variable conditions with respect to face orientation and illumination and the limitations of convolutional networks are discussed.

  14. Glaucoma detection based on deep convolutional neural network.

    Science.gov (United States)

    Xiangyu Chen; Yanwu Xu; Damon Wing Kee Wong; Tien Yin Wong; Jiang Liu

    2015-08-01

    Glaucoma is a chronic and irreversible eye disease, which leads to deterioration in vision and quality of life. In this paper, we develop a deep learning (DL) architecture with convolutional neural network for automated glaucoma diagnosis. Deep learning systems, such as convolutional neural networks (CNNs), can infer a hierarchical representation of images to discriminate between glaucoma and non-glaucoma patterns for diagnostic decisions. The proposed DL architecture contains six learned layers: four convolutional layers and two fully-connected layers. Dropout and data augmentation strategies are adopted to further boost the performance of glaucoma diagnosis. Extensive experiments are performed on the ORIGA and SCES datasets. The results show an area under the receiver operating characteristic curve (AUC) in glaucoma detection of 0.831 and 0.887 on the two databases, much better than state-of-the-art algorithms. The method could be used for glaucoma detection.

  15. Two dimensional convolute integers for machine vision and image recognition

    Science.gov (United States)

    Edwards, Thomas R.

    1988-01-01

    Machine vision and image recognition require sophisticated image processing prior to the application of Artificial Intelligence. Two Dimensional Convolute Integer Technology is an innovative mathematical approach for addressing machine vision and image recognition. This new technology generates a family of digital operators for addressing optical images and related two dimensional data sets. The operators are regression generated, integer valued, zero phase shifting, convoluting, frequency sensitive, two dimensional low pass, high pass and band pass filters that are mathematically equivalent to surface fitted partial derivatives. These operators are applied non-recursively either as classical convolutions (replacement point values), interstitial point generators (bandwidth broadening or resolution enhancement), or as missing value calculators (compensation for dead array element values). These operators show frequency-sensitive, scale-invariant feature-selection properties. Such tasks as boundary/edge enhancement and noise or small size pixel disturbance removal can readily be accomplished. For feature selection, tight band-pass operators are essential. Results from test cases are given.

  16. Improving displayed resolution in convolution reconstruction of digital holograms

    Institute of Scientific and Technical Information of China (English)

    FAN Qi; ZHAO Jian-lin; ZHANG Yan-cao; WANG Jun; DI Jiang-lei

    2006-01-01

    In digital holographic microscopy, when the object is placed near the CCD, the Fresnel approximation is no longer valid and the convolution approach has to be applied. With this approach, the sampling spacing of the reconstructed image plane is equal to the pixel size of the CCD. If the lateral resolution of the reconstructed image is higher than that of the CCD, the Nyquist sampling criterion is violated and aliasing errors will be introduced. In this Letter, a new method is proposed to solve this problem by investigating convolution reconstruction of digital holograms. By appending enough zeros to the angular spectrum between the two FFT's in convolution reconstruction of digital holograms, the displayed resolution of the reconstructed image can be improved. Experimental results show a good agreement with theoretical analysis.
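    The principle behind the zero-padding step described above applies to any band-limited reconstruction: appending zeros to the spectrum before the inverse FFT yields a finer, sinc-interpolated sampling grid. The sketch below demonstrates this in 1-D with NumPy (the holographic case is 2-D and acts on the angular spectrum); the signal, grid size, and padding factor are illustrative.

```python
# Frequency-domain zero-padding: appending zeros to a spectrum before the
# inverse FFT produces a finer (sinc-interpolated) sampling of the signal.
# 1-D illustration of the idea used above for hologram reconstruction.
import numpy as np

n, pad_factor = 64, 4
x = np.linspace(0, 1, n, endpoint=False)
signal = np.cos(2 * np.pi * 3 * x) + 0.5 * np.sin(2 * np.pi * 5 * x)

spectrum = np.fft.fftshift(np.fft.fft(signal))
pad = (pad_factor - 1) * n // 2
padded = np.pad(spectrum, pad)                 # zeros appended on both sides

fine = np.fft.ifft(np.fft.ifftshift(padded)) * pad_factor
x_fine = np.linspace(0, 1, pad_factor * n, endpoint=False)
reference = np.cos(2 * np.pi * 3 * x_fine) + 0.5 * np.sin(2 * np.pi * 5 * x_fine)

# The interpolated samples match the band-limited signal on the finer grid.
assert np.allclose(fine.real, reference, atol=1e-9)
```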

  17. Throughput Scaling Of Convolution For Error-Tolerant Multimedia Applications

    CERN Document Server

    Anam, Mohammad Ashraful

    2012-01-01

    Convolution and cross-correlation are the basis of filtering and pattern or template matching in multimedia signal processing. We propose two throughput scaling options for any one-dimensional convolution kernel in programmable processors by adjusting the imprecision (distortion) of computation. Our approach is based on scalar quantization, followed by two forms of tight packing in floating-point (one of which is proposed in this paper) that allow for concurrent calculation of multiple results. We illustrate how our approach can operate as an optional pre- and post-processing layer for off-the-shelf optimized convolution routines. This is useful for multimedia applications that are tolerant to processing imprecision and for cases where the input signals are inherently noisy (error tolerant multimedia applications). Indicative experimental results with a digital music matching system and an MPEG-7 audio descriptor system demonstrate that the proposed approach offers up to 175% increase in processing throughput...
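    The record above packs several quantized operands into one floating-point stream so that a single convolution call produces several results. A minimal sketch of that general idea (not the authors' exact packing scheme) follows; the scale factor, quantization ranges, and kernel are illustrative assumptions chosen so that all intermediate values stay exactly representable in double precision.

```python
# Packing two quantized input blocks into one floating-point array so that a
# single convolution yields both results, recovered afterwards via linearity.
# Not the paper's exact scheme; ranges are chosen so the arithmetic stays exact.
import numpy as np

rng = np.random.default_rng(0)
SCALE = 2.0 ** 24                        # separation between the packed operands

a = rng.integers(0, 256, 128).astype(np.float64)   # quantized block 1 (non-negative)
b = rng.integers(0, 256, 128).astype(np.float64)   # quantized block 2 (non-negative)
h = rng.integers(0, 16, 9).astype(np.float64)      # small integer-valued kernel

packed = a * SCALE + b                   # one array carries both blocks
y = np.convolve(packed, h)               # single convolution call

y_a = np.round(y / SCALE)                # high "digit": conv(a, h)
y_b = y - y_a * SCALE                    # low  "digit": conv(b, h)

assert np.array_equal(y_a, np.convolve(a, h))
assert np.array_equal(y_b, np.convolve(b, h))
```
    The same trick extends to tighter packings and to off-the-shelf convolution routines, at the cost of the processing imprecision the record discusses.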

  18. Spatially variant convolution with scaled B-splines.

    Science.gov (United States)

    Muñoz-Barrutia, Arrate; Artaechevarria, Xabier; Ortiz-de-Solorzano, Carlos

    2010-01-01

    We present an efficient algorithm to compute multidimensional spatially variant convolutions--or inner products--between N-dimensional signals and B-splines--or their derivatives--of any order and arbitrary sizes. The multidimensional B-splines are computed as tensor products of 1-D B-splines, and the input signal is expressed in a B-spline basis. The convolution is then computed by using an adequate combination of integration and scaled finite differences as to have, for moderate and large scale values, a computational complexity that does not depend on the scaling factor. To show in practice the benefit of using our spatially variant convolution approach, we present an adaptive noise filter that adjusts the kernel size to the local image characteristics and a high sensitivity local ridge detector.

  19. The Existence of Strongly-MDS Convolutional Codes

    CERN Document Server

    Hutchinson, Ryan

    2008-01-01

    It is known that maximum distance separable and maximum distance profile convolutional codes exist over large enough finite fields of any characteristic for all parameters $(n,k,\delta)$. It has been conjectured that the same is true for convolutional codes that are strongly maximum distance separable. Using methods from linear systems theory, we resolve this conjecture by showing that, over a large enough finite field of any characteristic, codes which are simultaneously maximum distance profile and strongly maximum distance separable exist for all parameters $(n,k,\delta)$.

  20. Convolutional cylinder-type block-circulant cycle codes

    Directory of Open Access Journals (Sweden)

    Mohammad Gholami

    2013-06-01

    Full Text Available In this paper, we consider a class of column-weight two quasi-cyclic low-density parity-check codes in which the girth can be large enough, as an arbitrary multiple of 8. Then we devote a convolutional form to these codes, such that their generator matrix can be obtained by elementary row and column operations on the parity-check matrix. Finally, we show that the free distance of the convolutional codes is equal to the minimum distance of their block counterparts.

  1. Inferring low-dimensional microstructure representations using convolutional neural networks

    CERN Document Server

    Lubbers, Nicholas; Barros, Kipton

    2016-01-01

    We apply recent advances in machine learning and computer vision to a central problem in materials informatics: The statistical representation of microstructural images. We use activations in a pre-trained convolutional neural network to provide a high-dimensional characterization of a set of synthetic microstructural images. Next, we use manifold learning to obtain a low-dimensional embedding of this statistical characterization. We show that the low-dimensional embedding extracts the parameters used to generate the images. According to a variety of metrics, the convolutional neural network method yields dramatically better embeddings than the analogous method derived from two-point correlations alone.
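    The pipeline described above has two stages: a high-dimensional descriptor built from pre-trained conv-net activations, followed by manifold learning for the low-dimensional embedding. Below is a hedged sketch of that pipeline assuming PyTorch/torchvision and scikit-learn are available; the choice of VGG-16, the spatially pooled convolutional features, and Isomap are illustrative and not necessarily those of the paper.

```python
# Sketch: characterize images by activations of a pre-trained conv-net, then
# embed the characterization in low dimension with manifold learning.
# VGG-16 features + Isomap are illustrative choices, not the paper's setup.
import torch
from torchvision import models
from sklearn.manifold import Isomap

cnn = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).eval()

def cnn_descriptor(batch):
    """batch: float tensor (N, 3, 224, 224), ImageNet-normalized."""
    with torch.no_grad():
        feats = cnn.features(batch)           # (N, 512, 7, 7) conv activations
    return feats.mean(dim=(2, 3)).numpy()     # spatial average pooling -> (N, 512)

images = torch.rand(40, 3, 224, 224)          # placeholder micrograph patches
high_dim = cnn_descriptor(images)
low_dim = Isomap(n_components=2).fit_transform(high_dim)
print(low_dim.shape)                          # (40, 2) embedding coordinates
```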

  2. Generalized Binomial Convolution of the mth Powers of the Consecutive Integers with the General Fibonacci Sequence

    Directory of Open Access Journals (Sweden)

    Kılıç Emrah

    2016-12-01

    Full Text Available In this paper, we consider Gauthier’s generalized convolution and then define its binomial analogue as well as alternating binomial analogue. We formulate these convolutions and give some applications of them.
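    For concreteness, the sums in question can be evaluated directly: a binomial convolution of the m-th powers of the consecutive integers with a general two-term recurrence sequence, and its alternating analogue. The sketch below only computes the sums numerically; the closed-form identities of the paper are not reproduced, and the recurrence parameters are illustrative.

```python
# Direct numeric evaluation of a binomial convolution of the m-th powers of the
# integers with a general Fibonacci-type sequence W (W_0=a, W_1=b, W_n=p*W_{n-1}+q*W_{n-2}),
# plus its alternating analogue. Closed-form identities are not reproduced here.
from math import comb

def general_fibonacci(n, a=0, b=1, p=1, q=1):
    w = [a, b]
    while len(w) <= n:
        w.append(p * w[-1] + q * w[-2])
    return w

def binomial_convolution(n, m, **seq):
    w = general_fibonacci(n, **seq)
    return sum(comb(n, k) * k**m * w[k] for k in range(n + 1))

def alternating_binomial_convolution(n, m, **seq):
    w = general_fibonacci(n, **seq)
    return sum((-1)**k * comb(n, k) * k**m * w[k] for k in range(n + 1))

print(binomial_convolution(10, 2))              # ordinary Fibonacci case
print(alternating_binomial_convolution(10, 2))
```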

  3. Estimating the number of sources in a noisy convolutive mixture using BIC

    DEFF Research Database (Denmark)

    Olsson, Rasmus Kongsgaard; Hansen, Lars Kai

    2004-01-01

    The number of source signals in a noisy convolutive mixture is determined based on the exact log-likelihoods of the candidate models. In (Olsson and Hansen, 2004), a novel probabilistic blind source separator was introduced that is based solely on the time-varying second-order statistics; the posterior probability of the sources conditioned on the observations is obtained. The log-likelihood of the parameters is computed exactly in the process, which allows for model evidence comparison assisted by the BIC approximation. This is used to determine the activity pattern of two speakers...
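    The model-order selection step above reduces to computing a BIC score per candidate and keeping the minimum. A generic hedged sketch follows; the log-likelihoods, parameter counts, and sample size are placeholders, not values from the paper.

```python
# Generic BIC-based comparison of candidate models (here: number of sources),
# given the maximized log-likelihood of each candidate. Numbers are placeholders.
import math

def bic(log_likelihood, n_params, n_obs):
    return -2.0 * log_likelihood + n_params * math.log(n_obs)

n_obs = 20000                                   # observed samples
candidates = {                                  # source count -> (logL, #parameters)
    1: (-31200.0, 40),
    2: (-29850.0, 80),
    3: (-29800.0, 120),
}

scores = {k: bic(ll, p, n_obs) for k, (ll, p) in candidates.items()}
best = min(scores, key=scores.get)
print(scores, "-> selected number of sources:", best)
```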

  4. The Kinetic Energy of Hydrocarbons as a Function of Electron Density and Convolutional Neural Networks

    CERN Document Server

    Yao, Kun

    2015-01-01

    We demonstrate a convolutional neural network trained to reproduce the Kohn-Sham kinetic energy of hydrocarbons from electron density. The output of the network is used as a non-local correction to the conventional local and semi-local kinetic functionals. We show that this approximation qualitatively reproduces Kohn-Sham potential energy surfaces when used with conventional exchange correlation functionals. Numerical noise inherited from the non-linearity of the neural network is identified as the major challenge for the model. Finally we examine the features in the density learned by the neural network to anticipate the prospects of generalizing these models.

  5. Radio Signal Augmentation for Improved Training of a Convolutional Neural Network

    Science.gov (United States)

    2016-09-01

    parameters of the network. Examples of these parameters include: input data dimensions and channels (e.g., image size and colors); size of convolutional filters; number of convolutional filters; pooling/downsampling size and method (e.g., max-pool or average); and number of convolution and pooling layers.
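    A minimal sketch that makes those hyperparameters explicit in code is given below, assuming PyTorch; the channel counts, filter sizes, pooling choices, and class count are illustrative values, not the network from the report (e.g., two input channels standing in for I/Q radio samples).

```python
# A small CNN that surfaces the hyperparameters listed above: input channels,
# convolutional filter size and count, pooling size/method, number of stages.
# All values are illustrative, not those of the report.
import torch
from torch import nn

class SmallCNN(nn.Module):
    def __init__(self, in_channels=2, n_classes=11):       # e.g., I/Q channels
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),  # 32 filters, 3x3
            nn.ReLU(),
            nn.MaxPool2d(2),                                       # max-pool, size 2
            nn.Conv2d(32, 64, kernel_size=3, padding=1),           # 64 filters, 3x3
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * 8 * 8, n_classes)         # for 32x32 inputs

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = SmallCNN()
print(model(torch.rand(4, 2, 32, 32)).shape)    # torch.Size([4, 11])
```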

  6. Single-trial EEG RSVP classification using convolutional neural networks

    Science.gov (United States)

    Shamwell, Jared; Lee, Hyungtae; Kwon, Heesung; Marathe, Amar R.; Lawhern, Vernon; Nothwang, William

    2016-05-01

    Traditionally, Brain-Computer Interfaces (BCI) have been explored as a means to return function to paralyzed or otherwise debilitated individuals. An emerging use for BCIs is in human-autonomy sensor fusion where physiological data from healthy subjects is combined with machine-generated information to enhance the capabilities of artificial systems. While human-autonomy fusion of physiological data and computer vision have been shown to improve classification during visual search tasks, to date these approaches have relied on separately trained classification models for each modality. We aim to improve human-autonomy classification performance by developing a single framework that builds codependent models of human electroencephalograph (EEG) and image data to generate fused target estimates. As a first step, we developed a novel convolutional neural network (CNN) architecture and applied it to EEG recordings of subjects classifying target and non-target image presentations during a rapid serial visual presentation (RSVP) image triage task. The low signal-to-noise ratio (SNR) of EEG inherently limits the accuracy of single-trial classification and when combined with the high dimensionality of EEG recordings, extremely large training sets are needed to prevent overfitting and achieve accurate classification from raw EEG data. This paper explores a new deep CNN architecture for generalized multi-class, single-trial EEG classification across subjects. We compare classification performance from the generalized CNN architecture trained across all subjects to the individualized XDAWN, HDCA, and CSP neural classifiers which are trained and tested on single subjects. Preliminary results show that our CNN meets and slightly exceeds the performance of the other classifiers despite being trained across subjects.

  7. Efficient Partitioning of Algorithms for Long Convolutions and their Mapping onto Architectures

    NARCIS (Netherlands)

    Bierens, L.; Deprettere, E.

    1998-01-01

    We present an efficient approach for the partitioning of algorithms implementing long convolutions. The dependence graph (DG) of a convolution algorithm is locally sequential globally parallel (LSGP) partitioned into smaller, less complex convolution algorithms. The LSGP partitioned DG is mapped ont

  8. General Purpose Convolution Algorithm in S4 Classes by Means of FFT

    Directory of Open Access Journals (Sweden)

    Peter Ruckdeschel

    2014-08-01

    By means of object orientation, this default algorithm is overloaded by more specific algorithms where possible, in particular where explicit convolution formulae are available. Our focus is on the R package distr, which implements this approach, overloading the operator + for convolution; based on this convolution, we define a whole arithmetic of mathematical operations acting on distribution objects, comprising the operators +, -, *, /, and ^.
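    The default FFT-based convolution the record refers to can be sketched generically: discretize both densities on a common equally spaced grid, multiply their transforms, and invert. The sketch below uses NumPy rather than the distr package itself; the grid, the zero-padding, and the Gaussian test case are illustrative.

```python
# Generic FFT-based convolution of two densities tabulated on a common grid
# (the default algorithm the record describes); NumPy sketch, not the R package.
import numpy as np

def convolve_densities(pdf_x, pdf_y, grid):
    """Density of X + Y from densities sampled on an equally spaced grid."""
    step = grid[1] - grid[0]
    n = len(grid)
    fx = np.fft.rfft(pdf_x, 2 * n)            # zero-padded to avoid circular wrap
    fy = np.fft.rfft(pdf_y, 2 * n)
    conv = np.fft.irfft(fx * fy, 2 * n)[:2 * n - 1] * step
    sum_grid = 2 * grid[0] + step * np.arange(2 * n - 1)
    return sum_grid, conv

grid = np.linspace(-8, 8, 1601)
normal = np.exp(-grid**2 / 2) / np.sqrt(2 * np.pi)
s, density = convolve_densities(normal, normal, grid)    # N(0,1) + N(0,1) = N(0,2)
reference = np.exp(-s**2 / 4) / np.sqrt(4 * np.pi)
print(np.max(np.abs(density - reference)))                # small discretization error
```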

  9. Application of the Convolution Formalism to the Ocean Tide Potential: Results from the Gravity Recovery and Climate Experiment (GRACE)

    Science.gov (United States)

    Desai, S. D.; Yuan, D. -N.

    2006-01-01

    A computationally efficient approach to reducing omission errors in ocean tide potential models is derived and evaluated using data from the Gravity Recovery and Climate Experiment (GRACE) mission. Ocean tide height models are usually explicitly available at a few frequencies, and a smooth unit response is assumed to infer the response across the tidal spectrum. The convolution formalism of Munk and Cartwright (1966) models this response function with a Fourier series. This allows the total ocean tide height, and therefore the total ocean tide potential, to be modeled as a weighted sum of past, present, and future values of the tide-generating potential. Previous applications of the convolution formalism have usually been limited to tide height models, but we extend it to ocean tide potential models. We use luni-solar ephemerides to derive the required tide-generating potential so that the complete spectrum of the ocean tide potential is efficiently represented. In contrast, the traditionally adopted harmonic model of the ocean tide potential requires the explicit sum of the contributions from individual tidal frequencies. It is therefore subject to omission errors from neglected frequencies and is computationally more intensive. Intersatellite range rate data from the GRACE mission are used to compare convolution and harmonic models of the ocean tide potential. The monthly range rate residual variance is smaller by 4-5%, and the daily residual variance is smaller by as much as 15% when using the convolution model than when using a harmonic model that is defined by twice the number of parameters.

  10. Robust Fusion of Irregularly Sampled Data Using Adaptive Normalized Convolution

    NARCIS (Netherlands)

    Pham, T.Q.; Van Vliet, L.J.; Schutte, K.

    2006-01-01

    We present a novel algorithm for image fusion from irregularly sampled data. The method is based on the framework of normalized convolution (NC), in which the local signal is approximated through a projection onto a subspace. The use of polynomial basis functions in this paper makes NC equivalent to
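    For reference, the zeroth-order (constant-basis) special case of normalized convolution can be written in a few lines: convolve both the certainty-weighted signal and the certainty map with an applicability function and divide. The sketch below assumes SciPy and a Gaussian applicability; the paper's polynomial-basis NC generalizes this constant-basis case.

```python
# Zeroth-order normalized convolution: interpolate an image known only at
# irregularly placed samples using a certainty map and a Gaussian applicability.
# The polynomial-basis NC of the paper reduces to this when the basis is constant.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)
yy, xx = np.mgrid[0:128, 0:128]
image = np.sin(xx / 10.0) + np.cos(yy / 14.0)               # ground-truth signal

certainty = (rng.random(image.shape) < 0.2).astype(float)   # 20% of pixels sampled
samples = image * certainty                                  # unsampled pixels are zero

num = gaussian_filter(samples, sigma=2.0)                    # conv(signal*certainty, a)
den = gaussian_filter(certainty, sigma=2.0)                  # conv(certainty, a)
reconstruction = num / np.maximum(den, 1e-8)

print(np.mean(np.abs(reconstruction - image)))               # small reconstruction error
```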

  11. A single Chip Implementation for Fast Convolution of Long Sequences

    NARCIS (Netherlands)

    Zwartenkot, H.T.J.; Boerrigter, M.J.G.; Bierens, L.H.J.; Smit, J.

    1996-01-01

    Usually, long convolutions are computed by programmable DSP boards using long FFTs. Typical operational requirements such as minimum power dissipation, minimum volume and high dynamic range/accuracy, make this solution often inefficient and even unacceptable. In this paper we present a single chip f

  12. Face recognition: a convolutional neural-network approach.

    Science.gov (United States)

    Lawrence, S; Giles, C L; Tsoi, A C; Back, A D

    1997-01-01

    We present a hybrid neural-network for human face recognition which compares favourably with other methods. The system combines local image sampling, a self-organizing map (SOM) neural network, and a convolutional neural network. The SOM provides a quantization of the image samples into a topological space where inputs that are nearby in the original space are also nearby in the output space, thereby providing dimensionality reduction and invariance to minor changes in the image sample, and the convolutional neural network provides partial invariance to translation, rotation, scale, and deformation. The convolutional network extracts successively larger features in a hierarchical set of layers. We present results using the Karhunen-Loeve transform in place of the SOM, and a multilayer perceptron (MLP) in place of the convolutional network for comparison. We use a database of 400 images of 40 individuals which contains quite a high degree of variability in expression, pose, and facial details. We analyze the computational complexity and discuss how new classes could be added to the trained recognizer.

  13. Convolution operators defined by singular measures on the motion group

    CERN Document Server

    Brandolini, Luca; Thangavelu, Sundaram; Travaglini, Giancarlo

    2010-01-01

    This paper contains an $L^{p}$ improving result for convolution operators defined by singular measures associated to hypersurfaces on the motion group. This needs only mild geometric properties of the surfaces, and it extends earlier results on Radon type transforms on $\mathbb{R}^{n}$. The proof relies on the harmonic analysis on the motion group.

  14. Behaviour at infinity of solutions of twisted convolution equations

    Energy Technology Data Exchange (ETDEWEB)

    Volchkov, Valerii V; Volchkov, Vitaly V [Donetsk National University, Donetsk (Ukraine)

    2012-02-28

    We obtain a precise characterization of the minimal rate of growth at infinity of non-trivial solutions of twisted convolution equations in unbounded domains of C^n. As an application, we obtain definitive versions of the two-radii theorem for twisted spherical means.

  15. Two-level convolution formula for nuclear structure function

    Science.gov (United States)

    Ma, Boqiang

    1990-05-01

    A two-level convolution formula for the nuclear structure function is derived by considering the nucleus as a composite system of baryons and mesons, which are themselves composite systems of quarks and gluons. The results show that the European Muon Collaboration effect cannot be explained by nuclear effects such as nucleon Fermi motion and nuclear binding contributions.

  16. Two-Dimensional Tail-Biting Convolutional Codes

    CERN Document Server

    Alfandary, Liam

    2011-01-01

    The multidimensional convolutional codes are an extension of the notion of convolutional codes (CCs) to several dimensions of time. This paper explores the class of two-dimensional convolutional codes (2D CCs) and 2D tail-biting convolutional codes (2D TBCCs), in particular, from several aspects. First, we derive several basic algebraic properties of these codes, applying algebraic methods in order to find bijective encoders, create parity check matrices and to inverse encoders. Next, we discuss the minimum distance and weight distribution properties of these codes. Extending an existing tree-search algorithm to two dimensions, we apply it to find codes with high minimum distance. Word-error probability asymptotes for sample codes are given and compared with other codes. The results of this approach suggest that 2D TBCCs can perform better than comparable 1D TBCCs or other codes. We then present several novel iterative suboptimal algorithms for soft decoding 2D CCs, which are based on belief propagation. Two ...

  17. Yetter-Drinfel'd Module and Convolution Module

    Institute of Scientific and Technical Information of China (English)

    张良云; 王栓宏

    2002-01-01

    In this paper, we first give a necessary and sufficient condition for a Hopf algebra to be a Yetter-Drinfel'd module, and prove that the finite dual of a Yetter-Drinfel'd module is still a Yetter-Drinfel'd module. Finally, we introduce the concept of a convolution module.

  18. On the generalized Hamming weights of convolutional codes

    NARCIS (Netherlands)

    Rosenthal, J.; York, E.V.

    1995-01-01

    Motivated by applications in cryptology K. Wei introduced in 1991 the concept of a generalized Hamming weight for a linear block code. In this paper we define generalized Hamming weights for the class of convolutional codes and we derive several of their basic properties. By restricting to convoluti

  19. Maximum-likelihood estimation of circle parameters via convolution.

    Science.gov (United States)

    Zelniker, Emanuel E; Clarkson, I Vaughan L

    2006-04-01

    The accurate fitting of a circle to noisy measurements of circumferential points is a much studied problem in the literature. In this paper, we present an interpretation of the maximum-likelihood estimator (MLE) and the Delogne-Kåsa estimator (DKE) for circle-center and radius estimation in terms of convolution on an image which is ideal in a certain sense. We use our convolution-based MLE approach to find good estimates for the parameters of a circle in digital images. In digital images, it is then possible to treat these estimates as preliminary estimates into various other numerical techniques which further refine them to achieve subpixel accuracy. We also investigate the relationship between the convolution of an ideal image with a "phase-coded kernel" (PCK) and the MLE. This is related to the "phase-coded annulus" which was introduced by Atherton and Kerbyson who proposed it as one of a number of new convolution kernels for estimating circle center and radius. We show that the PCK is an approximate MLE (AMLE). We compare our AMLE method to the MLE and the DKE as well as the Cramér-Rao Lower Bound in ideal images and in both real and synthetic digital images.
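    The convolution view above can be exercised directly: correlating a noisy edge image with an annulus kernel of the candidate radius concentrates evidence at the circle centre, whose position is then read off as the peak. The sketch below shows only this accumulation step (not the full MLE or phase-coded-kernel analysis); the image size, radius, and noise level are illustrative.

```python
# Accumulation step of convolution-based circle-centre estimation: correlate a
# noisy point image with an annulus of the candidate radius and take the peak.
# Not the full MLE/PCK analysis of the paper; parameters are illustrative.
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(2)
size, radius, centre = 201, 40, (120, 85)

theta = rng.uniform(0, 2 * np.pi, 400)
noise = rng.normal(0, 1.5, (2, 400))
rows = np.round(centre[0] + radius * np.sin(theta) + noise[0]).astype(int)
cols = np.round(centre[1] + radius * np.cos(theta) + noise[1]).astype(int)
image = np.zeros((size, size))
image[np.clip(rows, 0, size - 1), np.clip(cols, 0, size - 1)] = 1.0

yy, xx = np.mgrid[-radius - 3:radius + 4, -radius - 3:radius + 4]
rr = np.hypot(yy, xx)
annulus = ((rr > radius - 2) & (rr < radius + 2)).astype(float)

accumulator = fftconvolve(image, annulus, mode="same")
estimate = np.unravel_index(np.argmax(accumulator), accumulator.shape)
print(estimate)      # expected to land within a pixel or two of (120, 85)
```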

  20. Robust Fusion of Irregularly Sampled Data Using Adaptive Normalized Convolution

    NARCIS (Netherlands)

    Pham, T.Q.; Vliet, L.J. van; Schutte, K.

    2006-01-01

    We present a novel algorithm for image fusion from irregularly sampled data. The method is based on the framework of normalized convolution (NC), in which the local signal is approximated through a projection onto a subspace. The use of polynomial basis functions in this paper makes NC equivalent to

  1. A single Chip Implementation for Fast Convolution of Long Sequences

    NARCIS (Netherlands)

    Zwartenkot, H.T.J.; Boerrigter, M.J.G.; Bierens, L.H.J.; Smit, J.

    1996-01-01

    Usually, long convolutions are computed by programmable DSP boards using long FFTs. Typical operational requirements such as minimum power dissipation, minimum volume and high dynamic range/accuracy, make this solution often inefficient and even unacceptable. In this paper we present a single chip

  2. A convolutional neural network approach for objective video quality assessment.

    Science.gov (United States)

    Le Callet, Patrick; Viard-Gaudin, Christian; Barba, Dominique

    2006-09-01

    This paper describes an application of neural networks in the field of objective measurement methods designed to automatically assess the perceived quality of digital videos. This challenging issue aims to emulate human judgment and to replace very complex and time-consuming subjective quality assessment. Several metrics have been proposed in the literature to tackle this issue. They are based on a general framework that combines different stages, each of them addressing complex problems. The ambition of this paper is not to present a global perfect quality metric but rather to focus on an original way to use neural networks in such a framework in the context of a reduced reference (RR) quality metric. Especially, we point out the interest of such a tool for combining features and pooling them in order to compute quality scores. The proposed approach solves some problems inherent to objective metrics that should predict the subjective quality score obtained using the single stimulus continuous quality evaluation (SSCQE) method. The latter has been adopted by the video quality expert group (VQEG) in its recently finalized reduced referenced and no reference (RRNR-TV) test plan. The originality of such an approach compared to previous attempts to use neural networks for quality assessment relies on the use of a convolutional neural network (CNN) that allows a continuous time scoring of the video. Objective features are extracted on a frame-by-frame basis on both the reference and the distorted sequences; they are derived from a perceptual-based representation and integrated along the temporal axis using a time-delay neural network (TDNN). Experiments conducted on different MPEG-2 videos, with bit rates ranging from 2 to 6 Mb/s, show the effectiveness of the proposed approach to get a plausible model of temporal pooling from the human vision system (HVS) point of view. More specifically, a linear correlation criterion, between objective and subjective scoring, of up to 0.92 has been obtained on

  3. Text-Attentional Convolutional Neural Network for Scene Text Detection

    Science.gov (United States)

    He, Tong; Huang, Weilin; Qiao, Yu; Yao, Jian

    2016-06-01

    Recent deep learning models have demonstrated strong capabilities for classifying text and non-text components in natural images. They extract a high-level feature computed globally from a whole image component (patch), where the cluttered background information may dominate true text features in the deep representation. This leads to less discriminative power and poorer robustness. In this work, we present a new system for scene text detection by proposing a novel Text-Attentional Convolutional Neural Network (Text-CNN) that particularly focuses on extracting text-related regions and features from the image components. We develop a new learning mechanism to train the Text-CNN with multi-level and rich supervised information, including text region mask, character label, and binary text/nontext information. The rich supervision information enables the Text-CNN with a strong capability for discriminating ambiguous texts, and also increases its robustness against complicated background components. The training process is formulated as a multi-task learning problem, where low-level supervised information greatly facilitates the main task of text/non-text classification. In addition, a powerful low-level detector called Contrast-Enhancement Maximally Stable Extremal Regions (CE-MSERs) is developed, which extends the widely-used MSERs by enhancing intensity contrast between text patterns and background. This allows it to detect highly challenging text patterns, resulting in a higher recall. Our approach achieved promising results on the ICDAR 2013 dataset, with an F-measure of 0.82, improving the state-of-the-art results substantially.

  4. Text-Attentional Convolutional Neural Network for Scene Text Detection.

    Science.gov (United States)

    He, Tong; Huang, Weilin; Qiao, Yu; Yao, Jian

    2016-06-01

    Recent deep learning models have demonstrated strong capabilities for classifying text and non-text components in natural images. They extract a high-level feature globally computed from a whole image component (patch), where the cluttered background information may dominate true text features in the deep representation. This leads to less discriminative power and poorer robustness. In this paper, we present a new system for scene text detection by proposing a novel text-attentional convolutional neural network (Text-CNN) that particularly focuses on extracting text-related regions and features from the image components. We develop a new learning mechanism to train the Text-CNN with multi-level and rich supervised information, including text region mask, character label, and binary text/non-text information. The rich supervision information enables the Text-CNN with a strong capability for discriminating ambiguous texts, and also increases its robustness against complicated background components. The training process is formulated as a multi-task learning problem, where low-level supervised information greatly facilitates the main task of text/non-text classification. In addition, a powerful low-level detector called contrast-enhancement maximally stable extremal regions (MSERs) is developed, which extends the widely used MSERs by enhancing intensity contrast between text patterns and background. This allows it to detect highly challenging text patterns, resulting in a higher recall. Our approach achieved promising results on the ICDAR 2013 data set, with an F-measure of 0.82, substantially improving the state-of-the-art results.

  5. Text-Attentional Convolutional Neural Networks for Scene Text Detection.

    Science.gov (United States)

    He, Tong; Huang, Weilin; Qiao, Yu; Yao, Jian

    2016-03-28

    Recent deep learning models have demonstrated strong capabilities for classifying text and non-text components in natural images. They extract a high-level feature computed globally from a whole image component (patch), where the cluttered background information may dominate true text features in the deep representation. This leads to less discriminative power and poorer robustness. In this work, we present a new system for scene text detection by proposing a novel Text-Attentional Convolutional Neural Network (Text-CNN) that particularly focuses on extracting text-related regions and features from the image components. We develop a new learning mechanism to train the Text-CNN with multi-level and rich supervised information, including text region mask, character label, and binary text/nontext information. The rich supervision information enables the Text-CNN with a strong capability for discriminating ambiguous texts, and also increases its robustness against complicated background components. The training process is formulated as a multi-task learning problem, where low-level supervised information greatly facilitates the main task of text/non-text classification. In addition, a powerful low-level detector called Contrast-Enhancement Maximally Stable Extremal Regions (CE-MSERs) is developed, which extends the widely-used MSERs by enhancing intensity contrast between text patterns and background. This allows it to detect highly challenging text patterns, resulting in a higher recall. Our approach achieved promising results on the ICDAR 2013 dataset, with an F-measure of 0.82, improving the state-of-the-art results substantially.

  6. Multi-Scale Rotation-Invariant Convolutional Neural Networks for Lung Texture Classification.

    Science.gov (United States)

    Wang, Qiangchang; Zheng, Yuanjie; Yang, Gongping; Jin, Weidong; Chen, Xinjian; Yin, Yilong

    2017-03-21

    We propose a new Multi-scale Rotation-invariant Convolutional Neural Network (MRCNN) model for classifying various lung tissue types on high-resolution computed tomography (HRCT). MRCNN employs Gabor-local binary pattern (Gabor-LBP), which introduces a good property in image analysis: invariance to image scales and rotations. In addition, we offer an approach to deal with the problems caused by the imbalanced number of samples between different classes in most of the existing works, accomplished by changing the overlapping size between the adjacent patches. Experimental results on a public Interstitial Lung Disease (ILD) database show a superior performance of the proposed method over the state-of-the-art.

  7. Architectural style classification of Mexican historical buildings using deep convolutional neural networks and sparse features

    Science.gov (United States)

    Obeso, Abraham Montoya; Benois-Pineau, Jenny; Acosta, Alejandro Álvaro Ramirez; Vázquez, Mireya Saraí García

    2017-01-01

    We propose a convolutional neural network to classify images of buildings using sparse features at the network's input in conjunction with primary color pixel values. As a result, a trained neuronal model is obtained to classify Mexican buildings in three classes according to the architectural styles: prehispanic, colonial, and modern with an accuracy of 88.01%. The problem of poor information in a training dataset is faced due to the unequal availability of cultural material. We propose a data augmentation and oversampling method to solve this problem. The results are encouraging and allow for prefiltering of the content in the search tasks.

  8. Convolutional Sparse Coding for Static and Dynamic Images Analysis

    Directory of Open Access Journals (Sweden)

    B. A. Knyazev

    2014-01-01

    Full Text Available The objective of this work is to improve the performance of static and dynamic object recognition. For this purpose a new image representation model and a transformation algorithm are proposed. It is examined and illustrated that limitations of previous methods make it difficult to achieve this objective. Static images, specifically handwritten digits of the widely used MNIST dataset, are the primary focus of this work. Nevertheless, preliminary qualitative results of image sequence analysis based on the suggested model are presented. A general analytical form of the Gabor function, often employed to generate filters, is described and discussed. In this research, this description is required for computing parameters of responses returned by our algorithm. The recursive convolution operator is introduced, which allows extracting free shape features of visual objects. The developed parametric representation model is compared with sparse coding based on energy function minimization. In the experimental part of this work, errors of estimating the parameters of responses are determined. Also, parameter statistics and their correlation coefficients for more than 10^6 responses extracted from the MNIST dataset are calculated. It is demonstrated that these data correspond well with previous research studies on Gabor filters as well as with works on visual cortex primary cells of mammals, in which similar responses were observed. A comparative test of the developed model with three other approaches is conducted; speed and accuracy scores of handwritten digits classification are presented. A support vector machine with a linear or radial basis function is used for classification of images and their representations while principal component analysis is used in some cases to prepare data beforehand. High accuracy is not attained due to the specific difficulties of combining our model with a support vector machine (a 3.99% error rate). However, another method is
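    The Gabor machinery referenced above (an analytical filter form plus convolution to obtain responses) can be sketched briefly; the kernel parameters below and the use of a plain convolution are illustrative and do not reproduce the paper's recursive convolution operator or its parameter-estimation step.

```python
# Build a 2-D Gabor kernel from its standard analytical parameters and obtain a
# response map by convolution. Parameters are illustrative; the paper's recursive
# convolution operator and response parameterization are not reproduced here.
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(size=31, sigma=5.0, wavelength=8.0, theta=0.0, phase=0.0):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)       # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / wavelength + phase)
    return envelope * carrier

image = np.random.default_rng(3).random((64, 64))
response = fftconvolve(image, gabor_kernel(theta=np.pi / 4), mode="same")
print(response.shape)       # (64, 64) response map for one orientation/scale
```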

  9. Image super resolution using deep convolutional network based on topology aggregation structure

    Science.gov (United States)

    Yang, Fan; Xu, Wei; Tian, Yapeng

    2017-08-01

    In this paper, we propose a new architecture of the deep convolutional network for single-image super-resolution (SR). Our convolutional network is inspired by GoogLeNet and ResNeXt, improving on VDSR, a representative state-of-the-art method for deep learning-based SR. In the field of image super-resolution, we pioneer the use of the topology aggregation method to improve the network structure. Our network is constructed by repeating the same blocks, and each block has the same uniform topology aggregation structure. This design reduces the number of network parameters, so the depth of the network can be increased, thereby enhancing the image super-resolution performance. The design of the network takes into account both computing performance and practicality. In addition to the performance, the size of the model is also important. Experiments show that if the depth of our network is 20 layers, the same as VDSR, our model is less than one third the size of VDSR and the performance is as good as VDSR. Moreover, if we set our model size to be the same as VDSR's, the depth of our network can be increased to 32 layers, and the performance is better than VDSR.
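    The "uniform topology aggregation" block referenced above is in the spirit of ResNeXt's aggregated transforms, which can be realized with a grouped convolution plus a skip connection. The sketch below assumes PyTorch; the channel widths, cardinality, and bottleneck size are illustrative and not the paper's exact block.

```python
# Aggregated residual block in the ResNeXt sense: many identical low-dimensional
# transforms realized as one grouped convolution, plus a skip connection.
# Widths and cardinality are illustrative, not the paper's exact block.
import torch
from torch import nn

class AggregatedBlock(nn.Module):
    def __init__(self, channels=64, cardinality=32, bottleneck=4):
        super().__init__()
        width = cardinality * bottleneck
        self.body = nn.Sequential(
            nn.Conv2d(channels, width, kernel_size=1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(width, width, kernel_size=3, padding=1,
                      groups=cardinality, bias=False),        # the aggregation step
            nn.ReLU(inplace=True),
            nn.Conv2d(width, channels, kernel_size=1, bias=False),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))

block = AggregatedBlock()
print(block(torch.rand(1, 64, 48, 48)).shape)    # torch.Size([1, 64, 48, 48])
```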

  10. An Implementation of Error Minimization Data Transmission in OFDM using Modified Convolutional Code

    Directory of Open Access Journals (Sweden)

    Hendy Briantoro

    2016-04-01

    Full Text Available This paper presents error minimization for data transmission in an OFDM system. Conventional systems usually use channel coding such as a BCH code or a convolutional code, but the performance of the BCH code and the convolutional code in an OFDM system implementation is not good. The error bit rate of the OFDM system without channel coding is 5.77%; a convolutional code with code rate 1/2 reduces the error bits only to 3.85%. We therefore propose an OFDM system with a Modified Convolutional Code. In this implementation, we used a Software Defined Radio (SDR), namely the Universal Software Radio Peripheral (USRP) NI 2920, as the transmitter and receiver. The OFDM system using the Modified Convolutional Code is able to recover all received characters and thus reduces the error bits to 0%. The Modified Convolutional Code improves performance by about 1 dB at a BER of 10^-4 compared with the BCH code and the convolutional code, so the Modified Convolutional Code performs better than the BCH code or the convolutional code. Keywords: OFDM, BCH Code, Convolutional Code, Modified Convolutional Code, SDR, USRP

  11. Voltage measurements at the vacuum post-hole convolute of the Z pulsed-power accelerator

    Directory of Open Access Journals (Sweden)

    E. M. Waisman

    2014-12-01

    Full Text Available Presented are voltage measurements taken near the load region on the Z pulsed-power accelerator using an inductive voltage monitor (IVM). Specifically, the IVM was connected to, and thus monitored the voltage at, the bottom level of the accelerator’s vacuum double post-hole convolute. Additional voltage and current measurements were taken at the accelerator’s vacuum-insulator stack (at a radius of 1.6 m) by using standard D-dot and B-dot probes, respectively. During postprocessing, the measurements taken at the stack were translated to the location of the IVM measurements by using a lossless propagation model of the Z accelerator’s magnetically insulated transmission lines (MITLs) and a lumped inductor model of the vacuum post-hole convolute. Across a wide variety of experiments conducted on the Z accelerator, the voltage histories obtained from the IVM and the lossless propagation technique agree well in overall shape and magnitude. However, large-amplitude, high-frequency oscillations are more pronounced in the IVM records. It is unclear whether these larger oscillations represent true voltage oscillations at the convolute or if they are due to noise pickup and/or transit-time effects and other resonant modes in the IVM. Results using a transit-time-correction technique and Fourier analysis support the latter. Regardless of which interpretation is correct, both true voltage oscillations and the excitement of resonant modes could be the result of transient electrical breakdowns in the post-hole convolute, though more information is required to determine definitively if such breakdowns occurred. Despite the larger oscillations in the IVM records, the general agreement found between the lossless propagation results and the results of the IVM shows that large voltages are transmitted efficiently through the MITLs on Z. These results are complementary to previous studies [R. D. McBride et al., Phys. Rev. ST Accel. Beams 13, 120401 (2010)].

  12. A perceptual quantization strategy for HEVC based on a convolutional neural network trained on natural images

    Science.gov (United States)

    Alam, Md Mushfiqul; Nguyen, Tuan D.; Hagan, Martin T.; Chandler, Damon M.

    2015-09-01

    Fast prediction models of local distortion visibility and local quality can potentially make modern spatiotemporally adaptive coding schemes feasible for real-time applications. In this paper, a fast convolutional-neural-network based quantization strategy for HEVC is proposed. Local artifact visibility is predicted via a network trained on data derived from our improved contrast gain control model. The contrast gain control model was trained on our recent database of local distortion visibility in natural scenes [Alam et al. JOV 2014]. Furthermore, a structural facilitation model was proposed to capture effects of recognizable structures on distortion visibility via the contrast gain control model. Our results provide on average 11% improvements in compression efficiency for spatial luma channel of HEVC while requiring almost one hundredth of the computational time of an equivalent gain control model. Our work opens the doors for similar techniques which may work for different forthcoming compression standards.

  13. BrainNetCNN: Convolutional neural networks for brain networks; towards predicting neurodevelopment.

    Science.gov (United States)

    Kawahara, Jeremy; Brown, Colin J; Miller, Steven P; Booth, Brian G; Chau, Vann; Grunau, Ruth E; Zwicker, Jill G; Hamarneh, Ghassan

    2017-02-01

    We propose BrainNetCNN, a convolutional neural network (CNN) framework to predict clinical neurodevelopmental outcomes from brain networks. In contrast to the spatially local convolutions done in traditional image-based CNNs, our BrainNetCNN is composed of novel edge-to-edge, edge-to-node and node-to-graph convolutional filters that leverage the topological locality of structural brain networks. We apply the BrainNetCNN framework to predict cognitive and motor developmental outcome scores from structural brain networks of infants born preterm. Diffusion tensor images (DTI) of preterm infants, acquired between 27 and 46 weeks gestational age, were used to construct a dataset of structural brain connectivity networks. We first demonstrate the predictive capabilities of BrainNetCNN on synthetic phantom networks with simulated injury patterns and added noise. BrainNetCNN outperforms a fully connected neural-network with the same number of model parameters on both phantoms with focal and diffuse injury patterns. We then apply our method to the task of joint prediction of Bayley-III cognitive and motor scores, assessed at 18 months of age, adjusted for prematurity. We show that our BrainNetCNN framework outperforms a variety of other methods on the same data. Furthermore, BrainNetCNN is able to identify an infant's postmenstrual age to within about 2 weeks. Finally, we explore the high-level features learned by BrainNetCNN by visualizing the importance of each connection in the brain with respect to predicting the outcome scores. These findings are then discussed in the context of the anatomy and function of the developing preterm infant brain.

  14. Transferring Deep Convolutional Neural Networks for the Scene Classification of High-Resolution Remote Sensing Imagery

    Directory of Open Access Journals (Sweden)

    Fan Hu

    2015-11-01

    Learning efficient image representations is at the core of the scene classification task of remote sensing imagery. The existing methods for solving the scene classification task, based on either feature coding approaches with low-level hand-engineered features or unsupervised feature learning, can only generate mid-level image features with limited representative ability, which essentially prevents them from achieving better performance. Recently, the deep convolutional neural networks (CNNs), which are hierarchical architectures trained on large-scale datasets, have shown astounding performance in object recognition and detection. However, it is still not clear how to use these deep convolutional neural networks for high-resolution remote sensing (HRRS) scene classification. In this paper, we investigate how to transfer features from these successfully pre-trained CNNs for HRRS scene classification. We propose two scenarios for generating image features via extracting CNN features from different layers. In the first scenario, the activation vectors extracted from fully-connected layers are regarded as the final image features; in the second scenario, we extract dense features from the last convolutional layer at multiple scales and then encode the dense features into global image features through commonly used feature coding approaches. Extensive experiments on two public scene classification datasets demonstrate that the image features obtained by the two proposed scenarios, even with a simple linear classifier, can result in remarkable performance and improve the state-of-the-art by a significant margin. The results reveal that the features from pre-trained CNNs generalize well to HRRS datasets and are more expressive than the low- and mid-level features. Moreover, we tentatively combine features extracted from different CNN models for better performance.

  15. Image Super-Resolution Using Deep Convolutional Networks.

    Science.gov (United States)

    Dong, Chao; Loy, Chen Change; He, Kaiming; Tang, Xiaoou

    2016-02-01

    We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low/high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage. We explore different network structures and parameter settings to achieve trade-offs between performance and speed. Moreover, we extend our network to cope with three color channels simultaneously, and show better overall reconstruction quality.
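
    A minimal PyTorch sketch of a three-layer network of this kind is given below; the kernel sizes and filter counts are illustrative choices rather than the exact configuration reported in the paper, and the input is assumed to be a bicubically upsampled low-resolution image.

      # Three-layer CNN sketch: patch extraction, non-linear mapping, reconstruction.
      import torch
      import torch.nn as nn

      class SRCNNSketch(nn.Module):
          def __init__(self):
              super().__init__()
              self.body = nn.Sequential(
                  nn.Conv2d(1, 64, kernel_size=9, padding=4),   # patch extraction
                  nn.ReLU(inplace=True),
                  nn.Conv2d(64, 32, kernel_size=1),             # non-linear mapping
                  nn.ReLU(inplace=True),
                  nn.Conv2d(32, 1, kernel_size=5, padding=2),   # reconstruction
              )

          def forward(self, x):
              return self.body(x)

      # usage: y = SRCNNSketch()(torch.randn(1, 1, 33, 33))  # same spatial size out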

  16. Convolution theorems for the linear canonical transform and their applications

    Institute of Scientific and Technical Information of China (English)

    DENG Bing; TAO Ran; WANG Yue

    2006-01-01

    As generalization of the fractional Fourier transform (FRFT), the linear canonical transform (LCT) has been used in several areas, including optics and signal processing. Many properties for this transform are already known, but the convolution theorems, similar to the version of the Fourier transform, are still to be determined. In this paper, the authors derive the convolution theorems for the LCT, and explore the sampling theorem and multiplicative filter for the band limited signal in the linear canonical domain. Finally, the sampling and reconstruction formulas are deduced, together with the construction methodology for the above mentioned multiplicative filter in the time domain based on fast Fourier transform (FFT), which has much lower computational load than the construction method in the linear canonical domain.
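
    For orientation, the classical Fourier-transform convolution theorem that the LCT results generalize can be written as follows; the LCT versions derived in the paper replace the Fourier transform with the three-parameter linear canonical transform and pick up additional chirp factors, whose exact form is given there.

      (f \ast g)(t) = \int_{-\infty}^{\infty} f(\tau)\, g(t-\tau)\, d\tau,
      \qquad
      \mathcal{F}\{f \ast g\}(\omega) = \mathcal{F}\{f\}(\omega)\,\mathcal{F}\{g\}(\omega).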

  17. Star-galaxy Classification Using Deep Convolutional Neural Networks

    CERN Document Server

    Kim, Edward J

    2016-01-01

    Most existing star-galaxy classifiers use the reduced summary information from catalogs, requiring careful feature extraction and selection. The latest advances in machine learning that use deep convolutional neural networks allow a machine to automatically learn the features directly from data, minimizing the need for input from human experts. We present a star-galaxy classification framework that uses deep convolutional neural networks (ConvNets) directly on the reduced, calibrated pixel values. Using data from the Sloan Digital Sky Survey (SDSS) and the Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS), we demonstrate that ConvNets are able to produce accurate and well-calibrated probabilistic classifications that are competitive with conventional machine learning techniques. Future advances in deep learning may bring more success with current and forthcoming photometric surveys, such as the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope (LSST), because deep neural networks require...

  18. a Convolutional Network for Semantic Facade Segmentation and Interpretation

    Science.gov (United States)

    Schmitz, Matthias; Mayer, Helmut

    2016-06-01

    In this paper we present an approach for semantic interpretation of facade images based on a Convolutional Network. Our network processes the input images in a fully convolutional way and generates pixel-wise predictions. We show that there is no need for large datasets to train the network when transfer learning is employed, i.e., a part of an already existing network is used and fine-tuned, and when the available data is augmented by using deformed patches of the images for training. The network is trained end-to-end with patches of the images and each patch is augmented independently. To undo the downsampling for the classification, we add deconvolutional layers to the network. Outputs of different layers of the network are combined to achieve more precise pixel-wise predictions. We demonstrate the potential of our network based on results for the eTRIMS (Korč and Förstner, 2009) dataset reduced to facades.

  19. Self-Taught convolutional neural networks for short text clustering.

    Science.gov (United States)

    Xu, Jiaming; Xu, Bo; Wang, Peng; Zheng, Suncong; Tian, Guanhua; Zhao, Jun; Xu, Bo

    2017-01-12

    Short text clustering is a challenging problem due to the sparseness of text representation. Here we propose a flexible Self-Taught Convolutional neural network framework for Short Text Clustering (dubbed STC(2)), which can flexibly and successfully incorporate more useful semantic features and learn non-biased deep text representation in an unsupervised manner. In our framework, the original raw text features are first embedded into compact binary codes by using an existing unsupervised dimensionality reduction method. Then, word embeddings are explored and fed into convolutional neural networks to learn deep feature representations, while the output units are used to fit the pre-trained binary codes during training. Finally, we get the optimal clusters by employing K-means to cluster the learned representations. Extensive experimental results demonstrate that the proposed framework is effective and flexible, and outperforms several popular clustering methods when tested on three public short text datasets.

  20. Trajectory Generation Method with Convolution Operation on Velocity Profile

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Geon [Hanyang Univ., Seoul (Korea, Republic of); Kim, Doik [Korea Institute of Science and Technology, Daejeon (Korea, Republic of)

    2014-03-15

    The use of robots is no longer limited to the field of industrial robots and is now expanding into the fields of service and medical robots. In this light, a trajectory generation method that can respond instantaneously to the external environment is strongly required. Toward this end, this study proposes a method that enables a robot to change its trajectory in real-time using a convolution operation. The proposed method generates a trajectory in real time and satisfies the physical limits of the robot system such as acceleration and velocity limit. Moreover, a new way to improve the previous method, which generates inefficient trajectories in some cases owing to the characteristics of the trapezoidal shape of trajectories, is proposed by introducing a triangle shape. The validity and effectiveness of the proposed method is shown through a numerical simulation and a comparison with the previous convolution method.
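
    A small numerical sketch of the underlying idea follows: convolving a rectangular velocity command with a normalized box filter yields a trapezoidal velocity profile whose integral (the travelled distance) is preserved and whose acceleration is bounded by the box length. The time step, limits and distance below are assumed values, not parameters from the paper.

      import numpy as np

      dt = 0.001                      # control period [s] (assumed)
      v_max, a_max = 1.0, 2.0         # velocity / acceleration limits (assumed)
      distance = 0.5                  # commanded travel [m] (assumed)

      rect = np.full(int(distance / v_max / dt), v_max)   # rectangular command
      box = np.full(int(v_max / a_max / dt), 1.0)         # box length sets the ramp
      box /= box.sum()                                     # preserve the area

      v_traj = np.convolve(rect, box)                      # trapezoidal profile
      assert abs(v_traj.sum() * dt - distance) < 1e-6      # distance is preserved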

  1. On New Bijective Convolution Operator Act for Analytic Functions

    Directory of Open Access Journals (Sweden)

    Oqlah Al-Refai

    2009-01-01

    Problem statement: We introduced a new bijective convolution linear operator defined on the class of normalized analytic functions. This operator was motivated by the work of many researchers, namely Srivastava, Owa, Ruscheweyh and others, and it is essential for obtaining new classes of analytic functions. Approach: The simple technique of Ruscheweyh was used in our preliminary approach to create the new bijective convolution linear operator. The preliminary concept of Hadamard products was recalled, and the concept of subordination was used to give sharp proofs for certain sufficient conditions of the aforementioned linear operator. In particular, the subordinating factor sequence was used to derive different types of subordination results. Results: Given the linear operator, subordination theorems were established by using the standard concept of subordination; the results reduce to well-known results studied by various researchers. Coefficient bounds and inclusion properties, growth and closure theorems for some subclasses were also obtained. Conclusion: Many interesting results and applications can therefore be obtained.
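
    As background for the Hadamard product (convolution) used above: for normalized analytic functions on the unit disc it is defined coefficient-wise, in the usual notation,

      f(z) = z + \sum_{n=2}^{\infty} a_n z^n, \quad
      g(z) = z + \sum_{n=2}^{\infty} b_n z^n
      \quad\Longrightarrow\quad
      (f \ast g)(z) = z + \sum_{n=2}^{\infty} a_n b_n z^n .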

  2. Fibonacci Sequence, Recurrence Relations, Discrete Probability Distributions and Linear Convolution

    CERN Document Server

    Rajan, Arulalan; Rao, Ashok; Jamadagni, H S

    2012-01-01

    The classical Fibonacci sequence is known to exhibit many fascinating properties. In this paper, we explore the Fibonacci sequence and integer sequences generated by second order linear recurrence relations with positive integer coefficients from the point of view of probability distributions that they induce. We obtain the generalizations of some of the known limiting properties of these probability distributions and present certain optimal properties of the classical Fibonacci sequence in this context. In addition, we also look at the self linear convolution of linear recurrence relations with positive integer coefficients. Analysis of self linear convolution is focused towards locating the maximum in the resulting sequence. This analysis also highlights the influence that the largest positive real root, of the "characteristic equation" of the linear recurrence relations with positive integer coefficients, has on the location of the maximum. In particular, when the largest positive real root is 2, the locatio...
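
    A small sketch of the self linear convolution discussed above (the recurrence coefficients and sequence length are arbitrary illustrative choices):

      import numpy as np

      def recurrence(n, c1=1, c2=1):
          """First n terms of x_k = c1*x_(k-1) + c2*x_(k-2), with x_0 = x_1 = 1."""
          x = [1, 1]
          for _ in range(n - 2):
              x.append(c1 * x[-1] + c2 * x[-2])
          return np.array(x, dtype=float)

      seq = recurrence(20)                  # classical Fibonacci case (c1 = c2 = 1)
      self_conv = np.convolve(seq, seq)     # linear self-convolution
      print(int(np.argmax(self_conv)))      # location of the maximum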

  3. 3D Medical Image Interpolation Based on Parametric Cubic Convolution

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    In the process of display, manipulation and analysis of biomedical image data, the data usually need to be converted to an isotropic discretization through interpolation; cubic convolution interpolation is widely used because of its good trade-off between computational cost and accuracy. In this paper, we present a whole concept for 3D medical image interpolation based on cubic convolution, and formulate in detail six methods, each with a different sharpness control parameter. Furthermore, we give an objective comparison of these methods using data sets with different slice spacings. Each slice in these data sets is estimated by each interpolation method and compared with the original slice using three measures: mean-squared difference, number of sites of disagreement, and largest difference. Based on the experimental results, we conclude with a recommendation for 3D medical image interpolation under different situations.
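
    For reference, a sketch of the one-dimensional parametric (Keys-type) cubic-convolution kernel with sharpness control parameter a, applied separably along each axis in the 3D case; a = -0.5 is a common default, and the helper below is an illustration rather than any of the six formulations from the paper.

      import numpy as np

      def cubic_kernel(s, a=-0.5):
          """Piecewise-cubic convolution kernel with sharpness parameter a."""
          s = np.abs(np.asarray(s, dtype=float))
          w = np.zeros_like(s)
          near = s < 1
          far = (s >= 1) & (s < 2)
          w[near] = (a + 2) * s[near]**3 - (a + 3) * s[near]**2 + 1
          w[far] = a * (s[far]**3 - 5 * s[far]**2 + 8 * s[far] - 4)
          return w

      def interp1d_cubic(samples, x, a=-0.5):
          """Interpolate uniformly spaced samples at a fractional position x."""
          samples = np.asarray(samples, dtype=float)
          i = int(np.floor(x))
          support = np.arange(i - 1, i + 3)
          idx = np.clip(support, 0, len(samples) - 1)
          return float(np.dot(samples[idx], cubic_kernel(x - support, a)))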

  4. Protein secondary structure prediction using deep convolutional neural fields

    OpenAIRE

    Sheng Wang; Jian Peng; Jianzhu Ma; Jinbo Xu

    2015-01-01

    Protein secondary structure (SS) prediction is important for studying protein structure and function. When only the sequence (profile) information is used as input feature, currently the best predictors can obtain ~80% Q3 accuracy, which has not been improved in the past decade. Here we present DeepCNF (Deep Convolutional Neural Fields) for protein SS prediction. DeepCNF is a Deep Learning extension of Conditional Neural Fields (CNF), which is an integration of Conditional Random Fields (CRF)...

  5. Interleaved Convolutional Code and Its Viterbi Decoder Architecture

    OpenAIRE

    2003-01-01

    We propose an area-efficient high-speed interleaved Viterbi decoder architecture, which is based on the state-parallel architecture with register exchange path memory structure, for interleaved convolutional code. The state-parallel architecture uses as many add-compare-select (ACS) units as the number of trellis states. By replacing each delay (or storage) element in state metrics memory (or path metrics memory) and path memory (or survival memory) with delays, interleaved Viterbi decoder ...

  6. Design of SVD/SGK Convolution Filters for Image Processing

    Science.gov (United States)

    1980-01-01

    ... of filters by transforming one-dimensional linear phase filters into two-dimensional linear phase filters. By assuming that the prototype filter is a ... linear phase filter, the algorithm transforms a one-dimensional filter h(u) into a two-dimensional filter W(u,v) by means of a transformation given by ... The significance of their implementation of the designed filter is that a large two-dimensional convolution ... (A linear phase filter implies symmetry of the filter.)

  7. Efficient Convolutional Neural Network with Binary Quantization Layer

    OpenAIRE

    Ravanbakhsh, Mahdyar; Mousavi, Hossein; Nabi, Moin; Marcenaro, Lucio; Regazzoni, Carlo

    2016-01-01

    In this paper we introduce a novel method for segmentation that can benefit from the general semantics of a Convolutional Neural Network (CNN). Our segmentation method proposes visually and semantically coherent image segments. We use binary encoding of CNN features to overcome the difficulty of clustering in the high-dimensional CNN feature space. This binary encoding can be embedded into the CNN as an extra layer at the end of the network. This results in real-time segmentation. To the best of our ...

  8. Fusing Deep Convolutional Networks for Large Scale Visual Concept Classification

    OpenAIRE

    Ergun, Hilal; Sert, Mustafa

    2016-01-01

    Deep learning architectures are showing great promise in various computer vision domains including image classification, object detection, event detection and action recognition. In this study, we investigate various aspects of convolutional neural networks (CNNs) from the big data perspective. We analyze recent studies and different network architectures both in terms of running time and accuracy. We present extensive empirical information along with best practices for big data practitioners...

  9. An obstruction for q-deformation of the convolution product

    CERN Document Server

    Van Leeuwen, H; van Leeuwen, Hans; Maassen, Hans

    1995-01-01

    We consider two independent q-Gaussian random variables X and Y and a function f chosen in such a way that f(X) and X have the same distribution. For 0 < q < 1 we find that at least the fourth moments of X + Y and f(X) + Y are different. We conclude that no q-deformed convolution product can exist for functions of independent q-Gaussian random variables.

  10. Image interpolation by two-dimensional parametric cubic convolution.

    Science.gov (United States)

    Shi, Jiazheng; Reichenbach, Stephen E

    2006-07-01

    Cubic convolution is a popular method for image interpolation. Traditionally, the piecewise-cubic kernel has been derived in one dimension with one parameter and applied to two-dimensional (2-D) images in a separable fashion. However, images typically are statistically nonseparable, which motivates this investigation of nonseparable cubic convolution. This paper derives two new nonseparable, 2-D cubic-convolution kernels. The first kernel, with three parameters (designated 2D-3PCC), is the most general 2-D, piecewise-cubic interpolator defined on [-2, 2] x [-2, 2] with constraints for biaxial symmetry, diagonal (or 90 degrees rotational) symmetry, continuity, and smoothness. The second kernel, with five parameters (designated 2D-5PCC), relaxes the constraint of diagonal symmetry, based on the observation that many images have rotationally asymmetric statistical properties. This paper also develops a closed-form solution for determining the optimal parameter values for parametric cubic-convolution kernels with respect to ensembles of scenes characterized by autocorrelation (or power spectrum). This solution establishes a practical foundation for adaptive interpolation based on local autocorrelation estimates. Quantitative fidelity analyses and visual experiments indicate that these new methods can outperform several popular interpolation methods. An analysis of the error budgets for reconstruction error associated with blurring and aliasing illustrates that the methods improve interpolation fidelity for images with aliased components. For images with little or no aliasing, the methods yield results similar to other popular methods. Both 2D-3PCC and 2D-5PCC are low-order polynomials with small spatial support and so are easy to implement and efficient to apply.

  11. GPU Acceleration of Image Convolution using Spatially-varying Kernel

    OpenAIRE

    Hartung, Steven; Shukla, Hemant; Miller, J. Patrick; Pennypacker, Carlton

    2012-01-01

    Image subtraction in astronomy is a tool for transient object discovery such as asteroids, extra-solar planets and supernovae. To match point spread functions (PSFs) between images of the same field taken at different times a convolution technique is used. Particularly suitable for large-scale images is a computationally intensive spatially-varying kernel. The underlying algorithm is inherently massively parallel due to unique kernel generation at every pixel location. The spatially-varying k...

  12. Research of Direct Digital Correlative-Interferometric Radio Direction Finder with Double Correlation-convolutional Processing

    Directory of Open Access Journals (Sweden)

    V. V. Tsyporenko

    2016-06-01

    Introduction. This article addresses a previously unresolved part of the general problem of investigating direct digital methods of correlative-interferometric direction finding. The purpose of the article is to optimize, with respect to its accuracy, the direction finding of the direct digital correlative-interferometric radio direction finder with double correlation-convolutional processing. Fundamentals of the research. The conducted research established that the basic parameter in the equation for the variance of the error of the direction estimate to the source of radio radiation, and the one that ought to be optimized for the investigated radio direction finder, is the size of the frequency-conversion shift. Optimization. A parametric optimization of the direct digital correlative-interferometric radio direction finder with double correlation-convolutional processing was carried out with respect to its accuracy. As a result of the modelling, the dependence of the root-mean-square deviation of the direction estimate on the signal-to-noise ratio was obtained for the different possible values of the circular frequency-conversion shift. Conclusions. The analytical calculations and the results of the modelling fully coincide, which confirms the correctness of the research and the reliability of the optimization results.

  13. Dense Semantic Labeling of Subdecimeter Resolution Images With Convolutional Neural Networks

    Science.gov (United States)

    Volpi, Michele; Tuia, Devis

    2017-02-01

    Semantic labeling (or pixel-level land-cover classification) in ultra-high resolution imagery ... Convolutional Neural Networks (CNNs) achieve this goal by learning discriminatively a hierarchy of representations of increasing abstraction. In this paper we present a CNN-based system relying on a downsample-then-upsample architecture. Specifically, it first learns a rough spatial map of high-level representations by means of convolutions and then learns to upsample them back to the original resolution by deconvolutions. By doing so, the CNN learns to densely label every pixel at the original resolution of the image. This results in many advantages, including i) state-of-the-art numerical accuracy, ii) improved geometric accuracy of predictions and iii) high efficiency at inference time. We test the proposed system on the Vaihingen and Potsdam sub-decimeter resolution datasets, involving semantic labeling of aerial images of 9 cm and 5 cm resolution, respectively. These datasets are composed of many large and fully annotated tiles, allowing an unbiased evaluation of models making use of spatial information. We do so by comparing two standard CNN architectures to the proposed one: standard patch classification, prediction of local label patches by employing only convolutions, and full patch labeling by employing deconvolutions. All the systems compare favorably to or outperform a state-of-the-art baseline relying on superpixels and powerful appearance descriptors. The proposed full patch labeling CNN outperforms these models by a large margin, also showing a very appealing inference time.
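
    A minimal PyTorch sketch of a downsample-then-upsample labeling network follows; channel counts, strides and the number of classes are illustrative and not the architecture evaluated in the paper.

      # Convolutions learn a coarse map; transposed convolutions upsample it back.
      import torch
      import torch.nn as nn

      class DownUpSketch(nn.Module):
          def __init__(self, n_classes=6):
              super().__init__()
              self.down = nn.Sequential(
                  nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                  nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
              )
              self.up = nn.Sequential(
                  nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                  nn.ConvTranspose2d(32, n_classes, 4, stride=2, padding=1),
              )

          def forward(self, x):                 # x: (batch, 3, H, W), H and W divisible by 4
              return self.up(self.down(x))      # per-pixel class scores at full resolution

      # usage: scores = DownUpSketch()(torch.randn(1, 3, 128, 128))  # -> (1, 6, 128, 128)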

  14. Cerebral vessels segmentation for light-sheet microscopy image using convolutional neural networks

    Science.gov (United States)

    Hu, Chaoen; Hui, Hui; Wang, Shuo; Dong, Di; Liu, Xia; Yang, Xin; Tian, Jie

    2017-03-01

    Cerebral vessel segmentation is an important step in image analysis for brain function and brain disease studies. To extract all the cerebrovascular patterns, including arteries and capillaries, some filter-based methods are used to segment vessels. However, the design of accurate and robust vessel segmentation algorithms is still challenging, due to the variety and complexity of images, especially in cerebral blood vessel segmentation. In this work, we addressed the problem of automatic and robust segmentation of cerebral micro-vessel structures in cerebrovascular images of mouse brain acquired by a light-sheet microscope. To segment micro-vessels in large-scale image data, we proposed a convolutional neural network (CNN) architecture trained on 1.58 million manually labeled pixels. Three convolutional layers and one fully connected layer were used in the CNN model. We extracted patches of size 32x32 pixels from each acquired brain vessel image as the training data set to feed into the CNN for classification. The network was trained to output the probability that the center pixel of an input patch belongs to a vessel structure. To build the CNN architecture, a series of mouse brain vascular images acquired from a commercial light sheet fluorescence microscopy (LSFM) system were used for training the model. The experimental results demonstrated that our approach is a promising method for effectively segmenting micro-vessel structures in cerebrovascular images with vessel-dense, nonuniform gray-level and long-scale contrast regions.

  15. Fine-grained representation learning in convolutional autoencoders

    Science.gov (United States)

    Luo, Chang; Wang, Jie

    2016-03-01

    Convolutional autoencoders (CAEs) have been widely used as unsupervised feature extractors for high-resolution images. As a key component in CAEs, pooling is a biologically inspired operation to achieve scale and shift invariances, and the pooled representation directly affects the CAEs' performance. Fine-grained pooling, which uses small and dense pooling regions, encodes fine-grained visual cues and enhances local characteristics. However, it tends to be sensitive to spatial rearrangements. In most previous works, pooled features were obtained by empirically modulating parameters in CAEs. We see the CAE as a whole and propose a fine-grained representation learning law to extract better fine-grained features. This representation learning law suggests two directions for improvement. First, we probabilistically evaluate the discrimination-invariance tradeoff with fine-grained granularity in the pooled feature maps, and suggest the proper filter scale in the convolutional layer and appropriate whitening parameters in the preprocessing step. Second, pooling approaches are combined with the sparsity degree in pooling regions, and we propose the preferable pooling approach. Experimental results on two independent benchmark datasets demonstrate that our representation learning law could guide CAEs to extract better fine-grained features and perform better in the multiclass classification task. This paper also provides guidance for selecting appropriate parameters to obtain better fine-grained representation in other convolutional neural networks.

  16. Convolution-based estimation of organ dose in tube current modulated CT.

    Science.gov (United States)

    Tian, Xiaoyu; Segars, W Paul; Dixon, Robert L; Samei, Ehsan

    2016-05-21

    Estimating organ dose for clinical patients requires accurate modeling of the patient anatomy and the dose field of the CT exam. The modeling of patient anatomy can be achieved using a library of representative computational phantoms (Samei et al 2014 Pediatr. Radiol. 44 460-7). The modeling of the dose field can be challenging for CT exams performed with a tube current modulation (TCM) technique. The purpose of this work was to effectively model the dose field for TCM exams using a convolution-based method. A framework was further proposed for prospective and retrospective organ dose estimation in clinical practice. The study included 60 adult patients (age range: 18-70 years, weight range: 60-180 kg). Patient-specific computational phantoms were generated based on patient CT image datasets. A previously validated Monte Carlo simulation program was used to model a clinical CT scanner (SOMATOM Definition Flash, Siemens Healthcare, Forchheim, Germany). A practical strategy was developed to achieve real-time organ dose estimation for a given clinical patient. CTDIvol-normalized organ dose coefficients ([Formula: see text]) under constant tube current were estimated and modeled as a function of patient size. Each clinical patient in the library was optimally matched to another computational phantom to obtain a representation of organ location/distribution. The patient organ distribution was convolved with a dose distribution profile to generate [Formula: see text] values that quantified the regional dose field for each organ. The organ dose was estimated by multiplying [Formula: see text] with the organ dose coefficients ([Formula: see text]). To validate the accuracy of this dose estimation technique, the organ dose of the original clinical patient was estimated using Monte Carlo program with TCM profiles explicitly modeled. The discrepancy between the estimated organ dose and dose simulated using TCM Monte Carlo program was quantified. We further compared the

  17. Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition

    Science.gov (United States)

    Ordóñez, Francisco Javier; Roggen, Daniel

    2016-01-01

    Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing this temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we propose a generic deep framework for activity recognition based on convolutional and LSTM recurrent units, which: (i) is suitable for multimodal wearable sensors; (ii) can perform sensor fusion naturally; (iii) does not require expert knowledge in designing features; and (iv) explicitly models the temporal dynamics of feature activations. We evaluate our framework on two datasets, one of which has been used in a public activity recognition challenge. Our results show that our framework outperforms competing deep non-recurrent networks on the challenge dataset by 4% on average; outperforming some of the previous reported results by up to 9%. Our results show that the framework can be applied to homogeneous sensor modalities, but can also fuse multimodal sensors to improve performance. We characterise key architectural hyperparameters’ influence on performance to provide insights about their optimisation. PMID:26797612
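
    A minimal PyTorch sketch of such a convolutional + LSTM pipeline is shown below; the sensor channel count, window length and layer sizes are illustrative and not the DeepConvLSTM configuration itself.

      # 1D convolutions extract local motion features; an LSTM models their dynamics.
      import torch
      import torch.nn as nn

      class ConvLSTMSketch(nn.Module):
          def __init__(self, n_channels=9, n_classes=6):
              super().__init__()
              self.conv = nn.Sequential(
                  nn.Conv1d(n_channels, 64, kernel_size=5), nn.ReLU(),
                  nn.Conv1d(64, 64, kernel_size=5), nn.ReLU(),
              )
              self.lstm = nn.LSTM(64, 128, num_layers=2, batch_first=True)
              self.out = nn.Linear(128, n_classes)

          def forward(self, x):                     # x: (batch, channels, time)
              h = self.conv(x)                      # (batch, 64, time')
              h, _ = self.lstm(h.transpose(1, 2))   # (batch, time', 128)
              return self.out(h[:, -1])             # classify from the last step

      # usage: logits = ConvLSTMSketch()(torch.randn(8, 9, 128))  # -> (8, 6)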

  18. Emotional textile image classification based on cross-domain convolutional sparse autoencoders with feature selection

    Science.gov (United States)

    Li, Zuhe; Fan, Yangyu; Liu, Weihua; Yu, Zeqi; Wang, Fengqin

    2017-01-01

    We aim to apply sparse autoencoder-based unsupervised feature learning to emotional semantic analysis for textile images. To tackle the problem of limited training data, we present a cross-domain feature learning scheme for emotional textile image classification using convolutional autoencoders. We further propose a correlation-analysis-based feature selection method for the weights learned by sparse autoencoders to reduce the number of features extracted from large size images. First, we randomly collect image patches on an unlabeled image dataset in the source domain and learn local features with a sparse autoencoder. We then conduct feature selection according to the correlation between different weight vectors corresponding to the autoencoder's hidden units. We finally adopt a convolutional neural network including a pooling layer to obtain global feature activations of textile images in the target domain and send these global feature vectors into logistic regression models for emotional image classification. The cross-domain unsupervised feature learning method achieves 65% to 78% average accuracy in the cross-validation experiments corresponding to eight emotional categories and performs better than conventional methods. Feature selection can reduce the computational cost of global feature extraction by about 50% while improving classification performance.

  19. Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition

    Directory of Open Access Journals (Sweden)

    Francisco Javier Ordóñez

    2016-01-01

    Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing this temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we propose a generic deep framework for activity recognition based on convolutional and LSTM recurrent units, which: (i) is suitable for multimodal wearable sensors; (ii) can perform sensor fusion naturally; (iii) does not require expert knowledge in designing features; and (iv) explicitly models the temporal dynamics of feature activations. We evaluate our framework on two datasets, one of which has been used in a public activity recognition challenge. Our results show that our framework outperforms competing deep non-recurrent networks on the challenge dataset by 4% on average; outperforming some of the previous reported results by up to 9%. Our results show that the framework can be applied to homogeneous sensor modalities, but can also fuse multimodal sensors to improve performance. We characterise key architectural hyperparameters' influence on performance to provide insights about their optimisation.

  20. Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition.

    Science.gov (United States)

    Ordóñez, Francisco Javier; Roggen, Daniel

    2016-01-18

    Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing this temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we propose a generic deep framework for activity recognition based on convolutional and LSTM recurrent units, which: (i) is suitable for multimodal wearable sensors; (ii) can perform sensor fusion naturally; (iii) does not require expert knowledge in designing features; and (iv) explicitly models the temporal dynamics of feature activations. We evaluate our framework on two datasets, one of which has been used in a public activity recognition challenge. Our results show that our framework outperforms competing deep non-recurrent networks on the challenge dataset by 4% on average; outperforming some of the previous reported results by up to 9%. Our results show that the framework can be applied to homogeneous sensor modalities, but can also fuse multimodal sensors to improve performance. We characterise key architectural hyperparameters' influence on performance to provide insights about their optimisation.

  1. Super-resolution reconstruction algorithm based on adaptive convolution kernel size selection

    Science.gov (United States)

    Gao, Hang; Chen, Qian; Sui, Xiubao; Zeng, Junjie; Zhao, Yao

    2016-09-01

    Restricted by detector technology and the optical diffraction limit, the spatial resolution of an infrared imaging system is difficult to improve significantly. Super-Resolution (SR) reconstruction algorithms are an effective way to solve this problem. Among them, the SR algorithm based on multichannel blind deconvolution (MBD) estimates the convolution kernel only from low-resolution observation images, using appropriate regularization constraints introduced by a priori assumptions, to realize high-resolution image restoration. The algorithm has been shown to be effective when each channel is prime. In this paper, we use the significant edges to estimate the convolution kernel and introduce an adaptive convolution kernel size selection mechanism to handle the uncertainty of the convolution kernel size in MBD processing. To reduce the interference of noise, we amend the convolution kernel in an iterative process, and finally restore a clear image. Experimental results show that the algorithm can meet the convergence requirement of the convolution kernel estimation.

  2. Flare Occurrence Prediction based on Convolution Neural Network using SOHO MDI data

    Science.gov (United States)

    Yi, Kangwoo; Moon, Yong-Jae; Park, Eunsu; Shin, Seulki

    2017-08-01

    In this study we apply a Convolutional Neural Network (CNN) to solar flare occurrence prediction with various parameter options, using the 00:00 UT MDI images from 1996 to 2010 (4962 images in total). We assume that only X-, M- and C-class flares correspond to “flare occurrence” and the others to “non-flare”. We have attempted to find the best options for the models with two pre-trained CNN models (AlexNet and GoogLeNet) by modifying the training images and changing the hyperparameters. Our major results from this study are as follows. First, the flare occurrence predictions are relatively good, with accuracies of about 80%. Second, the flare prediction models based on AlexNet and GoogLeNet give similar results, but AlexNet is faster than GoogLeNet. Third, modifying the training images to reduce the projection effect is not effective.

  3. Bioprinting of 3D Convoluted Renal Proximal Tubules on Perfusable Chips

    Science.gov (United States)

    Homan, Kimberly A.; Kolesky, David B.; Skylar-Scott, Mark A.; Herrmann, Jessica; Obuobi, Humphrey; Moisan, Annie; Lewis, Jennifer A.

    2016-10-01

    Three-dimensional models of kidney tissue that recapitulate human responses are needed for drug screening, disease modeling, and, ultimately, kidney organ engineering. Here, we report a bioprinting method for creating 3D human renal proximal tubules in vitro that are fully embedded within an extracellular matrix and housed in perfusable tissue chips, allowing them to be maintained for greater than two months. Their convoluted tubular architecture is circumscribed by proximal tubule epithelial cells and actively perfused through the open lumen. These engineered 3D proximal tubules on chip exhibit significantly enhanced epithelial morphology and functional properties relative to the same cells grown on 2D controls with or without perfusion. Upon introducing the nephrotoxin, Cyclosporine A, the epithelial barrier is disrupted in a dose-dependent manner. Our bioprinting method provides a new route for programmably fabricating advanced human kidney tissue models on demand.

  4. The double Mellin-Barnes type integrals and their applications to convolution theory

    CERN Document Server

    Hai, Nguyen Thanh

    1992-01-01

    This book presents new results in the theory of the double Mellin-Barnes integrals popularly known as the general H-function of two variables.A general integral convolution is constructed by the authors and it contains Laplace convolution as a particular case and possesses a factorization property for one-dimensional H-transform. Many examples of convolutions for classical integral transforms are obtained and they can be applied for the evaluation of series and integrals.

  5. An area-efficient 2-D convolution implementation on FPGA for space applications

    OpenAIRE

    Gambardella, Giulio; Tiotto, Gabriele; Prinetto, Paolo Ernesto; Di Carlo, Stefano; Indaco, Marco; Rolfo, Daniele

    2011-01-01

    The 2-D convolution is an algorithm widely used in image and video processing. Although its computation is simple, its implementation requires high computational power and intensive use of memory. Field Programmable Gate Array (FPGA) architectures have been proposed to accelerate the calculation of the 2-D convolution, and buffers implemented on FPGAs are used to avoid direct memory access. In this paper we present an implementation of the 2-D convolution algorithm on an FPGA architecture ...

  6. IMAGE DE-BLURRING USING WIENER DE-CONVOLUTION AND WAVELET FOR DIFFERENT BLURRING KERNEL

    OpenAIRE

    Shuchi Singh; Vipul Awasthi; Nitin Sahu

    2016-01-01

    Image de-convolution is an active research area concerned with recovering a sharp image after blurring by a convolution. One of the problems in image de-convolution is how to preserve the texture structures while removing blur in the presence of noise. Various methods have been used, such as gradient based methods, sparsity based methods, and nonlocal self-similarity methods. In this thesis, we have used the conventional non-blind method of Wiener de-convolution. Further, wavelet denoising has been used to...
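
    A minimal sketch of conventional non-blind Wiener de-convolution in the frequency domain is shown below; it assumes a known blur kernel, circular boundary handling, and a scalar noise-to-signal constant K, all of which are illustrative simplifications.

      import numpy as np

      def wiener_deconvolve(blurred, kernel, K=0.01):
          """Estimate the sharp image from a blurred one with a known kernel."""
          H = np.fft.fft2(kernel, s=blurred.shape)        # kernel transfer function
          G = np.fft.fft2(blurred)
          F_hat = np.conj(H) / (np.abs(H) ** 2 + K) * G   # Wiener filter
          return np.real(np.fft.ifft2(F_hat))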

  7. Automatic sleep stage classification of single-channel EEG by using complex-valued convolutional neural network.

    Science.gov (United States)

    Zhang, Junming; Wu, Yan

    2017-02-21

    Many systems have been developed for automatic sleep stage classification. However, nearly all models are based on handcrafted features. Because the feature space is large, feature selection has to be used. Meanwhile, designing handcrafted features is a difficult and time-consuming task, because it requires the domain knowledge of experienced experts. Results vary when different sets of features are chosen to identify sleep stages. Additionally, many features that we may be unaware of exist, and these features may be important for sleep stage classification. Therefore, a new sleep stage classification system, which is based on the complex-valued convolutional neural network (CCNN), is proposed in this study. Unlike existing sleep stage methods, our method can automatically extract features from raw electroencephalography data and then classify sleep stages based on the learned features. Additionally, we also prove that the decision boundaries for the real and imaginary parts of a complex-valued convolutional neuron intersect orthogonally. The classification performance of handcrafted features is compared with that of the features learned via the CCNN. Experimental results show that the proposed method is comparable to existing methods. The CCNN obtains a better classification performance and considerably faster convergence speed than a convolutional neural network. Experimental results also show that the proposed method is a useful decision-support tool for automatic sleep stage classification.

  8. Calculating dose distributions and wedge factors for photon treatment fields with dynamic wedges based on a convolution/superposition method.

    Science.gov (United States)

    Liu, H H; McCullough, E C; Mackie, T R

    1998-01-01

    A convolution/superposition based method was developed to calculate dose distributions and wedge factors in photon treatment fields generated by dynamic wedges. This algorithm used a dual source photon beam model that accounted for both primary photons from the target and secondary photons scattered from the machine head. The segmented treatment tables (STT) were used to calculate realistic photon fluence distributions in the wedged fields. The inclusion of the extra-focal photons resulted in more accurate dose calculation in high dose gradient regions, particularly in the beam penumbra. The wedge factors calculated using the convolution method were also compared to the measured data and showed good agreement within 0.5%. The wedge factor varied significantly with the field width along the moving jaw direction, but not along the static jaw or the depth direction. This variation was found to be determined by the ending position of the moving jaw, or the STT of the dynamic wedge. In conclusion, the convolution method proposed in this work can be used to accurately compute dose for a dynamic or an intensity modulated treatment based on the fluence modulation in the treatment field.
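
    In schematic form, a convolution/superposition dose calculation of this kind computes the dose as the convolution of the total energy released per unit mass (TERMA) with an energy-deposition kernel; the dual-source beam model and the STT-derived fluence described above enter through the TERMA term:

      D(\vec{r}) = \int T(\vec{r}\,')\, K(\vec{r} - \vec{r}\,')\, d^3 r' .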

  9. The convoluted evolution of snail chirality

    Science.gov (United States)

    Schilthuizen, M.; Davison, A.

    2005-11-01

    The direction that a snail (Mollusca: Gastropoda) coils, whether dextral (right-handed) or sinistral (left-handed), originates in early development but is most easily observed in the shell form of the adult. Here, we review recent progress in understanding snail chirality from genetic, developmental and ecological perspectives. In the few species that have been characterized, chirality is determined by a single genetic locus with delayed inheritance, which means that the genotype is expressed in the mother's offspring. Although research lags behind the studies of asymmetry in the mouse and nematode, attempts to isolate the loci involved in snail chirality have begun, with the final aim of understanding how the axis of left-right asymmetry is established. In nature, most snail taxa (>90%) are dextral, but sinistrality is known from mutant individuals, populations within dextral species, entirely sinistral species, genera and even families. Ordinarily, it is expected that strong frequency-dependent selection should act against the establishment of new chiral types because the chiral minority have difficulty finding a suitable mating partner (their genitalia are on the ‘wrong’ side). Mixed populations should therefore not persist. Intriguingly, however, a very few land snail species, notably the subgenus Amphidromus sensu stricto, not only appear to mate randomly between different chiral types, but also have a stable, within-population chiral dimorphism, which suggests the involvement of a balancing factor. At the other end of the spectrum, in many species, different chiral types are unable to mate and so could be reproductively isolated from one another. However, while empirical data, models and simulations have indicated that chiral reversal must sometimes occur, it is rarely likely to lead to so-called ‘single-gene’ speciation. Nevertheless, chiral reversal could still be a contributing factor to speciation (or to divergence after speciation) when

  10. Finding strong lenses in CFHTLS using convolutional neural networks

    Science.gov (United States)

    Jacobs, C.; Glazebrook, K.; Collett, T.; More, A.; McCarthy, C.

    2017-10-01

    We train and apply convolutional neural networks, a machine learning technique developed to learn from and classify image data, to Canada-France-Hawaii Telescope Legacy Survey (CFHTLS) imaging for the identification of potential strong lensing systems. An ensemble of four convolutional neural networks was trained on images of simulated galaxy-galaxy lenses. The training sets consisted of a total of 62 406 simulated lenses and 64 673 non-lens negative examples generated with two different methodologies. An ensemble of trained networks was applied to all of the 171 deg2 of the CFHTLS wide field image data, identifying 18 861 candidates including 63 known and 139 other potential lens candidates. A second search of 1.4 million early-type galaxies selected from the survey catalogue as potential deflectors, identified 2465 candidates including 117 previously known lens candidates, 29 confirmed lenses/high-quality lens candidates, 266 novel probable or potential lenses and 2097 candidates we classify as false positives. For the catalogue-based search we estimate a completeness of 21-28 per cent with respect to detectable lenses and a purity of 15 per cent, with a false-positive rate of 1 in 671 images tested. We predict a human astronomer reviewing candidates produced by the system would identify 20 probable lenses and 100 possible lenses per hour in a sample selected by the robot. Convolutional neural networks are therefore a promising tool for use in the search for lenses in current and forthcoming surveys such as the Dark Energy Survey and the Large Synoptic Survey Telescope.

  11. Training Convolutional Neural Networks for Translational Invariance on SAR ATR

    DEFF Research Database (Denmark)

    Malmgren-Hansen, David; Engholm, Rasmus; Østergaard Pedersen, Morten

    2016-01-01

    In this paper we present a comparison of the robustness of Convolutional Neural Networks (CNN) to other classifiers in the presence of uncertainty of the objects localization in SAR image. We present a framework for simulating simple SAR images, translating the object of interest systematically...... and testing the classification performance. Our results show that where other classification methods are very sensitive to even small translations, CNN is quite robust to translational variance, making it much more useful in relation to Automatic Target Recognition (ATR) in a real life context....

  12. Convolution Algebra for Fluid Modes with Finite Energy

    Science.gov (United States)

    1992-04-01

    This technical report from the Phillips Laboratory (Hanscom Air Force Base) concerns a convolution algebra for fluid modes with finite spatial and temporal extents. At Boston University, we have developed a full form of wavelet expansion which has the advantage over more ... The convolution of two ...

  13. Plant species classification using deep convolutional neural network

    DEFF Research Database (Denmark)

    Dyrmann, Mads; Karstoft, Henrik; Midtiby, Henrik Skov

    2016-01-01

    Information on which weed species are present within agricultural fields is important for site-specific weed management. This paper presents a method that is capable of recognising plant species in colour images by using a convolutional neural network. The network is built from scratch trained...... stabilisation and illumination, and images shot with hand-held mobile phones in fields with changing lighting conditions and different soil types. For these 22 species, the network is able to achieve a classification accuracy of 86.2%.

  14. Convolutional neural networks for synthetic aperture radar classification

    Science.gov (United States)

    Profeta, Andrew; Rodriguez, Andres; Clouse, H. Scott

    2016-05-01

    For electro-optical object recognition, convolutional neural networks (CNNs) are the state-of-the-art. For large datasets, CNNs are able to learn meaningful features used for classification. However, their application to synthetic aperture radar (SAR) has been limited. In this work we experimented with various CNN architectures on the MSTAR SAR dataset. As the input to the CNN we used the magnitude and phase (2 channels) of the SAR imagery. We used the deep learning toolboxes CAFFE and Torch7. Our results show that we can achieve 93% accuracy on the MSTAR dataset using CNNs.

  15. A Fortran 90 code for magnetohydrodynamics. Part 1, Banded convolution

    Energy Technology Data Exchange (ETDEWEB)

    Walker, D.W.

    1992-03-01

    This report describes progress in developing a Fortran 90 version of the KITE code for studying plasma instabilities in Tokamaks. In particular, the evaluation of convolution terms appearing in the numerical solution is discussed, and timing results are presented for runs performed on an 8k processor Connection Machine (CM-2). Estimates of the performance on a full-size 64k CM-2 are given, and range between 100 and 200 Mflops. The advantages of having a Fortran 90 version of the KITE code are stressed, and the future use of such a code on the newly announced CM5 and Paragon computers, from Thinking Machines Corporation and Intel, is considered.

  16. Image reconstruction of simulated specimens using convolution back projection

    Directory of Open Access Journals (Sweden)

    Mohd. Farhan Manzoor

    2001-04-01

    This paper reports on the reconstruction of cross-sections of composite structures. The convolution back projection (CBP) algorithm has been used to capture the attenuation field over the specimen. Five different test cases have been taken up for evaluation. These cases represent varying degrees of complexity. In addition, the role of filters on the nature of the reconstruction errors has also been discussed. Numerical results obtained in the study reveal that the CBP algorithm is a useful tool for qualitative as well as quantitative assessment of composite regions encountered in engineering applications.
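
    In schematic form, convolution back projection filters each measured projection and smears it back across the image plane; the choice of the filter h is what the filter study mentioned above examines:

      q_\theta(s) = (p_\theta \ast h)(s), \qquad
      f(x, y) = \int_0^{\pi} q_\theta\!\left(x\cos\theta + y\sin\theta\right)\, d\theta ,

    where p_\theta is the projection measured at angle \theta and f is the reconstructed attenuation field.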

  17. Faster GPU-based convolutional gridding via thread coarsening

    CERN Document Server

    Merry, Bruce

    2016-01-01

    Convolutional gridding is a processor-intensive step in interferometric imaging. While it is possible to use graphics processing units (GPUs) to accelerate this operation, existing methods use only a fraction of the available flops. We apply thread coarsening to improve the efficiency of an existing algorithm, and observe performance gains of up to $3.2\\times$ for single-polarization gridding and $1.9\\times$ for quad-polarization gridding on a GeForce GTX 980, and smaller but still significant gains on a Radeon R9 290X.

  18. Convolution seal for transition duct in turbine system

    Energy Technology Data Exchange (ETDEWEB)

    Flanagan, James Scott; LeBegue, Jeffrey Scott; McMahan, Kevin Weston; Dillard, Daniel Jackson; Pentecost, Ronnie Ray

    2015-03-10

    A turbine system is disclosed. In one embodiment, the turbine system includes a transition duct. The transition duct includes an inlet, an outlet, and a passage extending between the inlet and the outlet and defining a longitudinal axis, a radial axis, and a tangential axis. The outlet of the transition duct is offset from the inlet along the longitudinal axis and the tangential axis. The transition duct further includes an interface member for interfacing with a turbine section. The turbine system further includes a convolution seal contacting the interface member to provide a seal between the interface member and the turbine section.

  19. Convolution seal for transition duct in turbine system

    Energy Technology Data Exchange (ETDEWEB)

    Flanagan, James Scott; LeBegue, Jeffrey Scott; McMahan, Kevin Weston; Dillard, Daniel Jackson; Pentecost, Ronnie Ray

    2015-05-26

    A turbine system is disclosed. In one embodiment, the turbine system includes a transition duct. The transition duct includes an inlet, an outlet, and a passage extending between the inlet and the outlet and defining a longitudinal axis, a radial axis, and a tangential axis. The outlet of the transition duct is offset from the inlet along the longitudinal axis and the tangential axis. The transition duct further includes an interface feature for interfacing with an adjacent transition duct. The turbine system further includes a convolution seal contacting the interface feature to provide a seal between the interface feature and the adjacent transition duct.

  20. Tandem mass spectrometry data quality assessment by self-convolution

    Directory of Open Access Journals (Sweden)

    Tham Wai

    2007-09-01

    Background Many algorithms have been developed for deciphering the tandem mass spectrometry (MS) data sets. They can be essentially clustered into two classes. The first performs searches on theoretical mass spectrum database, while the second based itself on de novo sequencing from raw mass spectrometry data. It was noted that the quality of mass spectra affects significantly the protein identification processes in both instances. This prompted the authors to explore ways to measure the quality of MS data sets before subjecting them to the protein identification algorithms, thus allowing for more meaningful searches and increased confidence level of proteins identified. Results The proposed method measures the qualities of MS data sets based on the symmetric property of b- and y-ion peaks present in a MS spectrum. Self-convolution on MS data and its time-reversal copy was employed. Due to the symmetric nature of b-ions and y-ions peaks, the self-convolution result of a good spectrum would produce a highest mid point intensity peak. To reduce processing time, self-convolution was achieved using Fast Fourier Transform and its inverse transform, followed by the removal of the "DC" (Direct Current) component and the normalisation of the data set. The quality score was defined as the ratio of the intensity at the mid point to the remaining peaks of the convolution result. The method was validated using both theoretical mass spectra, with various permutations, and several real MS data sets. The results were encouraging, revealing a high percentage of positive prediction rates for spectra with good quality scores. Conclusion We have demonstrated in this work a method for determining the quality of tandem MS data set. By pre-determining the quality of tandem MS data before subjecting them to protein identification algorithms, spurious protein predictions due to poor tandem MS data are avoided, giving scientists greater confidence in the
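
    The sketch below follows the description above only loosely (binning of the spectrum, normalisation and peak handling are simplified): it convolves a binned intensity vector with its time-reversed copy via the FFT, removes the mean ("DC") component, and returns the ratio of the mid-point of the result to the largest remaining value.

      import numpy as np

      def spectrum_quality_score(intensities):
          """Mid-point-to-rest ratio of the spectrum's self-convolution."""
          x = np.asarray(intensities, dtype=float)
          x = x - x.mean()                              # remove the "DC" component
          n = 2 * len(x) - 1                            # full linear convolution length
          conv = np.real(np.fft.ifft(np.fft.fft(x, n) * np.fft.fft(x[::-1], n)))
          mid = len(x) - 1                              # mid point of the result
          rest = np.abs(np.delete(conv, mid))
          return conv[mid] / (rest.max() + 1e-12)

      # usage: score = spectrum_quality_score(np.random.rand(512))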

  1. Visualizing Vector Fields Using Line Integral Convolution and Dye Advection

    Science.gov (United States)

    Shen, Han-Wei; Johnson, Christopher R.; Ma, Kwan-Liu

    1996-01-01

    We present local and global techniques to visualize three-dimensional vector field data. Using the Line Integral Convolution (LIC) method to image the global vector field, our new algorithm allows the user to introduce colored 'dye' into the vector field to highlight local flow features. A fast algorithm is proposed that quickly recomputes the dyed LIC images. In addition, we introduce volume rendering methods that can map the LIC texture on any contour surface and/or translucent region defined by additional scalar quantities, and can follow the advection of colored dye throughout the volume.
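
    A compact sketch of line integral convolution on a 2D vector field is given below (fixed step size, short streamlines and nearest-neighbour texture lookup are simplifications; the dye-advection and volume-rendering parts of the paper are not sketched).

      import numpy as np

      def lic(vx, vy, noise, length=20, h=0.5):
          """Average a noise texture along short streamlines of (vx, vy)."""
          H, W = noise.shape
          out = np.zeros_like(noise)
          for y in range(H):
              for x in range(W):
                  acc, cnt = 0.0, 0
                  for sign in (1.0, -1.0):              # trace both directions
                      px, py = float(x), float(y)
                      for _ in range(length):
                          i, j = int(round(py)) % H, int(round(px)) % W
                          acc += noise[i, j]
                          cnt += 1
                          u, v = vx[i, j], vy[i, j]
                          norm = np.hypot(u, v) + 1e-9
                          px += sign * h * u / norm
                          py += sign * h * v / norm
                  out[y, x] = acc / cnt
          return out

      # usage: a field rotating about the image centre, applied to white noise
      n = 64
      yy, xx = np.mgrid[0:n, 0:n].astype(float) - n / 2
      image = lic(-yy, xx, np.random.rand(n, n))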

  2. Faster GPU-based convolutional gridding via thread coarsening

    Science.gov (United States)

    Merry, B.

    2016-07-01

    Convolutional gridding is a processor-intensive step in interferometric imaging. While it is possible to use graphics processing units (GPUs) to accelerate this operation, existing methods use only a fraction of the available flops. We apply thread coarsening to improve the efficiency of an existing algorithm, and observe performance gains of up to 3.2 × for single-polarization gridding and 1.9 × for quad-polarization gridding on a GeForce GTX 980, and smaller but still significant gains on a Radeon R9 290X.

  3. Low-dose CT denoising with convolutional neural network

    CERN Document Server

    Chen, Hu; Zhang, Weihua; Liao, Peixi; Li, Ke; Zhou, Jiliu; Wang, Ge

    2016-01-01

    To reduce the potential radiation risk, low-dose CT has attracted much attention. However, simply lowering the radiation dose will lead to significant deterioration of the image quality. In this paper, we propose a noise reduction method for low-dose CT via a deep neural network without accessing the original projection data. A deep convolutional neural network is trained to transform low-dose CT images towards normal-dose CT images, patch by patch. Visual and quantitative evaluation demonstrates the competitive performance of the proposed method.
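
    As a rough sketch of the patch-to-patch mapping described above, a small residual convolutional network could be trained with an MSE loss on (low-dose, normal-dose) patch pairs. The depth, channel width and residual formulation are illustrative assumptions, not the architecture of the cited work; the random tensors stand in for real CT patches.

    ```python
    import torch
    import torch.nn as nn

    class DenoiseCNN(nn.Module):
        def __init__(self, channels=32, depth=5):
            super().__init__()
            layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
            for _ in range(depth - 2):
                layers += [nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True)]
            layers += [nn.Conv2d(channels, 1, 3, padding=1)]
            self.net = nn.Sequential(*layers)

        def forward(self, low_dose_patch):
            # Predict a residual and add it back (residual learning is a common choice).
            return low_dose_patch + self.net(low_dose_patch)

    model = DenoiseCNN()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.MSELoss()

    # One training step on a (low-dose, normal-dose) patch pair; random stand-ins here.
    low = torch.randn(8, 1, 64, 64)    # stand-in for low-dose CT patches
    full = torch.randn(8, 1, 64, 64)   # stand-in for matching normal-dose patches
    optimizer.zero_grad()
    loss = loss_fn(model(low), full)
    loss.backward()
    optimizer.step()
    ```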

  4. A Shortest Dependency Path Based Convolutional Neural Network for Protein-Protein Relation Extraction

    Directory of Open Access Journals (Sweden)

    Lei Hua

    2016-01-01

    Full Text Available The state-of-the-art methods for protein-protein interaction (PPI) extraction are primarily based on kernel methods, and their performance strongly depends on handcrafted features. In this paper, we tackle PPI extraction by using convolutional neural networks (CNN) and propose a shortest dependency path based CNN (sdpCNN) model. The proposed method (1) only takes the sdp and word embeddings as input and (2) avoids bias from feature selection by using CNN. We performed experiments on the standard Aimed and BioInfer datasets, and the experimental results demonstrated that our approach outperformed state-of-the-art kernel-based methods. In particular, by tracking the sdpCNN model, we find that sdpCNN extracts key features automatically, and it is verified that pretrained word embeddings are crucial in the PPI task.

  5. A Shortest Dependency Path Based Convolutional Neural Network for Protein-Protein Relation Extraction.

    Science.gov (United States)

    Hua, Lei; Quan, Chanqin

    2016-01-01

    The state-of-the-art methods for protein-protein interaction (PPI) extraction are primarily based on kernel methods, and their performance strongly depends on handcrafted features. In this paper, we tackle PPI extraction by using convolutional neural networks (CNN) and propose a shortest dependency path based CNN (sdpCNN) model. The proposed method (1) only takes the sdp and word embeddings as input and (2) avoids bias from feature selection by using CNN. We performed experiments on the standard Aimed and BioInfer datasets, and the experimental results demonstrated that our approach outperformed state-of-the-art kernel-based methods. In particular, by tracking the sdpCNN model, we find that sdpCNN extracts key features automatically, and it is verified that pretrained word embeddings are crucial in the PPI task.
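
    A minimal sketch of the sdpCNN idea, under assumed dimensions: tokens along the shortest dependency path are embedded (ideally with pretrained word embeddings), passed through 1D convolutions, max-pooled over the path and classified. The vocabulary size, embedding dimension and filter settings below are placeholders, not the configuration of the cited model.

    ```python
    import torch
    import torch.nn as nn

    class SdpCNN(nn.Module):
        def __init__(self, vocab_size=10000, emb_dim=100, n_filters=100, kernel=3, n_classes=2):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)   # ideally loaded with pretrained embeddings
            self.conv = nn.Conv1d(emb_dim, n_filters, kernel, padding=1)
            self.fc = nn.Linear(n_filters, n_classes)

        def forward(self, sdp_token_ids):                    # (batch, path_length)
            x = self.embed(sdp_token_ids).transpose(1, 2)    # (batch, emb_dim, path_length)
            x = torch.relu(self.conv(x))
            x = torch.max(x, dim=2).values                   # max-pool over the path
            return self.fc(x)                                # interaction / no-interaction logits

    logits = SdpCNN()(torch.randint(0, 10000, (4, 7)))       # 4 dependency paths of 7 tokens
    ```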

  6. An adaptive deep convolutional neural network for rolling bearing fault diagnosis

    Science.gov (United States)

    Fuan, Wang; Hongkai, Jiang; Haidong, Shao; Wenjing, Duan; Shuaipeng, Wu

    2017-09-01

    The working conditions of rolling bearings are usually very complex, which makes it difficult to diagnose rolling bearing faults. In this paper, a novel method called the adaptive deep convolutional neural network (CNN) is proposed for rolling bearing fault diagnosis. Firstly, to get rid of manual feature extraction, the deep CNN model is initialized for automatic feature learning. Secondly, to adapt to different signal characteristics, the main parameters of the deep CNN model are determined with a particle swarm optimization method. Thirdly, to evaluate the feature learning ability of the proposed method, t-distributed stochastic neighbor embedding (t-SNE) is further adopted to visualize the hierarchical feature learning process. The proposed method is applied to diagnose rolling bearing faults, and the results confirm that it is more effective and robust than other intelligent methods.
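
    The particle-swarm step can be pictured as follows: each particle encodes a candidate set of CNN hyper-parameters, and its fitness would be the validation error obtained by training the network with those parameters. In the sketch below the fitness function is a synthetic stand-in (labelled as such), and the inertia and acceleration coefficients are conventional textbook values, not those of the cited method.

    ```python
    import numpy as np

    def objective(params):
        kernel, filters = params
        # Hypothetical stand-in for "train the CNN with these parameters, return validation error".
        return (kernel - 5) ** 2 + 0.01 * (filters - 64) ** 2

    rng = np.random.default_rng(0)
    n_particles, n_iters = 10, 30
    lo, hi = np.array([2.0, 8.0]), np.array([9.0, 128.0])      # search ranges (assumed)

    pos = rng.uniform(lo, hi, size=(n_particles, 2))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()

    for _ in range(n_iters):
        r1, r2 = rng.random((n_particles, 1)), rng.random((n_particles, 1))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()

    print("selected (kernel size, number of filters):", np.round(gbest).astype(int))
    ```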

  7. Object class segmentation of RGB-D video using recurrent convolutional neural networks.

    Science.gov (United States)

    Pavel, Mircea Serban; Schulz, Hannes; Behnke, Sven

    2017-04-01

    Object class segmentation is a computer vision task which requires labeling each pixel of an image with the class of the object it belongs to. Deep convolutional neural networks (DNN) are able to learn and take advantage of local spatial correlations required for this task. They are, however, restricted by their small, fixed-sized filters, which limits their ability to learn long-range dependencies. Recurrent Neural Networks (RNN), on the other hand, do not suffer from this restriction. Their iterative interpretation allows them to model long-range dependencies by propagating activity. This property is especially useful when labeling video sequences, where both spatial and temporal long-range dependencies occur. In this work, a novel RNN architecture for object class segmentation is presented. We investigate several ways to train such a network. We evaluate our models on the challenging NYU Depth v2 dataset for object class segmentation and obtain competitive results. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Time and Frequency Domain Optimization with Shift, Convolution and Smoothness in Factor Analysis Type Decompositions

    DEFF Research Database (Denmark)

    Madsen, Kristoffer Hougaard; Hansen, Lars Kai; Mørup, Morten

    2009-01-01

    We propose the Time Frequency Gradient Method (TFGM) which forms a framework for optimization of models that are constrained in the time domain while having efficient representations in the frequency domain. Since the constraints in the time domain in general are not transparent in a frequency...... representation we demonstrate how the class of objective functions that are separable in either time or frequency instances allow the gradient in the time or frequency domain to be converted to the opposing domain. We further demonstrate the usefulness of this framework for three different models; Shifted Non......-negative Matrix Factorization, Convolutive Sparse Coding as well as Smooth and Sparse Matrix Factorization. Matlab implementation of the proposed algorithms are available for download at www.erpwavelab.org....

  9. Digital image correlation based on a fast convolution strategy

    Science.gov (United States)

    Yuan, Yuan; Zhan, Qin; Xiong, Chunyang; Huang, Jianyong

    2017-10-01

    In recent years, the efficiency of digital image correlation (DIC) methods has attracted increasing attention because of its increasing importance for many engineering applications. Based on the classical affine optical flow (AOF) algorithm and the well-established inverse compositional Gauss-Newton algorithm, which is essentially a natural extension of the AOF algorithm under a nonlinear iterative framework, this paper develops a set of fast convolution-based DIC algorithms for high-efficiency subpixel image registration. Using a well-developed fast convolution technique, the set of algorithms establishes a series of global data tables (GDTs) over the digital images, which allows the reduction of the computational complexity of DIC significantly. Using the pre-calculated GDTs, the subpixel registration calculations can be implemented efficiently in a look-up-table fashion. Both numerical simulation and experimental verification indicate that the set of algorithms significantly enhances the computational efficiency of DIC, especially in the case of a dense data sampling for the digital images. Because the GDTs need to be computed only once, the algorithms are also suitable for efficiently coping with image sequences that record the time-varying dynamics of specimen deformations.

  10. Discriminative Unsupervised Feature Learning with Exemplar Convolutional Neural Networks.

    Science.gov (United States)

    Dosovitskiy, Alexey; Fischer, Philipp; Springenberg, Jost Tobias; Riedmiller, Martin; Brox, Thomas

    2016-09-01

    Deep convolutional networks have proven to be very successful in learning task specific features that allow for unprecedented performance on various computer vision tasks. Training of such networks follows mostly the supervised learning paradigm, where sufficiently many input-output pairs are required for training. Acquisition of large training sets is one of the key challenges, when approaching a new task. In this paper, we aim for generic feature learning and present an approach for training a convolutional network using only unlabeled data. To this end, we train the network to discriminate between a set of surrogate classes. Each surrogate class is formed by applying a variety of transformations to a randomly sampled 'seed' image patch. In contrast to supervised network training, the resulting feature representation is not class specific. It rather provides robustness to the transformations that have been applied during training. This generic feature representation allows for classification results that outperform the state of the art for unsupervised learning on several popular datasets (STL-10, CIFAR-10, Caltech-101, Caltech-256). While features learned with our approach cannot compete with class specific features from supervised training on a classification task, we show that they are advantageous on geometric matching problems, where they also outperform the SIFT descriptor.
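
    The surrogate-class construction can be sketched as follows: sample one seed patch, then apply random rotations, scalings, shifts, flips and contrast changes to obtain the members of one surrogate class. The transformation ranges and patch size below are illustrative assumptions, not the augmentation schedule of the cited work.

    ```python
    import numpy as np
    from scipy import ndimage

    def _center_fit(img, size):
        """Pad (reflect) and centre-crop so the output is exactly (size, size)."""
        pad = max(size - img.shape[0], size - img.shape[1], 0)
        if pad > 0:
            img = np.pad(img, pad, mode="reflect")
        cy, cx = img.shape[0] // 2, img.shape[1] // 2
        return img[cy - size // 2: cy - size // 2 + size,
                   cx - size // 2: cx - size // 2 + size]

    def make_surrogate_class(image, patch_size=32, n_variants=16, rng=None):
        if rng is None:
            rng = np.random.default_rng(0)
        y = rng.integers(0, image.shape[0] - patch_size)
        x = rng.integers(0, image.shape[1] - patch_size)
        seed_patch = image[y:y + patch_size, x:x + patch_size]      # the 'seed' patch

        variants = []
        for _ in range(n_variants):
            v = ndimage.rotate(seed_patch, rng.uniform(-20, 20), reshape=False, mode="reflect")
            v = ndimage.zoom(v, rng.uniform(0.8, 1.2), mode="reflect")
            v = ndimage.shift(v, rng.uniform(-3, 3, size=2), mode="reflect")
            if rng.random() < 0.5:
                v = np.fliplr(v)
            v = v * rng.uniform(0.7, 1.3)                           # simple contrast perturbation
            variants.append(_center_fit(v, patch_size))
        return np.stack(variants)                                   # all variants share one surrogate label

    surrogate_class = make_surrogate_class(np.random.rand(256, 256))
    ```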

  11. Robust Visual Tracking via Convolutional Networks Without Training.

    Science.gov (United States)

    Kaihua Zhang; Qingshan Liu; Yi Wu; Ming-Hsuan Yang

    2016-04-01

    Deep networks have been successfully applied to visual tracking by learning a generic representation offline from numerous training images. However, the offline training is time-consuming and the learned generic representation may be less discriminative for tracking specific objects. In this paper, we show that, even without offline training with a large amount of auxiliary data, simple two-layer convolutional networks can be powerful enough to learn robust representations for visual tracking. In the first frame, we extract a set of normalized patches from the target region as fixed filters, which integrate a series of adaptive contextual filters surrounding the target to define a set of feature maps in the subsequent frames. These maps measure similarities between each filter and useful local intensity patterns across the target, thereby encoding its local structural information. Furthermore, all the maps together form a global representation, via which the inner geometric layout of the target is also preserved. A simple soft shrinkage method that suppresses noisy values below an adaptive threshold is employed to de-noise the global representation. Our convolutional networks have a lightweight structure and perform favorably against several state-of-the-art methods on the recent tracking benchmark data set with 50 challenging videos.
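
    A rough sketch of the training-free representation: normalised patches cropped from the first-frame target serve as fixed convolution filters, each search region is correlated with them, and the resulting maps are denoised by soft shrinkage with an adaptive threshold. The patch size, stride and median-based threshold below are assumptions, not the paper's exact choices.

    ```python
    import numpy as np
    from scipy.signal import correlate2d

    def extract_filters(target, patch=6, stride=6):
        filters = []
        for i in range(0, target.shape[0] - patch + 1, stride):
            for j in range(0, target.shape[1] - patch + 1, stride):
                p = target[i:i + patch, j:j + patch].astype(float)
                p -= p.mean()
                p /= np.linalg.norm(p) + 1e-12        # normalised patch used as a fixed filter
                filters.append(p)
        return filters

    def feature_maps(region, filters):
        maps = np.stack([correlate2d(region, f, mode="valid") for f in filters])
        thr = np.median(np.abs(maps))                 # adaptive threshold (an assumption)
        return np.sign(maps) * np.maximum(np.abs(maps) - thr, 0.0)   # soft shrinkage

    target = np.random.rand(24, 24)                   # stand-in for the first-frame target region
    search = np.random.rand(40, 40)                   # stand-in for a search region in a later frame
    representation = feature_maps(search, extract_filters(target))
    ```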

  12. Convolutional neural network features based change detection in satellite images

    Science.gov (United States)

    Mohammed El Amin, Arabi; Liu, Qingjie; Wang, Yunhong

    2016-07-01

    With the popular use of high resolution remote sensing (HRRS) satellite images, a huge research effort has been placed on the change detection (CD) problem. An effective feature selection method can significantly boost the final result. While it has proven difficult to hand-design features that effectively capture high- and mid-level representations, recent developments in machine learning (deep learning) sidestep this problem by learning hierarchical representations in an unsupervised manner directly from data, without human intervention. In this letter, we propose approaching the change detection problem from a feature learning perspective. A novel deep Convolutional Neural Networks (CNN) features based HR satellite image change detection method is proposed. The main idea is to produce a change detection map directly from two images using a pretrained CNN, which avoids the limited performance of hand-crafted features. Firstly, CNN features are extracted through different convolutional layers. Then, a concatenation step is applied after a normalization step, resulting in a single higher dimensional feature map. Finally, a change map is computed using the pixel-wise Euclidean distance. Our method has been validated on real bitemporal HRRS satellite images through qualitative and quantitative analyses. The results obtained confirm the interest of the proposed method.
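
    The pipeline can be sketched with an off-the-shelf pretrained backbone: extract intermediate convolutional features for both dates, normalise them, and take the pixel-wise Euclidean distance as the change map. The VGG16 backbone, the chosen layer and the simple threshold are illustrative assumptions (and a recent torchvision with downloadable ImageNet weights is assumed), not the configuration of the cited method.

    ```python
    import torch
    import torch.nn.functional as F
    import torchvision

    # Intermediate convolutional features of a pretrained VGG16 (weights downloaded on first use).
    backbone = torchvision.models.vgg16(weights="IMAGENET1K_V1").features[:16].eval()

    def deep_features(img):                       # img: (1, 3, H, W), values in [0, 1]
        # (ImageNet mean/std normalisation omitted here for brevity.)
        with torch.no_grad():
            f = backbone(img)
        return F.normalize(f, dim=1)              # channel-wise L2 normalisation

    img_t1 = torch.rand(1, 3, 256, 256)           # stand-ins for the co-registered bitemporal images
    img_t2 = torch.rand(1, 3, 256, 256)

    f1, f2 = deep_features(img_t1), deep_features(img_t2)
    change_map = torch.linalg.norm(f1 - f2, dim=1)                 # pixel-wise Euclidean distance
    change_map = F.interpolate(change_map[None], size=img_t1.shape[-2:], mode="bilinear")[0, 0]
    change_mask = change_map > change_map.mean() + 2 * change_map.std()   # naive threshold
    ```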

  13. Real-Time Video Convolutional Face Finder on Embedded Platforms

    Directory of Open Access Journals (Sweden)

    Franck Mamalet

    2007-03-01

    Full Text Available A high-level optimization methodology is applied for implementing the well-known convolutional face finder (CFF) algorithm for real-time applications on mobile phones, such as teleconferencing, advanced user interfaces, image indexing, and security access control. CFF is based on a feature extraction and classification technique which consists of a pipeline of convolutions and subsampling operations. The design of embedded systems requires a good trade-off between performance and code size due to the limited amount of available resources. The methodology followed copes with the main drawbacks of the original implementation of CFF, such as floating-point computation and memory allocation, in order to allow parallelism exploitation and perform algorithm optimizations. Experimental results show that our embedded face detection system can accurately locate faces with less computational load and memory cost. It runs on a 275 MHz Starcore DSP at 35 QCIF images/s with state-of-the-art detection rates and very low false alarm rates.

  14. A Mathematical Motivation for Complex-Valued Convolutional Networks.

    Science.gov (United States)

    Tygert, Mark; Bruna, Joan; Chintala, Soumith; LeCun, Yann; Piantino, Serkan; Szlam, Arthur

    2016-05-01

    A complex-valued convolutional network (convnet) implements the repeated application of the following composition of three operations, recursively applying the composition to an input vector of nonnegative real numbers: (1) convolution with complex-valued vectors, followed by (2) taking the absolute value of every entry of the resulting vectors, followed by (3) local averaging. For processing real-valued random vectors, complex-valued convnets can be viewed as data-driven multiscale windowed power spectra, data-driven multiscale windowed absolute spectra, data-driven multiwavelet absolute values, or (in their most general configuration) data-driven nonlinear multiwavelet packets. Indeed, complex-valued convnets can calculate multiscale windowed spectra when the convnet filters are windowed complex-valued exponentials. Standard real-valued convnets, using rectified linear units (ReLUs), sigmoidal (e.g., logistic or tanh) nonlinearities, or max pooling, for example, do not obviously exhibit the same exact correspondence with data-driven wavelets (whereas for complex-valued convnets, the correspondence is much more than just a vague analogy). Courtesy of the exact correspondence, the remarkably rich and rigorous body of mathematical analysis for wavelets applies directly to (complex-valued) convnets.
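
    The correspondence is easy to see numerically: convolving a signal with windowed complex exponentials, taking absolute values and locally averaging yields a windowed absolute spectrum, i.e., the fixed (data-independent) special case of the complex-valued convnet described above. The window length, frequencies and averaging width below are arbitrary choices.

    ```python
    import numpy as np

    def complex_conv_layer(x, window_len=64, n_freqs=16, avg_width=8):
        t = np.arange(window_len)
        window = np.hanning(window_len)
        outputs = []
        for k in range(n_freqs):
            # (1) convolution with a windowed complex exponential
            filt = window * np.exp(-2j * np.pi * k * t / window_len)
            y = np.convolve(x, filt, mode="valid")
            # (2) absolute value of every entry
            y = np.abs(y)
            # (3) local averaging
            y = np.convolve(y, np.ones(avg_width) / avg_width, mode="valid")
            outputs.append(y)
        return np.stack(outputs)       # rows form a windowed absolute spectrum over time

    x = np.sin(2 * np.pi * 0.05 * np.arange(1024)) + 0.1 * np.random.randn(1024)
    spectrogram_like = complex_conv_layer(x)
    ```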

  15. Convolutional Network Coding Based on Matrix Power Series Representation

    CERN Document Server

    Guo, Wangmei; Sun, Qifu Tyler

    2011-01-01

    In this paper, convolutional network coding is formulated by means of matrix power series representation of the local encoding kernel (LEK) matrices and global encoding kernel (GEK) matrices to establish its theoretical fundamentals for practical implementations. From the encoding perspective, the GEKs of a convolutional network code (CNC) are shown to be uniquely determined by its LEK matrix $K(z)$ if $K_0$, the constant coefficient matrix of $K(z)$, is nilpotent. This will simplify the CNC design because a nilpotent $K_0$ suffices to guarantee a unique set of GEKs. Besides, the relation between coding topology and $K(z)$ is also discussed. From the decoding perspective, the main theme is to justify that the first $L+1$ terms of the GEK matrix $F(z)$ at a sink $r$ suffice to check whether the code is decodable at $r$ with delay $L$ and to start decoding if so. The concomitant decoding scheme avoids dealing with $F(z)$, which may contain infinite terms, as a whole and hence reduces the complexity of decodabil...

  16. Fluence-convolution broad-beam (FCBB) dose calculation.

    Science.gov (United States)

    Lu, Weiguo; Chen, Mingli

    2010-12-07

    IMRT optimization requires a fast yet relatively accurate algorithm to calculate the iteration dose with small memory demand. In this paper, we present a dose calculation algorithm that approaches these goals. By decomposing the infinitesimal pencil beam (IPB) kernel into the central axis (CAX) component and lateral spread function (LSF) and taking the beam's eye view (BEV), we established a non-voxel and non-beamlet-based dose calculation formula. Both LSF and CAX are determined by a commissioning procedure using the collapsed-cone convolution/superposition (CCCS) method as the standard dose engine. The proposed dose calculation involves a 2D convolution of a fluence map with LSF followed by ray tracing based on the CAX lookup table with radiological distance and divergence correction, resulting in complexity of O(N^3) both spatially and temporally. This simple algorithm is orders of magnitude faster than the CCCS method. Without pre-calculation of beamlets, its implementation is also orders of magnitude smaller than the conventional voxel-based beamlet-superposition (VBS) approach. We compared the presented algorithm with the CCCS method using simulated and clinical cases. The agreement was generally within 3% for a homogeneous phantom and 5% for heterogeneous and clinical cases. Combined with the 'adaptive full dose correction', the algorithm is well suited for calculating the iteration dose during IMRT optimization.
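
    Stripped of the divergence and radiological-distance corrections, the core of the FCBB idea reduces to a 2D convolution of the fluence map with the LSF followed by scaling with a CAX depth-dose lookup. The sketch below uses toy LSF and CAX curves and omits the commissioning against a CCCS engine, so it illustrates the structure of the calculation rather than a usable dose engine.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def fcbb_like_dose(fluence, lsf, cax_depth_dose):
        """fluence: (X, Y) map; lsf: small 2D kernel; cax_depth_dose: (Z,) lookup table."""
        lateral = fftconvolve(fluence, lsf, mode="same")           # 2D fluence convolved with LSF
        # Broadcast the CAX lookup along depth: dose[z, x, y] = CAX(z) * lateral(x, y).
        return cax_depth_dose[:, None, None] * lateral[None, :, :]

    fluence = np.zeros((64, 64)); fluence[24:40, 24:40] = 1.0      # a square open field (toy)
    xx, yy = np.meshgrid(np.arange(-7, 8), np.arange(-7, 8))
    lsf = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2)); lsf /= lsf.sum()  # toy Gaussian lateral spread
    depth = np.arange(100)                                         # depth samples (toy units)
    cax = np.exp(-0.006 * depth) * (1 - np.exp(-0.3 * depth))      # toy build-up + attenuation curve
    dose = fcbb_like_dose(fluence, lsf, cax)                       # (Z, X, Y) dose grid
    ```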

  17. Asymptotic formula for the moments of Bernoulli convolutions

    Directory of Open Access Journals (Sweden)

    E. A. Timofeev

    2016-01-01

    Full Text Available Abstract. Asymptotic Formula for the Moments of Bernoulli Convolutions. Timofeev E. A. Received February 8, 2016. For each λ, 0 < λ < 1, define the random variable $Y_\lambda = (1-\lambda)\sum_{n=0}^{\infty} \xi_n \lambda^n$, where the $\xi_n$ are independent random variables with $P\{\xi_n = 0\} = P\{\xi_n = 1\} = \tfrac12$. The distribution of $Y_\lambda$ is called a symmetric Bernoulli convolution. The main result of this paper is the asymptotic formula $M_n = \mathbb{E}\,Y_\lambda^n = n^{\log_\lambda 2}\, 2^{\log_\lambda(1-\lambda) + 0.5\log_\lambda 2 - 0.5}\, e^{\tau(-\log_\lambda n)}\bigl(1 + O(n^{-0.99})\bigr)$, where $\tau$ is a 1-periodic function whose Fourier coefficients are expressed through the Riemann zeta function $\zeta(z)$. The article is published in the author's wording.

  18. Photon beam convolution using polyenergetic energy deposition kernels

    Energy Technology Data Exchange (ETDEWEB)

    Hoban, P.W.; Murray, D.C.; Round, W.H. (Waikato Univ., Hamilton (New Zealand). Dept. of Physics)

    1994-04-01

    In photon beam convolution calculations where polyenergetic energy deposition kernels (EDKs) are used, the primary photon energy spectrum should be correctly accounted for in Monte Carlo generation of EDKs. This requires the probability of interaction, determined by the linear attenuation coefficient, μ, to be taken into account when primary photon interactions are forced to occur at the EDK origin. The use of primary and scattered EDKs generated with a fixed photon spectrum can give rise to an error in the dose calculation due to neglecting the effects of beam hardening with depth. The proportion of primary photon energy that is transferred to secondary electrons increases with depth of interaction, due to the increase in the ratio μ_ab/μ as the beam hardens. Convolution depth-dose curves calculated using polyenergetic EDKs generated for the primary photon spectra which exist at depths of 0, 20 and 40 cm in water show a fall-off which is too steep when compared with EGS4 Monte Carlo results. A beam hardening correction factor applied to primary and scattered 0 cm EDKs, based on the ratio of kerma to terma at each depth, gives primary, scattered and total dose in good agreement with Monte Carlo results. (Author).

  19. Single-Cell Phenotype Classification Using Deep Convolutional Neural Networks.

    Science.gov (United States)

    Dürr, Oliver; Sick, Beate

    2016-10-01

    Deep learning methods are currently outperforming traditional state-of-the-art computer vision algorithms in diverse applications and recently even surpassed human performance in object recognition. Here we demonstrate the potential of deep learning methods for high-content screening-based phenotype classification. We trained a deep learning classifier in the form of convolutional neural networks with approximately 40,000 publicly available single-cell images from samples treated with compounds from four classes known to lead to different phenotypes. The input data consisted of multichannel images. The construction of appropriate feature definitions was part of the training and carried out by the convolutional network, without the need for expert knowledge or handcrafted features. We compare our results against the recent state-of-the-art pipeline in which predefined features are extracted from each cell using specialized software and then fed into various machine learning algorithms (support vector machine, Fisher linear discriminant, random forest) for classification. The performance of all classification approaches is evaluated on an untouched test image set with known phenotype classes. Compared to the best reference machine learning algorithm, the misclassification rate is reduced from 8.9% to 6.6%.

  20. Enhancing Neutron Beam Production with a Convoluted Moderator

    Energy Technology Data Exchange (ETDEWEB)

    Iverson, Erik B [ORNL; Baxter, David V [Center for the Exploration of Energy and Matter, Indiana University; Muhrer, Guenter [Los Alamos National Laboratory (LANL); Ansell, Stuart [ISIS Facility, Rutherford Appleton Laboratory (ISIS); Gallmeier, Franz X [ORNL; Dalgliesh, Robert [ISIS Facility, Rutherford Appleton Laboratory (ISIS); Lu, Wei [ORNL; Kaiser, Helmut [Center for the Exploration of Energy and Matter, Indiana University

    2014-10-01

    We describe a new concept for a neutron moderating assembly resulting in the more efficient production of slow neutron beams. The Convoluted Moderator, a heterogeneous stack of interleaved moderating material and nearly transparent single-crystal spacers, is a directionally-enhanced neutron beam source, improving beam effectiveness over an angular range comparable to the range accepted by neutron beam lines and guides. We have demonstrated gains of 50% in slow neutron intensity for a given fast neutron production rate while simultaneously reducing the wavelength-dependent emission time dispersion by 25%, both coming from a geometric effect in which the neutron beam lines view a large surface area of moderating material in a relatively small volume. Additionally, we have confirmed a Bragg-enhancement effect arising from coherent scattering within the single-crystal spacers. We have not observed hypothesized refractive effects leading to additional gains at long wavelength. In addition to confirmation of the validity of the Convoluted Moderator concept, our measurements provide a series of benchmark experiments suitable for developing simulation and analysis techniques for practical optimization and eventual implementation at slow neutron source facilities.

  1. An optimal nonorthogonal separation of the anisotropic Gaussian convolution filter.

    Science.gov (United States)

    Lampert, Christoph H; Wirjadi, Oliver

    2006-11-01

    We give an analytical and geometrical treatment of what it means to separate a Gaussian kernel along arbitrary axes in R(n), and we present a separation scheme that allows us to efficiently implement anisotropic Gaussian convolution filters for data of arbitrary dimensionality. Based on our previous analysis we show that this scheme is optimal with regard to the number of memory accesses and interpolation operations needed. The proposed method relies on nonorthogonal convolution axes and works completely in image space. Thus, it avoids the need for a fast Fourier transform (FFT)-subroutine. Depending on the accuracy and speed requirements, different interpolation schemes and methods to implement the one-dimensional Gaussian (finite impulse response and infinite impulse response) can be integrated. Special emphasis is put on analyzing the performance and accuracy of the new method. In particular, we show that without any special optimization of the source code, it can perform anisotropic Gaussian filtering faster than methods relying on the FFT.
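
    For orientation, the axis-aligned case is already separable into orthogonal 1D passes, as the short example below shows; a rotated anisotropic Gaussian can be emulated (inefficiently) by rotating, filtering and rotating back, which is useful only as a reference when checking a faster nonorthogonal-separation implementation such as the one proposed here. The sigmas and angle are arbitrary choices.

    ```python
    import numpy as np
    from scipy import ndimage

    image = np.random.rand(256, 256)

    # Axis-aligned anisotropic Gaussian: separable into 1D passes along the coordinate axes.
    # sigma = (2.0, 8.0): weak smoothing along rows, strong smoothing along columns.
    axis_aligned = ndimage.gaussian_filter(image, sigma=(2.0, 8.0))

    # Brute-force reference for a *rotated* anisotropic Gaussian: rotate, filter, rotate back.
    theta = 30.0
    reference = ndimage.rotate(
        ndimage.gaussian_filter(
            ndimage.rotate(image, theta, reshape=False, mode="reflect"),
            sigma=(2.0, 8.0)),
        -theta, reshape=False, mode="reflect")
    ```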

  2. Thermalnet: a Deep Convolutional Network for Synthetic Thermal Image Generation

    Science.gov (United States)

    Kniaz, V. V.; Gorbatsevich, V. S.; Mizginov, V. A.

    2017-05-01

    Deep convolutional neural networks have dramatically changed the landscape of modern computer vision. Nowadays, methods based on deep neural networks show the best performance among image recognition and object detection algorithms. While the polishing of network architectures has received a lot of scholarly attention, from a practical point of view the preparation of a large image dataset for successful training of a neural network has become one of the major challenges. This challenge is particularly profound for image recognition in wavelengths lying outside the visible spectrum. For example, no infrared or radar image datasets large enough for successful training of a deep neural network are available to date in the public domain. Recent advances in deep neural networks prove that they are also capable of arbitrary image transformations such as super-resolution image generation, grayscale image colorisation and imitation of the style of a given artist. Thus a natural question arises: how can deep neural networks be used for augmentation of existing large image datasets? This paper is focused on the development of the Thermalnet deep convolutional neural network for augmentation of existing large visible image datasets with synthetic thermal images. The Thermalnet network architecture is inspired by colorisation deep neural networks.

  3. Trainable Convolution Filters and Their Application to Face Recognition.

    Science.gov (United States)

    Kumar, Ritwik; Banerjee, Arunava; Vemuri, Baba C; Pfister, Hanspeter

    2012-07-01

    In this paper, we present a novel image classification system that is built around a core of trainable filter ensembles that we call Volterra kernel classifiers. Our system treats images as a collection of possibly overlapping patches and is composed of three components: (1) A scheme for single patch classification that seeks a smooth, possibly nonlinear, functional mapping of the patches into a range space, where patches of the same class are close to one another, while patches from different classes are far apart in the L_2 sense. This mapping is accomplished using trainable convolution filters (or Volterra kernels) where the convolution kernel can be of any shape or order. (2) Given a corpus of Volterra classifiers with various kernel orders and shapes for each patch, a boosting scheme for automatically selecting the best weighted combination of the classifiers to achieve a higher per-patch classification rate. (3) A scheme for aggregating the classification information obtained for each patch via voting for the parent image classification. We demonstrate the effectiveness of the proposed technique using face recognition as an application area and provide extensive experiments on the Yale, CMU PIE, Extended Yale B, Multi-PIE, and MERL Dome benchmark face data sets. We call the Volterra kernel classifiers applied to face recognition Volterrafaces. We show that our technique, which falls into the broad class of embedding-based face image discrimination methods, consistently outperforms various state-of-the-art methods in the same category.

  4. Video-based face recognition via convolutional neural networks

    Science.gov (United States)

    Bao, Tianlong; Ding, Chunhui; Karmoshi, Saleem; Zhu, Ming

    2017-06-01

    Face recognition has been widely studied recently, while video-based face recognition still remains a challenging task because of the low quality and large intra-class variation of face images captured from video. In this paper, we focus on two scenarios of video-based face recognition: 1) Still-to-Video (S2V) face recognition, i.e., querying a still face image against a gallery of video sequences; 2) Video-to-Still (V2S) face recognition, in contrast to the S2V scenario. A novel method is proposed in this paper to transfer still and video face images to a Euclidean space by a carefully designed convolutional neural network, and Euclidean metrics are then used to measure the distance between still and video images. Identities of still and video images that are grouped as pairs are used as supervision. In the training stage, a joint loss function that measures the Euclidean distance between the predicted features of training pairs and expanding vectors of still images is optimized to minimize the intra-class variation, while the inter-class variation is guaranteed due to the large margin of still images. Transferred features are finally learned via the designed convolutional neural network. Experiments are performed on the COX face dataset. Experimental results show that our method achieves reliable performance compared with other state-of-the-art methods.

  5. Real-Time Video Convolutional Face Finder on Embedded Platforms

    Directory of Open Access Journals (Sweden)

    Mamalet Franck

    2007-01-01

    Full Text Available A high-level optimization methodology is applied for implementing the well-known convolutional face finder (CFF) algorithm for real-time applications on mobile phones, such as teleconferencing, advanced user interfaces, image indexing, and security access control. CFF is based on a feature extraction and classification technique which consists of a pipeline of convolutions and subsampling operations. The design of embedded systems requires a good trade-off between performance and code size due to the limited amount of available resources. The methodology followed copes with the main drawbacks of the original implementation of CFF, such as floating-point computation and memory allocation, in order to allow parallelism exploitation and perform algorithm optimizations. Experimental results show that our embedded face detection system can accurately locate faces with less computational load and memory cost. It runs on a 275 MHz Starcore DSP at 35 QCIF images/s with state-of-the-art detection rates and very low false alarm rates.

  6. Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning.

    Science.gov (United States)

    Shin, Hoo-Chang; Roth, Holger R; Gao, Mingchen; Lu, Le; Xu, Ziyue; Nogues, Isabella; Yao, Jianhua; Mollura, Daniel; Summers, Ronald M

    2016-05-01

    Remarkable progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets and deep convolutional neural networks (CNNs). CNNs enable learning data-driven, highly representative, hierarchical image features from sufficient training data. However, obtaining datasets as comprehensively annotated as ImageNet in the medical imaging domain remains a challenge. There are currently three major techniques that successfully employ CNNs to medical image classification: training the CNN from scratch, using off-the-shelf pre-trained CNN features, and conducting unsupervised CNN pre-training with supervised fine-tuning. Another effective method is transfer learning, i.e., fine-tuning CNN models pre-trained from natural image dataset to medical image tasks. In this paper, we exploit three important, but previously understudied factors of employing deep convolutional neural networks to computer-aided detection problems. We first explore and evaluate different CNN architectures. The studied models contain 5 thousand to 160 million parameters, and vary in numbers of layers. We then evaluate the influence of dataset scale and spatial image context on performance. Finally, we examine when and why transfer learning from pre-trained ImageNet (via fine-tuning) can be useful. We study two specific computer-aided detection (CADe) problems, namely thoraco-abdominal lymph node (LN) detection and interstitial lung disease (ILD) classification. We achieve the state-of-the-art performance on the mediastinal LN detection, and report the first five-fold cross-validation classification results on predicting axial CT slices with ILD categories. Our extensive empirical evaluation, CNN model analysis and valuable insights can be extended to the design of high performance CAD systems for other medical imaging tasks.

  7. Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning

    Science.gov (United States)

    Hoo-Chang, Shin; Roth, Holger R.; Gao, Mingchen; Lu, Le; Xu, Ziyue; Nogues, Isabella; Yao, Jianhua; Mollura, Daniel

    2016-01-01

    Remarkable progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets (i.e. ImageNet) and the revival of deep convolutional neural networks (CNN). CNNs enable learning data-driven, highly representative, layered hierarchical image features from sufficient training data. However, obtaining datasets as comprehensively annotated as ImageNet in the medical imaging domain remains a challenge. There are currently three major techniques that successfully employ CNNs to medical image classification: training the CNN from scratch, using off-the-shelf pre-trained CNN features, and conducting unsupervised CNN pre-training with supervised fine-tuning. Another effective method is transfer learning, i.e., fine-tuning CNN models (supervised) pre-trained from natural image dataset to medical image tasks (although domain transfer between two medical image datasets is also possible). In this paper, we exploit three important, but previously understudied factors of employing deep convolutional neural networks to computer-aided detection problems. We first explore and evaluate different CNN architectures. The studied models contain 5 thousand to 160 million parameters, and vary in numbers of layers. We then evaluate the influence of dataset scale and spatial image context on performance. Finally, we examine when and why transfer learning from pre-trained ImageNet (via fine-tuning) can be useful. We study two specific computer-aided detection (CADe) problems, namely thoraco-abdominal lymph node (LN) detection and interstitial lung disease (ILD) classification. We achieve the state-of-the-art performance on the mediastinal LN detection, with 85% sensitivity at 3 false positives per patient, and report the first five-fold cross-validation classification results on predicting axial CT slices with ILD categories. Our extensive empirical evaluation, CNN model analysis and valuable insights can be extended to the design of high performance

  8. Using convolutional decoding to improve time delay and phase estimation in digital communications

    Energy Technology Data Exchange (ETDEWEB)

    Ormesher, Richard C. (Albuquerque, NM); Mason, John J. (Albuquerque, NM)

    2010-01-26

    The time delay and/or phase of a communication signal received by a digital communication receiver can be estimated based on a convolutional decoding operation that the communication receiver performs on the received communication signal. If the original transmitted communication signal has been spread according to a spreading operation, a corresponding despreading operation can be integrated into the convolutional decoding operation.

  9. An upper bound on the number of errors corrected by a convolutional code

    DEFF Research Database (Denmark)

    Justesen, Jørn

    2000-01-01

    The number of errors that a convolutional code can correct in a segment of the encoded sequence is upper bounded by the number of distinct syndrome sequences of the relevant length.......The number of errors that a convolutional code can correct in a segment of the encoded sequence is upper bounded by the number of distinct syndrome sequences of the relevant length....

  10. Convolutional Models for Landmine Identification with Ground Penetrating Radar

    NARCIS (Netherlands)

    Roth, F.

    2005-01-01

    This thesis presents new developments in the area of target identification with ground penetrating radar (GPR) intended for the identification of plastic and metal cased antipersonnel (AP) landmines from a single measured GPR return signal, called A-scan. The target identification is formulated as a

  11. Performance of Parallel Concatenated Convolutional Codes (PCCC) with BPSK in Nakagami Multipath m-Fading Channel

    Directory of Open Access Journals (Sweden)

    Mohamed Abd El-latif

    2012-01-01

    Full Text Available In this paper, the design of an encoder based on two parallel concatenated convolutional codes (PCCC) is introduced. The concept of puncturing is also considered. PCCCs are also known as turbo codes. The decoding process of turbo codes using a maximum a posteriori (MAP) algorithm is discussed, and the different parameters that affect the BER performance of turbo codes are introduced. Previous studies focused on turbo-code performance in AWGN and Rayleigh multipath-fading channels. The real importance of the Nakagami-m fading model lies in the fact that it can often be used to fit indoor channel measurements for digital cellular systems such as the Global System for Mobile Communications (GSM). In this paper, the BER performance of turbo codes in a Nakagami multipath-fading channel is studied and verified using a Matlab simulation program.
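
    For background, the sketch below encodes a bit stream with a single rate-1/2 feed-forward convolutional encoder with the classic octal generators (7, 5), including tail bits to flush the register. A PCCC (turbo) encoder as discussed above instead places two recursive systematic encoders in parallel through an interleaver and may puncture the parity streams; that full structure is not reproduced here.

    ```python
    def conv_encode_rate_half(bits, g1=0b111, g2=0b101, memory=2):
        """Rate-1/2 feed-forward convolutional encoder, generators (7, 5) in octal."""
        state = 0
        out = []
        for b in bits + [0] * memory:                 # append tail bits to flush the encoder
            reg = (b << memory) | state               # current input bit plus shift-register contents
            out.append(bin(reg & g1).count("1") % 2)  # first coded bit (generator 7)
            out.append(bin(reg & g2).count("1") % 2)  # second coded bit (generator 5)
            state = reg >> 1                          # shift the register
        return out

    # Encoding the input 1011 yields the familiar textbook sequence 11 10 00 01 01 11.
    print(conv_encode_rate_half([1, 0, 1, 1]))
    ```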

  12. Kinetic Energy of Hydrocarbons as a Function of Electron Density and Convolutional Neural Networks.

    Science.gov (United States)

    Yao, Kun; Parkhill, John

    2016-03-01

    We demonstrate a convolutional neural network trained to reproduce the Kohn-Sham kinetic energy of hydrocarbons from an input electron density. The output of the network is used as a nonlocal correction to conventional local and semilocal kinetic functionals. We show that this approximation qualitatively reproduces Kohn-Sham potential energy surfaces when used with conventional exchange correlation functionals. The density which minimizes the total energy given by the functional is examined in detail. We identify several avenues to improve on this exploratory work, by reducing numerical noise and changing the structure of our functional. Finally we examine the features in the density learned by the neural network to anticipate the prospects of generalizing these models.

  13. Segmenting delaminations in carbon fiber reinforced polymer composite CT using convolutional neural networks

    Science.gov (United States)

    Sammons, Daniel; Winfree, William P.; Burke, Eric; Ji, Shuiwang

    2016-02-01

    Nondestructive evaluation (NDE) utilizes a variety of techniques to inspect various materials for defects without causing changes to the material. X-ray computed tomography (CT) produces large volumes of three dimensional image data. Using the task of identifying delaminations in carbon fiber reinforced polymer (CFRP) composite CT, this work shows that it is possible to automate the analysis of these large volumes of CT data using a machine learning model known as a convolutional neural network (CNN). Further, tests on simulated data sets show that with a robust set of experimental data, it may be possible to go beyond just identification and instead accurately characterize the size and shape of the delaminations with CNNs.

  14. Subject independent facial expression recognition with robust face detection using a convolutional neural network.

    Science.gov (United States)

    Matsugu, Masakazu; Mori, Katsuhiko; Mitari, Yusuke; Kaneda, Yuji

    2003-01-01

    Reliable detection of ordinary facial expressions (e.g. smile) despite the variability among individuals as well as face appearance is an important step toward the realization of perceptual user interface with autonomous perception of persons. We describe a rule-based algorithm for robust facial expression recognition combined with robust face detection using a convolutional neural network. In this study, we address the problem of subject independence as well as translation, rotation, and scale invariance in the recognition of facial expression. The result shows reliable detection of smiles with a recognition rate of 97.6% for 5600 still images of more than 10 subjects. The proposed algorithm demonstrated the ability to discriminate smiling from talking based on the saliency score obtained from voting visual cues. To the best of our knowledge, it is the first facial expression recognition model with the property of subject independence combined with robustness to variability in facial appearance.

  15. Imaging in scattering media using correlation image sensors and sparse convolutional coding

    KAUST Repository

    Heide, Felix

    2014-10-17

    Correlation image sensors have recently become popular low-cost devices for time-of-flight, or range, cameras. They usually operate under the assumption of a single light path contributing to each pixel. We show that a more thorough analysis of the sensor data from correlation sensors can be used to analyze the light transport in much more complex environments, including applications for imaging through scattering and turbid media. The key to our method is a new convolutional sparse coding approach for recovering transient (light-in-flight) images from correlation image sensors. This approach is enabled by an analysis of sparsity in complex transient images, and the derivation of a new physically-motivated model for transient images with drastically improved sparsity.

  16. Drogue detection for autonomous aerial refueling based on convolutional neural networks

    Directory of Open Access Journals (Sweden)

    Xufeng Wang

    2017-02-01

    Full Text Available Drogue detection is a fundamental issue during the close docking phase of autonomous aerial refueling (AAR. To cope with this issue, a novel and effective method based on deep learning with convolutional neural networks (CNNs is proposed. In order to ensure its robustness and wide application, a deep learning dataset of images was prepared by utilizing real data of “Probe and Drogue” aerial refueling, which contains diverse drogues in various environmental conditions without artificial features placed on the drogues. By employing deep learning ideas and graphics processing units (GPUs, a model for drogue detection using a Caffe deep learning framework with CNNs was designed to ensure the method’s accuracy and real-time performance. Experiments were conducted to demonstrate the effectiveness of the proposed method, and results based on real AAR data compare its performance to other methods, validating the accuracy, speed, and robustness of its drogue detection ability.

  17. Melanoma detection by analysis of clinical images using convolutional neural network.

    Science.gov (United States)

    Nasr-Esfahani, E; Samavi, S; Karimi, N; Soroushmehr, S M R; Jafari, M H; Ward, K; Najarian, K

    2016-08-01

    Melanoma, the most threatening type of skin cancer, is on the rise. In this paper, an implementation of a deep-learning system on a computer server, equipped with a graphics processing unit (GPU), is proposed for the detection of melanoma lesions. Clinical (non-dermoscopic) images are used in the proposed system, which could assist a dermatologist in early diagnosis of this type of skin cancer. In the proposed system, input clinical images, which may contain illumination and noise effects, are preprocessed in order to reduce such artifacts. Afterward, the enhanced images are fed to a pre-trained convolutional neural network (CNN), which is a member of the family of deep learning models. The CNN classifier, which is trained with a large number of training samples, distinguishes between melanoma and benign cases. Experimental results show that the proposed method is superior in terms of diagnostic accuracy in comparison with the state-of-the-art methods.

  18. A General Rate K/N Convolutional Decoder Based on Neural Networks with Stopping Criterion

    Directory of Open Access Journals (Sweden)

    Johnny W. H. Kao

    2009-01-01

    Full Text Available A novel algorithm for decoding a general rate K/N convolutional code based on a recurrent neural network (RNN) is described and analysed. The algorithm is introduced by outlining the mathematical models of the encoder and decoder. A number of strategies for optimising the iterative decoding process are proposed, and a simulator was designed in order to compare the bit error rate (BER) performance of the RNN decoder with a conventional decoder based on the Viterbi algorithm (VA). The simulation results show that this novel algorithm can achieve the same bit error rate with lower decoding complexity. Most importantly, this algorithm allows parallel signal processing, which increases the decoding speed and accommodates higher data rate transmission. These characteristics are inherited from the neural network structure of the decoder and the iterative nature of the algorithm, and allow it to outperform the conventional VA decoder.

  19. Longitudinal analysis of discussion topics in an online breast cancer community using convolutional neural networks.

    Science.gov (United States)

    Zhang, Shaodian; Grave, Edouard; Sklar, Elizabeth; Elhadad, Noémie

    2017-05-01

    Identifying topics of discussions in online health communities (OHC) is critical to various information extraction applications, but can be difficult because topics of OHC content are usually heterogeneous and domain-dependent. In this paper, we provide a multi-class schema, an annotated dataset, and supervised classifiers based on convolutional neural network (CNN) and other models for the task of classifying discussion topics. We apply the CNN classifier to the most popular breast cancer online community, and carry out cross-sectional and longitudinal analyses to show topic distributions and topic dynamics throughout members' participation. Our experimental results suggest that CNN outperforms other classifiers in the task of topic classification and identify several patterns and trajectories. For example, although members discuss mainly disease-related topics, their interest may change through time and vary with their disease severities. Copyright © 2017. Published by Elsevier Inc.

  20. Gait recognition based on double-layer convolutional neural networks

    Institute of Scientific and Technical Information of China (English)

    王欣; 唐俊; 王年

    2015-01-01

    This paper proposes a gait recognition algorithm using double-layer convolutional neural networks (D-CNN) and plantar pressure images. Firstly, the images acquired by the plantar pressure measurement system are preprocessed. Secondly, single-layer and double-layer convolution features are learned with the convolutional neural network model. Finally, the convolution features are used to train SVM classifiers and obtain the classification results. The experimental results demonstrate the effectiveness of the proposed method.

  1. Scattering correction based on regularization de-convolution for Cone-Beam CT

    CERN Document Server

    Xie, Shi-peng

    2016-01-01

    In Cone-Beam CT (CBCT) imaging systems, the scattering phenomenon has a significant impact on the reconstructed image and is a long-standing research topic in CBCT. In this paper, we propose a simple, novel and fast approach for mitigating scatter artifacts and increasing the image contrast in CBCT, belonging to the category of convolution-based methods in which the projection data are de-convolved with a convolution kernel. A key step in this method is how to determine the convolution kernel. In contrast to existing methods, the estimation of the convolution kernel is based on bi-l1-l2-norm regularization imposed on both the intermediate projection images, obtained from the known scatter-contaminated projection images, and the convolution kernel. Our approach reduces the scatter artifacts from 12.930 to 2.133.

  2. The effect of whitening transformation on pooling operations in convolutional autoencoders

    Science.gov (United States)

    Li, Zuhe; Fan, Yangyu; Liu, Weihua

    2015-12-01

    Convolutional autoencoders (CAEs) are unsupervised feature extractors for high-resolution images. In the pre-processing step, whitening transformation has widely been adopted to remove redundancy by making adjacent pixels less correlated. Pooling is a biologically inspired operation to reduce the resolution of feature maps and achieve spatial invariance in convolutional neural networks. Conventionally, pooling methods are mainly determined empirically in most previous work. Therefore, our main purpose is to study the relationship between whitening processing and pooling operations in convolutional autoencoders for image classification. We propose an adaptive pooling approach based on the concepts of information entropy to test the effect of whitening on pooling in different conditions. Experimental results on benchmark datasets indicate that the performance of pooling strategies is associated with the distribution of feature activations, which can be affected by whitening processing. This provides guidance for the selection of pooling methods in convolutional autoencoders and other convolutional neural networks.
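
    A typical whitening pre-processing step of the kind discussed above is ZCA whitening of (flattened) image patches, sketched below; the epsilon regulariser and patch size are illustrative choices.

    ```python
    import numpy as np

    def zca_whiten(patches, eps=1e-2):
        """patches: (n_samples, n_pixels), e.g. flattened image patches."""
        x = patches - patches.mean(axis=0)               # centre each pixel across the dataset
        cov = x.T @ x / x.shape[0]                       # pixel covariance
        eigval, eigvec = np.linalg.eigh(cov)
        zca = eigvec @ np.diag(1.0 / np.sqrt(eigval + eps)) @ eigvec.T
        return x @ zca                                   # decorrelated (whitened) patches

    patches = np.random.rand(1000, 8 * 8)                # stand-in for 8x8 image patches
    whitened = zca_whiten(patches)
    ```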

  3. Accelerated SPECT Monte Carlo Simulation Using Multiple Projection Sampling and Convolution-Based Forced Detection

    Science.gov (United States)

    Liu, Shaoying; King, Michael A.; Brill, Aaron B.; Stabin, Michael G.; Farncombe, Troy H.

    2010-01-01

    Monte Carlo (MC) is a well-utilized tool for simulating photon transport in single photon emission computed tomography (SPECT) due to its ability to accurately model physical processes of photon transport. As a consequence of this accuracy, it suffers from a relatively low detection efficiency and long computation time. One technique used to improve the speed of MC modeling is the effective and well-established variance reduction technique (VRT) known as forced detection (FD). With this method, photons are followed as they traverse the object under study but are then forced to travel in the direction of the detector surface, whereby they are detected at a single detector location. Another method, called convolution-based forced detection (CFD), is based on the fundamental idea of FD with the exception that detected photons are detected at multiple detector locations and determined with a distance-dependent blurring kernel. In order to further increase the speed of MC, a method named multiple projection convolution-based forced detection (MP-CFD) is presented. Rather than forcing photons to hit a single detector, the MP-CFD method follows the photon transport through the object but then, at each scatter site, forces the photon to interact with a number of detectors at a variety of angles surrounding the object. This way, it is possible to simulate all the projection images of a SPECT simulation in parallel, rather than as independent projections. The result of this is vastly improved simulation time as much of the computation load of simulating photon transport through the object is done only once for all projection angles. The results of the proposed MP-CFD method agree well with the experimental data in measurements of the point spread function (PSF), producing a correlation coefficient (r2) of 0.99 compared to experimental data. The speed of MP-CFD is shown to be about 60 times faster than a regular forced detection MC program with similar results. PMID:20811587

  4. Deep convolutional neural networks for multi-modality isointense infant brain image segmentation.

    Science.gov (United States)

    Zhang, Wenlu; Li, Rongjian; Deng, Houtao; Wang, Li; Lin, Weili; Ji, Shuiwang; Shen, Dinggang

    2015-03-01

    The segmentation of infant brain tissue images into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) plays an important role in studying early brain development in health and disease. In the isointense stage (approximately 6-8 months of age), WM and GM exhibit similar levels of intensity in both T1 and T2 MR images, making the tissue segmentation very challenging. Only a small number of existing methods have been designed for tissue segmentation in this isointense stage; however, they only used a single T1 or T2 image, or the combination of T1 and T2 images. In this paper, we propose to use deep convolutional neural networks (CNNs) for segmenting isointense stage brain tissues using multi-modality MR images. CNNs are a type of deep model in which trainable filters and local neighborhood pooling operations are applied alternatingly on the raw input images, resulting in a hierarchy of increasingly complex features. Specifically, we used multi-modality information from T1, T2, and fractional anisotropy (FA) images as inputs and then generated the segmentation maps as outputs. The multiple intermediate layers applied convolution, pooling, normalization, and other operations to capture the highly nonlinear mappings between inputs and outputs. We compared the performance of our approach with that of the commonly used segmentation methods on a set of manually segmented isointense stage brain images. Results showed that our proposed model significantly outperformed prior methods on infant brain tissue segmentation. In addition, our results indicated that integration of multi-modality images led to significant performance improvement. Copyright © 2014 Elsevier Inc. All rights reserved.

  5. Introducing single-crystal scattering and optical potentials into MCNPX: Predicting neutron emission from a convoluted moderator

    Science.gov (United States)

    Gallmeier, F. X.; Iverson, E. B.; Lu, W.; Baxter, D. V.; Muhrer, G.; Ansell, S.

    2016-04-01

    Neutron transport simulation codes are indispensable tools for the design and construction of modern neutron scattering facilities and instrumentation. Recently, it has become increasingly clear that some neutron instrumentation has started to exploit physics that is not well-modeled by the existing codes. In particular, the transport of neutrons through single crystals and across interfaces in MCNP(X), Geant4, and other codes ignores scattering from oriented crystals and refractive effects, and yet these are essential phenomena for the performance of monochromators and ultra-cold neutron transport respectively (to mention but two examples). In light of these developments, we have extended the MCNPX code to include a single-crystal neutron scattering model and neutron reflection/refraction physics. We have also generated silicon scattering kernels for single crystals of definable orientation. As a first test of these new tools, we have chosen to model the recently developed convoluted moderator concept, in which a moderating material is interleaved with layers of perfect crystals to provide an exit path for neutrons moderated to energies below the crystal's Bragg cut-off from locations deep within the moderator. Studies of simple cylindrical convoluted moderator systems of 100 mm diameter and composed of polyethylene and single crystal silicon were performed with the upgraded MCNPX code and reproduced the magnitude of effects seen in experiments compared to homogeneous moderator systems. Applying different material properties for refraction and reflection, and by replacing the silicon in the models with voids, we show that the emission enhancements seen in recent experiments are primarily caused by the transparency of the silicon and void layers. Finally we simulated the convoluted moderator experiments described by Iverson et al. and found satisfactory agreement between the measurements and the simulations performed with the tools we have developed.

  6. Introducing single-crystal scattering and optical potentials into MCNPX: Predicting neutron emission from a convoluted moderator

    Energy Technology Data Exchange (ETDEWEB)

    Gallmeier, F.X., E-mail: gallmeierfz@ornl.gov [Spallation Neutron Source, Oak Ridge National Laboratory, Oak Ridge, TN 37831 (United States); Iverson, E.B.; Lu, W. [Spallation Neutron Source, Oak Ridge National Laboratory, Oak Ridge, TN 37831 (United States); Baxter, D.V. [Center for the Exploration of Energy and Matter, Indiana University, Bloomington, IN 47408 (United States); Muhrer, G.; Ansell, S. [European Spallation Source, ESS AB, Lund (Sweden)

    2016-04-01

    Neutron transport simulation codes are indispensable tools for the design and construction of modern neutron scattering facilities and instrumentation. Recently, it has become increasingly clear that some neutron instrumentation has started to exploit physics that is not well-modeled by the existing codes. In particular, the transport of neutrons through single crystals and across interfaces in MCNP(X), Geant4, and other codes ignores scattering from oriented crystals and refractive effects, and yet these are essential phenomena for the performance of monochromators and ultra-cold neutron transport respectively (to mention but two examples). In light of these developments, we have extended the MCNPX code to include a single-crystal neutron scattering model and neutron reflection/refraction physics. We have also generated silicon scattering kernels for single crystals of definable orientation. As a first test of these new tools, we have chosen to model the recently developed convoluted moderator concept, in which a moderating material is interleaved with layers of perfect crystals to provide an exit path for neutrons moderated to energies below the crystal's Bragg cut–off from locations deep within the moderator. Studies of simple cylindrical convoluted moderator systems of 100 mm diameter and composed of polyethylene and single crystal silicon were performed with the upgraded MCNPX code and reproduced the magnitude of effects seen in experiments compared to homogeneous moderator systems. Applying different material properties for refraction and reflection, and by replacing the silicon in the models with voids, we show that the emission enhancements seen in recent experiments are primarily caused by the transparency of the silicon and void layers. Finally we simulated the convoluted moderator experiments described by Iverson et al. and found satisfactory agreement between the measurements and the simulations performed with the tools we have developed.

  7. Drug-Drug Interaction Extraction via Convolutional Neural Networks

    Directory of Open Access Journals (Sweden)

    Shengyu Liu

    2016-01-01

    Full Text Available Drug-drug interaction (DDI) extraction, a typical relation extraction task in natural language processing (NLP), has always attracted great attention. Most state-of-the-art DDI extraction systems are based on support vector machines (SVM) with a large number of manually defined features. Recently, convolutional neural networks (CNNs), a robust machine learning method that requires almost no manually defined features, have exhibited great potential for many NLP tasks. It is therefore worth employing CNNs for DDI extraction, which has never been investigated before. We propose a CNN-based method for DDI extraction. Experiments conducted on the 2013 DDIExtraction challenge corpus demonstrate that the CNN is a good choice for DDI extraction. The CNN-based DDI extraction method achieves an F-score of 69.75%, which outperforms the existing best-performing method by 2.75%.

  8. Online multipath convolutional coding for real-time transmission

    CERN Document Server

    Thai, Tuan Tran; Lacan, Jerome

    2012-01-01

    Most multipath multimedia streaming proposals use a Forward Error Correction (FEC) approach to protect against packet losses. However, FEC does not cope well with bursts of losses, even when packets from a given FEC block are spread over multiple paths. In this article, we propose an online multipath convolutional coding scheme for real-time multipath streaming based on an on-the-fly coding scheme called Tetrys. We evaluate the benefits brought by this coding scheme inside an existing FEC multipath load-splitting proposal known as Encoded Multipath Streaming (EMS). We demonstrate that Tetrys consistently outperforms FEC under both uniform and burst losses with the EMS scheme. We also propose a modification of the standard EMS algorithm that greatly improves the performance in terms of packet recovery. Finally, we analyze different policies for spreading the Tetrys redundancy traffic between the available paths and observe that the path with the longer propagation delay should preferably be used to carry repair packets.

  9. Drug-Drug Interaction Extraction via Convolutional Neural Networks.

    Science.gov (United States)

    Liu, Shengyu; Tang, Buzhou; Chen, Qingcai; Wang, Xiaolong

    2016-01-01

    Drug-drug interaction (DDI) extraction, a typical relation extraction task in natural language processing (NLP), has always attracted great attention. Most state-of-the-art DDI extraction systems are based on support vector machines (SVM) with a large number of manually defined features. Recently, convolutional neural networks (CNNs), a robust machine learning method that requires almost no manually defined features, have exhibited great potential for many NLP tasks. It is therefore worth employing CNNs for DDI extraction, which has never been investigated before. We propose a CNN-based method for DDI extraction. Experiments conducted on the 2013 DDIExtraction challenge corpus demonstrate that the CNN is a good choice for DDI extraction. The CNN-based DDI extraction method achieves an F-score of 69.75%, which outperforms the existing best-performing method by 2.75%.

  10. Convolutional Neural Networks Applied to House Numbers Digit Classification

    CERN Document Server

    Sermanet, Pierre; LeCun, Yann

    2012-01-01

    We classify digits of real-world house numbers using convolutional neural networks (ConvNets). ConvNets are hierarchical feature learning neural networks whose structure is biologically inspired. Unlike many popular vision approaches that are hand-designed, ConvNets can automatically learn a unique set of features optimized for a given task. We augmented the traditional ConvNet architecture by learning multi-stage features and by using Lp pooling and establish a new state-of-the-art of 94.85% accuracy on the SVHN dataset (45.2% error improvement). Furthermore, we analyze the benefits of different pooling methods and multi-stage features in ConvNets. The source code and a tutorial are available at eblearn.sf.net.

  11. INVARIANT DESCRIPTOR LEARNING USING A SIAMESE CONVOLUTIONAL NEURAL NETWORK

    Directory of Open Access Journals (Sweden)

    L. Chen

    2016-06-01

    Full Text Available In this paper we describe the learning of a descriptor based on the Siamese Convolutional Neural Network (CNN) architecture and evaluate our results on a standard patch comparison dataset. The descriptor learning architecture is composed of an input module, a Siamese CNN descriptor module and a cost computation module that is based on the L2 norm. The cost function we use pulls the descriptors of matching patches close to each other in feature space while pushing the descriptors of non-matching pairs away from each other. Compared to related work, we optimize the training parameters by combining a moving-average strategy for gradients with Nesterov's Accelerated Gradient. Experiments show that our learned descriptor achieves good performance and state-of-the-art results in terms of the false positive rate at a 95% recall rate on standard benchmark datasets.
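
    To make the training setup concrete, the following is a minimal PyTorch sketch of a Siamese descriptor network with a margin-based cost of the kind described, pulling matching patch descriptors together and pushing non-matching ones apart; the branch layout, patch size and margin are illustrative assumptions, and the moving-average/Nesterov optimizer tuning of the paper is not reproduced here.

      # Minimal sketch of a Siamese CNN descriptor with a contrastive-style loss.
      # The branch layout, patch size and margin are illustrative assumptions.
      import torch
      import torch.nn as nn
      import torch.nn.functional as F

      class Branch(nn.Module):
          """Shared CNN that maps a grayscale patch to a descriptor vector."""
          def __init__(self, dim=128):
              super().__init__()
              self.conv = nn.Sequential(
                  nn.Conv2d(1, 32, 5), nn.ReLU(), nn.MaxPool2d(2),
                  nn.Conv2d(32, 64, 5), nn.ReLU(), nn.MaxPool2d(2),
              )
              self.fc = nn.Linear(64 * 4 * 4, dim)   # assumes 28x28 input patches

          def forward(self, x):
              return self.fc(self.conv(x).flatten(1))

      def contrastive_loss(d1, d2, match, margin=1.0):
          """match = 1 for matching pairs, 0 for non-matching pairs."""
          dist = F.pairwise_distance(d1, d2)
          return (match * dist.pow(2) +
                  (1 - match) * F.relu(margin - dist).pow(2)).mean()

      if __name__ == "__main__":
          net = Branch()
          a, b = torch.randn(8, 1, 28, 28), torch.randn(8, 1, 28, 28)
          labels = torch.randint(0, 2, (8,)).float()
          loss = contrastive_loss(net(a), net(b), labels)
          loss.backward()
          print(float(loss))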

  12. Hybrid Algorithm for the Optimization of Training Convolutional Neural Network

    Directory of Open Access Journals (Sweden)

    Hayder M. Albeahdili

    2015-10-01

    Full Text Available Efficient training optimization and fast classification are vital elements in the development of a convolutional neural network (CNN). Although stochastic gradient descent (SGD) is a prevalent algorithm used by many researchers for the optimization of CNN training, it has significant limitations. In this paper, we endeavor to mitigate the drawbacks inherited from SGD by proposing an alternative algorithm for CNN training optimization. A hybrid of a genetic algorithm (GA) and particle swarm optimization (PSO) is deployed in this work: in addition to SGD, the combined PSO-GA mechanism is incorporated as an efficient means of reaching non-trivial solutions. The proposed unified method achieves state-of-the-art classification results on challenging benchmark datasets such as MNIST, CIFAR-10, and SVHN, and experimental results show that it outperforms most contemporary approaches.

  13. Facial Expression Recognition Using 3D Convolutional Neural Network

    Directory of Open Access Journals (Sweden)

    Young-Hyen Byeon

    2014-12-01

    Full Text Available This paper is concerned with video-based facial expression recognition, frequently used in HRI (Human-Robot Interaction) to enable natural interaction between human and robot. For this purpose, we design a 3D-CNN (3D Convolutional Neural Network) augmented with dimensionality reduction methods such as PCA (Principal Component Analysis) and TMPCA (Tensor-based Multilinear Principal Component Analysis) to simultaneously recognize successive frames of facial expression images obtained through a video camera. The 3D-CNN can achieve some degree of shift and deformation invariance using local receptive fields and spatial subsampling, together with dimensionality reduction of the redundant CNN output. The experimental results on a video-based facial expression database reveal that the presented method performs well in comparison to conventional methods such as PCA and TMPCA.

  14. Fast convolution with free-space Green's functions

    Science.gov (United States)

    Vico, Felipe; Greengard, Leslie; Ferrando, Miguel

    2016-10-01

    We introduce a fast algorithm for computing volume potentials - that is, the convolution of a translation invariant, free-space Green's function with a compactly supported source distribution defined on a uniform grid. The algorithm relies on regularizing the Fourier transform of the Green's function by cutting off the interaction in physical space beyond the domain of interest. This permits the straightforward application of trapezoidal quadrature and the standard FFT, with superalgebraic convergence for smooth data. Moreover, the method can be interpreted as employing a Nystrom discretization of the corresponding integral operator, with matrix entries which can be obtained explicitly and rapidly. This is of use in the design of preconditioners or fast direct solvers for a variety of volume integral equations. The method proposed permits the computation of any derivative of the potential, at the cost of an additional FFT.
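
    As a rough illustration of the underlying operation (and not of the paper's Fourier-space regularization or quadrature), the sketch below evaluates a free-space volume potential on a uniform 3D grid by zero-padded FFT convolution of a compact source with the Laplace Green's function 1/(4πr); the handling of the singular r = 0 sample is an ad hoc assumption for illustration only.

      # Sketch: aperiodic (free-space) convolution of a compact source with the
      # Laplace Green's function G(r) = 1/(4*pi*r) via zero-padded FFTs.
      # The r = 0 treatment below is an ad hoc assumption; the paper instead
      # truncates the interaction in physical space for high-order accuracy.
      import numpy as np

      n, L = 32, 1.0                      # grid points per axis, box size
      h = L / n
      x = (np.arange(n) - n // 2) * h
      X, Y, Z = np.meshgrid(x, x, x, indexing="ij")

      # compactly supported, well-resolved Gaussian source
      f = np.exp(-200.0 * (X**2 + Y**2 + Z**2))

      # Green's function on a doubled grid to avoid periodic wrap-around
      m = 2 * n
      xg = (np.arange(m) - m // 2) * h
      XG, YG, ZG = np.meshgrid(xg, xg, xg, indexing="ij")
      R = np.sqrt(XG**2 + YG**2 + ZG**2)
      G = np.zeros_like(R)
      G[R > 0] = 1.0 / (4.0 * np.pi * R[R > 0])
      G[R == 0] = 1.0 / (4.0 * np.pi * (0.5 * h))   # crude regularization of r = 0

      # zero-pad the source, convolve via FFT, extract the physical window
      fpad = np.zeros((m, m, m))
      fpad[:n, :n, :n] = f
      u = np.real(np.fft.ifftn(np.fft.fftn(np.fft.ifftshift(G)) * np.fft.fftn(fpad))) * h**3
      u = u[:n, :n, :n]                   # potential on the original grid
      print(u.max())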

  15. Structured learning via convolutional neural networks for vehicle detection

    Science.gov (United States)

    Maqueda, Ana I.; del Blanco, Carlos R.; Jaureguizar, Fernando; García, Narciso

    2017-05-01

    One of the main tasks in a vision-based traffic monitoring system is the detection of vehicles. Recently, deep neural networks have been successfully applied to this end, outperforming previous approaches. However, most of these works generally rely on complex and high-computational region proposal networks. Others employ deep neural networks as a segmentation strategy to achieve a semantic representation of the object of interest, which has to be up-sampled later. In this paper, a new design for a convolutional neural network is applied to vehicle detection in highways for traffic monitoring. This network generates a spatially structured output that encodes the vehicle locations. Promising results have been obtained in the GRAM-RTM dataset.

  16. SOME ASYMPTOTIC PROPERTIES OF THE CONVOLUTION TRANSFORMS OF FRACTAL MEASURES

    Institute of Scientific and Technical Information of China (English)

    Cao Li

    2012-01-01

    We study the asymptotic behavior near the boundary of u(x, y) = K_y * μ(x), defined on the half-space R_+ × R^N by the convolution of an approximate identity K_y(·) (y > 0) and a measure μ on R^N. The Poisson and the heat kernel are unified as special cases in our setting. We are mainly interested in the relationship between the rate of growth of u at the boundary and the s-density of a singular measure μ. A boundary limit theorem of Fatou's type for singular measures is then proved. Meanwhile, the asymptotic behavior of a quotient of Kμ and Kν is also studied, and the corresponding Fatou-Doob boundary relative limit is obtained. In particular, some results about the singular boundary behavior of harmonic and heat functions can be deduced simultaneously from ours. At the end, an application in fractal geometry is given.

  17. Plane-wave decomposition by spherical-convolution microphone array

    Science.gov (United States)

    Rafaely, Boaz; Park, Munhum

    2001-05-01

    Reverberant sound fields are widely studied, as they have a significant influence on the acoustic performance of enclosures in a variety of applications. For example, the intelligibility of speech in lecture rooms, the quality of music in auditoria, the noise level in offices, and the production of 3D sound in living rooms are all affected by the enclosed sound field. These sound fields are typically studied through frequency response measurements or statistical measures such as reverberation time, which do not provide detailed spatial information. The aim of the work presented in this seminar is the detailed analysis of reverberant sound fields. A measurement and analysis system based on acoustic theory and signal processing, designed around a spherical microphone array, is presented. Detailed analysis is achieved by decomposition of the sound field into waves, using spherical Fourier transform and spherical convolution. The presentation will include theoretical review, simulation studies, and initial experimental results.

  18. Convolution Equivalent Lévy Processes and First Passage Times

    CERN Document Server

    Griffin, Philip S

    2012-01-01

    We investigate the behaviour of Lévy processes with convolution equivalent Lévy measures, up to the time of first passage over a high level u. Such problems arise naturally in the context of insurance risk, where u is the initial reserve. We obtain a precise asymptotic estimate on the probability of first passage occurring by time T. This result is then used to study the process conditioned on first passage by time T. The existence of a limiting process as u → ∞ is demonstrated, which leads to precise estimates for the probability of other events relating to first passage, such as the overshoot. A discussion of these results, as they relate to insurance risk, is also given.

  19. Star-galaxy classification using deep convolutional neural networks

    Science.gov (United States)

    Kim, Edward J.; Brunner, Robert J.

    2017-02-01

    Most existing star-galaxy classifiers use the reduced summary information from catalogues, requiring careful feature extraction and selection. The latest advances in machine learning that use deep convolutional neural networks (ConvNets) allow a machine to automatically learn the features directly from the data, minimizing the need for input from human experts. We present a star-galaxy classification framework that uses deep ConvNets directly on the reduced, calibrated pixel values. Using data from the Sloan Digital Sky Survey and the Canada-France-Hawaii Telescope Lensing Survey, we demonstrate that ConvNets are able to produce accurate and well-calibrated probabilistic classifications that are competitive with conventional machine learning techniques. Future advances in deep learning may bring more success with current and forthcoming photometric surveys, such as the Dark Energy Survey and the Large Synoptic Survey Telescope, because deep neural networks require very little manual feature engineering.

  20. Invariant Descriptor Learning Using a Siamese Convolutional Neural Network

    Science.gov (United States)

    Chen, L.; Rottensteiner, F.; Heipke, C.

    2016-06-01

    In this paper we describe the learning of a descriptor based on the Siamese Convolutional Neural Network (CNN) architecture and evaluate our results on a standard patch comparison dataset. The descriptor learning architecture is composed of an input module, a Siamese CNN descriptor module and a cost computation module that is based on the L2 norm. The cost function we use pulls the descriptors of matching patches close to each other in feature space while pushing the descriptors of non-matching pairs away from each other. Compared to related work, we optimize the training parameters by combining a moving-average strategy for gradients with Nesterov's Accelerated Gradient. Experiments show that our learned descriptor achieves good performance and state-of-the-art results in terms of the false positive rate at a 95% recall rate on standard benchmark datasets.

  1. Exploiting Narrowband Efficiency for Broadband Convolutive Blind Source Separation

    Directory of Open Access Journals (Sweden)

    Aichner Robert

    2007-01-01

    Full Text Available Based on a recently presented generic broadband blind source separation (BSS algorithm for convolutive mixtures, we propose in this paper a novel algorithm combining advantages of broadband algorithms with the computational efficiency of narrowband techniques. By selective application of the Szegö theorem which relates properties of Toeplitz and circulant matrices, a new normalization is derived as a special case of the generic broadband algorithm. This results in a computationally efficient and fast converging algorithm without introducing typical narrowband problems such as the internal permutation problem or circularity effects. Moreover, a novel regularization method for the generic broadband algorithm is proposed and subsequently also derived for the proposed algorithm. Experimental results in realistic acoustic environments show improved performance of the novel algorithm compared to previous approximations.

  2. Training strategy for convolutional neural networks in pedestrian gender classification

    Science.gov (United States)

    Ng, Choon-Boon; Tay, Yong-Haur; Goi, Bok-Min

    2017-06-01

    In this work, we studied a strategy for training a convolutional neural network in pedestrian gender classification with a limited amount of labeled training data. Unsupervised learning by k-means clustering on pedestrian images was used to learn the filters to initialize the first layer of the network. As a form of pre-training, supervised learning for the related task of pedestrian classification was performed. Finally, the network was fine-tuned for gender classification. We found that this strategy improved the network's generalization ability in gender classification, achieving better test results than random weight initialization, and being slightly more beneficial than merely initializing the first-layer filters by unsupervised learning. This shows that unsupervised learning followed by pre-training with pedestrian images is an effective strategy to learn useful features for pedestrian gender classification.
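
    A rough sketch of the unsupervised first-layer initialization step is given below: random patches are extracted from (stand-in) unlabeled pedestrian images, clustered with k-means, and the cluster centres are reshaped into first-layer convolution filters. The patch size, filter count and normalization are assumptions for illustration, not the paper's settings.

      # Sketch: learn first-layer convolution filters by k-means clustering on
      # random image patches (patch size and filter count are assumptions).
      import numpy as np
      from sklearn.cluster import KMeans

      def kmeans_filters(images, n_filters=32, patch=5, n_patches=20000, seed=0):
          rng = np.random.default_rng(seed)
          h, w = images.shape[1:3]
          patches = np.empty((n_patches, patch * patch), dtype=np.float64)
          for i in range(n_patches):
              img = images[rng.integers(len(images))]
              r, c = rng.integers(h - patch), rng.integers(w - patch)
              p = img[r:r + patch, c:c + patch].astype(np.float64).ravel()
              patches[i] = (p - p.mean()) / (p.std() + 1e-8)   # simple normalization
          km = KMeans(n_clusters=n_filters, n_init=10, random_state=seed).fit(patches)
          # each cluster centre becomes one first-layer filter
          return km.cluster_centers_.reshape(n_filters, patch, patch)

      if __name__ == "__main__":
          toy_images = np.random.rand(100, 48, 32)   # stand-in for pedestrian crops
          filters = kmeans_filters(toy_images)
          print(filters.shape)                        # (32, 5, 5)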

  3. XOR-FREE Implementation of Convolutional Encoder for Reconfigurable Hardware

    Directory of Open Access Journals (Sweden)

    Gaurav Purohit

    2016-01-01

    Full Text Available This paper presents a novel XOR-FREE algorithm to implement a convolutional encoder using reconfigurable hardware. The approach completely removes the XOR processing of a chosen nonsystematic, feedforward generator polynomial of larger constraint length. The hardware (HW) implementation of the new architecture uses a lookup table (LUT) for storing the parity bits. The design implements architectural reconfigurability by modifying the generator polynomial of the same constraint length and code rate to reduce the design complexity. The proposed architecture reduces the dynamic power by up to 30% and improves the hardware cost and propagation delay by up to 20% and 32%, respectively. The performance of the proposed architecture is validated in MATLAB Simulink and tested on a Zynq-7 series FPGA.
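
    A software sketch of the idea, storing precomputed parity bits in a lookup table so that no XOR operations are needed in the encoding loop itself, is shown below; the rate-1/2, K = 7 generator polynomials are a common textbook pair used purely as an example, not necessarily those of the paper.

      # Sketch: XOR-free convolutional encoding by table lookup.  The parity
      # bits for every (state, input bit) pair are precomputed once; the
      # encoding loop itself performs only lookups.  Polynomials are examples.
      K = 7                               # constraint length (assumption)
      G = (0o171, 0o133)                  # rate-1/2 generator polynomials (example)

      def _parity(x):
          return bin(x).count("1") & 1

      # Build the lookup table: LUT[state][bit] = (output_bits, next_state)
      N_STATES = 1 << (K - 1)
      LUT = [[None, None] for _ in range(N_STATES)]
      for state in range(N_STATES):
          for bit in (0, 1):
              reg = (bit << (K - 1)) | state          # shift-register contents
              out = tuple(_parity(reg & g) for g in G)
              LUT[state][bit] = (out, reg >> 1)       # next state drops the oldest bit

      def encode(bits):
          """Encode a bit sequence using only table lookups (no run-time XORs)."""
          state, coded = 0, []
          for b in bits:
              out, state = LUT[state][b]
              coded.extend(out)
          return coded

      if __name__ == "__main__":
          print(encode([1, 0, 1, 1, 0, 0, 0]))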

  4. Forward Error Correction Convolutional Codes for RTAs' Networks: An Overview

    Directory of Open Access Journals (Sweden)

    Salehe I. Mrutu

    2014-06-01

    Full Text Available For more than half a century, Forward Error Correction Convolutional Codes (FEC-CC) have been in use to provide reliable data communication over various communication networks. The recent sharp increase in mobile communication services that require both bandwidth-intensive and interactive Real Time Applications (RTAs) imposes an increased demand for fast and reliable wireless communication networks. Transmission burst errors, data decoding complexity, and jitter are identified as key factors influencing the quality of service of RTA implementations over wireless transmission media. This paper reviews FEC-CC, one of the most commonly used algorithms in Forward Error Correction, for the purpose of improving its operational performance. Under this category, we have analyzed various previous works for their strengths and weaknesses in decoding FEC-CC. A comparison of various decoding algorithms is made based on their decoding computational complexity.

  5. Radio Frequency Interference mitigation using deep convolutional neural networks

    CERN Document Server

    Akeret, Joel; Lucchi, Aurelien; Refregier, Alexandre

    2016-01-01

    We propose a novel approach for mitigating radio frequency interference (RFI) signals in radio data using the latest advances in deep learning. We employ a special type of Convolutional Neural Network, the U-Net, that enables the classification of clean signal and RFI signatures in 2D time-ordered data acquired from a radio telescope. We train and assess the performance of this network using the HIDE & SEEK radio data simulation and processing packages, as well as data collected at the Bleien Observatory. We find that our U-Net implementation can outperform classical RFI mitigation algorithms such as SEEK's SumThreshold implementation. We publish our U-Net software package on GitHub under GPLv3 license.

  6. Radio frequency interference mitigation using deep convolutional neural networks

    Science.gov (United States)

    Akeret, J.; Chang, C.; Lucchi, A.; Refregier, A.

    2017-01-01

    We propose a novel approach for mitigating radio frequency interference (RFI) signals in radio data using the latest advances in deep learning. We employ a special type of Convolutional Neural Network, the U-Net, that enables the classification of clean signal and RFI signatures in 2D time-ordered data acquired from a radio telescope. We train and assess the performance of this network using the HIDE & SEEK radio data simulation and processing packages, as well as early Science Verification data acquired with the 7m single-dish telescope at the Bleien Observatory. We find that our U-Net implementation shows accuracy competitive with classical RFI mitigation algorithms such as SEEK's SUMTHRESHOLD implementation. We publish our U-Net software package on GitHub under GPLv3 license.

  7. Interleaved Convolutional Code and Its Viterbi Decoder Architecture

    Directory of Open Access Journals (Sweden)

    Jun Jin Kong

    2003-12-01

    Full Text Available We propose an area-efficient high-speed interleaved Viterbi decoder architecture, which is based on the state-parallel architecture with a register-exchange path memory structure, for interleaved convolutional codes. The state-parallel architecture uses as many add-compare-select (ACS) units as the number of trellis states. By replacing each delay (or storage element) in the state metrics memory (or path metrics memory) and the path memory (or survival memory) with I delays, an interleaved Viterbi decoder is obtained, where I is the interleaving degree. The decoding speed of this decoder architecture is as fast as the operating clock speed. The latency of the proposed interleaved Viterbi decoder is "decoding depth (DD) × interleaving degree (I) + extra delays (A)," which increases linearly with the interleaving degree I.

  8. Rapid Exact Signal Scanning With Deep Convolutional Neural Networks

    Science.gov (United States)

    Thom, Markus; Gritschneder, Franz

    2017-03-01

    A rigorous formulation of the dynamics of a signal processing scheme aimed at dense signal scanning without any loss in accuracy is introduced and analyzed. Related methods proposed in the recent past lack a satisfactory analysis of whether they actually fulfill any exactness constraints. This is improved through an exact characterization of the requirements for a sound sliding window approach. The tools developed in this paper are especially beneficial if Convolutional Neural Networks are employed, but can also be used as a more general framework to validate related approaches to signal scanning. The proposed theory helps to eliminate redundant computations and renders special case treatment unnecessary, resulting in a dramatic boost in efficiency particularly on massively parallel processors. This is demonstrated both theoretically in a computational complexity analysis and empirically on modern parallel processors.

  9. Convolutional Neural Networks for patient-specific ECG classification.

    Science.gov (United States)

    Kiranyaz, Serkan; Ince, Turker; Hamila, Ridha; Gabbouj, Moncef

    2015-01-01

    We propose a fast and accurate patient-specific electrocardiogram (ECG) classification and monitoring system using an adaptive implementation of 1D Convolutional Neural Networks (CNNs) that can fuse feature extraction and classification into a unified learner. In this way, a dedicated CNN will be trained for each patient by using relatively small common and patient-specific training data and thus it can also be used to classify long ECG records such as Holter registers in a fast and accurate manner. Alternatively, such a solution can conveniently be used for real-time ECG monitoring and early alert system on a light-weight wearable device. The experimental results demonstrate that the proposed system achieves a superior classification performance for the detection of ventricular ectopic beats (VEB) and supraventricular ectopic beats (SVEB).
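
    The sketch below shows what a compact 1D CNN of this kind might look like in PyTorch for fixed-length single-lead beats; the layer sizes, beat length and five-class output are illustrative assumptions rather than the authors' architecture.

      # Minimal sketch of a 1D CNN that fuses feature extraction and
      # classification for fixed-length ECG beats (sizes are assumptions).
      import torch
      import torch.nn as nn

      class ECGNet1D(nn.Module):
          def __init__(self, n_classes=5, beat_len=128):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
                  nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
              )
              self.classifier = nn.Linear(32 * (beat_len // 4), n_classes)

          def forward(self, x):                 # x: (batch, 1, beat_len)
              return self.classifier(self.features(x).flatten(1))

      if __name__ == "__main__":
          beats = torch.randn(16, 1, 128)       # toy batch of ECG beats
          print(ECGNet1D()(beats).shape)        # torch.Size([16, 5])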

  10. Plane-wave decomposition by spherical-convolution microphone array

    Science.gov (United States)

    Rafaely, Boaz; Park, Munhum

    2004-05-01

    Reverberant sound fields are widely studied, as they have a significant influence on the acoustic performance of enclosures in a variety of applications. For example, the intelligibility of speech in lecture rooms, the quality of music in auditoria, the noise level in offices, and the production of 3D sound in living rooms are all affected by the enclosed sound field. These sound fields are typically studied through frequency response measurements or statistical measures such as reverberation time, which do not provide detailed spatial information. The aim of the work presented in this seminar is the detailed analysis of reverberant sound fields. A measurement and analysis system based on acoustic theory and signal processing, designed around a spherical microphone array, is presented. Detailed analysis is achieved by decomposition of the sound field into waves, using spherical Fourier transform and spherical convolution. The presentation will include theoretical review, simulation studies, and initial experimental results.

  11. Enhanced Line Integral Convolution with Flow Feature Detection

    Science.gov (United States)

    Lane, David; Okada, Arthur

    1996-01-01

    The Line Integral Convolution (LIC) method, which blurs white noise textures along a vector field, is an effective way to visualize overall flow patterns in a 2D domain. The method produces a flow texture image based on the input velocity field defined in the domain. Because of the nature of the algorithm, the texture image tends to be blurry. This sometimes makes it difficult to identify boundaries where flow separation and reattachments occur. We present techniques to enhance LIC texture images and use colored texture images to highlight flow separation and reattachment boundaries. Our techniques have been applied to several flow fields defined in 3D curvilinear multi-block grids and scientists have found the results to be very useful.
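
    A bare-bones version of the core LIC operation (without the enhancement and feature-detection steps described above) can be written in a few lines of NumPy: each output pixel averages a white-noise texture along a short streamline traced forward and backward through the vector field. The step size and streamline length below are arbitrary choices for illustration.

      # Bare-bones Line Integral Convolution: average a white-noise texture
      # along short streamlines of a 2D vector field (no enhancements).
      import numpy as np

      def lic(vx, vy, noise, length=15, step=0.5):
          h, w = noise.shape
          out = np.zeros_like(noise)
          for i in range(h):
              for j in range(w):
                  total, count = 0.0, 0
                  for sign in (+1.0, -1.0):                 # trace both directions
                      y, x = i + 0.5, j + 0.5
                      for _ in range(length):
                          yi, xi = int(y), int(x)
                          if not (0 <= yi < h and 0 <= xi < w):
                              break
                          total += noise[yi, xi]
                          count += 1
                          u, v = vx[yi, xi], vy[yi, xi]
                          norm = np.hypot(u, v)
                          if norm < 1e-12:
                              break
                          x += sign * step * u / norm       # advect along the field
                          y += sign * step * v / norm
                  out[i, j] = total / max(count, 1)
          return out

      if __name__ == "__main__":
          h = w = 64
          yy, xx = np.mgrid[0:h, 0:w]
          vx, vy = -(yy - h / 2.0), (xx - w / 2.0)          # circular flow
          texture = lic(vx, vy, np.random.rand(h, w))
          print(texture.shape, texture.min(), texture.max())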

  12. Plant species classification using deep convolutional neural network

    DEFF Research Database (Denmark)

    Dyrmann, Mads; Karstoft, Henrik; Midtiby, Henrik Skov

    2016-01-01

    Information on which weed species are present within agricultural fields is important for site specific weed management. This paper presents a method that is capable of recognising plant species in colour images by using a convolutional neural network. The network is built from scratch trained...... and tested on a total of 10,413 images containing 22 weed and crop species at early growth stages. These images originate from six different data sets, which have variations with respect to lighting, resolution, and soil type. This includes images taken under controlled conditions with regard to camera...... stabilisation and illumination, and images shot with hand-held mobile phones in fields with changing lighting conditions and different soil types. For these 22 species, the network is able to achieve a classification accuracy of 86.2%....

  13. Robust Fusion of Irregularly Sampled Data Using Adaptive Normalized Convolution

    Directory of Open Access Journals (Sweden)

    Schutte Klamer

    2006-01-01

    Full Text Available We present a novel algorithm for image fusion from irregularly sampled data. The method is based on the framework of normalized convolution (NC, in which the local signal is approximated through a projection onto a subspace. The use of polynomial basis functions in this paper makes NC equivalent to a local Taylor series expansion. Unlike the traditional framework, however, the window function of adaptive NC is adapted to local linear structures. This leads to more samples of the same modality being gathered for the analysis, which in turn improves signal-to-noise ratio and reduces diffusion across discontinuities. A robust signal certainty is also adapted to the sample intensities to minimize the influence of outliers. Excellent fusion capability of adaptive NC is demonstrated through an application of super-resolution image reconstruction.
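
    For reference, the basic zeroth-order, non-adaptive normalized convolution that the adaptive method builds on can be written very compactly: the irregularly sampled signal is weighted by a certainty map, filtered with an applicability function, and divided by the filtered certainty. The Gaussian applicability below is an assumption; the paper's structure-adapted window and robust certainty are not reproduced.

      # Zeroth-order normalized convolution: reconstruct an image from
      # irregular samples by filtering signal*certainty and certainty with an
      # applicability function and taking their ratio (non-adaptive version).
      import numpy as np
      from scipy.ndimage import gaussian_filter

      def normalized_convolution(signal, certainty, sigma=2.0):
          num = gaussian_filter(signal * certainty, sigma)
          den = gaussian_filter(certainty, sigma)
          return num / np.maximum(den, 1e-12)

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          yy, xx = np.mgrid[0:128, 0:128]
          image = np.sin(xx / 10.0) + np.cos(yy / 14.0)     # toy "ground truth"
          certainty = (rng.random(image.shape) < 0.2).astype(float)  # keep 20% of samples
          sparse = image * certainty
          restored = normalized_convolution(sparse, certainty)
          print(np.abs(restored - image).mean())            # rough reconstruction error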

  14. Predicting Response to Neoadjuvant Chemotherapy with PET Imaging Using Convolutional Neural Networks.

    Directory of Open Access Journals (Sweden)

    Petros-Pavlos Ypsilantis

    Full Text Available Imaging of cancer with 18F-fluorodeoxyglucose positron emission tomography (18F-FDG PET has become a standard component of diagnosis and staging in oncology, and is becoming more important as a quantitative monitor of individual response to therapy. In this article we investigate the challenging problem of predicting a patient's response to neoadjuvant chemotherapy from a single 18F-FDG PET scan taken prior to treatment. We take a "radiomics" approach whereby a large amount of quantitative features is automatically extracted from pretherapy PET images in order to build a comprehensive quantification of the tumor phenotype. While the dominant methodology relies on hand-crafted texture features, we explore the potential of automatically learning low- to high-level features directly from PET scans. We report on a study that compares the performance of two competing radiomics strategies: an approach based on state-of-the-art statistical classifiers using over 100 quantitative imaging descriptors, including texture features as well as standardized uptake values, and a convolutional neural network, 3S-CNN, trained directly from PET scans by taking sets of adjacent intra-tumor slices. Our experimental results, based on a sample of 107 patients with esophageal cancer, provide initial evidence that convolutional neural networks have the potential to extract PET imaging representations that are highly predictive of response to therapy. On this dataset, 3S-CNN achieves an average 80.7% sensitivity and 81.6% specificity in predicting non-responders, and outperforms other competing predictive models.

  15. Predicting Response to Neoadjuvant Chemotherapy with PET Imaging Using Convolutional Neural Networks.

    Science.gov (United States)

    Ypsilantis, Petros-Pavlos; Siddique, Musib; Sohn, Hyon-Mok; Davies, Andrew; Cook, Gary; Goh, Vicky; Montana, Giovanni

    2015-01-01

    Imaging of cancer with 18F-fluorodeoxyglucose positron emission tomography (18F-FDG PET) has become a standard component of diagnosis and staging in oncology, and is becoming more important as a quantitative monitor of individual response to therapy. In this article we investigate the challenging problem of predicting a patient's response to neoadjuvant chemotherapy from a single 18F-FDG PET scan taken prior to treatment. We take a "radiomics" approach whereby a large amount of quantitative features is automatically extracted from pretherapy PET images in order to build a comprehensive quantification of the tumor phenotype. While the dominant methodology relies on hand-crafted texture features, we explore the potential of automatically learning low- to high-level features directly from PET scans. We report on a study that compares the performance of two competing radiomics strategies: an approach based on state-of-the-art statistical classifiers using over 100 quantitative imaging descriptors, including texture features as well as standardized uptake values, and a convolutional neural network, 3S-CNN, trained directly from PET scans by taking sets of adjacent intra-tumor slices. Our experimental results, based on a sample of 107 patients with esophageal cancer, provide initial evidence that convolutional neural networks have the potential to extract PET imaging representations that are highly predictive of response to therapy. On this dataset, 3S-CNN achieves an average 80.7% sensitivity and 81.6% specificity in predicting non-responders, and outperforms other competing predictive models.

  16. FULLY CONVOLUTIONAL NETWORKS FOR MULTI-MODALITY ISOINTENSE INFANT BRAIN IMAGE SEGMENTATION.

    Science.gov (United States)

    Nie, Dong; Wang, Li; Gao, Yaozong; Shen, Dinggang

    The segmentation of infant brain tissue images into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) plays an important role in studying early brain development. In the isointense phase (approximately 6-8 months of age), WM and GM exhibit similar levels of intensity in both T1 and T2 MR images, resulting in extremely low tissue contrast and thus making the tissue segmentation very challenging. The existing methods for tissue segmentation in this isointense phase usually employ patch-based sparse labeling on single T1, T2 or fractional anisotropy (FA) modality or their simply-stacked combinations without fully exploring the multi-modality information. To address the challenge, in this paper, we propose to use fully convolutional networks (FCNs) for the segmentation of isointense phase brain MR images. Instead of simply stacking the three modalities, we train one network for each modality image, and then fuse their high-layer features together for final segmentation. Specifically, we conduct a convolution-pooling stream for multimodality information from T1, T2, and FA images separately, and then combine them in high-layer for finally generating the segmentation maps as the outputs. We compared the performance of our approach with that of the commonly used segmentation methods on a set of manually segmented isointense phase brain images. Results showed that our proposed model significantly outperformed previous methods in terms of accuracy. In addition, our results also indicated a better way of integrating multi-modality images, which leads to performance improvement.

  17. Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition.

    Science.gov (United States)

    He, Kaiming; Zhang, Xiangyu; Ren, Shaoqing; Sun, Jian

    2015-09-01

    Existing deep convolutional neural networks (CNNs) require a fixed-size (e.g., 224 × 224) input image. This requirement is "artificial" and may reduce the recognition accuracy for the images or sub-images of an arbitrary size/scale. In this work, we equip the networks with another pooling strategy, "spatial pyramid pooling", to eliminate the above requirement. The new network structure, called SPP-net, can generate a fixed-length representation regardless of image size/scale. Pyramid pooling is also robust to object deformations. With these advantages, SPP-net should in general improve all CNN-based image classification methods. On the ImageNet 2012 dataset, we demonstrate that SPP-net boosts the accuracy of a variety of CNN architectures despite their different designs. On the Pascal VOC 2007 and Caltech101 datasets, SPP-net achieves state-of-the-art classification results using a single full-image representation and no fine-tuning. The power of SPP-net is also significant in object detection. Using SPP-net, we compute the feature maps from the entire image only once, and then pool features in arbitrary regions (sub-images) to generate fixed-length representations for training the detectors. This method avoids repeatedly computing the convolutional features. In processing test images, our method is 24-102 × faster than the R-CNN method, while achieving better or comparable accuracy on Pascal VOC 2007. In ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2014, our methods rank #2 in object detection and #3 in image classification among all 38 teams. This manuscript also introduces the improvement made for this competition.
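
    The pooling operation itself is easy to sketch. The snippet below is one convenient way to express it (an assumption, using PyTorch's adaptive pooling rather than the authors' code): the feature map is pooled over a pyramid of bin grids and the results are concatenated into a fixed-length vector regardless of the input size.

      # Sketch of a spatial pyramid pooling layer: pool the feature map into
      # fixed bin grids (e.g. 4x4, 2x2, 1x1) and concatenate, so the output
      # length is independent of the input image size.
      import torch
      import torch.nn.functional as F

      def spatial_pyramid_pool(features, levels=(4, 2, 1)):
          # features: (batch, channels, H, W) with arbitrary H, W
          pooled = [F.adaptive_max_pool2d(features, output_size=n).flatten(1)
                    for n in levels]
          return torch.cat(pooled, dim=1)      # (batch, channels * sum(n*n))

      if __name__ == "__main__":
          fmap_a = torch.randn(2, 256, 13, 13)     # feature maps of different sizes
          fmap_b = torch.randn(2, 256, 20, 27)
          print(spatial_pyramid_pool(fmap_a).shape)   # torch.Size([2, 5376])
          print(spatial_pyramid_pool(fmap_b).shape)   # torch.Size([2, 5376])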

  18. DanQ: a hybrid convolutional and recurrent deep neural network for quantifying the function of DNA sequences.

    Science.gov (United States)

    Quang, Daniel; Xie, Xiaohui

    2016-06-20

    Modeling the properties and functions of DNA sequences is an important, but challenging task in the broad field of genomics. This task is particularly difficult for non-coding DNA, the vast majority of which is still poorly understood in terms of function. A powerful predictive model for the function of non-coding DNA can have enormous benefit for both basic science and translational research because over 98% of the human genome is non-coding and 93% of disease-associated variants lie in these regions. To address this need, we propose DanQ, a novel hybrid convolutional and bi-directional long short-term memory recurrent neural network framework for predicting non-coding function de novo from sequence. In the DanQ model, the convolution layer captures regulatory motifs, while the recurrent layer captures long-term dependencies between the motifs in order to learn a regulatory 'grammar' to improve predictions. DanQ improves considerably upon other models across several metrics. For some regulatory markers, DanQ can achieve over a 50% relative improvement in the area under the precision-recall curve metric compared to related models. We have made the source code available at the github repository http://github.com/uci-cbcl/DanQ.
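
    A much-reduced sketch of such a hybrid architecture is shown below: a convolution layer scans one-hot encoded DNA for motifs, max-pooling shortens the sequence, and a bidirectional LSTM models dependencies between the pooled motif activations before a sigmoid output layer. The layer sizes and number of output targets are illustrative assumptions, not DanQ's published hyperparameters.

      # Reduced sketch of a hybrid convolutional + bidirectional-LSTM model
      # for one-hot encoded DNA (layer sizes are illustrative, not DanQ's).
      import torch
      import torch.nn as nn

      class HybridDNANet(nn.Module):
          def __init__(self, n_targets=10):
              super().__init__()
              self.conv = nn.Sequential(
                  nn.Conv1d(4, 64, kernel_size=19, padding=9),  # motif scanner
                  nn.ReLU(),
                  nn.MaxPool1d(13),
              )
              self.lstm = nn.LSTM(64, 32, batch_first=True, bidirectional=True)
              self.head = nn.Sequential(nn.Linear(64, n_targets), nn.Sigmoid())

          def forward(self, x):                # x: (batch, 4, seq_len), one-hot DNA
              h = self.conv(x).permute(0, 2, 1)          # (batch, steps, channels)
              _, (h_n, _) = self.lstm(h)                 # final hidden states
              h_cat = torch.cat([h_n[0], h_n[1]], dim=1) # forward + backward
              return self.head(h_cat)

      if __name__ == "__main__":
          seqs = torch.randn(8, 4, 1000)       # stand-in for one-hot sequences
          print(HybridDNANet()(seqs).shape)    # torch.Size([8, 10])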

  19. An Unsplit Convolutional-Perfectly-Matched-Layer Based Boundary Formulation for the Stratified Linearized Ideal MHD equations

    CERN Document Server

    Hanasoge, S M; Gizon, L

    2010-01-01

    Perfectly matched layers are a very efficient and accurate way to absorb waves in media. We present a stable convolutional unsplit perfectly matched formulation designed for the linearized stratified Euler equations. However, the technique as applied to the Magneto-hydrodynamic (MHD) equations requires the use of a sponge, which, despite placing the perfectly matched status in question, is still highly efficient at absorbing outgoing waves. We study solutions of the equations in the backdrop of models of linearized wave propagation in the Sun. We test the numerical stability of the schemes by integrating the equations over a large number of wave periods.

  20. Convoluted cells as a marker for maternal cell contamination in CVS cultures

    DEFF Research Database (Denmark)

    Hertz, Jens Michael; Jensen, P K; Therkelsen, A J

    1987-01-01

    In order to identify cells of maternal origin in CVS cultures, tissue from 1st trimester abortions was cultivated and the cultures were stained in situ for X-chromatin. Convoluted cells and maternal fibroblasts were found to be positive. By chromosome analysis of cultures from 105 diagnostic placenta...... biopsies, obtained by the transabdominal route, metaphases of maternal origin were found in nine cases. In eight of these cases colonies of convoluted cells were observed. We conclude that convoluted cells are of maternal origin and are a reliable marker for maternal cell contamination in CVS cultures....

  1. Scattering correction based on regularization de-convolution for Cone-Beam CT

    OpenAIRE

    Xie, Shi-peng; Yan, Rui-ju

    2016-01-01

    In Cone-Beam CT (CBCT) imaging systems, the scattering phenomenon has a significant impact on the reconstructed image and is a long-standing research topic in CBCT. In this paper, we propose a simple, novel and fast approach for mitigating scatter artifacts and increasing the image contrast in CBCT, belonging to the category of convolution-based methods in which the projection data are de-convolved with a convolution kernel. A key step in this method is how to determine the convolution kernel. Co...

  2. Asymptotic Distributions of the Overshoot and Undershoots for the Lévy Insurance Risk Process in the Cramér and Convolution Equivalent Cases

    CERN Document Server

    Griffin, Philip S; van Schaik, Kees

    2011-01-01

    Recent models of the insurance risk process use a Lévy process to generalise the traditional Cramér-Lundberg compound Poisson model. This paper is concerned with the behaviour of the distributions of the overshoot and undershoots of a high level, for a Lévy process which drifts to −∞ and satisfies a Cramér or a convolution equivalent condition. We derive these asymptotics under minimal conditions in the Cramér case, and compare them with known results for the convolution equivalent case, drawing attention to the striking and unexpected fact that they become identical when certain parameters tend to equality. Thus, at least regarding these quantities, the "medium-heavy" tailed convolution equivalent model segues into the "light-tailed" Cramér model in a natural way. This suggests a usefully expanded flexibility for modelling the insurance risk process. We illustrate this relationship by comparing the asymptotic distributions obtained for the overshoot and undershoots, assuming the Lévy p...

  3. Generating Poetry Title Based on Semantic Relevance with Convolutional Neural Network

    Science.gov (United States)

    Li, Z.; Niu, K.; He, Z. Q.

    2017-09-01

    Several approaches have been proposed to automatically generate Chinese classical poetry (CCP) in the past few years, but automatically generating the title of CCP is still a difficult problem. The difficulties are mainly reflected in two aspects. First, the words used in CCP are very different from modern Chinese words and there are no valid word segmentation tools. Second, the semantic relevance of characters in CCP not only exists in one sentence but also exists between the same positions of adjacent sentences, which is hard to grasp by the traditional text summarization models. In this paper, we propose an encoder-decoder model for generating the title of CCP. Our model encoder is a convolutional neural network (CNN) with two kinds of filters. To capture the commonly used words in one sentence, one kind of filters covers two characters horizontally at each step. The other covers two characters vertically at each step and can grasp the semantic relevance of characters between adjacent sentences. Experimental results show that our model is better than several other related models and can capture the semantic relevance of CCP more accurately.

  4. [An improvement on the two-dimensional convolution method of image reconstruction and its application to SPECT].

    Science.gov (United States)

    Suzuki, S; Arai, H

    1990-04-01

    In single-photon emission computed tomography (SPECT) and X-ray CT, a one-dimensional (1-D) convolution method is used for image reconstruction from projections. The method applies a 1-D convolution filter to the projection data in the space domain and back-projects the filtered data for reconstruction. Images can also be reconstructed by first forming the 2-D backprojection images from the projections and then convolving them with a 2-D space-domain filter. This is reconstruction by the 2-D convolution method, and its reconstruction process is the reverse of the 1-D convolution method. Since the 2-D convolution method is inferior to the 1-D convolution method in reconstruction speed, it has had no practical use. In an actual reconstruction by the 2-D convolution method, convolution is performed on a finite plane called the convolution window. A convolution window of size N x N needs a 2-D discrete filter of the same size. If good reconstructions can be achieved with small convolution windows, the reconstruction time for the 2-D convolution method can be reduced. For this purpose, 2-D filters of a simple functional form are proposed which can give good reconstructions with small convolution windows. They are here defined on a finite plane, depending on the window size used, although a filter function is usually defined on the infinite plane. They are, however, set so that they better approximate the properties of a 2-D filter function defined on the infinite plane. Filters of size N x N are thus determined, and their values vary with the window size. The filters are applied to image reconstructions in SPECT. (ABSTRACT TRUNCATED AT 250 WORDS)

  5. A new convolution algorithm for loss probability analysis in multiservice networks

    DEFF Research Database (Denmark)

    Huang, Qian; Ko, King-Tim; Iversen, Villy Bæk

    2011-01-01

    Performance analysis of multiservice loss systems generally focuses on accurate and efficient calculation methods for the traffic loss probability. The convolution algorithm is one of the existing efficient numerical methods. Exact loss probabilities are obtainable from the convolution algorithm in systems...... where the bandwidth is fully shared by all traffic classes, but not for systems with trunk reservation, i.e. where part of the bandwidth is reserved for a special class of traffic. A proposal known as the asymmetric convolution algorithm (ACA) has been made to overcome this deficiency of the convolution...... algorithm. It obtains an approximation of the channel occupancy distribution in multiservice systems with trunk reservation. However, the ACA approximation is only accurate with two traffic flows; increased approximation errors are observed for systems with three or more traffic flows. In this paper, we
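
    For readers unfamiliar with the basic convolution algorithm that such work extends, a minimal sketch for the fully shared case (Poisson traffic, no trunk reservation) is given below: each class contributes a bandwidth-occupancy distribution, the distributions are convolved and truncated at the link capacity, and the time congestion of a class is the probability mass of states in which its bandwidth demand no longer fits. The traffic values are arbitrary examples.

      # Minimal convolution algorithm for a fully shared link (no trunk
      # reservation): Poisson classes with offered traffic A_k and bandwidth
      # demand d_k channels per call.  Example traffic values are arbitrary.
      from math import factorial

      def class_distribution(A, d, C):
          """Occupancy distribution (in channels) of one class alone on C channels."""
          p = [0.0] * (C + 1)
          j = 0
          while j * d <= C:
              p[j * d] = A ** j / factorial(j)
              j += 1
          return p

      def convolve_truncate(p, q, C):
          r = [0.0] * (C + 1)
          for i, pi in enumerate(p):
              for j, qj in enumerate(q):
                  if i + j <= C:
                      r[i + j] += pi * qj
          return r

      def time_congestion(classes, C):
          """classes: list of (A_k, d_k).  Returns the blocking probability per class."""
          agg = [1.0] + [0.0] * C
          for A, d in classes:
              agg = convolve_truncate(agg, class_distribution(A, d, C), C)
          norm = sum(agg)
          q = [x / norm for x in agg]
          return [sum(q[C - d + 1:]) for _, d in classes]

      if __name__ == "__main__":
          classes = [(10.0, 1), (4.0, 2), (1.5, 5)]   # (offered traffic, channels per call)
          print(time_congestion(classes, C=30))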

  6. Time-Domain Convolutive Blind Source Separation Employing Selective-Tap Adaptive Algorithms

    Directory of Open Access Journals (Sweden)

    Pan Qiongfeng

    2007-01-01

    Full Text Available We investigate novel algorithms to improve the convergence and reduce the complexity of time-domain convolutive blind source separation (BSS) algorithms. First, we propose the MMax partial update time-domain convolutive BSS (MMax BSS) algorithm. We demonstrate that the partial update scheme applied in the single-channel MMax LMS algorithm can be extended to multichannel time-domain convolutive BSS with little deterioration in performance and possible computational complexity savings. Next, we propose an exclusive maximum selective-tap time-domain convolutive BSS algorithm (XM BSS) that reduces the interchannel coherence of the tap-input vectors and improves the conditioning of the autocorrelation matrix, resulting in an improved convergence rate and reduced misalignment. Moreover, the computational complexity is reduced since only half of the tap inputs are selected for updating. Simulation results have shown a significant improvement in convergence rate compared to existing techniques.
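
    The MMax partial-update idea is easiest to see in the single-channel adaptive-filter setting from which it originates: at each iteration only the M coefficients whose tap inputs currently have the largest magnitude are updated. The sketch below shows this for a toy NLMS system-identification problem; it illustrates the selection rule only, not the multichannel convolutive BSS algorithms of the paper.

      # MMax partial-update NLMS: update only the M filter taps whose current
      # inputs have the largest magnitude (toy system-identification example).
      import numpy as np

      def mmax_nlms(x, d, n_taps=32, M=8, mu=0.5, eps=1e-8):
          w = np.zeros(n_taps)
          buf = np.zeros(n_taps)                  # most recent inputs, newest first
          errors = np.empty(len(x))
          for n in range(len(x)):
              buf = np.roll(buf, 1)
              buf[0] = x[n]
              e = d[n] - w @ buf
              idx = np.argpartition(np.abs(buf), -M)[-M:]   # M largest |x| taps
              w[idx] += mu * e * buf[idx] / (buf @ buf + eps)
              errors[n] = e
          return w, errors

      if __name__ == "__main__":
          rng = np.random.default_rng(1)
          h_true = rng.standard_normal(32) * np.exp(-0.2 * np.arange(32))
          x = rng.standard_normal(5000)
          d = np.convolve(x, h_true)[: len(x)]
          w, e = mmax_nlms(x, d)
          print(np.linalg.norm(w - h_true) / np.linalg.norm(h_true))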

  7. Directional Radiometry and Radiative Transfer: the Convoluted Path From Centuries-old Phenomenology to Physical Optics

    Science.gov (United States)

    Mishchenko, Michael I.

    2014-01-01

    This Essay traces the centuries-long history of the phenomenological disciplines of directional radiometry and radiative transfer in turbid media, discusses their fundamental weaknesses, and outlines the convoluted process of their conversion into legitimate branches of physical optics.

  8. Scene Text Detection and Segmentation based on Cascaded Convolution Neural Networks.

    Science.gov (United States)

    Tang, Youbao; Wu, Xiangqian

    2017-01-20

    Scene text detection and segmentation are two important and challenging research problems in the field of computer vision. This paper proposes a novel method for scene text detection and segmentation based on cascaded convolution neural networks (CNNs). In this method, a CNN based text-aware candidate text region (CTR) extraction model (named detection network, DNet) is designed and trained using both the edges and the whole regions of text, with which coarse CTRs are detected. A CNN based CTR refinement model (named segmentation network, SNet) is then constructed to precisely segment the coarse CTRs into text to get the refined CTRs. With DNet and SNet, much fewer CTRs are extracted than with traditional approaches while more true text regions are kept. The refined CTRs are finally classified using a CNN based CTR classification model (named classification network, CNet) to get the final text regions. All of these CNN based models are modified from VGGNet-16. Extensive experiments on three benchmark datasets demonstrate that the proposed method achieves state-of-the-art performance and greatly outperforms other scene text detection and segmentation approaches.

  9. Relative n-widths of periodic convolution classes with NCVD-kernel and B-kernel

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    In this paper, we consider the relative n-widths of two kinds of periodic convolution classes, Kp(K) and Bp(G), whose convolution kernels are the NCVD-kernel K and the B-kernel G. The asymptotic estimates of Kn(Kp(K), Kp(K))q and Kn(Bp(G), Bp(G))q are obtained for p = 1 and ∞, 1 ≤ q ≤ ∞.

  10. SU-E-T-607: An Experimental Validation of Gamma Knife Based Convolution Algorithm On Solid Acrylic Anthropomorphic Phantom

    Energy Technology Data Exchange (ETDEWEB)

    Gopishankar, N; Bisht, R K [All India Institute of Medical Sciences, New Delhi (India)

    2014-06-01

    Purpose: To perform a dosimetric evaluation of the convolution algorithm in Gamma Knife (Perfexion model) using a solid acrylic anthropomorphic phantom. Methods: An in-house developed acrylic phantom with an ion chamber insert was used for this purpose. The middle insert was designed to fit the ion chamber from the top (head) as well as from the bottom (neck) of the phantom, hence measurements were done at two different positions simultaneously. A Leksell frame fixed to the phantom simulated patient treatment. Prior to the dosimetric study, the Hounsfield units and electron density of the acrylic material were incorporated into the calibration curve in the TPS for the convolution algorithm calculation. A CT scan of the phantom with the ion chamber (PTW Freiberg, 0.125cc) was obtained with the following scanning parameters: tube voltage 110kV, slice thickness 1mm and FOV 240mm. Three separate single-shot plans were generated in the LGP TPS (Version 10.1) with collimators of 16mm, 8mm and 4mm, respectively, for both ion chamber positions. Both TMR10 and convolution-algorithm-based planning (CABP) were used for dose calculation. A dose of 6Gy at the 100% isodose was prescribed at the centre of the ion chamber visible in the CT scan. The phantom with the ion chamber was positioned on the treatment couch for dose delivery. Results: The ion-chamber-measured dose was 5.98Gy for the 16mm collimator shot plan, with less than 1% deviation for the convolution algorithm, whereas with TMR10 the measured dose was 5.6Gy. For the 8mm and 4mm collimator plans, doses of only 3.86Gy and 2.18Gy, respectively, were delivered at the TPS-calculated time for CABP. Conclusion: CABP is expected to accurately predict the time for dose delivery for all collimators, but significant variation in the measured dose was observed for the 8mm and 4mm collimators, which may be due to a collimator size effect. Metal artifacts caused by the pins and frame on the CT scan may also play a role in misinterpreting CABP. The study carried out requires further investigation.

  11. Examples of minimal-memory, non-catastrophic quantum convolutional encoders

    CERN Document Server

    Wilde, Mark M; Hosseini-Khayat, Saied

    2010-01-01

    One of the most important open questions in the theory of quantum convolutional coding is to determine a minimal-memory, non-catastrophic, polynomial-depth convolutional encoder for an arbitrary quantum convolutional code. Here, we present a technique that finds quantum convolutional encoders with such desirable properties for several example quantum convolutional codes (an exposition of our technique in full generality will appear elsewhere). We first show how to encode the well-studied Forney-Grassl-Guha (FGG) code with an encoder that exploits just one memory qubit (the former Grassl-Roetteler encoder requires 15 memory qubits). We then show how our technique can find an online decoder corresponding to this encoder, and we also detail the operation of our technique on a different example of a quantum convolutional code. Finally, the reduction in memory for the FGG encoder makes it feasible to simulate the performance of a quantum turbo code employing it, and we present the results of such simulations.

  12. DEVELOPMENT OF OPTIMAL FILTERS OBTAINED THROUGH CONVOLUTION METHODS, USED FOR FINGERPRINT IMAGE ENHANCEMENT AND RESTORATION

    Directory of Open Access Journals (Sweden)

    Cătălin LUPU

    2014-12-01

    Full Text Available This article presents the development of optimal filters through convolution methods, necessary for restoring, correcting and improving fingerprints acquired from a sensor, so as to provide the best possible image at the output. After the image has been binarized and equalized, the Canny filter is applied in order to eliminate noise (filtering the image with a Gaussian filter), perform non-maxima suppression, adaptively binarize the gradient magnitude, and extend edge points by hysteresis. The image resulting from the Canny filter is not ideal: the result may be an image with very fragmented edges and many pores in the ridges. To this image, a bank of convolution filters is applied one after another (Kirsch, Laplace, Roberts, Prewitt, Sobel, Frei-Chen, an averaging convolution filter, a circular convolution filter, a Laplacian convolution filter, a Gaussian convolution filter, a LoG convolution filter, DoG, inverted filters, Wiener, the "equalization of the power spectrum" filter (an intermediary between the Wiener filter and the inverted filter), the geometrical average filter, etc.), each with different characteristics.
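
    As a small illustration of applying part of such a filter bank in sequence, the sketch below runs a few of the named convolution kernels (Gaussian smoothing, averaging, Sobel, Prewitt, Laplacian) over an image with SciPy; the kernels are textbook versions, and the selection and ordering are arbitrary rather than the article's tuned pipeline.

      # Sketch: apply a small bank of classical convolution kernels in sequence
      # (textbook kernel definitions; the ordering here is arbitrary).
      import numpy as np
      from scipy.ndimage import convolve, gaussian_filter

      KERNELS = {
          "average": np.ones((3, 3)) / 9.0,
          "sobel_x": np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float),
          "prewitt_x": np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], float),
          "laplacian": np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], float),
      }

      def filter_bank(image):
          out = gaussian_filter(image, sigma=1.0)        # Gaussian smoothing first
          for name, k in KERNELS.items():
              out = convolve(out, k, mode="reflect")
          return out

      if __name__ == "__main__":
          img = np.random.rand(64, 64)                   # stand-in for a fingerprint
          print(filter_bank(img).shape)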

  13. A deep convolutional neural network for recognizing foods

    Science.gov (United States)

    Jahani Heravi, Elnaz; Habibi Aghdam, Hamed; Puig, Domenec

    2015-12-01

    Controlling food intake is an efficient way for each person to tackle the obesity problem seen in countries worldwide. This is achievable by developing a smartphone application that is able to recognize foods and compute their calories. State-of-the-art methods are chiefly based on hand-crafted feature extraction methods such as HOG and Gabor. Recent advances in large-scale object recognition datasets such as ImageNet have revealed that deep Convolutional Neural Networks (CNNs) possess more representation power than hand-crafted features. The main challenge with CNNs is to find the appropriate architecture for each problem. In this paper, we propose a deep CNN consisting of 769,988 parameters. Our experiments show that the proposed CNN outperforms the state-of-the-art methods and improves the best result of traditional methods by 17%. Moreover, using an ensemble of two CNNs trained at two different times, we are able to improve the classification performance by 21.5%.

  14. Convolutional networks for fast, energy-efficient neuromorphic computing.

    Science.gov (United States)

    Esser, Steven K; Merolla, Paul A; Arthur, John V; Cassidy, Andrew S; Appuswamy, Rathinakumar; Andreopoulos, Alexander; Berg, David J; McKinstry, Jeffrey L; Melano, Timothy; Barch, Davis R; di Nolfo, Carmelo; Datta, Pallab; Amir, Arnon; Taba, Brian; Flickner, Myron D; Modha, Dharmendra S

    2016-10-11

    Deep networks are now able to achieve human-level performance on a broad spectrum of recognition tasks. Independently, neuromorphic computing has now demonstrated unprecedented energy-efficiency through a new chip architecture based on spiking neurons, low precision synapses, and a scalable communication network. Here, we demonstrate that neuromorphic computing, despite its novel architectural primitives, can implement deep convolution networks that (i) approach state-of-the-art classification accuracy across eight standard datasets encompassing vision and speech, (ii) perform inference while preserving the hardware's underlying energy-efficiency and high throughput, running on the aforementioned datasets at between 1,200 and 2,600 frames/s and using between 25 and 275 mW (effectively >6,000 frames/s per Watt), and (iii) can be specified and trained using backpropagation with the same ease-of-use as contemporary deep learning. This approach allows the algorithmic power of deep learning to be merged with the efficiency of neuromorphic processors, bringing the promise of embedded, intelligent, brain-inspired computing one step closer.

  15. Method for Viterbi decoding of large constraint length convolutional codes

    Science.gov (United States)

    Hsu, In-Shek; Truong, Trieu-Kie; Reed, Irving S.; Jing, Sun

    1988-05-01

    A new method of Viterbi decoding of convolutional codes lends itself to a pipeline VLSI architecture using a single sequential processor to compute the path metrics in the Viterbi trellis. An array method is used to store the path information for NK intervals, where N is an integer design parameter and K is the constraint length. The surviving path at the end of each NK interval is then taken from the last entry in the array. A trace-back method is used to return to the beginning of the selected path, i.e., to the first time unit of the NK interval, and read out the stored branch metrics of the selected path, which correspond to the message bits. The decoding decision made in this way is no longer maximum likelihood, but it can be almost as good, provided that the constraint length K is not too small. The advantage is that, for a long message, it is not necessary to provide a large memory to store the trellis-derived information until the end of the message before selecting the path to be decoded; the selection is made at the end of every NK time units, thus decoding a long message in successive blocks.
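
    The following is a hedged Python sketch of the block-wise traceback idea: survivor decisions are stored for N*K time units and the decoder traces back from the best state at the end of each block. The rate-1/2, K=3 code with generators (7,5) octal, the block length, and the hard-decision Hamming branch metric are illustrative assumptions, not taken from the cited work.

```python
import numpy as np

K = 3                      # constraint length (assumed for illustration)
N_BLOCK = 5 * K            # traceback depth: N*K time units with N = 5
G = (0b111, 0b101)         # generator polynomials (7, 5) in octal
N_STATES = 1 << (K - 1)

def branch_output(state, bit):
    """Encoder output pair for input `bit` when the register holds `state`."""
    reg = (bit << (K - 1)) | state
    return [bin(reg & g).count("1") & 1 for g in G]

def viterbi_block_decode(received):
    """Hard-decision decode of a list of (bit, bit) pairs, block by block."""
    metrics = np.full(N_STATES, np.inf)
    metrics[0] = 0.0
    decoded = []
    for start in range(0, len(received), N_BLOCK):
        block = received[start:start + N_BLOCK]
        # survivor array for this block: (previous state, decoded bit)
        survivors = np.zeros((len(block), N_STATES, 2), dtype=int)
        for t, r in enumerate(block):
            new_metrics = np.full(N_STATES, np.inf)
            for prev in range(N_STATES):
                if not np.isfinite(metrics[prev]):
                    continue
                for bit in (0, 1):
                    out = branch_output(prev, bit)
                    nxt = ((bit << (K - 1)) | prev) >> 1
                    m = metrics[prev] + (out[0] != r[0]) + (out[1] != r[1])
                    if m < new_metrics[nxt]:
                        new_metrics[nxt] = m
                        survivors[t, nxt] = (prev, bit)
            metrics = new_metrics
        # trace back from the best state at the end of this N*K block
        state = int(np.argmin(metrics))
        bits = []
        for t in range(len(block) - 1, -1, -1):
            prev, bit = survivors[t, state]
            bits.append(int(bit))
            state = int(prev)
        decoded.extend(reversed(bits))
    return decoded
```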

  16. Gearbox Fault Identification and Classification with Convolutional Neural Networks

    Directory of Open Access Journals (Sweden)

    ZhiQiang Chen

    2015-01-01

    Full Text Available Vibration signals of a gearbox are sensitive to the existence of faults. Based on vibration signals, this paper presents an implementation of the deep learning algorithm known as the convolutional neural network (CNN) for fault identification and classification in gearboxes. Different combinations of condition patterns based on some basic fault conditions are considered. Twenty test cases with different combinations of condition patterns are used, where each test case includes 12 combinations of different basic condition patterns. Vibration signals are preprocessed using statistical measures from the time-domain signal, such as standard deviation, skewness, and kurtosis. In the frequency domain, the spectrum obtained with the FFT is divided into multiple bands, and the root mean square (RMS) value is calculated for each one so that the energy maintains its shape at the spectrum peaks. The achieved accuracy indicates that the proposed approach is highly reliable and applicable in fault diagnosis of industrial reciprocating machinery. Compared with peer algorithms, the present method exhibits the best performance in gearbox fault diagnosis.
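
    A minimal Python sketch of the preprocessing described above: time-domain statistics plus per-band RMS of the FFT magnitude spectrum. The number of bands and the synthetic signal are illustrative assumptions.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def vibration_features(signal, n_bands=8):
    # time-domain statistical measures
    time_feats = [np.std(signal), skew(signal), kurtosis(signal)]
    # frequency domain: split the magnitude spectrum into equal bands and
    # take the RMS of each band so the energy at spectral peaks is preserved
    spectrum = np.abs(np.fft.rfft(signal))
    bands = np.array_split(spectrum, n_bands)
    band_rms = [np.sqrt(np.mean(b ** 2)) for b in bands]
    return np.array(time_feats + band_rms)

x = np.random.randn(4096)            # stand-in for one vibration record
print(vibration_features(x).shape)   # (3 + n_bands,) feature vector
```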

  17. Multi-modal vertebrae recognition using Transformed Deep Convolution Network.

    Science.gov (United States)

    Cai, Yunliang; Landis, Mark; Laidley, David T; Kornecki, Anat; Lum, Andrea; Li, Shuo

    2016-07-01

    Automatic vertebra recognition, including the identification of vertebra locations and names in multiple image modalities, is in high demand in spinal clinical diagnosis, where large amounts of imaging data from various modalities are frequently and interchangeably used. However, recognition is challenging due to the variations in MR/CT appearance and in the shape/pose of the vertebrae. In this paper, we propose a method for multi-modal vertebra recognition using a novel deep learning architecture called the Transformed Deep Convolution Network (TDCN). This new architecture can fuse image features from different modalities in an unsupervised manner and automatically rectify the pose of the vertebra. The fusion of MR and CT image features improves the discriminativity of the feature representation and enhances the invariance of the vertebra pattern, which allows us to automatically process images of different contrasts, resolutions and protocols, even with different sizes and orientations. The feature fusion and pose rectification are naturally incorporated in a multi-layer deep learning network. Experimental results show that our method outperforms existing detection methods and provides fully automatic location+naming+pose recognition for routine clinical practice. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. New Parallel Interference Cancellation for Convolutionally Coded CDMA Systems

    Institute of Scientific and Technical Information of China (English)

    Xu Guo-xiong; Gan Liang-cai; Huang Tian-xi

    2004-01-01

    Based on the BCJR algorithm proposed by Bahl et al. and linear soft decision feedback, a reduced-complexity parallel interference cancellation (simplified PIC) scheme for convolutionally coded DS-CDMA systems is proposed. By computer simulation, we compare the simplified PIC with the exact PIC. The results show that the simplified PIC can achieve performance close to the exact PIC if the mean values of the coded symbols are computed linearly in terms of the sum of the initial a priori log-likelihood ratio (LLR) and the updated a priori LLR, whereas a significant performance loss occurs if the mean values of the coded symbols are computed linearly in terms of the updated a priori LLR only. We also compare the simplified PIC with the matched-filter (MF) receiver and conventional PICs. The simulation results show that the simplified PIC clearly outperforms the MF receiver and conventional PICs; at a signal-to-noise ratio (SNR) of 7 dB, for example, the bit error rate is about 10^-4 for the simplified PIC, which is far below that of the matched-filter receiver and the conventional PIC.

  19. Toward an optimal convolutional neural network for traffic sign recognition

    Science.gov (United States)

    Habibi Aghdam, Hamed; Jahani Heravi, Elnaz; Puig, Domenec

    2015-12-01

    Convolutional Neural Networks (CNN) beat human performance on the German Traffic Sign Benchmark competition. Both the winner and the runner-up teams trained CNNs to recognize 43 traffic signs. However, neither network is computationally efficient, since they have many free parameters and use computationally expensive activation functions. In this paper, we propose a new architecture that reduces the number of parameters by 27% and 22% compared with the two networks, respectively. Furthermore, our network uses the Leaky Rectified Linear Unit (ReLU) as its activation function, which needs only a few operations to produce its result. Specifically, compared with the hyperbolic tangent and rectified sigmoid activation functions utilized in the two networks, Leaky ReLU needs only one multiplication operation, which makes it computationally much more efficient than the other two functions. Our experiments on the German Traffic Sign Benchmark dataset show a 0.6% improvement on the best reported classification accuracy, while the overall number of parameters is reduced by 85% compared with the winner network of the competition.
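
    As a small illustration of the point about activation cost, the sketch below implements Leaky ReLU in NumPy: for negative inputs the output is a single multiplication by a small slope, which is why it is cheaper than tanh- or sigmoid-like activations. The slope value 0.01 is an illustrative default, not necessarily the one used by the authors.

```python
import numpy as np

def leaky_relu(x, negative_slope=0.01):
    # one multiplication for negative inputs, identity for non-negative ones
    return np.where(x >= 0, x, negative_slope * x)

print(leaky_relu(np.array([-2.0, -0.5, 0.0, 1.5])))
```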

  20. Convolutional neural networks for P300 detection with application to brain-computer interfaces.

    Science.gov (United States)

    Cecotti, Hubert; Gräser, Axel

    2011-03-01

    A Brain-Computer Interface (BCI) is a specific type of human-computer interface that enables direct communication between humans and computers by analyzing brain measurements. Oddball paradigms are used in BCI to generate event-related potentials (ERPs), like the P300 wave, on targets selected by the user. A P300 speller is based on this principle, where the detection of P300 waves allows the user to write characters. The P300 speller is composed of two classification problems. The first classification is to detect the presence of a P300 in the electroencephalogram (EEG). The second one corresponds to the combination of different P300 responses for determining the right character to spell. A new method for the detection of P300 waves is presented. This model is based on a convolutional neural network (CNN). The topology of the network is adapted to the detection of P300 waves in the time domain. Seven classifiers based on the CNN are proposed: four single classifiers with different feature sets and three multiclassifiers. These models are tested and compared on Data set II of the third BCI competition. The best result is obtained with a multiclassifier solution, with a recognition rate of 95.5 percent without channel selection before classification. The proposed approach also provides a new way of analyzing brain activity thanks to the receptive fields of the CNN models.

  1. Deep-HiTS: Rotation Invariant Convolutional Neural Network for Transient Detection

    Science.gov (United States)

    Cabrera-Vives, Guillermo; Reyes, Ignacio; Förster, Francisco; Estévez, Pablo A.; Maureira, Juan-Carlos

    2017-02-01

    We introduce Deep-HiTS, a rotation-invariant convolutional neural network (CNN) model for classifying images of transient candidates into artifacts or real sources for the High cadence Transient Survey (HiTS). CNNs have the advantage of learning the features automatically from the data while achieving high performance. We compare our CNN model against a feature engineering approach using random forests (RFs). We show that our CNN significantly outperforms the RF model, reducing the error by almost half. Furthermore, for a fixed number of approximately 2000 allowed false transient candidates per night, we are able to reduce the misclassified real transients by approximately one-fifth. To the best of our knowledge, this is the first time CNNs have been used to detect astronomical transient events. Our approach will be very useful when processing images from next generation instruments such as the Large Synoptic Survey Telescope. We have made all our code and data available to the community for the sake of allowing further developments and comparisons at https://github.com/guille-c/Deep-HiTS. Deep-HiTS is licensed under the terms of the GNU General Public License v3.0.

  2. Image Classification Using Biomimetic Pattern Recognition with Convolutional Neural Networks Features

    Science.gov (United States)

    Huo, Guanying

    2017-01-01

    As a typical deep-learning model, Convolutional Neural Networks (CNNs) can be exploited to automatically extract features from images using a hierarchical structure inspired by the mammalian visual system. For image classification tasks, traditional CNN models employ the softmax function for classification. However, owing to the limited capacity of the softmax function, traditional CNN models have some shortcomings in image classification. To deal with this problem, a new method combining Biomimetic Pattern Recognition (BPR) with CNNs is proposed for image classification. BPR performs class recognition by a union of geometrical cover sets in a high-dimensional feature space and can therefore overcome some disadvantages of traditional pattern recognition. The proposed method is evaluated on three well-known image classification benchmarks, namely MNIST, AR, and CIFAR-10. The classification accuracies of the proposed method on the three datasets are 99.01%, 98.40%, and 87.11%, respectively, which are much higher than those of the other four compared methods in most cases. PMID:28316614

  3. Detailed investigation of Long-Period activity at Campi Flegrei by Convolutive Independent Component Analysis

    Science.gov (United States)

    Capuano, P.; De Lauro, E.; De Martino, S.; Falanga, M.

    2016-04-01

    This work is devoted to the analysis of seismic signals continuously recorded at the Campi Flegrei Caldera (Italy) during the entire year 2006. The radiation pattern associated with the Long-Period energy release is investigated. We adopt an innovative Independent Component Analysis algorithm for convolutive seismic series, adapted and improved to provide automatic procedures for detecting seismic events often buried in high-level ambient noise. The extracted waveforms, characterized by an improved signal-to-noise ratio, allow the recognition of Long-Period precursors, evidencing that the seismic activity accompanying the mini-uplift crisis (in 2006), which climaxed in the three days of 26-28 October, had already started at the beginning of October and lasted until mid-November. Hence, a more complete seismic catalog is provided, which can be used to properly quantify the seismic energy release. To better ground our results, we first check the robustness of the method by comparing it with other blind source separation methods based on higher-order statistics; secondly, we reconstruct the radiation patterns of the extracted Long-Period events in order to link the individuated signals directly to the sources. We take advantage of Convolutive Independent Component Analysis, which provides basic signals along the three directions of motion so that a direct polarization analysis can be performed without further filtering procedures. We show that the extracted signals are mainly composed of P waves with radial polarization pointing to the seismic source of the main LP swarm, i.e. a small area in the Solfatara, also in the case of the small events that both precede and follow the main activity. From a dynamical point of view, they can be described by two degrees of freedom, indicating a low level of complexity associated with the vibrations from a superficial hydrothermal system. Our results allow us to move towards a full description of the complexity of

  4. A search for concentric rings with unusual variance in the 7-year WMAP temperature maps using a fast convolution approach

    CERN Document Server

    Bielewicz, P; Banday, A J

    2012-01-01

    We present a method for the computation of the variance of cosmic microwave background (CMB) temperature maps on azimuthally symmetric patches using a fast convolution approach. As an example of the application of the method, we show results for the search for concentric rings with unusual variance in the 7-year WMAP data. We re-analyse claims concerning the unusual variance profile of rings centred at two locations on the sky that have recently drawn special attention in the context of the conformal cyclic cosmology scenario proposed by Penrose (2009). We extend this analysis to rings with larger radii and centred on other points of the sky. Using the fast convolution technique enables us to perform this search with higher resolution and a wider range of radii than in previous studies. We show that for one of the two special points rings with radii larger than 10 degrees have systematically lower variance in comparison to the concordance LambdaCDM model predictions. However, we show that this deviation is ca...

  5. A method for medulloblastoma tumor differentiation based on convolutional neural networks and transfer learning

    Science.gov (United States)

    Cruz-Roa, Angel; Arévalo, John; Judkins, Alexander; Madabhushi, Anant; González, Fabio

    2015-12-01

    Convolutional neural networks (CNN) have been very successful at addressing different computer vision tasks thanks to their ability to learn image representations directly from large amounts of labeled data. Features learned from one dataset can be used to represent images from a different dataset via an approach called transfer learning. In this paper we apply transfer learning to the challenging task of medulloblastoma tumor differentiation. We compare two different CNN models which were previously trained in two different domains (natural and histopathology images). The first CNN is a state-of-the-art approach in computer vision, a large and deep CNN with 16 layers, the Visual Geometry Group (VGG) CNN. The second (IBCa-CNN) is a 2-layer CNN trained for invasive breast cancer (IBCa) tumor classification. Both CNNs are used as visual feature extractors of histopathology image regions of anaplastic and non-anaplastic medulloblastoma tumor from digitized whole-slide images. The features from the two models are used, separately, to train a softmax classifier to discriminate between anaplastic and non-anaplastic medulloblastoma image regions. Experimental results show that the transfer learning approach produces competitive results in comparison with the state-of-the-art approaches for IBCa detection. Results also show that features extracted from the IBCa-CNN perform better than features extracted from the VGG-CNN: the former obtains 89.8% and the latter 76.6% in terms of average accuracy.

  6. Lung nodule detection using 3D convolutional neural networks trained on weakly labeled data

    Science.gov (United States)

    Anirudh, Rushil; Thiagarajan, Jayaraman J.; Bremer, Timo; Kim, Hyojin

    2016-03-01

    Early detection of lung nodules is currently one of the most effective ways to predict and treat lung cancer. As a result, the past decade has seen a lot of focus on computer aided diagnosis (CAD) of lung nodules, whose goal is to efficiently detect and segment lung nodules and classify them as benign or malignant. Effective detection of such nodules remains a challenge due to their arbitrariness in shape, size and texture. In this paper, we propose to employ 3D convolutional neural networks (CNN) to learn highly discriminative features for nodule detection in lieu of hand-engineered ones such as geometric shape or texture. While 3D CNNs are promising tools for modeling the spatio-temporal statistics of data, they are limited by their need for detailed 3D labels, which can be prohibitively expensive to obtain compared to 2D labels. Existing CAD methods rely on obtaining detailed labels for lung nodules to train models, which is also unrealistic and time consuming. To alleviate this challenge, we propose a solution wherein the expert needs to provide only a point label, i.e., the central pixel of the nodule and its largest expected size. We use unsupervised segmentation to grow out a 3D region, which is used to train the CNN. Using experiments on the SPIE-LUNGx dataset, we show that a network trained using these weak labels can produce reasonably low false positive rates with high sensitivity, even in the absence of accurate 3D labels.

  7. Intervertebral disc segmentation in MR images with 3D convolutional networks

    Science.gov (United States)

    Korez, Robert; Ibragimov, Bulat; Likar, Boštjan; Pernuš, Franjo; Vrtovec, Tomaž

    2017-02-01

    The vertebral column is a complex anatomical construct, composed of vertebrae and intervertebral discs (IVDs) supported by ligaments and muscles. During life, all components undergo degenerative changes, which may in some cases cause severe, chronic and debilitating low back pain. The main diagnostic challenge is to locate the pain generator, and degenerated IVDs have been identified to act as such. Accurate and robust segmentation of IVDs is therefore a prerequisite for computer-aided diagnosis and quantification of IVD degeneration, and can be also used for computer-assisted planning and simulation in spinal surgery. In this paper, we present a novel fully automated framework for supervised segmentation of IVDs from three-dimensional (3D) magnetic resonance (MR) spine images. By considering global intensity appearance and local shape information, a landmark-based approach is first used for the detection of IVDs in the observed image, which then initializes the segmentation of IVDs by coupling deformable models with convolutional networks (ConvNets). For this purpose, a 3D ConvNet architecture was designed that learns rich high-level appearance representations from a training repository of IVDs, and then generates spatial IVD probability maps that guide deformable models towards IVD boundaries. By applying the proposed framework to 15 3D MR spine images containing 105 IVDs, quantitative comparison of the obtained against reference IVD segmentations yielded an overall mean Dice coefficient of 92.8%, mean symmetric surface distance of 0.4 mm and Hausdorff surface distance of 3.7 mm.

  8. Research on image retrieval using deep convolutional neural network combining L1 regularization and PRelu activation function

    Science.gov (United States)

    QingJie, Wei; WenBin, Wang

    2017-06-01

    In this paper, image retrieval using a deep convolutional neural network combined with L1 regularization and the PRelu activation function is studied, which improves image retrieval accuracy. A deep convolutional neural network can not only simulate the process by which the human brain receives and transmits information, but also contains convolution operations, which are very suitable for processing images. Using a deep convolutional neural network is better than directly extracting visual image features for image retrieval. However, the structure of a deep convolutional neural network is complex, and it is prone to over-fitting, which reduces the accuracy of image retrieval. In this paper, we combine L1 regularization and the PRelu activation function to construct a deep convolutional neural network that prevents over-fitting of the network and improves the accuracy of image retrieval.
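
    The sketch below illustrates the general idea in PyTorch, under stated assumptions: a small convolutional network with PReLU activations, plus an L1 penalty on the weights added to the training loss. The architecture, input size and lambda value are illustrative, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class RetrievalCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.PReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.PReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * 8 * 8, num_classes)  # assumes 32x32 inputs

    def forward(self, x):
        feats = self.features(x).flatten(1)   # descriptors usable for retrieval
        return self.classifier(feats)

def loss_with_l1(model, logits, targets, lam=1e-5):
    """Cross-entropy plus an L1 penalty over all parameters (the regularizer)."""
    ce = nn.functional.cross_entropy(logits, targets)
    l1 = sum(p.abs().sum() for p in model.parameters())
    return ce + lam * l1
```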

  9. A convolution-superposition dose calculation engine for GPUs

    Energy Technology Data Exchange (ETDEWEB)

    Hissoiny, Sami; Ozell, Benoit; Despres, Philippe [Departement de genie informatique et genie logiciel, Ecole polytechnique de Montreal, 2500 Chemin de Polytechnique, Montreal, Quebec H3T 1J4 (Canada); Departement de radio-oncologie, CRCHUM-Centre hospitalier de l' Universite de Montreal, 1560 rue Sherbrooke Est, Montreal, Quebec H2L 4M1 (Canada)

    2010-03-15

    Purpose: Graphics processing units (GPUs) are increasingly used for scientific applications, where their parallel architecture and unprecedented computing power density can be exploited to accelerate calculations. In this paper, a new GPU implementation of a convolution/superposition (CS) algorithm is presented. Methods: This new GPU implementation has been designed from the ground up to use the graphics card's strengths and to avoid its weaknesses. The CS GPU algorithm takes into account beam hardening, off-axis softening, kernel tilting, and relies heavily on raytracing through patient imaging data. Implementation details are reported as well as a multi-GPU solution. Results: An overall single-GPU acceleration factor of 908x was achieved when compared to a nonoptimized version of the CS algorithm implemented in PlanUNC in single-threaded central processing unit (CPU) mode, resulting in approximately 2.8 s per beam for a 3D dose computation on a 0.4 cm grid. A comparison to an established commercial system leads to an acceleration factor of approximately 29x, or 0.58 versus 16.6 s per beam in single-threaded mode. An acceleration factor of 46x has been obtained for the total energy released per mass (TERMA) calculation and a 943x acceleration factor for the CS calculation compared to PlanUNC. Dose distributions also have been obtained for a simple water-lung phantom to verify that the implementation gives accurate results. Conclusions: These results suggest that GPUs are an attractive solution for radiation therapy applications and that careful design, taking the GPU architecture into account, is critical in obtaining significant acceleration factors. These results potentially can have a significant impact on complex dose delivery techniques requiring intensive dose calculations such as intensity-modulated radiation therapy (IMRT) and arc therapy. They also are relevant for adaptive radiation therapy where dose results must be obtained rapidly.
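
    As a toy illustration of the convolution/superposition principle that the GPU engine accelerates, the sketch below convolves a TERMA distribution with an energy-deposition kernel. The geometry, beam shape and kernel are placeholder assumptions; a real engine also handles heterogeneity, beam hardening, off-axis softening and kernel tilting, which this sketch deliberately ignores.

```python
import numpy as np
from scipy.signal import fftconvolve

# Toy TERMA distribution: a square beam aperture with a crude depth cut-off.
terma = np.zeros((64, 64, 64))
terma[24:40, 24:40, :40] = 1.0

# Isotropic toy energy-deposition kernel (real kernels are forward-peaked).
z, y, x = np.mgrid[-8:9, -8:9, -8:9]
kernel = np.exp(-np.sqrt(x**2 + y**2 + z**2) / 2.0)
kernel /= kernel.sum()

# Dose = TERMA convolved with the kernel (homogeneous-medium approximation).
dose = fftconvolve(terma, kernel, mode="same")
print(dose.shape, float(dose.max()))
```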

  10. Deep convolutional networks for pancreas segmentation in CT imaging

    Science.gov (United States)

    Roth, Holger R.; Farag, Amal; Lu, Le; Turkbey, Evrim B.; Summers, Ronald M.

    2015-03-01

    Automatic organ segmentation is an important prerequisite for many computer-aided diagnosis systems. The high anatomical variability of organs in the abdomen, such as the pancreas, prevents many segmentation methods from achieving accuracies as high as those reached in state-of-the-art segmentation of organs like the liver, heart or kidneys. Recently, the availability of large annotated training sets and the accessibility of affordable parallel computing resources via GPUs have made it feasible for "deep learning" methods such as convolutional networks (ConvNets) to succeed in image classification tasks. These methods have the advantage that the classification features used are trained directly from the imaging data. We present a fully automated bottom-up method for pancreas segmentation in computed tomography (CT) images of the abdomen. The method is based on hierarchical coarse-to-fine classification of local image regions (superpixels). Superpixels are extracted from the abdominal region using Simple Linear Iterative Clustering (SLIC). An initial probability response map is generated, using patch-level confidences and a two-level cascade of random forest classifiers, from which superpixel regions with probabilities larger than 0.5 are retained. These retained superpixels serve as a highly sensitive initial input of the pancreas and its surroundings to a ConvNet that samples a bounding box around each superpixel at different scales (and random non-rigid deformations at training time) in order to assign a more distinct probability of each superpixel region being pancreas or not. We evaluate our method on CT images of 82 patients (60 for training, 2 for validation, and 20 for testing). Using ConvNets we achieve maximum Dice scores averaging 68% +/- 10% (range, 43-80%) in testing. This shows promise for accurate pancreas segmentation using a deep learning approach and compares favorably to state-of-the-art methods.

  11. Convolutional neural networks for prostate cancer recurrence prediction

    Science.gov (United States)

    Kumar, Neeraj; Verma, Ruchika; Arora, Ashish; Kumar, Abhay; Gupta, Sanchit; Sethi, Amit; Gann, Peter H.

    2017-03-01

    Accurate prediction of the treatment outcome is important for cancer treatment planning. We present an approach to predict prostate cancer (PCa) recurrence after radical prostatectomy using tissue images. We used a cohort whose case vs. control (recurrent vs. non-recurrent) status had been determined using post-treatment follow-up. Further, to aid the development of novel biomarkers of PCa recurrence, cases and controls were paired based on matching of other predictive clinical variables such as Gleason grade, stage, age, and race. For this cohort, a tissue resection microarray with up to four cores per patient was available. The proposed approach is based on deep learning, and its novelty lies in the use of two separate convolutional neural networks (CNNs): one to detect individual nuclei even in crowded areas, and the other to classify them. To detect nuclear centers in an image, the first CNN predicts the distance transform of the underlying (but unknown) multi-nuclear map from the input HE image. The second CNN classifies the patches centered at nuclear centers into those belonging to cases or controls. Voting across patches extracted from the image(s) of a patient yields the probability of recurrence for the patient. The proposed approach gave 0.81 AUC for a sample of 30 recurrent cases and 30 non-recurrent controls, after being trained on an independent set of 80 case-control pairs. If validated further, such an approach might help in choosing between a combination of treatment options such as active surveillance, radical prostatectomy, radiation, and hormone therapy. It can also generalize to the prediction of treatment outcomes in other cancers.
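
    To illustrate the regression target used by the first CNN, the sketch below computes the distance transform of a synthetic multi-nuclear binary mask and reads off nuclear centers as its local maxima. The mask is a placeholder for ground truth derived from annotations; the peak-detection window size is an assumption.

```python
import numpy as np
from scipy import ndimage

# Synthetic binary mask of a few (partially overlapping) "nuclei".
mask = np.zeros((64, 64), dtype=bool)
yy, xx = np.mgrid[:64, :64]
for cy, cx in [(20, 20), (26, 30), (45, 44)]:
    mask |= (yy - cy) ** 2 + (xx - cx) ** 2 < 8 ** 2

# Distance transform: the regression target a detection CNN could learn.
dist = ndimage.distance_transform_edt(mask)

# Nuclear centers appear as local maxima of the distance transform.
peaks = (dist == ndimage.maximum_filter(dist, size=7)) & (dist > 0)
centres = np.argwhere(peaks)
print(centres)
```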

  12. Classifying Radio Galaxies with the Convolutional Neural Network

    Science.gov (United States)

    Aniyan, A. K.; Thorat, K.

    2017-06-01

    We present the application of a deep machine learning technique to classify radio images of extended sources on a morphological basis using convolutional neural networks (CNN). In this study, we have taken the case of the Fanaroff-Riley (FR) classes of radio galaxies as well as radio galaxies with bent-tailed morphology. We have used archival data from the Very Large Array (VLA) Faint Images of the Radio Sky at Twenty Centimeters survey and existing visually classified samples available in the literature to train a neural network for morphological classification of these categories of radio sources. Our training sample size for each of these categories is ˜200 sources, which has been augmented with rotated versions of the same images. Our study shows that CNNs can classify images of FRI, FRII and bent-tailed radio galaxies with high accuracy (maximum precision at 95%) using well-defined samples and a “fusion classifier,” which combines the results of binary classifications, while allowing for a mechanism to find sources with unusual morphologies. The individual precision is highest for bent-tailed radio galaxies at 95% and is 91% and 75% for the FRI and FRII classes, respectively, whereas the recall is highest for FRI and FRII at 91% each, while the bent-tailed class has a recall of 79%. These results are comparable to those of manual classification, while being obtained much faster. Finally, we discuss the computational and data-related challenges associated with the morphological classification of radio galaxies with CNNs.

  13. An Additive and Convolutive Bias Compensation Algorithm for Telephone Speech Recognition

    Institute of Scientific and Technical Information of China (English)

    HAN Zhao-Bing; ZHANG Shu-Wu; XU Bo; HUANG Tai-Yi

    2004-01-01

    A Vector piecewise polynomial (VPP) approximation algorithm is proposed for environment compensation of speech signals degraded by both additive and convolutive noises. By investigating the model of the telephone environment, we propose a piecewise polynomial, namely two linear polynomials and a quadratic polynomial, to approximate the environment function precisely. The VPP is applied either to stationary noise or to non-stationary noise. In the first case, batch EM is used in the log-spectral domain; in the second case, recursive EM with iterative stochastic approximation is developed in the cepstral domain. Both approaches are based on the minimum mean squared error (MMSE) sense. Experimental results are presented on the application of this approach to improving the performance of Mandarin large vocabulary continuous speech recognition (LVCSR) degraded by background noises and different transmission channels (such as fixed telephone lines and GSM). The method can reduce the average character error rate (CER) by about 18%.

  14. Using GPU convolutions to correct optical distortion in closed-loop real-time missile simulations

    Science.gov (United States)

    Fronckowiak, Thomas, Jr.

    2009-05-01

    U.S. Army Aviation and Missile Research, Development and Engineering Center (AMRDEC) has long been a leader in in-band, high-fidelity scientific scene generation. Recent efforts to harness and exploit the parallel power of the Graphics Processing Unit (GPU), for both graphics and general-purpose processing, have been paramount. The emergence of sophisticated image generation software packages, such as the Common Scene Generator (CSG) and the Joint Signature Image Generator (JSIG), has led to a sharp increase in the performance of digital simulations and of signal injection and projection systems in both tactical and strategic programs. One area of missile simulation that benefits from this technology is real-time modeling of optical effects, such as seeker dome distortion, glint and blurring, as well as correction for facility misalignment and distortion. This paper discusses on-going research on applying convolution filters within the GPU multi-pass rendering process to compensate for spatial distortion in the optical projection path of synthetic environments.

  15. A Convolutional Neural Network Approach for Assisting Avalanche Search and Rescue Operations with UAV Imagery

    Directory of Open Access Journals (Sweden)

    Mesay Belete Bejiga

    2017-01-01

    Full Text Available Following an avalanche, one of the factors that affect victims’ chance of survival is the speed with which they are located and dug out. Rescue teams use techniques like trained rescue dogs and electronic transceivers to locate victims. However, the resources and time required to deploy rescue teams are major bottlenecks that decrease a victim’s chance of survival. Advances in the field of Unmanned Aerial Vehicles (UAVs) have enabled the use of flying robots equipped with sensors such as optical cameras to assess the damage caused by natural or manmade disasters and locate victims in the debris. In this paper, we propose assisting avalanche search and rescue (SAR) operations with UAVs fitted with vision cameras. The sequence of images of the avalanche debris captured by the UAV is processed with a pre-trained Convolutional Neural Network (CNN) to extract discriminative features. A trained linear Support Vector Machine (SVM) is integrated at the top of the CNN to detect objects of interest. Moreover, we introduce a pre-processing method to increase the detection rate and a post-processing method based on a Hidden Markov Model to improve the prediction performance of the classifier. Experimental results conducted on two different datasets at different levels of resolution show that the detection performance increases with resolution, while the computation time also increases. Additionally, they suggest that a significant decrease in processing time can be achieved thanks to the pre-processing step.
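
    A hedged sketch of the core of this pipeline: descriptors produced by a pre-trained CNN (represented here by random vectors, since no real extractor is bundled with this listing) are fed to a linear SVM that decides whether a patch contains an object of interest. The feature dimension, data and split are placeholders.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
features = rng.normal(size=(200, 4096))   # stand-in for CNN descriptors
labels = rng.integers(0, 2, size=200)     # 1 = object of interest present

# Train the linear SVM on part of the data and evaluate on the rest.
clf = LinearSVC(C=1.0).fit(features[:150], labels[:150])
print("held-out accuracy:", clf.score(features[150:], labels[150:]))
```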

  16. Guided filter and convolutional network based tracking for infrared dim moving target

    Science.gov (United States)

    Qian, Kun; Zhou, Huixin; Qin, Hanlin; Rong, Shenghui; Zhao, Dong; Du, Juan

    2017-09-01

    A dim moving target is usually submerged in strong noise, and its motion observability is degraded by numerous false alarms caused by the low signal-to-noise ratio. A tracking algorithm that integrates the Guided Image Filter (GIF) and a Convolutional Neural Network (CNN) into the particle filter framework is presented to cope with the uncertainty of dim targets. First, the initial target template is treated as a guidance to filter incoming templates depending on the similarities between the guidance and candidate templates. The GIF algorithm utilizes the structure in the guidance and performs as an edge-preserving smoothing operator. The guidance therefore helps preserve the detail of valuable templates and makes inaccurate ones blurry, effectively alleviating tracking deviation. Besides, a two-layer CNN is adopted to obtain a powerful appearance representation, and a Bayesian classifier is trained with these discriminative yet robust features. Moreover, an adaptive learning factor is introduced to prevent the update of the classifier's parameters when the target undergoes severe background clutter. Finally, the classifier responses of the particles are used to generate particle importance weights, and a re-sampling procedure preserves samples according to these weights. In the prediction stage, a second-order transition model takes the target velocity into account to estimate the current position. Experimental results demonstrate that the presented algorithm outperforms several related algorithms in accuracy.
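
    The guided image filter itself is a standard, self-contained algorithm, so a minimal NumPy/SciPy sketch is given below: the initial target template acts as the guidance, so candidate templates similar to it keep their detail while dissimilar ones are smoothed. The radius and eps values are illustrative settings.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=4, eps=1e-3):
    """Edge-preserving smoothing of `src` steered by the `guide` image."""
    size = 2 * radius + 1
    mean_g = uniform_filter(guide, size)
    mean_s = uniform_filter(src, size)
    corr_gg = uniform_filter(guide * guide, size)
    corr_gs = uniform_filter(guide * src, size)
    var_g = corr_gg - mean_g * mean_g
    cov_gs = corr_gs - mean_g * mean_s
    a = cov_gs / (var_g + eps)          # local linear coefficients
    b = mean_s - a * mean_g
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

guide = np.random.rand(32, 32)                      # initial target template
candidate = guide + 0.1 * np.random.randn(32, 32)   # noisy incoming template
filtered = guided_filter(guide, candidate)
```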

  17. Convolution-based one and two component FRAP analysis: theory and application.

    Science.gov (United States)

    Tannert, Astrid; Tannert, Sebastian; Burgold, Steffen; Schaefer, Michael

    2009-06-01

    The method of fluorescence redistribution after photobleaching (FRAP) is increasingly receiving interest in biological applications as it is nowadays used not only to determine mobility parameters per se, but to investigate dynamic changes in the concentration or distribution of diffusing molecules. Here, we develop a new simple convolution-based approach to analyze FRAP data using the whole image information. This method does not require information about the timing and localization of the bleaching event but uses the first image acquired directly after photobleaching to calculate the intensity distributions, instead. Changes in pools of molecules with different velocities, which are monitored by applying repetitive FRAP experiments within a single cell, can be analyzed by means of a global model by assuming two global diffusion coefficients with changing portions. We validate the approach by simulation and show that translocation of the YFP-fused PH-domain of phospholipase Cdelta1 can be quantitatively monitored by FRAP analysis in a time-resolved manner. The new FRAP data analysis procedure may be applied to investigate signal transduction pathways using biosensors that change their mobility. An altered mobility in response to the activation of signaling cascades may result either from an altered size of the biosensor, e.g. due to multimerization processes or from translocation of the sensor to an environment with different viscosity.
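
    A minimal sketch of the convolution idea described above, under simplifying assumptions: for free 2-D diffusion, the image at time t after bleaching is the first post-bleach image convolved with a Gaussian of standard deviation sqrt(2*D*t), and a diffusion coefficient can be estimated by scanning D for the best match to a later frame. The pixel size, units and candidate grid are placeholders, and the two-component global fit used by the authors is not reproduced here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def predict_frame(first_postbleach, D, t, px=0.1):
    """Predicted image at time t (D in um^2/s, t in s, pixel size px in um)."""
    sigma_px = np.sqrt(2.0 * D * t) / px
    return gaussian_filter(first_postbleach, sigma_px)

def fit_D(first_postbleach, later_frame, t,
          candidates=np.linspace(0.01, 5.0, 200)):
    """Grid-search the diffusion coefficient minimizing the image mismatch."""
    errors = [np.mean((predict_frame(first_postbleach, D, t) - later_frame) ** 2)
              for D in candidates]
    return candidates[int(np.argmin(errors))]
```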

  18. Depth map upsampling using joint edge-guided convolutional neural network for virtual view synthesizing

    Science.gov (United States)

    Dong, Yan; Lin, Chunyu; Zhao, Yao; Yao, Chao

    2017-07-01

    In the texture-plus-depth format of three-dimensional visual data, both texture images and depth maps are required to synthesize a desired view via depth-image-based rendering. However, captured or estimated depth maps usually have low resolution compared to their corresponding texture images. We introduce a joint edge-guided convolutional neural network that upsamples the resolution of a depth map with the quality of the synthesized view in mind. The network takes the low-resolution depth map as input, uses a joint edge feature extracted from the depth map and the registered texture image as a reference, and produces a high-resolution depth map. We further use local constraints that preserve smooth regions and sharp edges so as to improve the quality of the depth map and the synthesized view. Finally, a global looping optimization is performed with virtual view quality as guidance in the recovery process. We train our model using pairs of depth maps and texture images and then test it on other depth maps and video sequences. The experimental results demonstrate that our scheme outperforms existing methods in both the quality of the depth maps and that of the synthesized views.

  19. Potential fault region detection in TFDS images based on convolutional neural network

    Science.gov (United States)

    Sun, Junhua; Xiao, Zhongwen

    2016-10-01

    In recent years, more than 300 sets of the Trouble of Running Freight Train Detection System (TFDS) have been installed on railways to monitor the safety of running freight trains in China. However, TFDS is simply responsible for capturing, transmitting, and storing images, and fails to recognize faults automatically due to difficulties such as the diversity and complexity of faults and the presence of low-quality images. To improve the performance of automatic fault recognition, it is of great importance to locate the potential fault regions. In this paper, we first introduce a convolutional neural network (CNN) model to TFDS and propose a potential fault region detection system (PFRDS) for simultaneously detecting four typical types of potential fault regions (PFRs). The experimental results show that this system achieves high image detection performance for PFRs in TFDS. An average detection recall of 98.95% and a precision of 100% are obtained, demonstrating high detection ability and robustness against various poor imaging conditions.

  20. Convolution effect on TCR log response curve and the correction method for it

    Science.gov (United States)

    Chen, Q.; Liu, L. J.; Gao, J.

    2016-09-01

    Through-casing resistivity (TCR) logging has been successfully used in production wells for dynamic monitoring of oil pools and of the distribution of residual oil, but its limited vertical resolution has reduced its efficiency in identifying thin beds. The vertical resolution is limited by the distortion of the vertical response of TCR logging, which is studied in this work. It was found that the vertical response curve of TCR logging is the convolution of the true formation resistivity with the response (convolution) function of the TCR logging tool. Due to this convolution effect, the measurement error at thin beds can reach 30% or more, so the information carried by a thin bed is very likely to be masked. The convolution function of the TCR logging tool was obtained in both continuous and discrete form in this work. Through a modified Lyle-Kalman deconvolution method, the true formation resistivity can be optimally estimated, so this inverse algorithm can correct the error caused by the convolution effect and thus improve the vertical resolution of the TCR logging tool for identifying thin beds.
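
    A toy Python illustration of the distortion mechanism and its correction: the measured curve is the true resistivity profile convolved with the tool response, and a regularized least-squares deconvolution recovers the thin bed. The triangular tool response, noise level and regularization weight are illustrative assumptions; the paper itself uses a modified Lyle-Kalman scheme rather than this plain Tikhonov solve.

```python
import numpy as np

n = 120
true_res = np.full(n, 5.0)
true_res[55:60] = 20.0                    # thin resistive bed

h = np.array([1, 2, 3, 2, 1], dtype=float)
h /= h.sum()                              # assumed (symmetric) tool response

# Forward model: measured log = true profile convolved with the tool response.
measured = np.convolve(true_res, h, mode="same") + 0.05 * np.random.randn(n)

# Build the convolution matrix A and solve (A^T A + lam I) x = A^T y.
A = np.zeros((n, n))
for i in range(n):
    for j, hv in enumerate(h):
        k = i + j - len(h) // 2
        if 0 <= k < n:
            A[i, k] = hv
lam = 1e-2
estimate = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ measured)
```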

  1. Data convolution and combination operation (COCOA) for motion ghost artifacts reduction.

    Science.gov (United States)

    Huang, Feng; Lin, Wei; Börnert, Peter; Li, Yu; Reykowski, Arne

    2010-07-01

    A novel method, data convolution and combination operation, is introduced for the reduction of ghost artifacts due to motion or flow during data acquisition. Since neighboring k-space data points from different coil elements have strong correlations, a new "synthetic" k-space with dispersed motion artifacts can be generated through convolution for each coil. The corresponding convolution kernel can be self-calibrated using the acquired k-space data. The synthetic and the acquired data sets can be checked for consistency to identify k-space areas that are motion corrupted. Subsequently, these two data sets can be combined appropriately to produce a k-space data set showing a reduced level of motion induced error. If the acquired k-space contains isolated error, the error can be completely eliminated through data convolution and combination operation. If the acquired k-space data contain widespread errors, the application of the convolution also significantly reduces the overall error. Results with simulated and in vivo data demonstrate that this self-calibrated method robustly reduces ghost artifacts due to swallowing, breathing, or blood flow, with a minimum impact on the image signal-to-noise ratio. (c) 2010 Wiley-Liss, Inc.

  2. Iterative sinc-convolution method for solving planar D-bar equation with application to EIT.

    Science.gov (United States)

    Abbasi, Mahdi; Naghsh-Nilchi, Ahmad-Reza

    2012-08-01

    The numerical solution of D-bar integral equations is the key to the inverse scattering solution of many complex problems in science and engineering, including conductivity imaging. Recently, two methodologies were considered for the numerical solution of the D-bar integral equation, namely product integrals and multigrid. The first involves high computational complexity, and the second suffers from a low convergence rate. In this paper, a new and efficient sinc-convolution algorithm is introduced to solve the two-dimensional D-bar integral equation, overcoming both of these disadvantages and resolving the singularity problem not effectively tackled before. The sinc-convolution method is based on using collocation to replace multidimensional convolution-form integrals, including the two-dimensional D-bar integral equation, by a system of algebraic equations. Separation of variables in the proposed method eliminates the formation of huge full matrices and therefore reduces the computational complexity drastically. In addition, the sinc-convolution method converges exponentially, with a convergence rate of O(e^(-cN)). Simulation results on a test electrical impedance tomography problem confirm the efficiency of the proposed sinc-convolution-based algorithm. Copyright © 2012 John Wiley & Sons, Ltd.

  3. Z-Transform Techniques for Improved Real-Time Digital Simulation of Continuous Systems: Runge-Kutta Convolutions Adjusted for Unit Step Response via Pole-Residues.

    Science.gov (United States)

    1980-10-01

    A shortcut is available; note that on the right-hand side of Equation (26) the first term leads to Euler Convolution and the second to Mean Value Convolution. Euler Convolution and Mean Value Convolution are just special cases of R-K(2,a) Convolution (see Table 2). Table 2, special cases of R-K(2,a) Convolution: Euler (a = 0), Mean Value (a = 1/2), Trapezoidal (a = 1). For a single real-pole filter F(s) (Equation (28)) and any input G(s), the approximation using R-K(2

  4. Nonlocal Optical Spatial Soliton with a Non-parabolic Symmetry and Real-valued Convolution Response Kernel

    Institute of Scientific and Technical Information of China (English)

    LI Ke-Ping; YU Chao-Fan; GAO Zi-You; LIANG Guo-Dong; YU Xiao-Min

    2008-01-01

    Based on the picture of a nonlinear and non-parabolic symmetry response, i.e., Δn^2(I) ≈ p(α_0 − α_1 x − α_2 x^2), we propose a model for the transverse beam intensity distribution of the nonlocal spatial soliton. In this model, a convolution response with non-parabolic symmetry, Δn^2(I) ≈ p(b_0 + b_1 f − b_2 f^2) with b_2/b_1 > 0, is assumed. Furthermore, instead of the wave function Ψ, the high-order nonlinear equation for the beam intensity distribution f has been derived, and a bell-shaped soliton solution in envelope form has been obtained. The results demonstrate that, owing to the terms of the non-parabolic response, the nonlocal spatial soliton has a bistable-state solution when the frequency shift of the wave number β lies within a certain range.

  5. Recognition of prokaryotic and eukaryotic promoters using convolutional deep learning neural networks.

    Science.gov (United States)

    Umarov, Ramzan Kh; Solovyev, Victor V

    2017-01-01

    Accurate computational identification of promoters remains a challenge, as these key DNA regulatory regions have variable structures composed of functional motifs that provide gene-specific initiation of transcription. In this paper we utilize Convolutional Neural Networks (CNN) to analyze sequence characteristics of prokaryotic and eukaryotic promoters and build their predictive models. We trained a similar CNN architecture on promoters of five distant organisms: human, mouse, plant (Arabidopsis), and two bacteria (Escherichia coli and Bacillus subtilis). We found that a CNN trained on the sigma70 subclass of Escherichia coli promoters gives an excellent classification of promoters and non-promoter sequences (Sn = 0.90, Sp = 0.96, CC = 0.84). The Bacillus subtilis promoter identification CNN model achieves Sn = 0.91, Sp = 0.95, and CC = 0.86. For human, mouse and Arabidopsis promoters we employed CNNs for identification of two well-known promoter classes (TATA and non-TATA promoters). The CNN models recognize these complex functional regions well. For human promoters, the Sn/Sp/CC accuracy of prediction reached 0.95/0.98/0.90 on TATA and 0.90/0.98/0.89 on non-TATA promoter sequences, respectively. For Arabidopsis we observed Sn/Sp/CC of 0.95/0.97/0.91 (TATA) and 0.94/0.94/0.86 (non-TATA) promoters. Thus, the developed CNN models, implemented in the CNNProm program, demonstrate the ability of the deep learning approach to grasp complex promoter sequence characteristics and achieve significantly higher accuracy compared to previously developed promoter prediction programs. We also propose a random substitution procedure to discover positionally conserved promoter functional elements. As the suggested approach does not require knowledge of any specific promoter features, it can be easily extended to identify promoters and other complex functional regions in the sequences of many other, and especially newly sequenced, genomes. The CNNProm program is available to run at the web server http://www.softberry.com.
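
    Sequence CNNs of this kind typically take a one-hot encoding of the DNA as input; the sketch below shows that representation, with an illustrative sequence fragment. The encoding function is a generic assumption, not the CNNProm implementation.

```python
import numpy as np

ALPHABET = "ACGT"

def one_hot(seq):
    """One-hot encode a DNA sequence as a 4 x L matrix for a 1-D conv layer."""
    m = np.zeros((4, len(seq)), dtype=np.float32)
    for i, base in enumerate(seq.upper()):
        if base in ALPHABET:          # ambiguous bases (e.g. N) stay all-zero
            m[ALPHABET.index(base), i] = 1.0
    return m

x = one_hot("TTGACAATTAATCATCGAACTAGTTAACTAGTACGCA")
print(x.shape)   # (4, 37) -> suitable input for a convolution over positions
```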

  6. Recognition of prokaryotic and eukaryotic promoters using convolutional deep learning neural networks

    Science.gov (United States)

    Umarov, Ramzan Kh.

    2017-01-01

    Accurate computational identification of promoters remains a challenge, as these key DNA regulatory regions have variable structures composed of functional motifs that provide gene-specific initiation of transcription. In this paper we utilize Convolutional Neural Networks (CNN) to analyze sequence characteristics of prokaryotic and eukaryotic promoters and build their predictive models. We trained a similar CNN architecture on promoters of five distant organisms: human, mouse, plant (Arabidopsis), and two bacteria (Escherichia coli and Bacillus subtilis). We found that a CNN trained on the sigma70 subclass of Escherichia coli promoters gives an excellent classification of promoters and non-promoter sequences (Sn = 0.90, Sp = 0.96, CC = 0.84). The Bacillus subtilis promoter identification CNN model achieves Sn = 0.91, Sp = 0.95, and CC = 0.86. For human, mouse and Arabidopsis promoters we employed CNNs for identification of two well-known promoter classes (TATA and non-TATA promoters). The CNN models recognize these complex functional regions well. For human promoters, the Sn/Sp/CC accuracy of prediction reached 0.95/0.98/0.90 on TATA and 0.90/0.98/0.89 on non-TATA promoter sequences, respectively. For Arabidopsis we observed Sn/Sp/CC of 0.95/0.97/0.91 (TATA) and 0.94/0.94/0.86 (non-TATA) promoters. Thus, the developed CNN models, implemented in the CNNProm program, demonstrate the ability of the deep learning approach to grasp complex promoter sequence characteristics and achieve significantly higher accuracy compared to previously developed promoter prediction programs. We also propose a random substitution procedure to discover positionally conserved promoter functional elements. As the suggested approach does not require knowledge of any specific promoter features, it can be easily extended to identify promoters and other complex functional regions in the sequences of many other, and especially newly sequenced, genomes. The CNNProm program is available to run at the web server http

  7. Recognition of prokaryotic and eukaryotic promoters using convolutional deep learning neural networks

    KAUST Repository

    Umarov, Ramzan Kh.

    2017-02-03

    Accurate computational identification of promoters remains a challenge, as these key DNA regulatory regions have variable structures composed of functional motifs that provide gene-specific initiation of transcription. In this paper we utilize Convolutional Neural Networks (CNN) to analyze sequence characteristics of prokaryotic and eukaryotic promoters and build their predictive models. We trained a similar CNN architecture on promoters of five distant organisms: human, mouse, plant (Arabidopsis), and two bacteria (Escherichia coli and Bacillus subtilis). We found that a CNN trained on the sigma70 subclass of Escherichia coli promoters gives an excellent classification of promoters and non-promoter sequences (Sn = 0.90, Sp = 0.96, CC = 0.84). The Bacillus subtilis promoter identification CNN model achieves Sn = 0.91, Sp = 0.95, and CC = 0.86. For human, mouse and Arabidopsis promoters we employed CNNs for identification of two well-known promoter classes (TATA and non-TATA promoters). The CNN models recognize these complex functional regions well. For human promoters, the Sn/Sp/CC accuracy of prediction reached 0.95/0.98/0.90 on TATA and 0.90/0.98/0.89 on non-TATA promoter sequences, respectively. For Arabidopsis we observed Sn/Sp/CC of 0.95/0.97/0.91 (TATA) and 0.94/0.94/0.86 (non-TATA) promoters. Thus, the developed CNN models, implemented in the CNNProm program, demonstrate the ability of the deep learning approach to grasp complex promoter sequence characteristics and achieve significantly higher accuracy compared to previously developed promoter prediction programs. We also propose a random substitution procedure to discover positionally conserved promoter functional elements. As the suggested approach does not require knowledge of any specific promoter features, it can be easily extended to identify promoters and other complex functional regions in the sequences of many other, and especially newly sequenced, genomes. The CNNProm program is available to run at the web server http://www.softberry.com.

  8. Fast automated analysis of strong gravitational lenses with convolutional neural networks

    Science.gov (United States)

    Hezaveh, Yashar D.; Levasseur, Laurence Perreault; Marshall, Philip J.

    2017-08-01

    Quantifying image distortions caused by strong gravitational lensing—the formation of multiple images of distant sources due to the deflection of their light by the gravity of intervening structures—and estimating the corresponding matter distribution of these structures (the ‘gravitational lens’) has primarily been performed using maximum likelihood modelling of observations. This procedure is typically time- and resource-consuming, requiring sophisticated lensing codes, several data preparation steps, and finding the maximum likelihood model parameters in a computationally expensive process with downhill optimizers. Accurate analysis of a single gravitational lens can take up to a few weeks and requires expert knowledge of the physical processes and methods involved. Tens of thousands of new lenses are expected to be discovered with the upcoming generation of ground and space surveys. Here we report the use of deep convolutional neural networks to estimate lensing parameters in an extremely fast and automated way, circumventing the difficulties that are faced by maximum likelihood methods. We also show that the removal of lens light can be made fast and automated using independent component analysis of multi-filter imaging data. Our networks can recover the parameters of the ‘singular isothermal ellipsoid’ density profile, which is commonly used to model strong lensing systems, with an accuracy comparable to the uncertainties of sophisticated models but about ten million times faster: 100 systems in approximately one second on a single graphics processing unit. These networks can provide a way for non-experts to obtain estimates of lensing parameters for large samples of data.

  9. Minimal memory requirements for pearl necklace encoders of quantum convolutional codes

    CERN Document Server

    Houshmand, Monireh; Wilde, Mark M

    2010-01-01

    One of the major goals in quantum computer science is to reduce the overhead associated with the implementation of quantum computers, and inevitably, routines for quantum error correction will account for most of this overhead. A particular technique for quantum error correction that may be useful in the outer layers of a concatenated scheme for fault tolerance is quantum convolutional coding. The encoder for a quantum convolutional code has a representation as a convolutional encoder or as a "pearl necklace" encoder. In the pearl necklace representation, it has not been particularly clear in the research literature how much quantum memory such an encoder would require for implementation. Here, we offer an algorithm that answers this question. The algorithm first constructs a weighted, directed acyclic graph where each vertex of the graph corresponds to a gate string in the pearl necklace encoder, and each path through the graph represents a non-commutative path through gates in the encoder. We show that the ...

  10. Deep Manifold Learning Combined With Convolutional Neural Networks for Action Recognition.

    Science.gov (United States)

    Chen, Xin; Weng, Jian; Lu, Wei; Xu, Jiaming; Weng, Jiasi

    2017-09-15

    Learning deep representations has been widely applied in action recognition. However, there have been few investigations of how to utilize the structural manifold information among different action videos to enhance recognition accuracy and efficiency. In this paper, we propose to incorporate the manifold of training samples into deep learning, which is defined as deep manifold learning (DML). The proposed DML framework can be adapted to most existing deep networks to learn more discriminative features for action recognition. When applied to a convolutional neural network, DML embeds the previous convolutional layer's manifold into the next convolutional layer; thus, the discriminative capacity of the next layer can be promoted. We also apply DML to a restricted Boltzmann machine, which can alleviate the overfitting problem. Experimental results on four standard action databases (i.e., UCF101, HMDB51, KTH, and UCF sports) show that the proposed method outperforms the state-of-the-art methods.

  11. Image retrieval method based on metric learning for convolutional neural network

    Science.gov (United States)

    Wang, Jieyuan; Qian, Ying; Ye, Qingqing; Wang, Biao

    2017-09-01

    At present, research on content-based image retrieval (CBIR) focuses on learning effective features for representing the original images and on similarity measures. Retrieval accuracy and efficiency are crucial to a CBIR system. With the rise of deep learning, convolutional networks have been applied in the domain of image retrieval and have achieved remarkable results, but the visual features extracted by a convolutional neural network are high-dimensional, which makes retrieval slow and ineffective. This paper applies metric learning to the visual features extracted from the convolutional neural network, decreasing feature redundancy and improving retrieval performance. The work in this paper is also a necessary step toward a further implementation of feature hashing for the approximate-nearest-neighbor (ANN) retrieval method.

  12. Threshold Saturation via Spatial Coupling: Why Convolutional LDPC Ensembles Perform so well over the BEC

    CERN Document Server

    Kudekar, Shrinivas; Urbanke, Ruediger

    2010-01-01

    Convolutional LDPC ensembles, introduced by Felstrom and Zigangirov, have excellent thresholds and these thresholds are rapidly increasing as a function of the average degree. Several variations on the basic theme have been proposed to date, all of which share the good performance characteristics of convolutional LDPC ensembles. We describe the fundamental mechanism which explains why "convolutional-like" or "spatially coupled" codes perform so well. In essence, the spatial coupling of the individual code structure has the effect of increasing the belief-propagation (BP) threshold of the new ensemble to its maximum possible value, namely the maximum-a-posteriori (MAP) threshold of the underlying ensemble. For this reason we call this phenomenon "threshold saturation." This gives an entirely new way of approaching capacity. One significant advantage of such a construction is that one can create capacity-approaching ensembles with an error correcting radius which is increasing in the blocklength. Our proof make...

  13. Modified convolution method to reconstruct particle hologram with an elliptical Gaussian beam illumination.

    Science.gov (United States)

    Wu, Xuecheng; Wu, Yingchun; Yang, Jing; Wang, Zhihua; Zhou, Binwu; Gréhan, Gérard; Cen, Kefa

    2013-05-20

    Application of the modified convolution method to reconstruct digital inline holograms of particles illuminated by an elliptical Gaussian beam is investigated. Based on an analysis of the formation of the particle hologram using the Collins formula, the convolution method is modified to compensate for the astigmatism by adding two scaling factors. Both simulated and experimental holograms of transparent droplets and opaque particles are used to test the algorithm, and the reconstructed images are compared with those obtained using FRFT reconstruction. Results show that the modified convolution method can accurately reconstruct the particle image. This method has the advantage that the reconstructed images at different depth positions have the same size and resolution as the hologram. This work shows that digital inline holography has great potential for particle diagnostics in curved containers.

  14. Fast vision through frameless event-based sensing and convolutional processing: application to texture recognition.

    Science.gov (United States)

    Perez-Carrasco, Jose Antonio; Acha, Begona; Serrano, Carmen; Camunas-Mesa, Luis; Serrano-Gotarredona, Teresa; Linares-Barranco, Bernabe

    2010-04-01

    Address-event representation (AER) is an emergent hardware technology which shows a high potential for providing in the near future a solid technological substrate for emulating brain-like processing structures. When used for vision, AER sensors and processors are not restricted to capturing and processing still image frames, as in commercial frame-based video technology, but sense and process visual information in a pixel-level event-based frameless manner. As a result, vision processing is practically simultaneous to vision sensing, since there is no need to wait for sensing full frames. Also, only meaningful information is sensed, communicated, and processed. Of special interest for brain-like vision processing are some already reported AER convolutional chips, which have revealed a very high computational throughput as well as the possibility of assembling large convolutional neural networks in a modular fashion. It is expected that in a near future we may witness the appearance of large scale convolutional neural networks with hundreds or thousands of individual modules. In the meantime, some research is needed to investigate how to assemble and configure such large scale convolutional networks for specific applications. In this paper, we analyze AER spiking convolutional neural networks for texture recognition hardware applications. Based on the performance figures of already available individual AER convolution chips, we emulate large scale networks using a custom made event-based behavioral simulator. We have developed a new event-based processing architecture that emulates with AER hardware Manjunath's frame-based feature recognition software algorithm, and have analyzed its performance using our behavioral simulator. Recognition rate performance is not degraded. However, regarding speed, we show that recognition can be achieved before an equivalent frame is fully sensed and transmitted.

  15. Learning to Monitor Machine Health with Convolutional Bi-Directional LSTM Networks

    Directory of Open Access Journals (Sweden)

    Rui Zhao

    2017-01-01

    Full Text Available In modern manufacturing systems and industries, more and more research efforts have been made in developing effective machine health monitoring systems. Among various machine health monitoring approaches, data-driven methods are gaining in popularity due to the development of advanced sensing and data analytic techniques. However, considering the noise, varying length and irregular sampling behind sensory data, this kind of sequential data cannot be fed into classification and regression models directly. Therefore, previous work focuses on feature extraction/fusion methods requiring expensive human labor and high quality expert knowledge. With the development of deep learning methods in the last few years, which redefine representation learning from raw data, a deep neural network structure named Convolutional Bi-directional Long Short-Term Memory networks (CBLSTM) has been designed here to address raw sensory data. CBLSTM firstly uses CNN to extract local features that are robust and informative from the sequential input. Then, bi-directional LSTM is introduced to encode temporal information. Long Short-Term Memory networks (LSTMs) are able to capture long-term dependencies and model sequential data, and the bi-directional structure enables the capture of past and future contexts. Stacked, fully-connected layers and the linear regression layer are built on top of bi-directional LSTMs to predict the target value. Here, a real-life tool wear test is introduced, and our proposed CBLSTM is able to predict the actual tool wear based on raw sensory data. The experimental results have shown that our model is able to outperform several state-of-the-art baseline methods.
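
    The architecture described above (a CNN for local features, a bi-directional LSTM for temporal context, stacked fully-connected layers and a linear regression output) can be sketched as follows in PyTorch; the channel counts, kernel sizes and the use of the last time step are illustrative assumptions, not the authors' exact settings.

        import torch
        import torch.nn as nn

        class CBLSTM(nn.Module):
            """Sketch of a Convolutional Bi-directional LSTM regressor:
            1D CNN -> BiLSTM -> fully-connected layers -> linear output."""
            def __init__(self, in_channels=7, conv_channels=32, lstm_hidden=64):
                super().__init__()
                self.cnn = nn.Sequential(
                    nn.Conv1d(in_channels, conv_channels, kernel_size=5, padding=2),
                    nn.ReLU(),
                    nn.MaxPool1d(2),
                )
                self.bilstm = nn.LSTM(conv_channels, lstm_hidden,
                                      batch_first=True, bidirectional=True)
                self.head = nn.Sequential(
                    nn.Linear(2 * lstm_hidden, 64), nn.ReLU(),
                    nn.Linear(64, 1),               # regression target, e.g. tool wear
                )

            def forward(self, x):                   # x: (batch, channels, time)
                z = self.cnn(x).transpose(1, 2)     # (batch, time/2, conv_channels)
                out, _ = self.bilstm(z)
                return self.head(out[:, -1, :])     # use the last time step

        y = CBLSTM()(torch.randn(4, 7, 200))        # 4 sequences, 7 sensor channels
        print(y.shape)                              # torch.Size([4, 1])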

  16. Learning to Monitor Machine Health with Convolutional Bi-Directional LSTM Networks

    Science.gov (United States)

    Zhao, Rui; Yan, Ruqiang; Wang, Jinjiang; Mao, Kezhi

    2017-01-01

    In modern manufacturing systems and industries, more and more research efforts have been made in developing effective machine health monitoring systems. Among various machine health monitoring approaches, data-driven methods are gaining in popularity due to the development of advanced sensing and data analytic techniques. However, considering the noise, varying length and irregular sampling behind sensory data, this kind of sequential data cannot be fed into classification and regression models directly. Therefore, previous work focuses on feature extraction/fusion methods requiring expensive human labor and high quality expert knowledge. With the development of deep learning methods in the last few years, which redefine representation learning from raw data, a deep neural network structure named Convolutional Bi-directional Long Short-Term Memory networks (CBLSTM) has been designed here to address raw sensory data. CBLSTM firstly uses CNN to extract local features that are robust and informative from the sequential input. Then, bi-directional LSTM is introduced to encode temporal information. Long Short-Term Memory networks (LSTMs) are able to capture long-term dependencies and model sequential data, and the bi-directional structure enables the capture of past and future contexts. Stacked, fully-connected layers and the linear regression layer are built on top of bi-directional LSTMs to predict the target value. Here, a real-life tool wear test is introduced, and our proposed CBLSTM is able to predict the actual tool wear based on raw sensory data. The experimental results have shown that our model is able to outperform several state-of-the-art baseline methods. PMID:28146106

  17. Learning to Monitor Machine Health with Convolutional Bi-Directional LSTM Networks.

    Science.gov (United States)

    Zhao, Rui; Yan, Ruqiang; Wang, Jinjiang; Mao, Kezhi

    2017-01-30

    In modern manufacturing systems and industries, more and more research efforts have been made in developing effective machine health monitoring systems. Among various machine health monitoring approaches, data-driven methods are gaining in popularity due to the development of advanced sensing and data analytic techniques. However, considering the noise, varying length and irregular sampling behind sensory data, this kind of sequential data cannot be fed into classification and regression models directly. Therefore, previous work focuses on feature extraction/fusion methods requiring expensive human labor and high quality expert knowledge. With the development of deep learning methods in the last few years, which redefine representation learning from raw data, a deep neural network structure named Convolutional Bi-directional Long Short-Term Memory networks (CBLSTM) has been designed here to address raw sensory data. CBLSTM firstly uses CNN to extract local features that are robust and informative from the sequential input. Then, bi-directional LSTM is introduced to encode temporal information. Long Short-Term Memory networks(LSTMs) are able to capture long-term dependencies and model sequential data, and the bi-directional structure enables the capture of past and future contexts. Stacked, fully-connected layers and the linear regression layer are built on top of bi-directional LSTMs to predict the target value. Here, a real-life tool wear test is introduced, and our proposed CBLSTM is able to predict the actual tool wear based on raw sensory data. The experimental results have shown that our model is able to outperform several state-of-the-art baseline methods.

  18. Change of Scale Formulas for Wiener Integrals Related to Fourier-Feynman Transform and Convolution

    Directory of Open Access Journals (Sweden)

    Bong Jin Kim

    2014-01-01

    Full Text Available Cameron and Storvick discovered change of scale formulas for Wiener integrals of functionals in Banach algebra S on classical Wiener space. Yoo and Skoug extended these results for functionals in the Fresnel class F(B) and in a generalized Fresnel class F_{A1,A2} on abstract Wiener space. We express Fourier-Feynman transform and convolution product of functionals in S as limits of Wiener integrals. Moreover we obtain change of scale formulas for Wiener integrals related to Fourier-Feynman transform and convolution product of these functionals.

  19. A convolutional learning system for object classification in 3-D Lidar data.

    Science.gov (United States)

    Prokhorov, Danil

    2010-05-01

    In this brief, a convolutional learning system for classification of segmented objects represented in 3-D as point clouds of laser reflections is proposed. Several novelties are discussed: (1) extension of the existing convolutional neural network (CNN) framework to direct processing of 3-D data in a multiview setting which may be helpful for rotation-invariant consideration, (2) improvement of CNN training effectiveness by employing a stochastic meta-descent (SMD) method, and (3) combination of unsupervised and supervised training for enhanced performance of CNN. CNN performance is illustrated on a two-class data set of objects in a segmented outdoor environment.

  20. Discrete singular convolution mapping methods for solving singular boundary value and boundary layer problems

    Science.gov (United States)

    Pindza, Edson; Maré, Eben

    2017-03-01

    A modified discrete singular convolution method is proposed. The method is based on the single (SE) and double (DE) exponential transformation to speed up the convergence of the existing methods. Numerical computations are performed on a wide variety of singular boundary value and singular perturbed problems in one and two dimensions. The obtained results from discrete singular convolution methods based on single and double exponential transformations are compared with each other, and with the existing methods too. Numerical results confirm that these methods are considerably efficient and accurate in solving singular and regular problems. Moreover, the method can be applied to a wide class of nonlinear partial differential equations.

  1. Convolutional Encoder and Viterbi Decoder Using SOPC For Variable Constraint Length

    DEFF Research Database (Denmark)

    Kulkarni, Anuradha; Dnyaneshwar, Mantri; Prasad, Neeli R.;

    2013-01-01

    The convolutional encoder and the Viterbi decoder are basic and important blocks in any Code Division Multiple Access (CDMA) system. They are widely used in communication systems due to their error-correcting capability, but the performance degrades with variable constraint length. In this context, to have... detailed analysis, this paper deals with the implementation of a convolutional encoder and Viterbi decoder using a system on a programmable chip (SOPC). It uses variable constraint lengths of 7, 8 and 9 bits for 1/2 and 1/3 code rates. By analyzing the Viterbi algorithm it is seen that our algorithm has a better
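
    For reference, a rate-1/2 convolutional encoder with constraint length K = 7 can be modeled in a few lines of Python; the generator polynomials 171/133 (octal) used below are the common choice for K = 7 and are an assumption of this sketch, since the paper's SOPC implementation details are not reproduced here.

        def conv_encode(bits, g1=0o171, g2=0o133, K=7):
            """Rate-1/2 convolutional encoder (shift-register model).
            bits: iterable of 0/1. Returns the interleaved output bit stream."""
            state = 0                                    # K-1 = 6 memory bits
            out = []
            for b in bits:
                reg = (b << (K - 1)) | state             # current input bit + memory
                out.append(bin(reg & g1).count("1") % 2) # parity over taps of g1
                out.append(bin(reg & g2).count("1") % 2) # parity over taps of g2
                state = reg >> 1                         # shift the register
            return out

        # an impulse input reproduces the generator taps in the output stream
        print(conv_encode([1, 0, 0, 0, 0, 0, 0]))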

  2. Convolutional neural networks applied to neutrino events in a liquid argon time projection chamber

    Energy Technology Data Exchange (ETDEWEB)

    Acciarri, R.; Adams, C.; An, R.; Asaadi, J.; Auger, M.; Bagby, L.; Baller, B.; Barr, G.; Bass, M.; Bay, F.; Bishai, M.; Blake, A.; Bolton, T.; Bugel, L.; Camilleri, L.; Caratelli, D.; Carls, B.; Fernandez, R. Castillo; Cavanna, F.; Chen, H.; Church, E.; Cianci, D.; Collin, G. H.; Conrad, J. M.; Convery, M.; Crespo-Anadón, J. I.; Del Tutto, M.; Devitt, D.; Dytman, S.; Eberly, B.; Ereditato, A.; Sanchez, L. Escudero; Esquivel, J.; Fleming, B. T.; Foreman, W.; Furmanski, A. P.; Garvey, G. T.; Genty, V.; Goeldi, D.; Gollapinni, S.; Graf, N.; Gramellini, E.; Greenlee, H.; Grosso, R.; Guenette, R.; Hackenburg, A.; Hamilton, P.; Hen, O.; Hewes, J.; Hill, C.; Ho, J.; Horton-Smith, G.; James, C.; de Vries, J. Jan; Jen, C. -M.; Jiang, L.; Johnson, R. A.; Jones, B. J. P.; Joshi, J.; Jostlein, H.; Kaleko, D.; Karagiorgi, G.; Ketchum, W.; Kirby, B.; Kirby, M.; Kobilarcik, T.; Kreslo, I.; Laube, A.; Li, Y.; Lister, A.; Littlejohn, B. R.; Lockwitz, S.; Lorca, D.; Louis, W. C.; Luethi, M.; Lundberg, B.; Luo, X.; Marchionni, A.; Mariani, C.; Marshall, J.; Caicedo, D. A. Martinez; Meddage, V.; Miceli, T.; Mills, G. B.; Moon, J.; Mooney, M.; Moore, C. D.; Mousseau, J.; Murrells, R.; Naples, D.; Nienaber, P.; Nowak, J.; Palamara, O.; Paolone, V.; Papavassiliou, V.; Pate, S. F.; Pavlovic, Z.; Porzio, D.; Pulliam, G.; Qian, X.; Raaf, J. L.; Rafique, A.; Rochester, L.; von Rohr, C. Rudolf; Russell, B.; Schmitz, D. W.; Schukraft, A.; Seligman, W.; Shaevitz, M. H.; Sinclair, J.; Snider, E. L.; Soderberg, M.; Söldner-Rembold, S.; Soleti, S. R.; Spentzouris, P.; Spitz, J.; St. John, J.; Strauss, T.; Szelc, A. M.; Tagg, N.; Terao, K.; Thomson, M.; Toups, M.; Tsai, Y. -T.; Tufanli, S.; Usher, T.; Van de Water, R. G.; Viren, B.; Weber, M.; Weston, J.; Wickremasinghe, D. A.; Wolbers, S.; Wongjirad, T.; Woodruff, K.; Yang, T.; Zeller, G. P.; Zennamo, J.; Zhang, C.

    2017-03-01

    We present several studies of convolutional neural networks applied to data coming from the MicroBooNE detector, a liquid argon time projection chamber (LArTPC). The algorithms studied include the classification of single particle images, the localization of single particle and neutrino interactions in an image, and the detection of a simulated neutrino event overlaid with cosmic ray backgrounds taken from real detector data. These studies demonstrate the potential of convolutional neural networks for particle identification or event detection on simulated neutrino interactions. We also address technical issues that arise when applying this technique to data from a large LArTPC at or near ground level.

  3. Meda Inequality for Rearrangements of the Convolution on the Heisenberg Group and Some Applications

    Directory of Open Access Journals (Sweden)

    V. S. Guliyev

    2009-01-01

    Full Text Available The Meda inequality for rearrangements of the convolution operator on the Heisenberg group ℍn is proved. By using the Meda inequality, an O'Neil-type inequality for the convolution is obtained. As applications of these results, some sufficient and necessary conditions for the boundedness of the fractional maximal operator MΩ,α and fractional integral operator IΩ,α with rough kernels in the spaces Lp(ℍn) are found. Finally, we give some comments on the extension of our results to the case of homogeneous groups.

  4. Convolutional Neural Networks Applied to Neutrino Events in a Liquid Argon Time Projection Chamber

    CERN Document Server

    Acciarri, R; An, R; Asaadi, J; Auger, M; Bagby, L; Baller, B; Barr, G; Bass, M; Bay, F; Bishai, M; Blake, A; Bolton, T; Bugel, L; Camilleri, L; Caratelli, D; Carls, B; Fernandez, R Castillo; Cavanna, F; Chen, H; Church, E; Cianci, D; Collin, G H; Conrad, J M; Convery, M; Crespo-Anadón, J I; Del Tutto, M; Devitt, D; Dytman, S; Eberly, B; Ereditato, A; Sanchez, L Escudero; Esquivel, J; Fleming, B T; Foreman, W; Furmanski, A P; Garvey, G T; Genty, V; Goeldi, D; Gollapinni, S; Graf, N; Gramellini, E; Greenlee, H; Grosso, R; Guenette, R; Hackenburg, A; Hamilton, P; Hen, O; Hewes, J; Hill, C; Ho, J; Horton-Smith, G; James, C; de Vries, J Jan; Jen, C -M; Jiang, L; Johnson, R A; Jones, B J P; Joshi, J; Jostlein, H; Kaleko, D; Karagiorgi, G; Ketchum, W; Kirby, B; Kirby, M; Kobilarcik, T; Kreslo, I; Laube, A; Li, Y; Lister, A; Littlejohn, B R; Lockwitz, S; Lorca, D; Louis, W C; Luethi, M; Lundberg, B; Luo, X; Marchionni, A; Mariani, C; Marshall, J; Caicedo, D A Martinez; Meddage, V; Miceli, T; Mills, G B; Moon, J; Mooney, M; Moore, C D; Mousseau, J; Murrells, R; Naples, D; Nienaber, P; Nowak, J; Palamara, O; Paolone, V; Papavassiliou, V; Pate, S F; Pavlovic, Z; Porzio, D; Pulliam, G; Qian, X; Raaf, J L; Rafique, A; Rochester, L; von Rohr, C Rudolf; Russell, B; Schmitz, D W; Schukraft, A; Seligman, W; Shaevitz, M H; Sinclair, J; Snider, E L; Soderberg, M; Söldner-Rembold, S; Soleti, S R; Spentzouris, P; Spitz, J; John, J St; Strauss, T; Szelc, A M; Tagg, N; Terao, K; Thomson, M; Toups, M; Tsai, Y -T; Tufanli, S; Usher, T; Van de Water, R G; Viren, B; Weber, M; Weston, J; Wickremasinghe, D A; Wolbers, S; Wongjirad, T; Woodruff, K; Yang, T; Zeller, G P; Zennamo, J; Zhang, C

    2016-01-01

    We present several studies of convolutional neural networks applied to data coming from the MicroBooNE detector, a liquid argon time projection chamber (LArTPC). The algorithms studied include the classification of single particle images, the localization of single particle and neutrino interactions in an image, and the detection of a simulated neutrino event overlaid with cosmic ray backgrounds taken from real detector data. These studies demonstrate the potential of convolutional neural networks for particle identification or event detection on simulated neutrino interactions. We also address technical issues that arise when applying this technique to data from a large LArTPC at or near ground level.

  5. Fast computation algorithm for the Rayleigh-Sommerfeld diffraction formula using a type of scaled convolution.

    Science.gov (United States)

    Nascov, Victor; Logofătu, Petre Cătălin

    2009-08-01

    We describe a fast computational algorithm able to evaluate the Rayleigh-Sommerfeld diffraction formula, based on a special formulation of the convolution theorem and the fast Fourier transform. What is new in our approach compared to other algorithms is the use of a more general type of convolution with a scale parameter, which allows for independent sampling intervals in the input and output computation windows. Comparison between the calculations made using our algorithm and direct numeric integration show a very good agreement, while the computation speed is increased by orders of magnitude.

  6. Implementation of large kernel 2-D convolution in limited FPGA resource

    Science.gov (United States)

    Zhong, Sheng; Li, Yang; Yan, Luxin; Zhang, Tianxu; Cao, Zhiguo

    2007-12-01

    2-D convolution is a simple mathematical operation which is fundamental to many common image processing operators. Using an FPGA to implement the convolver can greatly reduce the DSP's heavy burden in signal processing. But with limited resources, an FPGA can only implement a convolver with a small 2-D kernel. In this paper, an FIFO-type line delayer is presented to serve as the data buffer for the convolution and reduce data fetching operations. A finite state machine is applied to control the reuse of the multiplier and adder arrays. With these two techniques, a resource-limited FPGA can be used to implement a larger-kernel convolver of the kind commonly used in image processing systems.
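
    A behavioral Python model of the FIFO line-delay idea is sketched below: the image is streamed row by row and only kernel-height rows are retained in line buffers, so each pixel is fetched once. The finite state machine and the multiplier/adder reuse of the actual FPGA design are not modeled, and the kernel is applied without flipping (i.e., as a correlation); these are simplifications of this illustration.

        from collections import deque
        import numpy as np

        def streamed_conv2d(image, kernel):
            """Behavioral model of a line-buffered 2-D convolver: the image arrives
            row by row and only kernel-height rows are kept in FIFO line buffers."""
            kh, kw = kernel.shape
            h, w = image.shape
            lines = deque(maxlen=kh)                 # FIFO line buffers
            out = np.zeros((h - kh + 1, w - kw + 1))
            for r, row in enumerate(image):          # one new image row per burst
                lines.append(row)
                if len(lines) == kh:
                    window_rows = np.stack(list(lines))   # kh x w working window
                    for c in range(w - kw + 1):
                        out[r - kh + 1, c] = np.sum(window_rows[:, c:c + kw] * kernel)
            return out

        img = np.arange(36, dtype=float).reshape(6, 6)
        k = np.ones((3, 3)) / 9.0
        print(streamed_conv2d(img, k))               # valid-mode 3x3 mean filter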

  7. Transfer Function Bounds for Partial-unit-memory Convolutional Codes Based on Reduced State Diagram

    Science.gov (United States)

    Lee, P. J.

    1984-01-01

    The performance of a coding system consisting of a convolutional encoder and a Viterbi decoder is analytically found by the well-known transfer function bounding technique. For the partial-unit-memory byte-oriented convolutional encoder with m_0 binary memory cells and k_0 (> m_0) inputs, a state diagram of 2^(k_0) states was needed for the transfer function bound. A reduced state diagram of (2^(m_0) + 1) states is used for easy evaluation of transfer function bounds for partial-unit-memory codes.

  8. Convolutional neural networks applied to neutrino events in a liquid argon time projection chamber

    Science.gov (United States)

    Acciarri, R.; Adams, C.; An, R.; Asaadi, J.; Auger, M.; Bagby, L.; Baller, B.; Barr, G.; Bass, M.; Bay, F.; Bishai, M.; Blake, A.; Bolton, T.; Bugel, L.; Camilleri, L.; Caratelli, D.; Carls, B.; Castillo Fernandez, R.; Cavanna, F.; Chen, H.; Church, E.; Cianci, D.; Collin, G. H.; Conrad, J. M.; Convery, M.; Crespo-Anadón, J. I.; Del Tutto, M.; Devitt, D.; Dytman, S.; Eberly, B.; Ereditato, A.; Escudero Sanchez, L.; Esquivel, J.; Fleming, B. T.; Foreman, W.; Furmanski, A. P.; Garvey, G. T.; Genty, V.; Goeldi, D.; Gollapinni, S.; Graf, N.; Gramellini, E.; Greenlee, H.; Grosso, R.; Guenette, R.; Hackenburg, A.; Hamilton, P.; Hen, O.; Hewes, J.; Hill, C.; Ho, J.; Horton-Smith, G.; James, C.; de Vries, J. Jan; Jen, C.-M.; Jiang, L.; Johnson, R. A.; Jones, B. J. P.; Joshi, J.; Jostlein, H.; Kaleko, D.; Karagiorgi, G.; Ketchum, W.; Kirby, B.; Kirby, M.; Kobilarcik, T.; Kreslo, I.; Laube, A.; Li, Y.; Lister, A.; Littlejohn, B. R.; Lockwitz, S.; Lorca, D.; Louis, W. C.; Luethi, M.; Lundberg, B.; Luo, X.; Marchionni, A.; Mariani, C.; Marshall, J.; Martinez Caicedo, D. A.; Meddage, V.; Miceli, T.; Mills, G. B.; Moon, J.; Mooney, M.; Moore, C. D.; Mousseau, J.; Murrells, R.; Naples, D.; Nienaber, P.; Nowak, J.; Palamara, O.; Paolone, V.; Papavassiliou, V.; Pate, S. F.; Pavlovic, Z.; Porzio, D.; Pulliam, G.; Qian, X.; Raaf, J. L.; Rafique, A.; Rochester, L.; von Rohr, C. Rudolf; Russell, B.; Schmitz, D. W.; Schukraft, A.; Seligman, W.; Shaevitz, M. H.; Sinclair, J.; Snider, E. L.; Soderberg, M.; Söldner-Rembold, S.; Soleti, S. R.; Spentzouris, P.; Spitz, J.; St. John, J.; Strauss, T.; Szelc, A. M.; Tagg, N.; Terao, K.; Thomson, M.; Toups, M.; Tsai, Y.-T.; Tufanli, S.; Usher, T.; Van de Water, R. G.; Viren, B.; Weber, M.; Weston, J.; Wickremasinghe, D. A.; Wolbers, S.; Wongjirad, T.; Woodruff, K.; Yang, T.; Zeller, G. P.; Zennamo, J.; Zhang, C.

    2017-03-01

    We present several studies of convolutional neural networks applied to data coming from the MicroBooNE detector, a liquid argon time projection chamber (LArTPC). The algorithms studied include the classification of single particle images, the localization of single particle and neutrino interactions in an image, and the detection of a simulated neutrino event overlaid with cosmic ray backgrounds taken from real detector data. These studies demonstrate the potential of convolutional neural networks for particle identification or event detection on simulated neutrino interactions. We also address technical issues that arise when applying this technique to data from a large LArTPC at or near ground level.

  9. Investigation of dosimetric differences between the TMR 10 and convolution algorithm for Gamma Knife stereotactic radiosurgery.

    Science.gov (United States)

    Rojas-Villabona, Alvaro; Kitchen, Neil; Paddick, Ian

    2016-11-01

    Since its inception, doses applied using Gamma Knife Radiosurgery (GKR) have been calculated using a simple TMR algorithm, which assumes the patient's head is of even density, the same as water. This results in a significant approximation of the dose delivered by the Gamma Knife. We investigated how GKR dose calculations varied when using a new convolution algorithm clinically available for GKR planning that takes into account density variations in the head compared with the established calculation algorithm. Fifty-five patients undergoing GKR and harboring 85 lesions were voluntarily and prospectively enrolled into the study. Their clinical treatment plans were created and delivered using TMR 10, but were then recalculated using the density correction algorithm. Dosimetric differences between the planning algorithms were noted. Beam on time (BOT), which is directly proportional to dose, was the main value investigated. Changes of mean and maximum dose to organs at risk (OAR) were also assessed. Phantom studies were performed to investigate the effect of frame and pin materials on dose calculation using the convolution algorithm. Convolution yielded a mean increase in BOT of 7.4% (3.6%-11.6%). However, approximately 1.5% of this amount was due to the head contour being derived from the CT scans, as opposed to measurements using the Skull Scaling Instrument with TMR. Dose to the cochlea calculated with the convolution algorithm was approximately 7% lower than with the TMR 10 algorithm. No significant difference in relative dose distribution was noted and CT artifact typically caused by the stereotactic frame, glue embolization material or different fixation pin materials did not systematically affect convolution isodoses. Nonetheless, substantial error was introduced to the convolution calculation in one target located exactly in the area of major CT artifact caused by a fixation pin. Inhomogeneity correction using the convolution algorithm results in a considerable

  10. Anatomically informed convolution kernels for the projection of fMRI data on the cortical surface.

    Science.gov (United States)

    Operto, Grégory; Bulot, Rémy; Anton, Jean-Luc; Coulon, Olivier

    2006-01-01

    We present here a method that aims at producing representations of functional brain data on the cortical surface from functional MRI volumes. Such representations are required for subsequent cortical-based functional analysis. We propose a projection technique based on the definition, around each node of the grey/white matter interface mesh, of convolution kernels whose shape and distribution rely on the geometry of the local anatomy. For one anatomy, a set of convolution kernels is computed that can be used to project any functional data registered with this anatomy. The method is presented together with experiments on synthetic data and real statistical t-maps.

  11. Patient-specific dosimetry based on quantitative SPECT imaging and 3D-DFT convolution

    Energy Technology Data Exchange (ETDEWEB)

    Akabani, G.; Hawkins, W.G.; Eckblade, M.B.; Leichner, P.K. [Univ. of Nebraska Medical Center, Omaha, NE (United States)

    1999-01-01

    The objective of this study was to validate the use of a 3-D discrete Fourier Transform (3D-DFT) convolution method to carry out the dosimetry for I-131 for soft tissues in radioimmunotherapy procedures. To validate this convolution method, mathematical and physical phantoms were used as a basis of comparison with Monte Carlo transport (MCT) calculations which were carried out using the EGS4 system code. The mathematical phantom consisted of a sphere containing uniform and nonuniform activity distributions. The physical phantom consisted of a cylinder containing uniform and nonuniform activity distributions. Quantitative SPECT reconstruction was carried out using the Circular Harmonic Transform (CHT) algorithm.
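
    The core of the 3D-DFT approach is a convolution of the cumulated-activity map with a dose point kernel, which the sketch below performs with zero-padded NumPy FFTs; the isotropic 1/r^2-like kernel is a placeholder for illustration only, not the I-131 kernel used in the study.

        import numpy as np

        def dose_by_fft_convolution(activity, kernel):
            """Convolve a 3-D cumulated-activity map with a dose point kernel
            using zero-padded FFTs (linear, not circular, convolution)."""
            shape = [a + k - 1 for a, k in zip(activity.shape, kernel.shape)]
            full = np.fft.irfftn(np.fft.rfftn(activity, shape) *
                                 np.fft.rfftn(kernel, shape), shape)
            # crop back to the activity grid, centred on the kernel origin
            start = [k // 2 for k in kernel.shape]
            slices = tuple(slice(s, s + a) for s, a in zip(start, activity.shape))
            return full[slices]

        # toy example: point source in a 32^3 grid, illustrative kernel (not an S value kernel)
        act = np.zeros((32, 32, 32))
        act[16, 16, 16] = 1.0
        z, y, x = np.mgrid[-4:5, -4:5, -4:5]
        r2 = x**2 + y**2 + z**2
        kern = np.where(r2 == 0, 1.0, 1.0 / r2)
        print(dose_by_fft_convolution(act, kern).shape)   # (32, 32, 32)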

  12. Positive convolution structure for a class of Heckman-Opdam hypergeometric functions of type BC

    CERN Document Server

    Rösler, Margit

    2009-01-01

    In this paper, we derive explicit product formulas and positive convolution structures for three continuous classes of Heckman-Opdam hypergeometric functions of type $BC$. For specific discrete series of multiplicities these hypergeometric functions occur as the spherical functions of non-compact Grassmann manifolds $G/K$ over one of the (skew) fields $\mathbb{F} = \mathbb{R}, \mathbb{C}, \mathbb{H}$. We write the product formula of these spherical functions in an explicit form which allows analytic continuation with respect to the parameters. In each of the three cases, we obtain a series of hypergroup algebras which include the commutative convolution algebras of $K$-biinvariant functions on $G$.

  13. On uniqueness of semi-wavefronts (Diekmann-Kaper theory of a nonlinear convolution equation re-visited)

    CERN Document Server

    Aguerrea, Maitere; Trofimchuk, Sergei

    2010-01-01

    Motivated by the uniqueness problem for monostable semi-wavefronts, we propose a revised version of the Diekmann and Kaper theory of a nonlinear convolution equation. Our version of the Diekmann-Kaper theory allows 1) to consider new types of models which include nonlocal KPP type equations (with either symmetric or anisotropic dispersal), non-local lattice equations and delayed reaction-diffusion equations; 2) to incorporate the critical case (which corresponds to the slowest wavefronts) into the consideration; 3) to weaken or to remove various restrictions on kernels and nonlinearities. The results are compared with those of Schumacher (J. Reine Angew. Math. 316: 54-70, 1980), Carr and Chmaj (Proc. Amer. Math. Soc. 132: 2433-2439, 2004), and other more recent studies.

  14. DeepGait: A Learning Deep Convolutional Representation for View-Invariant Gait Recognition Using Joint Bayesian

    Directory of Open Access Journals (Sweden)

    Chao Li

    2017-02-01

    Full Text Available Human gait, as a soft biometric, helps to recognize people through their walking. To further improve the recognition performance, we propose a novel video sensor-based gait representation, DeepGait, using deep convolutional features, and introduce Joint Bayesian to model view variance. DeepGait is generated by using a pre-trained “very deep” network “D-Net” (VGG-D) without any fine-tuning. For the non-view setting, DeepGait outperforms hand-crafted representations (e.g., Gait Energy Image, Frequency-Domain Feature and Gait Flow Image, etc.). Furthermore, for the cross-view setting, 256-dimensional DeepGait after PCA significantly outperforms the state-of-the-art methods on the OU-ISIR large population (OULP) dataset. The OULP dataset, which includes 4007 subjects, makes our result statistically reliable.

  15. Error Analysis of Padding Schemes for DFT’s of Convolutions and Derivatives

    Science.gov (United States)

    2012-01-31

  16. Shifting and Variational Properties for Fourier-Feynman Transform and Convolution

    Directory of Open Access Journals (Sweden)

    Byoung Soo Kim

    2015-01-01

    Full Text Available Shifting, scaling, modulation, and variational properties for Fourier-Feynman transform of functionals in a Banach algebra S are given. Cameron and Storvick's translation theorem can be obtained as a corollary of our result. We also study shifting, scaling, and modulation properties for the convolution product of functionals in S.

  17. Role of the distal convoluted tubule in renal Mg(2+) handling: molecular lessons from inherited hypomagnesemia

    NARCIS (Netherlands)

    Ferre, S.; Hoenderop, J.G.J.; Bindels, R.J.M.

    2011-01-01

    In healthy individuals, Mg(2+) homeostasis is tightly regulated by the concerted action of intestinal absorption, exchange with bone, and renal excretion. The kidney, more precisely the distal convoluted tubule (DCT), is the final determinant of plasma Mg(2+) concentrations. Positional cloning strat

  18. Upper bounds on the number of errors corrected by a convolutional code

    DEFF Research Database (Denmark)

    Justesen, Jørn

    2004-01-01

    We derive upper bounds on the weights of error patterns that can be corrected by a convolutional code with given parameters, or equivalently we give bounds on the code rate for a given set of error patterns. The bounds parallel the Hamming bound for block codes by relating the number of error pat...

  19. Fast 2D Convolutions and Cross-Correlations Using Scalable Architectures.

    Science.gov (United States)

    Carranza, Cesar; Llamocca, Daniel; Pattichis, Marios

    2017-05-01

    The manuscript describes fast and scalable architectures and associated algorithms for computing convolutions and cross-correlations. The basic idea is to map 2D convolutions and cross-correlations to a collection of 1D convolutions and cross-correlations in the transform domain. This is accomplished through the use of the discrete periodic Radon transform for general kernels and the use of singular value decomposition (SVD)-LU decompositions for low-rank kernels. The approach uses scalable architectures that can be fitted into modern FPGA and Zynq-SOC devices. Based on different types of available resources, for P×P blocks, 2D convolutions and cross-correlations can be computed in as few as O(P) and up to O(P^2) clock cycles. Thus, there is a trade-off between performance and required numbers and types of resources. We provide implementations of the proposed architectures using modern programmable devices (Virtex-7 and Zynq-SOC). Based on the amounts and types of required resources, we show that the proposed approaches significantly outperform current methods.
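
    The low-rank route can be illustrated in software: an SVD of the kernel yields separable rank-1 terms, so the 2-D convolution reduces to a short sum of row and column 1-D convolutions. The NumPy/SciPy sketch below shows that arithmetic only; it does not model the DPRT path or the FPGA/Zynq architectures described in the paper.

        import numpy as np
        from scipy.ndimage import convolve, convolve1d

        def lowrank_conv2d(image, kernel, rank=None):
            """2-D convolution as a sum of separable 1-D convolutions obtained
            from the SVD of the kernel (exact when rank equals the kernel rank)."""
            U, s, Vt = np.linalg.svd(kernel)
            r = rank or int(np.sum(s > 1e-12 * s[0]))
            out = np.zeros_like(image, dtype=float)
            for i in range(r):
                col = U[:, i] * s[i]                  # vertical 1-D filter
                row = Vt[i, :]                        # horizontal 1-D filter
                out += convolve1d(convolve1d(image, row, axis=1), col, axis=0)
            return out

        img = np.random.rand(64, 64)
        k = np.outer([1.0, 2.0, 1.0], [1.0, 0.0, -1.0]) + 0.1 * np.random.rand(3, 3)
        print(np.max(np.abs(convolve(img, k) - lowrank_conv2d(img, k))))   # ~machine precision
        print(np.max(np.abs(convolve(img, k) - lowrank_conv2d(img, k, rank=1))))  # rank-1 error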

  20. New molecular players facilitating Mg(2+) reabsorption in the distal convoluted tubule.

    NARCIS (Netherlands)

    Glaudemans, B.; Knoers, N.V.A.M.; Hoenderop, J.G.J.; Bindels, R.J.M.

    2010-01-01

    The renal distal convoluted tubule (DCT) has an essential role in maintaining systemic magnesium (Mg(2+)) concentration. The DCT is the final determinant of plasma Mg(2+) levels, as the more distal nephron segments are largely impermeable to Mg(2+). In the past decade, positional candidate strategie

  1. Comparing Local Descriptors and Bags of Visual Words to Deep Convolutional Neural Networks for Plant Recognition

    NARCIS (Netherlands)

    Pawara, Pornntiwa; Okafor, Emmanuel; Surinta, Olarik; Schomaker, Lambertus; Wiering, Marco

    2017-01-01

    The use of machine learning and computer vision methods for recognizing different plants from images has attracted lots of attention from the community. This paper aims at comparing local feature descriptors and bags of visual words with different classifiers to deep convolutional neural networks (C

  2. Convolutional neural network based sensor fusion for forward looking ground penetrating radar

    Science.gov (United States)

    Sakaguchi, Rayn; Crosskey, Miles; Chen, David; Walenz, Brett; Morton, Kenneth

    2016-05-01

    Forward looking ground penetrating radar (FLGPR) is an alternative buried threat sensing technology designed to offer additional standoff compared to downward looking GPR systems. Due to additional flexibility in antenna configurations, FLGPR systems can accommodate multiple sensor modalities on the same platform that can provide complementary information. The different sensor modalities present challenges in both developing informative feature extraction methods and fusing sensor information in order to obtain the best discrimination performance. This work uses convolutional neural networks in order to jointly learn features across two sensor modalities and fuse the information in order to distinguish between target and non-target regions. This joint optimization is possible by modifying the traditional image-based convolutional neural network configuration to extract data from multiple sources. The filters generated by this process create a learned feature extraction method that is optimized to provide the best discrimination performance when fused. This paper presents the results of applying convolutional neural networks and compares these results to the use of fusion performed with a linear classifier. This paper also compares performance between convolutional neural network architectures to show the benefit of fusing the sensor information in different ways.

  3. Yetter-Drinfel'd Modules and Convolution Hopf Modules

    Institute of Scientific and Technical Information of China (English)

    张良云; 王栓宏

    2002-01-01

    In this paper, we first give a sufficient and necessary condition for a Hopf algebra to be a Yetter-Drinfel'd module, and prove that the finite dual of a Yetter-Drinfel'd module is still a Yetter-Drinfel'd module. Finally, we introduce the concept of a convolution module.

  4. A generalized recursive convolution method for time-domain propagation in porous media.

    Science.gov (United States)

    Dragna, Didier; Pineau, Pierre; Blanc-Benon, Philippe

    2015-08-01

    An efficient numerical method, referred to as the auxiliary differential equation (ADE) method, is proposed to compute convolutions between relaxation functions and acoustic variables arising in sound propagation equations in porous media. For this purpose, the relaxation functions are approximated in the frequency domain by rational functions. The time variation of the convolution is thus governed by first-order differential equations which can be straightforwardly solved. The accuracy of the method is first investigated and compared to that of recursive convolution methods. It is shown that, while recursive convolution methods are first- or second-order accurate in time, the ADE method does not introduce any additional error. The ADE method is then applied to outdoor sound propagation using the equations proposed by Wilson et al. for the ground [(2007). Appl. Acoust. 68, 173-200]. A first one-dimensional case shows that only five poles are necessary to accurately approximate the relaxation functions for typical applications. Finally, the ADE method is used to compute sound propagation in a three-dimensional geometry over an absorbing ground. Results obtained with Wilson's equations are compared to those obtained with Zwikker and Kosten's equations and with an impedance surface for different flow resistivities.
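
    The mechanism can be sketched as follows: once the relaxation kernel is approximated by a sum of decaying exponentials (one term per pole), each partial convolution obeys a first-order ODE and can be advanced step by step in time. The Python example below uses two illustrative poles (not Wilson's actual coefficients) and an exponential update that assumes the input is constant over each step, then checks the result against a direct discrete convolution.

        import numpy as np

        # Kernel approximated by a sum of decaying exponentials (poles are illustrative):
        #   k(t) = sum_i A[i] * exp(-lam[i] * t),   t >= 0
        A = np.array([1.0, 0.5])
        lam = np.array([2.0, 10.0])

        def convolve_ade(u, dt):
            """Time-step I(t) = (k * u)(t) via per-pole accumulators phi_i obeying
            dphi_i/dt = -lam_i * phi_i + A_i * u(t)."""
            phi = np.zeros_like(A)
            out = np.empty_like(u)
            decay = np.exp(-lam * dt)
            for n, un in enumerate(u):
                # exact solution of the linear ODE over one step, u held constant
                phi = decay * phi + A * (1.0 - decay) / lam * un
                out[n] = phi.sum()
            return out

        dt = 1e-3
        t = np.arange(0.0, 1.0, dt)
        u = np.sin(2 * np.pi * 5 * t)
        kern = A[0] * np.exp(-lam[0] * t) + A[1] * np.exp(-lam[1] * t)
        direct = dt * np.convolve(u, kern)[:t.size]
        print(np.max(np.abs(convolve_ade(u, dt) - direct)))   # small discretisation difference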

  5. Continuous Degradation of Chitosan in a Convoluted Fibrous Bed Bioreactor with Immobilized Trichoderma reesei

    Institute of Scientific and Technical Information of China (English)

    WU Mianbin; XIA Liming; et al.

    2002-01-01

    Continuous hydrolysis of chitosan was performed in a convoluted fibrous bed bioreactor (CFBB) with immobilized T. reesei. At a dilution rate of 0.4 d-1 and a substrate concentration of 2% (mass/volume), the average degree of polymerization of the hydrolysate can be kept at 1.25-1.35, and is easily regulated by changing the dilution rate or the inlet chitosan concentration.

  6. A new family of windows--convolution windows and their applications

    Institute of Scientific and Technical Information of China (English)

    ZHANG, Jieqiu; LIANG, Changhong; CHEN, Yanpu

    2005-01-01

    A new family of windows is constructed by convolving a few rectangular windows of the same time width, and its members are thus referred to as convolution windows. The expressions of the second-order up to the eighth-order convolution windows in both the time and frequency domains are derived. Their applications in high accuracy harmonic analysis of periodic signals are investigated. Comparisons between the proposed windows and some known windows of the same width show that, when the synchronous deviation of data sampling is slight, the proposed ones suffer the least spectral leakage. Therefore, the new windows are well suited for high accuracy harmonic analysis and parameter estimation for periodic signals. The error analysis and computer simulations show that the estimation errors, corresponding to the frequency, amplitude and phase of every harmonic component of a signal, are proportional to the pth power of the relative frequency deviation when the pth-order convolution window is applied to a signal of approximately p cycles. By introducing real-time adjustment of the sampling interval, the proposed algorithm can adaptively track the signal frequency, leading to a smaller synchronous sampling deviation. The proposed approach has the advantages of easy implementation and high measurement precision and can be used in harmonic analysis of quasi-periodic signals whose fundamental frequency drifts slowly with time.
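
    Since a pth-order convolution window is the p-fold convolution of identical rectangular windows, it can be generated in a few lines; the NumPy sketch below builds such windows (equal-length rectangles and unit-sum normalisation are assumptions of this illustration) and prints their highest sidelobe level, which drops rapidly with the window order.

        import numpy as np

        def convolution_window(N, p):
            """p-th order convolution window of length ~N, built by convolving p
            identical rectangular windows (p = 1 is rectangular, p = 2 triangular)."""
            m = max(N // p, 1)
            w = np.ones(m)
            for _ in range(p - 1):
                w = np.convolve(w, np.ones(m))
            return w / w.sum()

        for p in (1, 2, 4):
            w = convolution_window(256, p)
            W = np.abs(np.fft.rfft(w, 8192))
            W /= W[0]
            k = np.argmax(np.diff(W) > 0)            # end of the main lobe
            print(f"order {p}: highest sidelobe {20 * np.log10(W[k:].max()):.1f} dB")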

  7. Inverse Problems for a Parabolic Integrodifferential Equation in a Convolutional Weak Form

    Directory of Open Access Journals (Sweden)

    Kairi Kasemets

    2013-01-01

    Full Text Available We deduce formulas for the Fréchet derivatives of cost functionals of several inverse problems for a parabolic integrodifferential equation in a weak formulation. The method consists in the application of an integrated convolutional form of the weak problem and all computations are implemented in regular Sobolev spaces.

  8. Nonlinear Trellis Description for Convolutionally Encoded Transmission Over ISI-channels with Applications for CPM

    CERN Document Server

    Schuh, Fabian

    2012-01-01

    In this paper we propose a matched decoding scheme for convolutionally encoded transmission over intersymbol interference (ISI) channels and devise a nonlinear trellis description. As an application we show that for coded continuous phase modulation (CPM) using a non-coherent receiver the number of states of the super trellis can be significantly reduced by means of a matched non-linear trellis encoder.

  9. Mapping the Relationship between Cortical Convolution and Intelligence: Effects of Gender

    Science.gov (United States)

    Luders, Eileen; Narr, Katherine L.; Bilder, Robert M.; Szeszko, Philip R.; Gurbani, Mala N.; Hamilton, Liberty; Gaser, Christian

    2008-01-01

    The pronounced convolution of the human cortex may be a morphological substrate that supports some of our species’ most distinctive cognitive abilities. Therefore, individual intelligence within humans might be modulated by the degree of folding in certain cortical regions. We applied advanced methods to analyze cortical convolution at high spatial resolution and correlated those measurements with intelligence quotients. Within a large sample of healthy adult subjects (n = 65), we detected the most prominent correlations in the left medial hemisphere. More specifically, intelligence scores were positively associated with the degree of folding in the temporo-occipital lobe, particularly in the outermost section of the posterior cingulate gyrus (retrosplenial areas). Thus, this region might be an important contributor toward individual intelligence, either via modulating pathways to (pre)frontal regions or by serving as a location for the convergence of information. Prominent gender differences within the right frontal cortex were observed; females showed uncorrected significant positive correlations and males showed a nonsignificant trend toward negative correlations. It is possible that formerly described gender differences in regional convolution are associated with differences in the underlying architecture. This might lead to the development of sexually dimorphic information processing strategies and affect the relationship between intelligence and cortical convolution. PMID:18089578

  10. Off-resonance artifacts correction with convolution in k-space (ORACLE).

    Science.gov (United States)

    Lin, Wei; Huang, Feng; Simonotto, Enrico; Duensing, George R; Reykowski, Arne

    2012-06-01

    Off-resonance artifacts hinder the wider applicability of echo-planar imaging and non-Cartesian MRI methods such as radial and spiral. In this work, a general and rapid method is proposed for off-resonance artifacts correction based on data convolution in k-space. The acquired k-space is divided into multiple segments based on their acquisition times. Off-resonance-induced artifact within each segment is removed by applying a convolution kernel, which is the Fourier transform of an off-resonance correcting spatial phase modulation term. The field map is determined from the inverse Fourier transform of a basis kernel, which is calibrated from data fitting in k-space. The technique was demonstrated in phantom and in vivo studies for radial, spiral and echo-planar imaging datasets. For radial acquisitions, the proposed method allows the self-calibration of the field map from the imaging data, when an alternating view-angle ordering scheme is used. An additional advantage for off-resonance artifacts correction based on data convolution in k-space is the reusability of convolution kernels to images acquired with the same sequence but different contrasts. Copyright © 2011 Wiley-Liss, Inc.

  11. Producing data-based sensitivity kernels from convolution and correlation in exploration geophysics.

    Science.gov (United States)

    Chmiel, M. J.; Roux, P.; Herrmann, P.; Rondeleux, B.

    2016-12-01

    Many studies have shown that seismic interferometry can be used to estimate surface wave arrivals by correlation of seismic signals recorded at a pair of locations. In the case of ambient noise sources, the convergence towards the surface wave Green's functions is obtained with the criterion of equipartitioned energy. However, seismic acquisition with active, controlled sources gives more possibilities when it comes to interferometry. The use of controlled sources makes it possible to recover the surface wave Green's function between two points using either correlation or convolution. We investigate the convolutional and correlational approaches using land active-seismic data from exploration geophysics. The data were recorded on 10,710 vertical receivers using 51,808 sources (seismic vibrator trucks). The source spacing is the same in both the X and Y directions (30 m), which is known as "carpet shooting". The receivers are placed in parallel lines with a spacing of 150 m in the X direction and 30 m in the Y direction. Invoking spatial reciprocity between sources and receivers, correlation and convolution functions can thus be constructed between either pairs of receivers or pairs of sources. Benefiting from the dense acquisition, we extract sensitivity kernels from correlation and convolution measurements of the seismic data. These sensitivity kernels are subsequently used to produce phase-velocity dispersion curves between two points and to separate the higher mode from the fundamental mode for surface waves. Potential application to surface wave cancellation is also envisaged.
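
    The elementary interferometric step referred to above, cross-correlating the records of two receivers over many sources and stacking, can be sketched with NumPy as below; the synthetic gather and the 25-sample delay are made up for illustration and do not represent the survey data.

        import numpy as np

        def stacked_crosscorrelation(gather_a, gather_b):
            """Stack c(k) = sum_t b(t + k) a(t) over all sources.
            gather_a, gather_b: (n_sources, n_samples); a positive peak lag
            means gather_b lags gather_a."""
            n = gather_a.shape[1]
            A = np.fft.rfft(gather_a, 2 * n, axis=1)
            B = np.fft.rfft(gather_b, 2 * n, axis=1)
            xcorr = np.fft.irfft(B * np.conj(A), 2 * n, axis=1)
            return np.fft.fftshift(xcorr.sum(axis=0))      # stacked correlation gather

        rng = np.random.default_rng(0)
        src = rng.standard_normal((50, 512))               # 50 sources, 512 samples
        rec_a = src
        rec_b = np.zeros_like(src)
        rec_b[:, 25:] = src[:, :-25]                       # receiver B lags A by 25 samples
        c = stacked_crosscorrelation(rec_a, rec_b)
        print(np.argmax(c) - c.size // 2)                  # 25, the inter-receiver delay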

  12. New molecular players facilitating Mg(2+) reabsorption in the distal convoluted tubule.

    NARCIS (Netherlands)

    Glaudemans, B.; Knoers, N.V.A.M.; Hoenderop, J.G.J.; Bindels, R.J.M.

    2010-01-01

    The renal distal convoluted tubule (DCT) has an essential role in maintaining systemic magnesium (Mg(2+)) concentration. The DCT is the final determinant of plasma Mg(2+) levels, as the more distal nephron segments are largely impermeable to Mg(2+). In the past decade, positional candidate strategie

  13. Dissociative electron transfer in polychlorinated aromatics. Reduction potentials from convolution analysis and quantum chemical calculations.

    Science.gov (United States)

    Romańczyk, Piotr P; Rotko, Grzegorz; Kurek, Stefan S

    2016-08-10

    Formal potentials of the first reduction leading to dechlorination in dimethylformamide were obtained from convolution analysis of voltammetric data and confirmed by quantum chemical calculations for a series of polychlorinated benzenes: hexachlorobenzene (-2.02 V vs. Fc(+)/Fc), pentachloroanisole (-2.14 V), and 2,4-dichlorophenoxy- and 2,4,5-trichlorophenoxyacetic acids (-2.35 V and -2.34 V, respectively). The key parameters required to calculate the reduction potential, electron affinity and/or C-Cl bond dissociation energy, were computed at both DFT-D and CCSD(T)-F12 levels. Comparison of the obtained gas-phase energies and redox potentials with experiment enabled us to verify the relative energetics and the performance of various implicit solvent models. Good agreement with the experiment was achieved for redox potentials computed at the DFT-D level, but only for the stepwise mechanism owing to the error compensation. For the concerted electron transfer/C-Cl bond cleavage process, the application of a high level coupled cluster method is required. Quantum chemical calculations have also demonstrated the significant role of the π*ring and σ*C-Cl orbital mixing. It brings about the stabilisation of the non-planar, C2v-symmetric C6Cl6˙(-) radical anion, explains the experimentally observed low energy barrier and the transfer coefficient close to 0.5 for C6Cl5OCH3 in an electron transfer process followed by immediate C-Cl bond cleavage in solution, and an increase in the probability of dechlorination of di- and trichlorophenoxyacetic acids due to substantial population of the vibrational excited states corresponding to the out-of-plane C-Cl bending at ambient temperatures.

  14. Lung nodule malignancy prediction using multi-task convolutional neural network

    Science.gov (United States)

    Li, Xiuli; Kao, Yueying; Shen, Wei; Li, Xiang; Xie, Guotong

    2017-03-01

    In this paper, we investigated the problem of diagnostic lung nodule malignancy prediction using thoracic Computed Tomography (CT) screening. Unlike most existing studies, which classify nodules into two types (benign and malignant), we interpreted nodule malignancy prediction as a regression problem and predicted a continuous malignancy level. We proposed a joint multi-task learning algorithm using a Convolutional Neural Network (CNN) to capture nodule heterogeneity by extracting discriminative features from alternatingly stacked layers. We trained a CNN regression model to predict the nodule malignancy, and designed a multi-task learning mechanism to simultaneously share knowledge among 9 different nodule characteristics (Subtlety, Calcification, Sphericity, Margin, Lobulation, Spiculation, Texture, Diameter and Malignancy), which improved the final prediction result. Each CNN would generate characteristic-specific feature representations, and then we applied multi-task learning on the features to predict the corresponding likelihood for that characteristic. We evaluated the proposed method on 2620 nodule CT scans from the LIDC-IDRI dataset with a 5-fold cross-validation strategy. The multi-task CNN achieved a regression RMSE of 0.830 and a mapped classification accuracy of 83.03%, compared with an RMSE of 0.894 and an accuracy of 74.9% for single-task regression. Experiments show that the proposed method can predict the lung nodule malignancy likelihood effectively and outperforms the state-of-the-art methods. The learning framework could easily be applied to other anomaly likelihood prediction problems, such as skin cancer and breast cancer. It demonstrates the potential of our method to assist radiologists in nodule staging assessment and individual therapeutic planning.
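
    The shared-trunk, multi-head structure implied above can be sketched in PyTorch as follows: one convolutional feature extractor feeds nine regression heads (one per nodule characteristic) and their losses are summed. The layer sizes, 2-D patches and the unweighted loss sum are assumptions of this illustration, not the authors' configuration.

        import torch
        import torch.nn as nn

        CHARS = ["subtlety", "calcification", "sphericity", "margin", "lobulation",
                 "spiculation", "texture", "diameter", "malignancy"]

        class MultiTaskNoduleCNN(nn.Module):
            """Shared convolutional trunk with one regression head per characteristic."""
            def __init__(self):
                super().__init__()
                self.trunk = nn.Sequential(
                    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
                    nn.Flatten(),
                )
                self.heads = nn.ModuleDict({c: nn.Linear(64, 1) for c in CHARS})

            def forward(self, x):
                z = self.trunk(x)
                return {c: head(z).squeeze(-1) for c, head in self.heads.items()}

        model = MultiTaskNoduleCNN()
        x = torch.randn(8, 1, 64, 64)                    # 8 toy nodule patches
        targets = {c: torch.rand(8) * 5 for c in CHARS}  # toy characteristic labels
        preds = model(x)
        loss = sum(nn.functional.mse_loss(preds[c], targets[c]) for c in CHARS)
        loss.backward()
        print(float(loss))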

  15. Left ventricle segmentation in cardiac MRI images using fully convolutional neural networks

    Science.gov (United States)

    Vázquez Romaguera, Liset; Costa, Marly Guimarães Fernandes; Romero, Francisco Perdigón; Costa Filho, Cicero Ferreira Fernandes

    2017-03-01

    According to the World Health Organization, cardiovascular diseases are the leading cause of death worldwide, accounting for 17.3 million deaths per year, a number that is expected to grow to more than 23.6 million by 2030. Most cardiac pathologies involve the left ventricle; therefore, estimation of several functional parameters from a previous segmentation of this structure can be helpful in diagnosis. Manual delineation is a time-consuming and tedious task that is also prone to high intra- and inter-observer variability. Thus, there exists a need for an automated cardiac segmentation method to help facilitate the diagnosis of cardiovascular diseases. In this work we propose a deep fully convolutional neural network architecture to address this issue and assess its performance. The model was trained end to end in a supervised fashion from whole cardiac MRI images and their ground truth to make a per-pixel classification. The Caffe deep learning framework, running on an NVidia Quadro K4200 Graphics Processing Unit, was used for its design, development and experimentation. The net architecture is: Conv64-ReLU (2x) - MaxPooling - Conv128-ReLU (2x) - MaxPooling - Conv256-ReLU (2x) - MaxPooling - Conv512-ReLu-Dropout (2x) - Conv2-ReLU - Deconv - Crop - Softmax. Training and testing processes were carried out using 5-fold cross validation with short axis cardiac magnetic resonance images from the Sunnybrook Database. We obtained a Dice score of 0.92 and 0.90, Hausdorff distance of 4.48 and 5.43, Jaccard index of 0.97 and 0.97, sensitivity of 0.92 and 0.90 and specificity of 0.99 and 0.99, overall mean values obtained with SGD and RMSProp, respectively.
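
    Following the layer list quoted above, a PyTorch sketch of the network could look like the code below; the 3x3 kernel size, dropout rate and the 8x transposed-convolution upsampling are assumptions, since only the layer sequence is given in the abstract. The per-pixel softmax/cross-entropy would be applied to the returned logits at training time.

        import torch
        import torch.nn as nn

        def block(cin, cout):                     # Conv-ReLU (2x), as in the listed net
            return nn.Sequential(
                nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(),
                nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(),
            )

        class LVSegFCN(nn.Module):
            """Sketch of the listed fully convolutional segmentation network."""
            def __init__(self):
                super().__init__()
                self.features = nn.Sequential(
                    block(1, 64), nn.MaxPool2d(2),
                    block(64, 128), nn.MaxPool2d(2),
                    block(128, 256), nn.MaxPool2d(2),
                    nn.Conv2d(256, 512, 3, padding=1), nn.ReLU(), nn.Dropout2d(0.5),
                    nn.Conv2d(512, 512, 3, padding=1), nn.ReLU(), nn.Dropout2d(0.5),
                    nn.Conv2d(512, 2, 1), nn.ReLU(),             # 2 classes: LV / background
                )
                self.up = nn.ConvTranspose2d(2, 2, kernel_size=16, stride=8, padding=4)

            def forward(self, x):
                y = self.up(self.features(x))                    # "Deconv"
                return y[:, :, :x.shape[2], :x.shape[3]]         # "Crop" to input size

        logits = LVSegFCN()(torch.randn(1, 1, 128, 128))
        print(logits.shape)                                      # (1, 2, 128, 128)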

  16. Verification of dose profiles generated by the convolution algorithm of the gamma knife(®) radiosurgery planning system.

    Science.gov (United States)

    Chung, Hyun-Tai; Park, Jeong-Hoon; Chun, Kook Jin

    2017-09-01

    A convolution algorithm that takes into account electron-density inhomogeneity was recently introduced to calculate dose distributions for the Gamma Knife (GK) Perfexion™ treatment planning program. The accuracies of the dose distributions computed using the convolution method were assessed using an anthropomorphic phantom and film dosimetry. Absorbed-dose distributions inside a phantom (CIRS Radiosurgery Head Phantom, Model 605) were calculated using the convolution method of the GK treatment-planning software (Leksell Gamma Plan® version 10.1; LGP) for various combinations of collimator size, location, direction of calculation plane, and number of shots. Computed tomography (CT) images of the phantom and a data set of CT number versus electron density were provided to the LGP. Calculated distributions were exported as Digital Imaging and Communications in Medicine - Radiation Therapy (DICOM-RT) files. Three types of radiochromic film (GafChromic® MD-V2-55, MD-V3, and EBT2) were irradiated inside the phantom using GK Perfexion™. Scanned images of the measured films were processed following standard radiochromic film-handling procedures. For a two-dimensional quantitative evaluation, gamma index pass rates (GIPRs) and normalized agreement-test indices (NATIs) were obtained. Image handling and index calculations were performed using a commercial software package (DoseLab Pro version 6.80). The film-dose calibration data were well fitted with third-order polynomials (R² ≥ 0.9993). The mean GIPR and NATI of the 93 analyzed films were 99.3 ± 1.1% and 0.8 ± 1.3, respectively, using 3%/1.0 mm criteria. The calculated maximum doses were 4.3 ± 1.7% higher than the measured values for the 4 mm single shots and 1.8 ± 0.7% higher for the 8 mm single shots, whereas differences of only 0.3 ± 0.9% were observed for the 16 mm single shots. The accuracy of the calculated distribution was not statistically related to the collimator size, number
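
    For readers unfamiliar with the gamma index used above, the following NumPy sketch shows a simplified global-gamma pass-rate calculation under a 3%/1 mm criterion. It is a brute-force illustration with made-up dose arrays, not the DoseLab Pro implementation used in the study.

```python
# Simplified 2D gamma-index pass-rate calculation (NumPy), illustrating the
# 3%/1 mm criterion reported above. A brute-force global-gamma sketch, not the
# DoseLab implementation used in the study.
import numpy as np

def gamma_pass_rate(ref, evl, spacing_mm, dose_crit=0.03, dta_mm=1.0,
                    low_dose_cutoff=0.1):
    """ref, evl: 2D dose arrays on the same grid; spacing_mm: pixel size."""
    ny, nx = ref.shape
    yy, xx = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    dose_norm = dose_crit * ref.max()                # global 3% criterion
    search = int(np.ceil(3 * dta_mm / spacing_mm))   # limit search radius
    gammas = []
    for iy in range(ny):
        for ix in range(nx):
            if ref[iy, ix] < low_dose_cutoff * ref.max():
                continue                             # skip low-dose region
            y0, y1 = max(0, iy - search), min(ny, iy + search + 1)
            x0, x1 = max(0, ix - search), min(nx, ix + search + 1)
            dist2 = ((yy[y0:y1, x0:x1] - iy) ** 2 +
                     (xx[y0:y1, x0:x1] - ix) ** 2) * spacing_mm ** 2
            ddose2 = (evl[y0:y1, x0:x1] - ref[iy, ix]) ** 2
            gamma2 = dist2 / dta_mm ** 2 + ddose2 / dose_norm ** 2
            gammas.append(np.sqrt(gamma2.min()))
    return 100.0 * np.mean(np.asarray(gammas) <= 1.0)

if __name__ == "__main__":
    ref = np.exp(-((np.arange(64) - 32) ** 2) / 50.0)[None, :] * np.ones((64, 1))
    evl = ref * 1.01                                 # 1% uniform dose difference
    print(gamma_pass_rate(ref, evl, spacing_mm=0.5)) # should pass ~100%
```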

  17. Application of Convolution Neural Network to the forecasts of flare classification and occurrence using SOHO MDI data

    Science.gov (United States)

    Park, Eunsu; Moon, Yong-Jae

    2017-08-01

    A Convolutional Neural Network (CNN) is one of the best-known deep-learning methods in image processing and computer vision. In this study, we apply CNNs to two kinds of flare forecasting models: flare classification and occurrence. For this, we consider several pre-trained models (e.g., AlexNet, GoogLeNet, and ResNet) and customize them by changing several options such as the number of layers, activation function, and optimizer. Our inputs are the same number of SOHO/MDI images for each flare class (None, C, M and X) at 00:00 UT from Jan 1996 to Dec 2010 (1600 images in total). Outputs are the results of daily flare forecasting for flare class and occurrence. We build, train, and test the models on TensorFlow, a well-known machine learning software library developed by Google. Our major results from this study are as follows. First, most of the models have accuracies of more than 0.7. Second, ResNet, developed by Microsoft, has the best accuracies: 0.86 for flare classification and 0.84 for flare occurrence. Third, the accuracies of these models vary greatly with changing parameters. We discuss several possibilities to improve the models.
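
    A minimal sketch of the pre-trained-model customization described above is shown below, assuming PyTorch/torchvision rather than the TensorFlow setup used in the study; ResNet-18, the frozen trunk, and the learning rate are illustrative choices only.

```python
# Fine-tuning a pretrained CNN for 4-class flare forecasting (None/C/M/X).
# A minimal PyTorch sketch; the study itself was built on TensorFlow, and the
# learning rate, input size, and choice of ResNet-18 are illustrative only.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # None, C, M, X

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # replace the head

# Optionally freeze the pretrained trunk and train only the new head first.
for name, p in model.named_parameters():
    p.requires_grad = name.startswith("fc")

optimizer = torch.optim.Adam(filter(lambda p: p.requires_grad,
                                    model.parameters()), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One dummy training step on random tensors standing in for MDI images
# replicated to 3 channels.
x = torch.randn(8, 3, 224, 224)
y = torch.randint(0, NUM_CLASSES, (8,))
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
print(float(loss))
```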

  18. Implementation of Convolution Encoder and Viterbi Decoder for Constraint Length 7 and Bit Rate 1/2

    Directory of Open Access Journals (Sweden)

    Mr. Sandesh Y.M

    2013-11-01

    Full Text Available Convolutional codes are non-block codes that can be designed for either error detection or correction. Convolutional coding has been used in communication systems including deep-space communication and wireless communication. At the receiver end, the original message sequence is recovered from the received data using a Viterbi decoder. The decoder implements the Viterbi algorithm, a maximum-likelihood algorithm that, based on the minimum cumulative Hamming distance, decides the optimal trellis path most likely followed at the encoder. In this paper I present the convolutional encoder and Viterbi decoder for constraint length 7 and code rate 1/2.
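
    A rate-1/2, constraint-length-7 convolutional encoder is compact enough to sketch in a few lines. The generator polynomials 171/133 (octal) below are the commonly used pair for K = 7; the paper's exact polynomials are not stated in the abstract, so they are an assumption. A Viterbi decoder would then search the 64-state trellis for the path with minimum cumulative Hamming distance.

```python
# Rate-1/2, constraint-length-7 convolutional encoder (pure Python sketch).
# Generator polynomials 171/133 (octal) are the common K=7 pair; the paper's
# exact polynomials are an assumption here.
G1, G2 = 0o171, 0o133   # 7-bit generator polynomials
K = 7                   # constraint length

def parity(x):
    return bin(x).count("1") & 1

def conv_encode(bits):
    """Encode a bit list; returns two output bits per input bit."""
    state = 0                             # 6-bit shift-register contents
    out = []
    for b in bits:
        reg = (b << (K - 1)) | state      # newest bit + previous 6 bits
        out += [parity(reg & G1), parity(reg & G2)]
        state = reg >> 1                  # shift the new bit into memory
    return out

if __name__ == "__main__":
    msg = [1, 0, 1, 1, 0, 0, 1] + [0] * (K - 1)   # append flush bits
    print(conv_encode(msg))
```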

  19. Mitosis detection in breast cancer pathology images by combining handcrafted and convolutional neural network features.

    Science.gov (United States)

    Wang, Haibo; Cruz-Roa, Angel; Basavanhally, Ajay; Gilmore, Hannah; Shih, Natalie; Feldman, Mike; Tomaszewski, John; Gonzalez, Fabio; Madabhushi, Anant

    2014-10-01

    Breast cancer (BCa) grading plays an important role in predicting disease aggressiveness and patient outcome. A key component of BCa grade is the mitotic count, which involves quantifying the number of cells in the process of dividing (i.e., undergoing mitosis) at a specific point in time. Currently, mitosis counting is done manually by a pathologist looking at multiple high power fields (HPFs) on a glass slide under a microscope, an extremely laborious and time consuming process. The development of computerized systems for automated detection of mitotic nuclei, while highly desirable, is confounded by the highly variable shape and appearance of mitoses. Existing methods use either handcrafted features that capture certain morphological, statistical, or textural attributes of mitoses or features learned with convolutional neural networks (CNN). Although handcrafted features are inspired by the domain and the particular application, the data-driven CNN models tend to be domain agnostic and attempt to learn additional feature bases that cannot be represented through any of the handcrafted features. On the other hand, CNN is computationally more complex and needs a large number of labeled training instances. Since handcrafted features attempt to model domain pertinent attributes and CNN approaches are largely supervised feature generation methods, there is an appeal in attempting to combine these two distinct classes of feature generation strategies to create an integrated set of attributes that can potentially outperform either class of feature extraction strategies individually. We present a cascaded approach for mitosis detection that intelligently combines a CNN model and handcrafted features (morphology, color, and texture features). By employing a light CNN model, the proposed approach is far less demanding computationally, and the cascaded strategy of combining handcrafted features and CNN-derived features enables the possibility of maximizing the performance
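
    One simple way to combine the two feature families is to concatenate CNN activations with handcrafted descriptors and train a single classifier on the joint vector, as in the sketch below (PyTorch and scikit-learn). This concatenation-based fusion is an illustrative stand-in, not the cascaded scheme described in the paper, and the handcrafted features shown are toy color statistics.

```python
# Illustrative fusion of CNN-derived and handcrafted features for patch
# classification. Not the paper's cascaded scheme; the handcrafted features
# here are toy per-channel statistics.
import numpy as np
import torch
from torchvision import models
from sklearn.ensemble import RandomForestClassifier

cnn = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
cnn.fc = torch.nn.Identity()      # use the 512-d penultimate activations
cnn.eval()

def cnn_features(patches):        # patches: (N, 3, 224, 224) float tensor
    with torch.no_grad():
        return cnn(patches).numpy()

def handcrafted_features(patches):
    """Toy color statistics per channel; real pipelines use morphology/texture."""
    arr = patches.numpy()
    return np.concatenate([arr.mean(axis=(2, 3)), arr.std(axis=(2, 3))], axis=1)

if __name__ == "__main__":
    X_img = torch.rand(20, 3, 224, 224)          # dummy candidate patches
    y = np.random.randint(0, 2, 20)              # mitosis / non-mitosis labels
    X = np.hstack([cnn_features(X_img), handcrafted_features(X_img)])
    clf = RandomForestClassifier(n_estimators=50).fit(X, y)
    print(clf.score(X, y))
```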

  20. Cascaded ensemble of convolutional neural networks and handcrafted features for mitosis detection

    Science.gov (United States)

    Wang, Haibo; Cruz-Roa, Angel; Basavanhally, Ajay; Gilmore, Hannah; Shih, Natalie; Feldman, Mike; Tomaszewski, John; Gonzalez, Fabio; Madabhushi, Anant

    2014-03-01

    Breast cancer (BCa) grading plays an important role in predicting disease aggressiveness and patient outcome. A key component of BCa grade is mitotic count, which involves quantifying the number of cells in the process of dividing (i.e. undergoing mitosis) at a specific point in time. Currently mitosis counting is done manually by a pathologist looking at multiple high power fields on a glass slide under a microscope, an extremely laborious and time consuming process. The development of computerized systems for automated detection of mitotic nuclei, while highly desirable, is confounded by the highly variable shape and appearance of mitoses. Existing methods use either handcrafted features that capture certain morphological, statistical or textural attributes of mitoses or features learned with convolutional neural networks (CNN). While handcrafted features are inspired by the domain and the particular application, the data-driven CNN models tend to be domain agnostic and attempt to learn additional feature bases that cannot be represented through any of the handcrafted features. On the other hand, CNN is computationally more complex and needs a large number of labeled training instances. Since handcrafted features attempt to model domain pertinent attributes and CNN approaches are largely unsupervised feature generation methods, there is an appeal to attempting to combine these two distinct classes of feature generation strategies to create an integrated set of attributes that can potentially outperform either class of feature extraction strategies individually. In this paper, we present a cascaded approach for mitosis detection that intelligently combines a CNN model and handcrafted features (morphology, color and texture features). By employing a light CNN model, the proposed approach is far less demanding computationally, and the cascaded strategy of combining handcrafted features and CNN-derived features enables the possibility of maximizing performance by

  1. A novel end-to-end classifier using domain transferred deep convolutional neural networks for biomedical images.

    Science.gov (United States)

    Pang, Shuchao; Yu, Zhezhou; Orgun, Mehmet A

    2017-03-01

    Highly accurate classification of biomedical images is an essential task in the clinical diagnosis of numerous medical diseases identified from those images. Traditional image classification methods that combine hand-crafted image feature descriptors with various classifiers are not able to effectively improve the accuracy rate and meet the high requirements of biomedical image classification. The same also holds true for artificial neural network models trained directly on limited biomedical images, or used directly as a black box to extract deep features based on another, distant dataset. In this study, we propose a highly reliable and accurate end-to-end classifier for all kinds of biomedical images via deep learning and transfer learning. We first apply a domain-transferred deep convolutional neural network to build a deep model, and then develop an overall deep learning architecture based on the raw pixels of the original biomedical images using supervised training. In our model, we do not need to manually design the feature space, seek an effective feature-vector classifier, or segment specific detection objects and image patches, which are the main technological difficulties in adopting traditional image classification methods. Moreover, we do not need to be concerned with whether there are large training sets of annotated biomedical images, affordable parallel computing resources featuring GPUs, or long waits to train a perfect deep model, which are the main problems in training deep neural networks for biomedical image classification as observed in recent works. With the utilization of a simple data augmentation method and fast convergence speed, our algorithm can achieve the best accuracy rate and outstanding classification ability for biomedical images. We have evaluated our classifier on several well-known public biomedical datasets and compared it with several state-of-the-art approaches. We propose a robust

  2. Comparison between the pencil beam convolution algorithm and the analytical anisotropic algorithm in breast tumors

    OpenAIRE

    Sá, Ana Cravo; Coelho, Carina Marques; Monsanto, Fátima

    2014-01-01

    Study objective: to compare the performance of the Pencil Beam Convolution (PBC) algorithm and the Analytical Anisotropic Algorithm (AAA) in the planning of 3D conformal radiotherapy treatment of breast tumors.

  3. TopologyNet: Topology based deep convolutional and multi-task neural networks for biomolecular property predictions.(Research Article)

    National Research Council Canada - National Science Library

    Cang, Zixuan; Wei, Guowei

    2017-01-01

    .... This representation reveals hidden structure-function relationships in biomolecules. We further integrate ESPH and deep convolutional neural networks to construct a multichannel topological neural network (TopologyNet...

  4. Local dynamic range compensation for scanning electron microscope imaging system by sub-blocking multiple peak HE with convolution.

    Science.gov (United States)

    Sim, K S; Teh, V; Tey, Y C; Kho, T K

    2016-11-01

    This paper introduces a new technique to improve Scanning Electron Microscope (SEM) image quality, which we name sub-blocking multiple peak histogram equalization (SUB-B-MPHE) with a convolution operator. Using this proposed technique, the modified MPHE performs better than the original MPHE. In addition, the sub-blocking method incorporates a convolution operator that helps remove the blocking effect in SEM images produced by this technique. Hence, by properly distributing suitable pixel values across the whole image, the convolution operator effectively removes the blocking effect. Overall, SUB-B-MPHE with convolution outperforms the other methods. SCANNING 38:492-501, 2016. © 2015 Wiley Periodicals, Inc.
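
    The idea of equalizing sub-blocks and then convolving to suppress the resulting blocking artifacts can be illustrated with a few lines of NumPy/SciPy, as below. This simplified block-wise histogram equalization followed by a mean-filter convolution is not the published SUB-B-MPHE algorithm; the block size and kernel size are arbitrary choices.

```python
# Block-wise histogram equalization followed by a smoothing convolution
# (NumPy/SciPy). A simplified illustration of the general idea, not the
# published SUB-B-MPHE algorithm.
import numpy as np
from scipy.signal import convolve2d

def hist_equalize(block, levels=256):
    hist, _ = np.histogram(block, bins=levels, range=(0, levels))
    cdf = hist.cumsum() / block.size
    return np.floor((levels - 1) * cdf[block.astype(int)])

def blockwise_equalize(img, block=64):
    out = np.zeros_like(img, dtype=float)
    for y in range(0, img.shape[0], block):
        for x in range(0, img.shape[1], block):
            out[y:y+block, x:x+block] = hist_equalize(img[y:y+block, x:x+block])
    return out

def smooth(img, ksize=5):
    kernel = np.ones((ksize, ksize)) / ksize**2      # mean-filter kernel
    return convolve2d(img, kernel, mode="same", boundary="symm")

if __name__ == "__main__":
    sem = (np.random.rand(256, 256) * 255).astype(np.uint8)  # dummy SEM image
    enhanced = smooth(blockwise_equalize(sem))
    print(enhanced.shape, enhanced.min(), enhanced.max())
```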

  5. Calculation of the reactor neutron time of flight spectrum by convolution technique

    Institute of Scientific and Technical Information of China (English)

    Cheng Jin-Xing; Ouyang Xiao-Ping; Zheng Yi; Zhang An-Hui; Ouyang Mao-Jie

    2008-01-01

    It is a very complex and time-consuming process to simulate the nuclear reactor neutron spectrum from the reactor core to the export channel by applying a Monte Carlo program. This paper presents a new method to calculate the neutron spectrum by using the convolution technique, which treats the channel transport as a linear system and the transport scattering as the response function. It also applies the Monte Carlo Neutron and Photon Transport Code (MCNP) to simulate the response function numerically. With the convolution technique applied to calculate the spectrum distribution from the core to the channel, the process reduces to a simple numerical integration. This saves computer time and avoids some of the trouble of rewriting the MCNP program.
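
    The core of the method, treating channel transport as a linear system so that the exported spectrum is the convolution of the core spectrum with an MCNP-derived response function, reduces to a one-line numerical convolution. The arrays below are made-up stand-ins for the real core spectrum and response function.

```python
# Channel transport treated as a linear system: the exported spectrum is the
# convolution of the core spectrum with a response function (here a made-up
# stand-in for the MCNP-derived one).
import numpy as np

t = np.linspace(0.0, 100.0, 1000)                     # time-of-flight axis (a.u.)
core_spectrum = np.exp(-(t - 30.0) ** 2 / 20.0)       # stand-in core spectrum
response = np.exp(-t / 5.0)                           # stand-in channel response
response /= np.trapz(response, t)                     # normalise the response

dt = t[1] - t[0]
channel_spectrum = np.convolve(core_spectrum, response, mode="full")[:t.size] * dt

print(channel_spectrum.max(), np.trapz(channel_spectrum, t))
```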

  6. Automatic detection of cell divisions (mitosis) in live-imaging microscopy images using Convolutional Neural Networks.

    Science.gov (United States)

    Shkolyar, Anat; Gefen, Amit; Benayahu, Dafna; Greenspan, Hayit

    2015-08-01

    We propose a semi-automated pipeline for the detection of possible cell divisions in live-imaging microscopy and the classification of these mitosis candidates using a Convolutional Neural Network (CNN). We use time-lapse images of NIH3T3 scratch assay cultures, extract patches around bright candidate regions that then undergo segmentation and binarization, followed by a classification of the binary patches into either containing or not containing cell division. The classification is performed by training a Convolutional Neural Network on a specially constructed database. We show strong results of AUC = 0.91 and F-score = 0.89, competitive with state-of-the-art methods in this field.

  7. Video-based convolutional neural networks for activity recognition from robot-centric videos

    Science.gov (United States)

    Ryoo, M. S.; Matthies, Larry

    2016-05-01

    In this evaluation paper, we discuss convolutional neural network (CNN)-based approaches for human activity recognition. In particular, we investigate CNN architectures designed to capture temporal information in videos and their applications to the human activity recognition problem. There have been multiple previous works using CNN features for videos. These include CNNs using 3-D XYT convolutional filters, CNNs using pooling operations on top of per-frame image-based CNN descriptors, and recurrent neural networks that learn temporal changes in per-frame CNN descriptors. We experimentally compare some of these representative CNNs on first-person human activity videos. We especially focus on videos from a robot's viewpoint, captured during its operations and human-robot interactions.

  8. Multi-Task Learning for Food Identification and Analysis with Deep Convolutional Neural Networks

    Institute of Scientific and Technical Information of China (English)

    Xi-Jin Zhang; Yi-Fan Lu; Song-Hai Zhang

    2016-01-01

    In this paper, we propose a multi-task system that can identify dish types, food ingredients, and cooking methods from food images with deep convolutional neural networks. We built a dataset of 360 classes of different foods with at least 500 images for each class. To reduce noise in the data, which was collected from the Internet, outlier images were detected and eliminated through a one-class SVM trained with deep convolutional features. We simultaneously trained a dish identifier, a cooking method recognizer, and a multi-label ingredient detector, which share a few low-level layers in the deep network architecture. The proposed framework shows higher accuracy than traditional methods with handcrafted features, and the cooking method recognizer and ingredient detector can be applied to dishes that are not included in the training dataset to provide reference information for users.
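
    The shared-trunk, multi-head structure described above (a dish classifier, a cooking-method recognizer, and a multi-label ingredient detector sharing low-level layers) can be sketched as follows in PyTorch. Class counts, trunk depth, and losses are illustrative assumptions, not the authors' network.

```python
# Shared low-level layers with three task heads (dish class, cooking method,
# multi-label ingredients). A minimal PyTorch sketch with made-up class counts.
import torch
import torch.nn as nn

N_DISHES, N_METHODS, N_INGREDIENTS = 360, 10, 100   # illustrative sizes

class FoodMultiTaskNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.shared = nn.Sequential(                 # shared low-level layers
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.dish_head = nn.Linear(64, N_DISHES)              # softmax task
        self.method_head = nn.Linear(64, N_METHODS)           # softmax task
        self.ingredient_head = nn.Linear(64, N_INGREDIENTS)   # multi-label task

    def forward(self, x):
        h = self.shared(x)
        return self.dish_head(h), self.method_head(h), self.ingredient_head(h)

if __name__ == "__main__":
    net = FoodMultiTaskNet()
    dish, method, ingredients = net(torch.randn(2, 3, 128, 128))
    # Cross-entropy for the single-label tasks, BCE-with-logits for ingredients.
    loss = (nn.functional.cross_entropy(dish, torch.tensor([0, 1])) +
            nn.functional.cross_entropy(method, torch.tensor([2, 3])) +
            nn.functional.binary_cross_entropy_with_logits(
                ingredients, torch.randint(0, 2, (2, N_INGREDIENTS)).float()))
    loss.backward()
    print(float(loss))
```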

  9. Efficient polar convolution based on the discrete Fourier-Bessel transform for application in computational biophotonics

    CERN Document Server

    Melchert, O; Roth, B

    2016-01-01

    We discuss efficient algorithms for the accurate forward and reverse evaluation of the discrete Fourier-Bessel transform (dFBT) as numerical tools to assist in the 2D polar convolution of two radially symmetric functions, relevant, e.g., to applications in computational biophotonics. In our survey of the numerical procedure we account for the circumstance that the objective function might result from a more complex measurement process and is, in the worst case, known only on a finite sequence of coordinate values. We contrast the performance of the resulting algorithms with a procedure based on a straightforward numerical quadrature of the underlying integral transform and assess its efficiency for two benchmark Fourier-Bessel pairs. An application to the problem of finite-size beam-shape convolution in polar coordinates, relevant in the context of tissue optics and optoacoustics, is used to illustrate the versatility and computational efficiency of the numerical procedure.
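
    The underlying principle, that the 2D Fourier transform of a radially symmetric function reduces to an order-zero Hankel (Fourier-Bessel) transform, so a polar convolution becomes a product in k-space, can be verified with a plain quadrature sketch like the one below. It is far less refined than the dFBT algorithms discussed in the paper and uses a Gaussian pair whose convolution is known in closed form.

```python
# Polar (2-D radial) convolution via the order-zero Hankel transform: the 2-D
# Fourier transform of a radially symmetric function reduces to a Hankel
# transform, so the convolution becomes a product in k-space. A quadrature
# sketch only, far less refined than the dFBT algorithms above.
import numpy as np
from scipy.special import j0

r = np.linspace(0.0, 20.0, 2000)
k = np.linspace(0.0, 10.0, 2000)

def hankel0(f_r):
    """F(k) = 2*pi * integral of f(r) J0(k r) r dr (radial 2-D forward FT)."""
    return 2 * np.pi * np.trapz(f_r * j0(np.outer(k, r)) * r, r, axis=1)

def inverse_hankel0(F_k):
    """f(r) = (1/(2*pi)) * integral of F(k) J0(k r) k dk."""
    return np.trapz(F_k * j0(np.outer(r, k)) * k, k, axis=1) / (2 * np.pi)

def polar_convolve(f_r, g_r):
    return inverse_hankel0(hankel0(f_r) * hankel0(g_r))

if __name__ == "__main__":
    f = np.exp(-r**2 / 2)                 # unit Gaussian beam profile
    conv = polar_convolve(f, f)
    # 2-D convolution of this Gaussian with itself is pi * exp(-r^2 / 4).
    expected = np.pi * np.exp(-r**2 / 4)
    print(np.max(np.abs(conv - expected)))
```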

  10. On the Convolution Equation Related to the Diamond Klein-Gordon Operator

    Directory of Open Access Journals (Sweden)

    Amphon Liangprom

    2011-01-01

    Full Text Available We study the distribution e^{αx}(♢+m²)^k δ for m ≥ 0, where (♢+m²)^k is the diamond Klein-Gordon operator iterated k times, δ is the Dirac delta distribution, x = (x₁, x₂, …, xₙ) is a variable in ℝⁿ, and α = (α₁, α₂, …, αₙ) is a constant. In particular, we study the application of e^{αx}(♢+m²)^k δ to solving certain convolution equations. We find that the type of solution of such a convolution equation, such as an ordinary function or a singular distribution, depends on the relationship between k and M.

  11. Finessing filter scarcity problem in face recognition via multi-fold filter convolution

    Science.gov (United States)

    Low, Cheng-Yaw; Teoh, Andrew Beng-Jin

    2017-06-01

    The deep convolutional neural networks for face recognition, from DeepFace to the recent FaceNet, demand a sufficiently large volume of filters for feature extraction, in addition to being deep. The shallow filter-bank approaches, e.g., principal component analysis network (PCANet), binarized statistical image features (BSIF), and other analogous variants, suffer from the filter scarcity problem: not all available PCA and ICA filters are discriminative enough to abstract noise-free features. This paper extends our previous work on multi-fold filter convolution (ℳ-FFC), where the pre-learned PCA and ICA filter sets are exponentially diversified by ℳ folds to instantiate PCA, ICA, and PCA-ICA offspring. The experimental results show that the 2-FFC operation resolves the filter scarcity problem. The 2-FFC descriptors are also shown to be superior to those of PCANet, BSIF, and other face descriptors in terms of rank-1 identification rate (%).
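
    One plausible reading of the filter-convolution idea, in which pre-learned filter banks are cross-convolved to spawn offspring filters, is sketched below with SciPy; this is only an illustrative guess at the ℳ-FFC operation, not the authors' formulation, and the filter banks are random stand-ins for learned PCA/ICA filters.

```python
# Rough illustration of diversifying two filter banks by cross-convolving them
# to spawn "offspring" filters. Only a plausible reading of the 2-fold
# filter-convolution idea, not the authors' M-FFC formulation; the banks are
# random stand-ins for pre-learned PCA/ICA filters.
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
pca_filters = [rng.standard_normal((5, 5)) for _ in range(4)]  # stand-in PCA bank
ica_filters = [rng.standard_normal((5, 5)) for _ in range(4)]  # stand-in ICA bank

def cross_convolve(bank_a, bank_b):
    """Every pair (a, b) yields one offspring filter a * b."""
    return [convolve2d(a, b, mode="full") for a in bank_a for b in bank_b]

offspring = (cross_convolve(pca_filters, pca_filters) +
             cross_convolve(ica_filters, ica_filters) +
             cross_convolve(pca_filters, ica_filters))
print(len(offspring), offspring[0].shape)   # 48 filters of size 9x9
```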

  12. Detection and recognition of bridge crack based on convolutional neural network

    Directory of Open Access Journals (Sweden)

    Honggong LIU

    2016-10-01

    Full Text Available Aiming at the outdated manual visual inspection of bridge cracks in China, which carries a high risk, a digital and intelligent detection method that improves diagnostic efficiency and reduces risk is studied. Combining machine vision and convolutional neural network technology, a Raspberry Pi is used to acquire and pre-process images, the crack images are analyzed, the processing algorithm with the best detection and recognition performance is selected, the convolutional neural network (CNN) for crack classification is optimized, and finally a new intelligent crack detection method is put forward. The experimental results show that the system can find all cracks beyond the maximum limit and effectively identify the type of fracture, with a recognition rate above 90%. The study provides reference data for engineering detection.

  13. Brain tumor grading based on Neural Networks and Convolutional Neural Networks.

    Science.gov (United States)

    Yuehao Pan; Weimin Huang; Zhiping Lin; Wanzheng Zhu; Jiayin Zhou; Wong, Jocelyn; Zhongxiang Ding

    2015-08-01

    This paper studies brain tumor grading using multiphase MRI images and compares the results of various configurations of deep learning structures and baseline Neural Networks. The MRI images are fed directly into the learning machine, with some combination operations between multiphase MRIs. Compared to other studies, which involve additional effort to design and choose feature sets, the approach used in this paper leverages the learning capability of the deep learning machine. We present the grading performance on the testing data measured by sensitivity and specificity. The results show a maximum improvement of 18% in grading performance, in terms of sensitivity and specificity, for Convolutional Neural Networks compared to Neural Networks. We also visualize the kernels trained in different layers and display some self-learned features obtained from the Convolutional Neural Networks.

  14. Investigations on the Potential of Convolutional Neural Networks for Vehicle Classification Based on RGB and LIDAR Data

    Science.gov (United States)

    Niessner, R.; Schilling, H.; Jutzi, B.

    2017-05-01

    In recent years, there has been a significant improvement in the detection, identification and classification of objects and images using Convolutional Neural Networks. To study the potential of Convolutional Neural Networks, three approaches to training CNN-based classifiers are investigated in this paper. These approaches allow Convolutional Neural Networks to be trained on datasets containing only a few hundred training samples, which results in a successful classification. Two of these approaches are based on the concept of transfer learning. In the first approach, features created by a pretrained Convolutional Neural Network are classified using a support vector machine. In the second approach, a pretrained Convolutional Neural Network is fine-tuned on a different dataset. The third approach includes the design and training of flat Convolutional Neural Networks from scratch. The evaluation of the proposed approaches is based on a dataset provided by the IEEE Geoscience and Remote Sensing Society (GRSS) which contains RGB and LiDAR data of an urban area. In this work it is shown that these Convolutional Neural Networks lead to classification results with high accuracy on both RGB and LiDAR data. Features derived from RGB data and transferred to LiDAR data by transfer learning lead to better classification results than RGB data alone. Using a neural network that contains fewer layers than common neural networks leads to the best classification results. In this framework, it is furthermore shown that the practical application of LiDAR images results in a better data basis for the classification of vehicles than the use of RGB images.
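
    The first approach above, classifying features from a pretrained Convolutional Neural Network with a support vector machine, can be sketched briefly with PyTorch and scikit-learn. The backbone choice, feature dimensionality, and random stand-in data below are assumptions; the study itself used the GRSS RGB/LiDAR benchmark.

```python
# Features from a pretrained CNN classified by a support vector machine.
# A minimal PyTorch/scikit-learn sketch with random stand-in data; the actual
# study used the GRSS RGB/LiDAR benchmark.
import numpy as np
import torch
from torchvision import models
from sklearn.svm import SVC

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()     # keep the 512-d penultimate features
backbone.eval()

def extract_features(images):         # images: (N, 3, 224, 224)
    with torch.no_grad():
        return backbone(images).numpy()

if __name__ == "__main__":
    X_img = torch.rand(40, 3, 224, 224)       # stand-in vehicle image patches
    y = np.random.randint(0, 3, 40)           # stand-in vehicle classes
    X = extract_features(X_img)
    clf = SVC(kernel="rbf", C=1.0).fit(X, y)
    print(clf.score(X, y))
```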

  15. An All-In-One Convolutional Neural Network for Face Analysis

    OpenAIRE

    Ranjan, Rajeev; Sankaranarayanan, Swami; Castillo, Carlos D.; Chellappa, Rama

    2016-01-01

    We present a multi-purpose algorithm for simultaneous face detection, face alignment, pose estimation, gender recognition, smile detection, age estimation and face recognition using a single deep convolutional neural network (CNN). The proposed method employs a multi-task learning framework that regularizes the shared parameters of CNN and builds a synergy among different domains and tasks. Extensive experiments show that the network has a better understanding of face and achieves state-of-th...

  16. First Steps Toward Incorporating Image Based Diagnostics Into Particle Accelerator Control Systems Using Convolutional Neural Networks

    OpenAIRE

    Edelen, A. L.; Biedron, S. G.; Milton, S. V.; Edelen, J. P.

    2016-01-01

    At present, a variety of image-based diagnostics are used in particle accelerator systems. Often times, these are viewed by a human operator who then makes appropriate adjustments to the machine. Given recent advances in using convolutional neural networks (CNNs) for image processing, it should be possible to use image diagnostics directly in control routines (NN-based or otherwise). This is especially appealing for non-intercepting diagnostics that could run continuously during beam operatio...

  17. A convolution integral representation of the thermal Sunyaev-Zel'dovich effect

    CERN Document Server

    Sandoval-Villalbazo, A

    2003-01-01

    Analytical expressions for the non-relativistic and relativistic Sunyaev-Zel'dovich effect (SZE) are derived by means of suitable convolution integrals. The establishment of these expressions is based on the fact that the SZE disturbed spectrum, at high frequencies, possesses the form of a Laplace transform of the single line distortion profile (structure factor). Implications of this description of the SZE related to light scattering in optically thin plasmas are discussed.

  18. A convolution integral representation of the thermal Sunyaev-Zel'dovich effect

    Energy Technology Data Exchange (ETDEWEB)

    Sandoval-Villalbazo, A [Departamento de Fisica y Matematicas, Universidad Iberoamericana, Lomas de Santa Fe 01210 Mexico DF (Mexico); Garcia-Colin, L S [Departamento de Fisica, Universidad Autonoma Metropolitana, Mexico DF, 09340 (Mexico)

    2003-04-25

    Analytical expressions for the non-relativistic and relativistic Sunyaev-Zel'dovich effect (SZE) are derived by means of suitable convolution integrals. The establishment of these expressions is based on the fact that the SZE disturbed spectrum, at high frequencies, possesses the form of a Laplace transform of the single line distortion profile (structure factor). Implications of this description of the SZE related to light scattering in optically thin plasmas are discussed.

  19. Deep Self-Convolutional Activations Descriptor for Dense Cross-Modal Correspondence

    OpenAIRE

    Kim, Seungryong; Min, Dongbo; Lin, Stephen; Sohn, Kwanghoon

    2016-01-01

    We present a novel descriptor, called deep self-convolutional activations (DeSCA), designed for establishing dense correspondences between images taken under different imaging modalities, such as different spectral ranges or lighting conditions. Motivated by descriptors based on local self-similarity (LSS), we formulate a novel descriptor by leveraging LSS in a deep architecture, leading to better discriminative power and greater robustness to non-rigid image deformations than state-of-the-ar...

  20. Analytic continuation of solutions of some nonlinear convolution partial differential equations

    Directory of Open Access Journals (Sweden)

    Hidetoshi Tahara

    2015-01-01

    Full Text Available The paper considers a problem of analytic continuation of solutions of some nonlinear convolution partial differential equations which naturally appear in the summability theory of formal solutions of nonlinear partial differential equations. Under a suitable assumption it is proved that any local holomorphic solution has an analytic extension to a certain sector and its extension has exponential growth when the variable goes to infinity in the sector.