WorldWideScience

Sample records for response gaussian convolution

  1. Study of asymmetry in motor areas related to handedness using the fMRI BOLD response Gaussian convolution model

    International Nuclear Information System (INIS)

    Gao Qing; Chen Huafu; Gong Qiyong

    2009-01-01

    Brain asymmetry is a phenomenon well known for handedness, and has been studied in the motor cortex. However, few studies have quantitatively assessed the asymmetrical cortical activities for handedness in motor areas. In the present study, we systematically and quantitatively investigated asymmetry in the left and right primary motor cortices during sequential finger movements using the Gaussian convolution model approach based on the functional magnetic resonance imaging (fMRI) blood oxygenation level dependent (BOLD) response. Six right-handed and six left-handed subjects were recruited to perform three types of hand movement tasks. The results for the expected value of the Gaussian convolution model showed that it took the dominant hand a longer average interval of response delay regardless of the handedness and bi- or uni-manual performance. The results for the standard deviation of the Gaussian model suggested that in the mass neurons, these intervals of the dominant hand were much more variable than those of the non-dominant hand. When comparing bi-manual movement conditions with uni-manual movement conditions in the primary motor cortex (PMC), both the expected value and standard deviation in the Gaussian function were significantly smaller (p < 0.05) in the bi-manual conditions, showing that the movement of the non-dominant hand influenced that of the dominant hand.
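
    As a rough illustration of the modelling idea (not the authors' code), the sketch below convolves a toy block-design stimulus with a Gaussian kernel and fits the kernel's expected value (response delay) and standard deviation from a noisy synthetic BOLD time course; the TR, block timing and parameter values are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

TR = 2.0                                   # assumed repetition time (s)
t = np.arange(0, 120, TR)                  # scan times
stimulus = ((t % 30) < 15).astype(float)   # toy 15 s on / 15 s off block design

def gaussian_convolution_model(t, mu, sigma, beta):
    """Predicted BOLD signal: stimulus convolved with a Gaussian kernel N(mu, sigma^2)."""
    dt = t[1] - t[0]
    k = np.arange(0, 30, dt)               # kernel support (s)
    kernel = np.exp(-0.5 * ((k - mu) / sigma) ** 2)
    kernel /= kernel.sum()
    return beta * np.convolve(stimulus, kernel)[: len(t)]

# synthetic "measured" BOLD time course with noise
rng = np.random.default_rng(0)
y = gaussian_convolution_model(t, 6.0, 2.5, 1.0) + 0.05 * rng.standard_normal(len(t))

# fit mu (average response delay) and sigma (its variability), the quantities compared across hands
(mu_hat, sigma_hat, beta_hat), _ = curve_fit(gaussian_convolution_model, t, y, p0=[5.0, 2.0, 1.0])
print(f"expected value (delay) = {mu_hat:.2f} s, standard deviation = {sigma_hat:.2f} s")
```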

  2. Study of asymmetry in motor areas related to handedness using the fMRI BOLD response Gaussian convolution model

    Energy Technology Data Exchange (ETDEWEB)

    Gao Qing [School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 610054 (China); School of Applied Mathematics, University of Electronic Science and Technology of China, Chengdu 610054 (China); Chen Huafu [School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 610054 (China); School of Applied Mathematics, University of Electronic Science and Technology of China, Chengdu 610054 (China)], E-mail: Chenhf@uestc.edu.cn; Gong Qiyong [Huaxi MR Research Center (HMRRC), Department of Radiology, West China Hospital of Sichuan University, Chengdu 610041 (China)

    2009-10-30

    Brain asymmetry is a phenomenon well known for handedness, and has been studied in the motor cortex. However, few studies have quantitatively assessed the asymmetrical cortical activities for handedness in motor areas. In the present study, we systematically and quantitatively investigated asymmetry in the left and right primary motor cortices during sequential finger movements using the Gaussian convolution model approach based on the functional magnetic resonance imaging (fMRI) blood oxygenation level dependent (BOLD) response. Six right-handed and six left-handed subjects were recruited to perform three types of hand movement tasks. The results for the expected value of the Gaussian convolution model showed that it took the dominant hand a longer average interval of response delay regardless of the handedness and bi- or uni-manual performance. The results for the standard deviation of the Gaussian model suggested that in the mass neurons, these intervals of the dominant hand were much more variable than those of the non-dominant hand. When comparing bi-manual movement conditions with uni-manual movement conditions in the primary motor cortex (PMC), both the expected value and standard deviation in the Gaussian function were significantly smaller (p < 0.05) in the bi-manual conditions, showing that the movement of the non-dominant hand influenced that of the dominant hand.

  3. The Gaussian streaming model and convolution Lagrangian effective field theory

    Energy Technology Data Exchange (ETDEWEB)

    Vlah, Zvonimir [Stanford Institute for Theoretical Physics and Department of Physics, Stanford University, Stanford, CA 94306 (United States); Castorina, Emanuele; White, Martin, E-mail: zvlah@stanford.edu, E-mail: ecastorina@berkeley.edu, E-mail: mwhite@berkeley.edu [Department of Physics, University of California, Berkeley, CA 94720 (United States)

    2016-12-01

    We update the ingredients of the Gaussian streaming model (GSM) for the redshift-space clustering of biased tracers using the techniques of Lagrangian perturbation theory, effective field theory (EFT) and a generalized Lagrangian bias expansion. After relating the GSM to the cumulant expansion, we present new results for the real-space correlation function, mean pairwise velocity and pairwise velocity dispersion including counter terms from EFT and bias terms through third order in the linear density, its leading derivatives and its shear up to second order. We discuss the connection to the Gaussian peaks formalism. We compare the ingredients of the GSM to a suite of large N-body simulations, and show the performance of the theory on the low order multipoles of the redshift-space correlation function and power spectrum. We highlight the importance of a general biasing scheme, which we find to be as important as higher-order corrections due to non-linear evolution for the halos we consider on the scales of interest to us.
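
    The following is a minimal numerical sketch of the Gaussian streaming integral itself, with toy stand-ins for the real-space correlation function, mean pairwise velocity and dispersion rather than the CLEFT-computed ingredients of the paper; all functional forms and numbers below are illustrative assumptions.

```python
import numpy as np

# toy ingredients standing in for the perturbation-theory ones in the paper
xi    = lambda r: (r / 5.0) ** (-1.8)                   # real-space correlation function
v12   = lambda r: -1.5 * r / (1.0 + (r / 10.0) ** 2)    # mean pairwise (infall) velocity
sig12 = lambda r, mu: 4.0                               # constant pairwise dispersion

def xi_s(s_perp, s_par, ymax=80.0, ny=4000):
    """Gaussian streaming model: integrate [1 + xi(r)] against a Gaussian in the
    line-of-sight pair separation, with mean mu*v12(r) and variance sig12(r, mu)^2."""
    y = np.linspace(-ymax, ymax, ny)        # real-space line-of-sight separation
    r = np.sqrt(s_perp ** 2 + y ** 2)
    mu = y / r
    var = sig12(r, mu) ** 2
    gauss = np.exp(-0.5 * (s_par - y - mu * v12(r)) ** 2 / var) / np.sqrt(2 * np.pi * var)
    return np.trapz((1.0 + xi(r)) * gauss, y) - 1.0

print(xi_s(s_perp=10.0, s_par=10.0))        # redshift-space correlation at one (s_perp, s_par)
```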

  4. Exact analytical solution of the convolution integral equation for a general profile fitting function and Gaussian detector kernel

    International Nuclear Information System (INIS)

    Garcia-Vicente, F.; Rodriguez, C.

    2000-01-01

    One of the most important aspects in the metrology of radiation fields is the problem of the measurement of dose profiles in regions where the dose gradient is large. In such zones, the 'detector size effect' may produce experimental measurements that do not correspond to reality. Mathematically it can be proved, under some general assumptions of spatial linearity, that the disturbance induced in the measurement by the effect of the finite size of the detector is equal to the convolution of the real profile with a representative kernel of the detector. In this work the exact relation between the measured profile and the real profile is shown, through the analytical resolution of the integral equation for a general type of profile fitting function using Gaussian convolution kernels. (author)
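
    A small numerical illustration of the detector size effect described here: a synthetic profile with sharp penumbrae is convolved with a Gaussian detector kernel and the 20%-80% penumbra width is compared before and after blurring. The profile shape and kernel width are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.special import erf

x = np.linspace(-30, 30, 1201)                       # off-axis position (mm)

# toy "real" dose profile with sharp (error-function) penumbrae
real = 0.5 * (erf((x + 10) / 1.0) - erf((x - 10) / 1.0))

# Gaussian detector kernel, sigma of the order of the detector radius (illustrative)
sigma = 2.0
kernel = np.exp(-0.5 * (x / sigma) ** 2)
kernel /= kernel.sum()

measured = np.convolve(real, kernel, mode="same")    # what a finite-size detector reports

def penumbra_8020(profile):
    """20%-80% penumbra width on the rising (left) edge."""
    sel = (x > -15) & (x < 0)                        # window around the left penumbra
    x20 = np.interp(0.2, profile[sel], x[sel])
    x80 = np.interp(0.8, profile[sel], x[sel])
    return x80 - x20

print("real penumbra    %.2f mm" % penumbra_8020(real))
print("blurred penumbra %.2f mm" % penumbra_8020(measured))
```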

  5. Equivalent non-Gaussian excitation method for response moment calculation of systems under non-Gaussian random excitation

    International Nuclear Information System (INIS)

    Tsuchida, Takahiro; Kimura, Koji

    2015-01-01

    An equivalent non-Gaussian excitation method is proposed to obtain the moments up to the fourth order of the response of systems under non-Gaussian random excitation. The excitation is prescribed by its probability density and power spectrum. Moment equations for the response can be derived from the stochastic differential equations for the excitation and the system. However, the moment equations are not closed due to the nonlinearity of the diffusion coefficient in the equation for the excitation. In the proposed method, the diffusion coefficient is approximately replaced with an equivalent diffusion coefficient to obtain a closed set of moment equations. The square of the equivalent diffusion coefficient is expressed as a second-order polynomial. In order to demonstrate the validity of the method, a linear system subjected to non-Gaussian excitation with a generalized Gaussian distribution is analyzed. The results show the method is applicable to non-Gaussian excitation with widely different kurtosis and bandwidth. (author)

  6. Response moments of dynamic systems under non-Gaussian random excitation by the equivalent non-Gaussian excitation method

    International Nuclear Information System (INIS)

    Tsuchida, Takahiro; Kimura, Koji

    2016-01-01

    An equivalent non-Gaussian excitation method is proposed to obtain the response moments up to the 4th order of dynamic systems under non-Gaussian random excitation. The non-Gaussian excitation is prescribed by its probability density and power spectrum, and is described by an Ito stochastic differential equation. Generally, moment equations for the response, which are derived from the governing equations for the excitation and the system, are not closed due to the nonlinearity of the diffusion coefficient in the equation for the excitation, even though the system is linear. In the equivalent non-Gaussian excitation method, the diffusion coefficient is approximately replaced with an equivalent diffusion coefficient to obtain a closed set of moment equations. The square of the equivalent diffusion coefficient is expressed by a quadratic polynomial. In numerical examples, a linear system subjected to non-Gaussian excitations with bimodal and Rayleigh distributions is analyzed by using the present method. The results show that the method yields the variance, skewness and kurtosis of the response with high accuracy for non-Gaussian excitations with widely different probability densities and bandwidths. The statistical moments of the equivalent non-Gaussian excitation are also investigated to describe the features of the method. (paper)

  7. Response of MDOF strongly nonlinear systems to fractional Gaussian noises.

    Science.gov (United States)

    Deng, Mao-Lin; Zhu, Wei-Qiu

    2016-08-01

    In the present paper, multi-degree-of-freedom strongly nonlinear systems are modeled as quasi-Hamiltonian systems and the stochastic averaging method for quasi-Hamiltonian systems (including quasi-non-integrable, completely integrable and non-resonant, completely integrable and resonant, partially integrable and non-resonant, and partially integrable and resonant Hamiltonian systems) driven by fractional Gaussian noise is introduced. The averaged fractional stochastic differential equations (SDEs) are derived. The simulation results for some examples show that the averaged SDEs can be used to predict the response of the original systems and the simulation time for the averaged SDEs is less than that for the original systems.
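
    A minimal sketch of the kind of simulation such averaged equations are checked against: fractional Gaussian noise generated exactly from its covariance (via a Cholesky factor) driving a single strongly nonlinear oscillator by direct Euler integration. The oscillator, Hurst index and intensities are illustrative assumptions; this is not the stochastic averaging method itself.

```python
import numpy as np

def fgn(n, H, dt, rng):
    """Exact fractional Gaussian noise increments via Cholesky of the fGn covariance."""
    k = np.arange(n)
    gamma = 0.5 * (np.abs(k + 1) ** (2 * H) - 2 * np.abs(k) ** (2 * H) + np.abs(k - 1) ** (2 * H))
    cov = gamma[np.abs(k[:, None] - k[None, :])]
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))
    return (dt ** H) * (L @ rng.standard_normal(n))   # fBm increments over steps of size dt

rng = np.random.default_rng(1)
dt, n, H = 0.01, 1000, 0.7
dB = fgn(n, H, dt, rng)

# single-DOF Duffing-type strongly nonlinear oscillator driven by the fGn increments
x, v = 0.0, 0.0
traj = np.empty(n)
for i in range(n):
    a = -0.2 * v - x - 2.0 * x ** 3          # light damping + strongly nonlinear restoring force
    x += v * dt
    v += a * dt + 0.5 * dB[i]                # excitation intensity 0.5 (illustrative)
    traj[i] = x

print("response std over the run:", traj.std())
```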

  8. Response of MDOF strongly nonlinear systems to fractional Gaussian noises

    International Nuclear Information System (INIS)

    Deng, Mao-Lin; Zhu, Wei-Qiu

    2016-01-01

    In the present paper, multi-degree-of-freedom strongly nonlinear systems are modeled as quasi-Hamiltonian systems and the stochastic averaging method for quasi-Hamiltonian systems (including quasi-non-integrable, completely integrable and non-resonant, completely integrable and resonant, partially integrable and non-resonant, and partially integrable and resonant Hamiltonian systems) driven by fractional Gaussian noise is introduced. The averaged fractional stochastic differential equations (SDEs) are derived. The simulation results for some examples show that the averaged SDEs can be used to predict the response of the original systems and the simulation time for the averaged SDEs is less than that for the original systems.

  9. Response of MDOF strongly nonlinear systems to fractional Gaussian noises

    Energy Technology Data Exchange (ETDEWEB)

    Deng, Mao-Lin; Zhu, Wei-Qiu, E-mail: wqzhu@zju.edu.cn [Department of Mechanics, State Key Laboratory of Fluid Power and Mechatronic Systems, Key Laboratory of Soft Machines and Smart Devices of Zhejiang Province, Zhejiang University, Hangzhou 310027 (China)

    2016-08-15

    In the present paper, multi-degree-of-freedom strongly nonlinear systems are modeled as quasi-Hamiltonian systems and the stochastic averaging method for quasi-Hamiltonian systems (including quasi-non-integrable, completely integrable and non-resonant, completely integrable and resonant, partially integrable and non-resonant, and partially integrable and resonant Hamiltonian systems) driven by fractional Gaussian noise is introduced. The averaged fractional stochastic differential equations (SDEs) are derived. The simulation results for some examples show that the averaged SDEs can be used to predict the response of the original systems and the simulation time for the averaged SDEs is less than that for the original systems.

  10. Characterisation of random Gaussian and non-Gaussian stress processes in terms of extreme responses

    Directory of Open Access Journals (Sweden)

    Colin Bruno

    2015-01-01

    Full Text Available In the field of military land vehicles, random vibration processes generated by all-terrain wheeled vehicles in motion are not classical stochastic processes with a stationary and Gaussian nature. Non-stationarity of processes induced by the variability of the vehicle speed does not form a major difficulty because the designer can have good control over the vehicle speed by characterising the histogram of instantaneous speed of the vehicle during an operational situation. Beyond this non-stationarity problem, the hard point clearly lies in the fact that the random processes are not Gaussian and are generated mainly by the non-linear behaviour of the undercarriage and the strong occurrence of shocks generated by roughness of the terrain. This non-Gaussian nature is expressed particularly by very high flattening levels that can affect the design of structures under extreme stresses conventionally acquired by spectral approaches, inherent to Gaussian processes and based essentially on spectral moments of stress processes. Due to these technical considerations, techniques for characterisation of random excitation processes generated by this type of carrier need to be changed, by proposing innovative characterisation methods based on time domain approaches as described in the body of the text rather than spectral domain approaches.
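
    The point about spectral approaches missing the non-Gaussian character can be illustrated with two signals that share approximately the same spectrum but differ strongly in kurtosis, a typical time-domain indicator used in such characterisations; the filter and shock parameters below are arbitrary choices, not values from the study.

```python
import numpy as np
from scipy import signal, stats

rng = np.random.default_rng(0)
n, fs = 200_000, 1000.0

# Gaussian stress process: white noise through a band-pass filter
b, a = signal.butter(4, [5, 50], btype="bandpass", fs=fs)
gauss = signal.lfilter(b, a, rng.standard_normal(n))

# non-Gaussian process with (nearly) the same spectrum: sparse shocks added before filtering
shocks = rng.standard_normal(n) * (rng.random(n) < 0.002) * 20.0
non_gauss = signal.lfilter(b, a, rng.standard_normal(n) + shocks)

for name, u in [("Gaussian", gauss), ("non-Gaussian", non_gauss)]:
    print(f"{name:13s} kurtosis = {stats.kurtosis(u, fisher=False):5.2f}  "
          f"peak/std = {np.max(np.abs(u)) / np.std(u):5.2f}")
```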

  11. Multiple Response Regression for Gaussian Mixture Models with Known Labels.

    Science.gov (United States)

    Lee, Wonyul; Du, Ying; Sun, Wei; Hayes, D Neil; Liu, Yufeng

    2012-12-01

    Multiple response regression is a useful regression technique to model multiple response variables using the same set of predictor variables. Most existing methods for multiple response regression are designed for modeling homogeneous data. In many applications, however, one may have heterogeneous data where the samples are divided into multiple groups. Our motivating example is a cancer dataset where the samples belong to multiple cancer subtypes. In this paper, we consider modeling the data coming from a mixture of several Gaussian distributions with known group labels. A naive approach is to split the data into several groups according to the labels and model each group separately. Although it is simple, this approach ignores potential common structures across different groups. We propose new penalized methods to model all groups jointly in which the common and unique structures can be identified. The proposed methods estimate the regression coefficient matrix, as well as the conditional inverse covariance matrix of response variables. Asymptotic properties of the proposed methods are explored. Through numerical examples, we demonstrate that both estimation and prediction can be improved by modeling all groups jointly using the proposed methods. An application to a glioblastoma cancer dataset reveals some interesting common and unique gene relationships across different cancer subtypes.

  12. Technical Note: Impact of the geometry dependence of the ion chamber detector response function on a convolution-based method to address the volume averaging effect

    Energy Technology Data Exchange (ETDEWEB)

    Barraclough, Brendan; Lebron, Sharon [Department of Radiation Oncology, University of Florida, Gainesville, Florida 32608 and J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, Florida 32611 (United States); Li, Jonathan G.; Fan, Qiyong; Liu, Chihray; Yan, Guanghua, E-mail: yangua@shands.ufl.edu [Department of Radiation Oncology, University of Florida, Gainesville, Florida 32608 (United States)

    2016-05-15

    Purpose: To investigate the geometry dependence of the detector response function (DRF) of three commonly used scanning ionization chambers and its impact on a convolution-based method to address the volume averaging effect (VAE). Methods: A convolution-based approach has been proposed recently to address the ionization chamber VAE. It simulates the VAE in the treatment planning system (TPS) by iteratively convolving the calculated beam profiles with the DRF while optimizing the beam model. Since the convolved and the measured profiles are subject to the same VAE, the calculated profiles match the implicit “real” ones when the optimization converges. Three DRFs (Gaussian, Lorentzian, and parabolic function) were used for three ionization chambers (CC04, CC13, and SNC125c) in this study. Geometry dependent/independent DRFs were obtained by minimizing the difference between the ionization chamber-measured profiles and the diode-measured profiles convolved with the DRFs. These DRFs were used to obtain eighteen beam models for a commercial TPS. The accuracy of the beam models was evaluated by assessing the 20%–80% penumbra width difference (PWD) between the computed and diode-measured beam profiles. Results: The convolution-based approach was found to be effective for all three ionization chambers with significant improvement for all beam models. Up to 17% geometry dependence of the three DRFs was observed for the studied ionization chambers. With geometry dependent DRFs, the PWD was within 0.80 mm for the parabolic function and CC04 combination and within 0.50 mm for other combinations; with geometry independent DRFs, the PWD was within 1.00 mm for all cases. When using the Gaussian function as the DRF, accounting for geometry dependence led to marginal improvement (PWD < 0.20 mm) for CC04; the improvement ranged from 0.38 to 0.65 mm for CC13; for SNC125c, the improvement was slightly above 0.50 mm. Conclusions: Although all three DRFs were found adequate to
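
    A schematic version of the DRF-fitting step described in the Methods, using synthetic profiles and a Gaussian DRF only: the DRF width is obtained by minimizing the difference between a "chamber" profile and a "diode" profile convolved with the candidate DRF. The field size, penumbra and sigma values are assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import erf

x = np.arange(-40.0, 40.0, 0.2)                       # off-axis distance (mm)

def gaussian_drf(sigma):
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

# synthetic profiles: the diode is treated as (nearly) point-like; the chamber
# profile is the diode profile blurred by a "true" Gaussian DRF (sigma = 2.4 mm) plus noise
diode = 0.5 * (erf((x + 20) / 1.5) - erf((x - 20) / 1.5))
rng = np.random.default_rng(3)
chamber = np.convolve(diode, gaussian_drf(2.4), mode="same") + 0.002 * rng.standard_normal(x.size)

# recover the DRF width by minimizing || chamber - diode (x) DRF(sigma) ||^2
def cost(sigma):
    return np.sum((np.convolve(diode, gaussian_drf(sigma), mode="same") - chamber) ** 2)

res = minimize_scalar(cost, bounds=(0.5, 6.0), method="bounded")
print(f"fitted DRF sigma = {res.x:.2f} mm (true value 2.4 mm)")
```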

  13. A Data-Driven Response Virtual Sensor Technique with Partial Vibration Measurements Using Convolutional Neural Network

    Science.gov (United States)

    Sun, Shan-Bin; He, Yuan-Yuan; Zhou, Si-Da; Yue, Zhen-Jiang

    2017-01-01

    Measurement of dynamic responses plays an important role in structural health monitoring, damage detection and other fields of research. However, in aerospace engineering, the physical sensors are limited in the operational conditions of spacecraft, due to the severe environment in outer space. This paper proposes a virtual sensor model with partial vibration measurements using a convolutional neural network. The transmissibility function is employed as prior knowledge. A four-layer neural network with two convolutional layers, one fully connected layer, and an output layer is proposed as the predicting model. Numerical examples of two different structural dynamic systems demonstrate the performance of the proposed approach. The excellence of the novel technique is further indicated using a simply supported beam experiment comparing to a modal-model-based virtual sensor, which uses modal parameters, such as mode shapes, for estimating the responses of the faulty sensors. The results show that the presented data-driven response virtual sensor technique can predict structural response with high accuracy. PMID:29231868
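
    A compact sketch of the architecture described (two convolutional layers, one fully connected layer and an output layer), assuming PyTorch is available; the channel counts, window length and random training data below are placeholders for illustration, not the authors' configuration.

```python
import torch
import torch.nn as nn

class VirtualSensorCNN(nn.Module):
    """Two convolutional layers, one fully connected layer and an output layer,
    mapping windows of the measured channels to the response of an unmeasured channel."""
    def __init__(self, n_channels=4, window=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(),
        )
        self.fc = nn.Sequential(nn.Flatten(), nn.Linear(32 * window, 64), nn.ReLU())
        self.out = nn.Linear(64, window)      # predicted time window of the virtual sensor

    def forward(self, x):                     # x: (batch, channels, window)
        return self.out(self.fc(self.features(x)))

# illustrative training on random data (stand-in for measured/target vibration windows)
model = VirtualSensorCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(32, 4, 128)                   # measured channels
y = torch.randn(32, 128)                      # missing/"faulty" channel to reconstruct
for _ in range(5):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()
print("final training loss:", float(loss))
```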

  14. A Data-Driven Response Virtual Sensor Technique with Partial Vibration Measurements Using Convolutional Neural Network.

    Science.gov (United States)

    Sun, Shan-Bin; He, Yuan-Yuan; Zhou, Si-Da; Yue, Zhen-Jiang

    2017-12-12

    Measurement of dynamic responses plays an important role in structural health monitoring, damage detection and other fields of research. However, in aerospace engineering, the physical sensors are limited in the operational conditions of spacecraft, due to the severe environment in outer space. This paper proposes a virtual sensor model with partial vibration measurements using a convolutional neural network. The transmissibility function is employed as prior knowledge. A four-layer neural network with two convolutional layers, one fully connected layer, and an output layer is proposed as the predicting model. Numerical examples of two different structural dynamic systems demonstrate the performance of the proposed approach. The excellence of the novel technique is further indicated using a simply supported beam experiment comparing to a modal-model-based virtual sensor, which uses modal parameters, such as mode shapes, for estimating the responses of the faulty sensors. The results show that the presented data-driven response virtual sensor technique can predict structural response with high accuracy.

  15. Predicting Response to Neoadjuvant Chemotherapy with PET Imaging Using Convolutional Neural Networks.

    Directory of Open Access Journals (Sweden)

    Petros-Pavlos Ypsilantis

    Full Text Available Imaging of cancer with 18F-fluorodeoxyglucose positron emission tomography (18F-FDG PET) has become a standard component of diagnosis and staging in oncology, and is becoming more important as a quantitative monitor of individual response to therapy. In this article we investigate the challenging problem of predicting a patient's response to neoadjuvant chemotherapy from a single 18F-FDG PET scan taken prior to treatment. We take a "radiomics" approach whereby a large number of quantitative features are automatically extracted from pretherapy PET images in order to build a comprehensive quantification of the tumor phenotype. While the dominant methodology relies on hand-crafted texture features, we explore the potential of automatically learning low- to high-level features directly from PET scans. We report on a study that compares the performance of two competing radiomics strategies: an approach based on state-of-the-art statistical classifiers using over 100 quantitative imaging descriptors, including texture features as well as standardized uptake values, and a convolutional neural network, 3S-CNN, trained directly from PET scans by taking sets of adjacent intra-tumor slices. Our experimental results, based on a sample of 107 patients with esophageal cancer, provide initial evidence that convolutional neural networks have the potential to extract PET imaging representations that are highly predictive of response to therapy. On this dataset, 3S-CNN achieves an average 80.7% sensitivity and 81.6% specificity in predicting non-responders, and outperforms other competing predictive models.

  16. The neuronal response to electrical constant-amplitude pulse train stimulation: additive Gaussian noise.

    Science.gov (United States)

    Matsuoka, A J; Abbas, P J; Rubinstein, J T; Miller, C A

    2000-11-01

    Experimental results from humans and animals show that electrically evoked compound action potential (EAP) responses to constant-amplitude pulse train stimulation can demonstrate an alternating pattern, due to the combined effects of highly synchronized responses to electrical stimulation and refractory effects (Wilson et al., 1994). One way to improve signal representation is to reduce the level of across-fiber synchrony and hence, the level of the amplitude alternation. To accomplish this goal, we have examined EAP responses in the presence of Gaussian noise added to the pulse train stimulus. Addition of Gaussian noise at a level approximately -30 dB relative to EAP threshold to the pulse trains decreased the amount of alternation, indicating that stochastic resonance may be induced in the auditory nerve. The use of some type of conditioning stimulus such as Gaussian noise may provide a more 'normal' neural response pattern.

  17. Convolution based profile fitting

    International Nuclear Information System (INIS)

    Kern, A.; Coelho, A.A.; Cheary, R.W.

    2002-01-01

    Full text: In convolution based profile fitting, profiles are generated by convoluting functions together to form the observed profile shape. For a convolution of 'n' functions this process can be written as, Y(2θ) = F_1(2θ) ∗ F_2(2θ) ∗ … ∗ F_n(2θ). In powder diffractometry the functions F_i(2θ) can be interpreted as the aberration functions of the diffractometer, but in general any combination of appropriate functions for F_i(2θ) may be used in this context. Most direct convolution fitting methods are restricted to combinations of F_i(2θ) that can be convoluted analytically (e.g. GSAS) such as Lorentzians, Gaussians, the hat (impulse) function and the exponential function. However, software such as TOPAS is now available that can accurately convolute and refine a wide variety of profile shapes numerically, including user defined profiles, without the need to convolute analytically. Some of the most important advantages of modern convolution based profile fitting are: 1) virtually any peak shape and angle dependence can normally be described using minimal profile parameters in laboratory and synchrotron X-ray data as well as in CW and TOF neutron data. This is possible because numerical convolution and numerical differentiation is used within the refinement procedure so that a wide range of functions can easily be incorporated into the convolution equation; 2) it can use physically based diffractometer models by convoluting the instrument aberration functions. This can be done for most laboratory based X-ray powder diffractometer configurations including conventional divergent beam instruments, parallel beam instruments, and diffractometers used for asymmetric diffraction. It can also accommodate various optical elements (e.g. multilayers and monochromators) and detector systems (e.g. point and position sensitive detectors) and has already been applied to neutron powder diffraction systems (e.g. ANSTO) as well as synchrotron based
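
    A small numerical example of the core operation: several candidate aberration/emission functions are convolved together to form the observed peak shape. The particular functions and widths below are illustrative, not a model of any specific diffractometer.

```python
import numpy as np

two_theta = np.arange(-2.0, 2.0, 0.002)        # degrees 2-theta about the peak centre
dx = two_theta[1] - two_theta[0]

def normalise(f):
    return f / (f.sum() * dx)

gauss   = normalise(np.exp(-0.5 * (two_theta / 0.03) ** 2))      # e.g. emission/size broadening
lorentz = normalise(1.0 / (1.0 + (two_theta / 0.02) ** 2))       # e.g. strain/lifetime broadening
hat     = normalise((np.abs(two_theta) < 0.05).astype(float))    # e.g. receiving-slit aberration

# Y(2theta) = F_1 * F_2 * F_3 by numerical convolution, as in convolution based fitting
profile = gauss
for f in (lorentz, hat):
    profile = np.convolve(profile, f, mode="same") * dx

print("peak maximum:", profile.max(), " integrated area:", profile.sum() * dx)
```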

  18. Approximate bandpass and frequency response models of the difference of Gaussian filter

    Science.gov (United States)

    Birch, Philip; Mitra, Bhargav; Bangalore, Nagachetan M.; Rehman, Saad; Young, Rupert; Chatwin, Chris

    2010-12-01

    The Difference of Gaussian (DOG) filter is widely used in optics and image processing as, among other things, an edge detection and correlation filter. It has important biological applications and appears to be part of the mammalian vision system. In this paper we analyse the filter and provide details of the full width half maximum, bandwidth and frequency response in order to aid the full characterisation of its performance.
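
    A short sketch of the kind of characterisation discussed here: build a 1-D DoG, take its Fourier transform, and read off the peak frequency and half-maximum band of the (band-pass) response. The two Gaussian widths are arbitrary; the Fourier transform of a DoG is itself a difference of Gaussians, which is why the response is band-pass.

```python
import numpy as np

n, dx = 4096, 0.1
x = (np.arange(n) - n // 2) * dx
s1, s2 = 1.0, 1.6                               # centre and surround widths (s2 > s1), illustrative

dog = (np.exp(-x**2 / (2 * s1**2)) / (s1 * np.sqrt(2 * np.pi))
       - np.exp(-x**2 / (2 * s2**2)) / (s2 * np.sqrt(2 * np.pi)))

# magnitude frequency response: difference of two Gaussians in frequency -> band-pass
H = np.abs(np.fft.rfft(np.fft.ifftshift(dog))) * dx
f = np.fft.rfftfreq(n, d=dx)

peak = H.argmax()
band = np.where(H >= 0.5 * H[peak])[0]
print(f"peak response at {f[peak]:.3f} cycles/unit, "
      f"half-maximum band roughly {f[band[0]]:.3f}-{f[band[-1]]:.3f}")
```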

  19. Response of a Duffing–Rayleigh system with a fractional derivative under Gaussian white noise excitation

    International Nuclear Information System (INIS)

    Zhang Ran-Ran; Xu Wei; Yang Gui-Dong; Han Qun

    2015-01-01

    In this paper, we consider the response analysis of a Duffing–Rayleigh system with fractional derivative under Gaussian white noise excitation. A stochastic averaging procedure for this system is developed by using the generalized harmonic functions. First, the system state is approximated by a diffusive Markov process. Then, the stationary probability densities are derived from the averaged Itô stochastic differential equation of the system. The accuracy of the analytical results is validated by the results from the Monte Carlo simulation of the original system. Moreover, the effects of different system parameters and noise intensity on the response of the system are also discussed. (paper)

  20. A novel approach to assess the treatment response using Gaussian random field in PET

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Mengdie [Department of Biomedical Engineering, Tsinghua University, Beijing 100084, China and Center for Advanced Medical Imaging Science, Division of Nuclear Medicine and Molecular Imaging, Massachusetts General Hospital, Boston, Massachusetts 02114 (United States); Guo, Ning [Center for Advanced Medical Imaging Science, Division of Nuclear Medicine and Molecular Imaging, Massachusetts General Hospital, Boston, Massachusetts 02114 (United States); Hu, Guangshu; Zhang, Hui, E-mail: hzhang@mail.tsinghua.edu.cn, E-mail: li.quanzheng@mgh.harvard.edu [Department of Biomedical Engineering, Tsinghua University, Beijing 100084 (China); El Fakhri, Georges; Li, Quanzheng, E-mail: hzhang@mail.tsinghua.edu.cn, E-mail: li.quanzheng@mgh.harvard.edu [Center for Advanced Medical Imaging Science, Division of Nuclear Medicine and Molecular Imaging, Massachusetts General Hospital, Boston, Massachusetts 02114 and Department of Radiology, Harvard Medical School, Boston, Massachusetts 02115 (United States)

    2016-02-15

    Purpose: The assessment of early therapeutic response to anticancer therapy is vital for treatment planning and patient management in the clinic. With the development of personalized treatment plans, assessing early treatment response, especially before any anatomically apparent changes appear after treatment, has become an urgent clinical need. Positron emission tomography (PET) imaging plays an important role in clinical oncology for tumor detection, staging, and therapy response assessment. Many studies on therapy response involve interpretation of differences between two PET images, usually in terms of standardized uptake values (SUVs). However, the quantitative accuracy of this measurement is limited. This work proposes a statistically robust approach for therapy response assessment based on a Gaussian random field (GRF) to provide a statistically more meaningful scale to evaluate therapy effects. Methods: The authors propose a new criterion for therapeutic assessment by incorporating image noise into the traditional SUV method. An analytical method based on the approximate expressions of the Fisher information matrix was applied to model the variance of individual pixels in reconstructed images. A zero mean unit variance GRF under the null hypothesis (no response to therapy) was obtained by normalizing each pixel of the post-therapy image with the mean and standard deviation of the pretherapy image. The performance of the proposed method was evaluated by Monte Carlo simulation, where XCAT phantoms (128 × 128 pixels) with lesions of various diameters (2–6 mm), multiple tumor-to-background contrasts (3–10), and different changes in intensity (6.25%–30%) were used. The receiver operating characteristic curves and the corresponding areas under the curve were computed for both the proposed method and the traditional methods whose figure of merit is the percentage change of SUVs. The formula for the false positive rate (FPR) estimation was developed for the proposed therapy response
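
    A toy version of the normalisation step described in the Methods: the post-therapy image is converted to a zero-mean, unit-variance field using the pre-therapy image and a per-pixel noise model, and "response" is declared below a Gaussian threshold chosen for a target pixel-wise false positive rate. The images and the simple noise model are synthetic stand-ins for the Fisher-information-based variance used in the paper.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
shape = (128, 128)

# synthetic pre-/post-therapy uptake images: background 1.0, a small lesion that responds
pre_mean = np.ones(shape);  pre_mean[60:66, 60:66] = 4.0
post_mean = pre_mean.copy(); post_mean[60:66, 60:66] = 2.5       # uptake drop in the lesion
pre_std = 0.15 * np.sqrt(pre_mean)            # assumed stand-in for the reconstruction-noise model
pre  = rng.normal(pre_mean,  pre_std)
post = rng.normal(post_mean, pre_std)

# zero-mean, unit-variance field under the null hypothesis "no response":
# the difference of two equally noisy images has per-pixel variance 2*sigma^2
z = (post - pre) / (pre_std * np.sqrt(2))

# declare "response" where z drops below the Gaussian threshold for a 0.1% pixel-wise FPR
threshold = norm.ppf(0.001)
response_mask = z < threshold
print("flagged pixels:", int(response_mask.sum()), " (lesion occupies 36 pixels)")
```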

  1. Fundamentals of convolutional coding

    CERN Document Server

    Johannesson, Rolf

    2015-01-01

    Fundamentals of Convolutional Coding, Second Edition, regarded as a bible of convolutional coding, brings you a clear and comprehensive discussion of the basic principles of this field. It includes two new chapters on low-density parity-check (LDPC) convolutional codes and iterative coding; Viterbi, BCJR, BEAST, list, and sequential decoding of convolutional codes; distance properties of convolutional codes; and a downloadable solutions manual.

  2. Fast Convolution Module

    National Research Council Canada - National Science Library

    Bierens, L

    1997-01-01

    This report describes the design and realisation of a real-time range azimuth compression module, the so-called 'Fast Convolution Module', based on the fast convolution algorithm developed at TNO-FEL...

  3. Discriminative learning of receptive fields from responses to non-Gaussian stimulus ensembles.

    Science.gov (United States)

    Meyer, Arne F; Diepenbrock, Jan-Philipp; Happel, Max F K; Ohl, Frank W; Anemüller, Jörn

    2014-01-01

    Analysis of sensory neurons' processing characteristics requires simultaneous measurement of presented stimuli and concurrent spike responses. The functional transformation from high-dimensional stimulus space to the binary space of spike and non-spike responses is commonly described with linear-nonlinear models, whose linear filter component describes the neuron's receptive field. From a machine learning perspective, this corresponds to the binary classification problem of discriminating spike-eliciting from non-spike-eliciting stimulus examples. The classification-based receptive field (CbRF) estimation method proposed here adapts a linear large-margin classifier to optimally predict experimental stimulus-response data and subsequently interprets learned classifier weights as the neuron's receptive field filter. Computational learning theory provides a theoretical framework for learning from data and guarantees optimality in the sense that the risk of erroneously assigning a spike-eliciting stimulus example to the non-spike class (and vice versa) is minimized. Efficacy of the CbRF method is validated with simulations and for auditory spectro-temporal receptive field (STRF) estimation from experimental recordings in the auditory midbrain of Mongolian gerbils. Acoustic stimulation is performed with frequency-modulated tone complexes that mimic properties of natural stimuli, specifically non-Gaussian amplitude distribution and higher-order correlations. Results demonstrate that the proposed approach successfully identifies correct underlying STRFs, even in cases where second-order methods based on the spike-triggered average (STA) do not. Applied to small data samples, the method is shown to converge on smaller amounts of experimental recordings and with lower estimation variance than the generalized linear model and recent information theoretic methods. Thus, CbRF estimation may prove useful for investigation of neuronal processes in response to natural stimuli and

  4. Discriminative learning of receptive fields from responses to non-Gaussian stimulus ensembles.

    Directory of Open Access Journals (Sweden)

    Arne F Meyer

    Full Text Available Analysis of sensory neurons' processing characteristics requires simultaneous measurement of presented stimuli and concurrent spike responses. The functional transformation from high-dimensional stimulus space to the binary space of spike and non-spike responses is commonly described with linear-nonlinear models, whose linear filter component describes the neuron's receptive field. From a machine learning perspective, this corresponds to the binary classification problem of discriminating spike-eliciting from non-spike-eliciting stimulus examples. The classification-based receptive field (CbRF) estimation method proposed here adapts a linear large-margin classifier to optimally predict experimental stimulus-response data and subsequently interprets learned classifier weights as the neuron's receptive field filter. Computational learning theory provides a theoretical framework for learning from data and guarantees optimality in the sense that the risk of erroneously assigning a spike-eliciting stimulus example to the non-spike class (and vice versa) is minimized. Efficacy of the CbRF method is validated with simulations and for auditory spectro-temporal receptive field (STRF) estimation from experimental recordings in the auditory midbrain of Mongolian gerbils. Acoustic stimulation is performed with frequency-modulated tone complexes that mimic properties of natural stimuli, specifically non-Gaussian amplitude distribution and higher-order correlations. Results demonstrate that the proposed approach successfully identifies correct underlying STRFs, even in cases where second-order methods based on the spike-triggered average (STA) do not. Applied to small data samples, the method is shown to converge on smaller amounts of experimental recordings and with lower estimation variance than the generalized linear model and recent information theoretic methods. Thus, CbRF estimation may prove useful for investigation of neuronal processes in response to

  5. Stochastic response of van der Pol oscillator with two kinds of fractional derivatives under Gaussian white noise excitation

    International Nuclear Information System (INIS)

    Yang Yong-Ge; Xu Wei; Sun Ya-Hui; Gu Xu-Dong

    2016-01-01

    This paper aims to investigate the stochastic response of the van der Pol (VDP) oscillator with two kinds of fractional derivatives under Gaussian white noise excitation. First, the fractional VDP oscillator is replaced by an equivalent VDP oscillator without fractional derivative terms by using the generalized harmonic balance technique. Then, the stochastic averaging method is applied to the equivalent VDP oscillator to obtain the analytical solution. Finally, the analytical solutions are validated by numerical results from the Monte Carlo simulation of the original fractional VDP oscillator. The numerical results not only demonstrate the accuracy of the proposed approach but also show that the fractional order, the fractional coefficient and the intensity of Gaussian white noise play important roles in the responses of the fractional VDP oscillator. An interesting phenomenon we found is that the effects of the fractional order of the two kinds of fractional derivative terms on the fractional stochastic systems are completely opposite. (paper)

  6. Modelling Inverse Gaussian Data with Censored Response Values: EM versus MCMC

    Directory of Open Access Journals (Sweden)

    R. S. Sparks

    2011-01-01

    Full Text Available Low detection limits are common in measured environmental variables. Building models using data containing low or high detection limits without adjusting for the censoring produces biased models. This paper offers approaches to estimate an inverse Gaussian distribution when some of the data used are censored because of low or high detection limits. Adjustments for the censoring can be made if there is between 2% and 20% censoring using either the EM algorithm or MCMC. This paper compares these approaches.
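
    The sketch below is neither the paper's EM nor its MCMC implementation, but a direct maximum-likelihood illustration of the same censoring adjustment: observations below a detection limit contribute to the likelihood through the CDF rather than the density, using SciPy's inverse Gaussian parameterisation (shape mu, scale). The parameter values and detection limit are illustrative assumptions.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(7)
mu_true, scale_true, dl = 0.5, 2.0, 0.4          # shape, scale, detection limit (illustrative)
x = stats.invgauss.rvs(mu_true, scale=scale_true, size=500, random_state=rng)
censored = x < dl                                # low-detection-limit (left) censoring
obs = np.where(censored, dl, x)
print(f"censoring fraction: {censored.mean():.1%}")

def negloglik(params):
    mu, scale = np.exp(params)                   # log-parameterisation keeps both positive
    ll = stats.invgauss.logpdf(obs[~censored], mu, scale=scale).sum()
    ll += stats.invgauss.logcdf(dl, mu, scale=scale) * censored.sum()
    return -ll

res = optimize.minimize(negloglik, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
mu_hat, scale_hat = np.exp(res.x)
print(f"fitted shape mu = {mu_hat:.3f} (true {mu_true}), scale = {scale_hat:.3f} (true {scale_true})")
```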

  7. DCMDN: Deep Convolutional Mixture Density Network

    Science.gov (United States)

    D'Isanto, Antonio; Polsterer, Kai Lars

    2017-09-01

    Deep Convolutional Mixture Density Network (DCMDN) estimates probabilistic photometric redshift directly from multi-band imaging data by combining a version of a deep convolutional network with a mixture density network. The estimates are expressed as Gaussian mixture models representing the probability density functions (PDFs) in the redshift space. In addition to the traditional scores, the continuous ranked probability score (CRPS) and the probability integral transform (PIT) are applied as performance criteria. DCMDN is able to predict redshift PDFs independently from the type of source, e.g. galaxies, quasars or stars and renders pre-classification of objects and feature extraction unnecessary; the method is extremely general and allows the solving of any kind of probabilistic regression problems based on imaging data, such as estimating metallicity or star formation rate in galaxies.

  8. Supervised Convolutional Sparse Coding

    KAUST Repository

    Affara, Lama Ahmed

    2018-04-08

    Convolutional Sparse Coding (CSC) is a well-established image representation model especially suited for image restoration tasks. In this work, we extend the applicability of this model by proposing a supervised approach to convolutional sparse coding, which aims at learning discriminative dictionaries instead of purely reconstructive ones. We incorporate a supervised regularization term into the traditional unsupervised CSC objective to encourage the final dictionary elements to be discriminative. Experimental results show that using supervised convolutional learning results in two key advantages. First, we learn more semantically relevant filters in the dictionary and second, we achieve improved image reconstruction on unseen data.

  9. Consistent haul road condition monitoring by means of vehicle response normalisation with Gaussian processes

    CSIR Research Space (South Africa)

    Heyns, T

    2012-12-01

    Full Text Available Suboptimal haul road management policies such as routine, periodic and urgent maintenance may result in unnecessary cost, both to roads and vehicles. A recent idea is to continually assess haul road condition based on measured vehicle response...

  10. Decoding of visual activity patterns from fMRI responses using multivariate pattern analyses and convolutional neural network.

    Science.gov (United States)

    Zafar, Raheel; Kamel, Nidal; Naufal, Mohamad; Malik, Aamir Saeed; Dass, Sarat C; Ahmad, Rana Fayyaz; Abdullah, Jafri M; Reza, Faruque

    2017-01-01

    Decoding of human brain activity has always been a primary goal in neuroscience, especially with functional magnetic resonance imaging (fMRI) data. In recent years, the convolutional neural network (CNN) has become a popular method for feature extraction due to its higher accuracy; however, it needs a lot of computation and training data. In this study, an algorithm is developed using multivariate pattern analysis (MVPA) and a modified CNN to decode the behavior of the brain for different images with a limited data set. Selection of significant features is an important part of fMRI data analysis, since it reduces the computational burden and improves the prediction performance; significant features are selected using a t-test. MVPA uses machine learning algorithms to classify different brain states and helps in prediction during the task. A general linear model (GLM) is used to find the unknown parameters of every individual voxel and the classification is done using a multi-class support vector machine (SVM). The proposed MVPA-CNN based algorithm is compared with a region of interest (ROI) based method and MVPA based estimated values. The proposed method showed better overall accuracy (68.6%) compared to ROI (61.88%) and estimation values (64.17%).
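
    A minimal stand-in for the MVPA pipeline pieces named here, using scikit-learn on synthetic "voxel" data: univariate significance-based feature selection followed by a multi-class linear SVM (the ANOVA F-test plays the role of the t-test for the multi-class case). Data sizes and effect strengths are invented for illustration.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_trials, n_voxels, n_classes = 120, 2000, 4
y = np.repeat(np.arange(n_classes), n_trials // n_classes)
X = rng.standard_normal((n_trials, n_voxels))
X[:, :50] += 0.8 * y[:, None]                 # only the first 50 "voxels" carry class information

# univariate (t-test/F-test style) selection of significant voxels, then a multi-class SVM
clf = make_pipeline(StandardScaler(),
                    SelectKBest(f_classif, k=100),
                    SVC(kernel="linear", C=1.0, decision_function_shape="ovr"))
acc = cross_val_score(clf, X, y, cv=5)
print(f"decoding accuracy: {acc.mean():.2f} +/- {acc.std():.2f}")
```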

  11. Stochastic responses of Van der Pol vibro-impact system with fractional derivative damping excited by Gaussian white noise

    Energy Technology Data Exchange (ETDEWEB)

    Xiao, Yanwen; Xu, Wei, E-mail: weixu@nwpu.edu.cn; Wang, Liang [Department of Applied Mathematics, Northwestern Polytechnical University, Xi' an 710072 (China)

    2016-03-15

    This paper focuses on the study of the stochastic Van der Pol vibro-impact system with fractional derivative damping under Gaussian white noise excitation. The equations of the original system are simplified by non-smooth transformation. For the simplified equation, the stochastic averaging approach is applied to solve it. Then, the fractional derivative damping term is facilitated by a numerical scheme, therewith the fourth-order Runge-Kutta method is used to obtain the numerical results. And the numerical simulation results fit the analytical solutions. Therefore, the proposed analytical means to study this system are proved to be feasible. In this context, the effects on the response stationary probability density functions (PDFs) caused by noise excitation, restitution condition, and fractional derivative damping are considered, in addition the stochastic P-bifurcation is also explored in this paper through varying the value of the coefficient of fractional derivative damping and the restitution coefficient. These system parameters not only influence the response PDFs of this system but also can cause the stochastic P-bifurcation.

  12. Stochastic responses of Van der Pol vibro-impact system with fractional derivative damping excited by Gaussian white noise.

    Science.gov (United States)

    Xiao, Yanwen; Xu, Wei; Wang, Liang

    2016-03-01

    This paper focuses on the study of the stochastic Van der Pol vibro-impact system with fractional derivative damping under Gaussian white noise excitation. The equations of the original system are simplified by non-smooth transformation. For the simplified equation, the stochastic averaging approach is applied to solve it. Then, the fractional derivative damping term is facilitated by a numerical scheme, therewith the fourth-order Runge-Kutta method is used to obtain the numerical results. And the numerical simulation results fit the analytical solutions. Therefore, the proposed analytical means to study this system are proved to be feasible. In this context, the effects on the response stationary probability density functions (PDFs) caused by noise excitation, restitution condition, and fractional derivative damping are considered, in addition the stochastic P-bifurcation is also explored in this paper through varying the value of the coefficient of fractional derivative damping and the restitution coefficient. These system parameters not only influence the response PDFs of this system but also can cause the stochastic P-bifurcation.

  13. Convolution copula econometrics

    CERN Document Server

    Cherubini, Umberto; Mulinacci, Sabrina

    2016-01-01

    This book presents a novel approach to time series econometrics, which studies the behavior of nonlinear stochastic processes. This approach allows for an arbitrary dependence structure in the increments and provides a generalization with respect to the standard linear independent increments assumption of classical time series models. The book offers a solution to the problem of a general semiparametric approach, which is given by a concept called C-convolution (convolution of dependent variables), and the corresponding theory of convolution-based copulas. Intended for econometrics and statistics scholars with a special interest in time series analysis and copula functions (or other nonparametric approaches), the book is also useful for doctoral students with a basic knowledge of copula functions wanting to learn about the latest research developments in the field.

  14. Efficient convolutional sparse coding

    Science.gov (United States)

    Wohlberg, Brendt

    2017-06-20

    Computationally efficient algorithms may be applied for fast dictionary learning solving the convolutional sparse coding problem in the Fourier domain. More specifically, efficient convolutional sparse coding may be derived within an alternating direction method of multipliers (ADMM) framework that utilizes fast Fourier transforms (FFT) to solve the main linear system in the frequency domain. Such algorithms may enable a significant reduction in computational cost over conventional approaches by implementing a linear solver for the most critical and computationally expensive component of the conventional iterative algorithm. The theoretical computational cost of the algorithm may be reduced from O(M³N) to O(MN log N), where N is the dimensionality of the data and M is the number of elements in the dictionary. This significant improvement in efficiency may greatly increase the range of problems that can practically be addressed via convolutional sparse representations.

  15. Multithreaded implicitly dealiased convolutions

    Science.gov (United States)

    Roberts, Malcolm; Bowman, John C.

    2018-03-01

    Implicit dealiasing is a method for computing in-place linear convolutions via fast Fourier transforms that decouples work memory from input data. It offers easier memory management and, for long one-dimensional input sequences, greater efficiency than conventional zero-padding. Furthermore, for convolutions of multidimensional data, the segregation of data and work buffers can be exploited to reduce memory usage and execution time significantly. This is accomplished by processing and discarding data as it is generated, allowing work memory to be reused, for greater data locality and performance. A multithreaded implementation of implicit dealiasing that accepts an arbitrary number of input and output vectors and a general multiplication operator is presented, along with an improved one-dimensional Hermitian convolution that avoids the loop dependency inherent in previous work. An alternate data format that can accommodate a Nyquist mode and enhance cache efficiency is also proposed.
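
    For contrast with implicit dealiasing, the conventional approach it improves upon can be shown in a few lines: an unpadded FFT convolution is circular and therefore aliased, while explicit zero-padding to 2N-1 points recovers the exact linear convolution.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8
a, b = rng.standard_normal(N), rng.standard_normal(N)

direct = np.convolve(a, b)                       # exact linear convolution, length 2N-1

# unpadded (circular) FFT convolution: wrap-around aliasing contaminates the result
circular = np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real

# conventional explicit zero-padding to 2N-1 removes the aliasing
M = 2 * N - 1
padded = np.fft.ifft(np.fft.fft(a, M) * np.fft.fft(b, M)).real

print("aliasing error of unpadded FFT convolution:", np.abs(circular - direct[:N]).max())
print("error of zero-padded FFT convolution:      ", np.abs(padded - direct).max())
```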

  16. Gaussian process regression analysis for functional data

    CERN Document Server

    Shi, Jian Qing

    2011-01-01

    Gaussian Process Regression Analysis for Functional Data presents nonparametric statistical methods for functional regression analysis, specifically the methods based on a Gaussian process prior in a functional space. The authors focus on problems involving functional response variables and mixed covariates of functional and scalar variables.Covering the basics of Gaussian process regression, the first several chapters discuss functional data analysis, theoretical aspects based on the asymptotic properties of Gaussian process regression models, and new methodological developments for high dime
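
    A bare-bones Gaussian process regression in plain NumPy (squared-exponential kernel, fixed hyperparameters) as the basic building block behind the functional-response models discussed in the book; it is a sketch with assumed hyperparameters, not the book's methodology.

```python
import numpy as np

def rbf(a, b, length=0.3, var=1.0):
    """Squared-exponential (RBF) covariance between two sets of 1-D inputs."""
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

rng = np.random.default_rng(0)
x_train = np.sort(rng.uniform(0, 1, 15))
y_train = np.sin(2 * np.pi * x_train) + 0.1 * rng.standard_normal(15)
x_test = np.linspace(0, 1, 200)

noise = 0.1 ** 2
K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
K_s = rbf(x_test, x_train)
alpha = np.linalg.solve(K, y_train)

mean = K_s @ alpha                                   # posterior predictive mean
cov = rbf(x_test, x_test) - K_s @ np.linalg.solve(K, K_s.T)
std = np.sqrt(np.clip(np.diag(cov), 0, None))        # pointwise predictive std
print("max predictive std:", std.max())
```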

  17. Gradient Flow Convolutive Blind Source Separation

    DEFF Research Database (Denmark)

    Pedersen, Michael Syskind; Nielsen, Chinton Møller

    2004-01-01

    Experiments have shown that the performance of instantaneous gradient flow beamforming by Cauwenberghs et al. is reduced significantly in reverberant conditions. By expanding the gradient flow principle to convolutive mixtures, separation in a reverberant environment is possible. By use of a circular four-microphone array with a radius of 5 mm, and applying convolutive gradient flow instead of just applying instantaneous gradient flow, experimental results show that an improvement of up to around 14 dB can be achieved for simulated impulse responses and up to around 10 dB for a hearing aid...

  18. Intra-individual response variability assessed by ex-gaussian analysis may be a new endophenotype for Attention Deficit / Hyperactivity Disorder

    Directory of Open Access Journals (Sweden)

    Marcela Patricia Henríquez-Henríquez

    2015-01-01

    Full Text Available Intra-individual variability of Response Times (RTisv) is considered a potential endophenotype for Attention Deficit/Hyperactivity Disorder (ADHD). Traditional methods for estimating RTisv lose information regarding the distribution of Response Times (RTs) along the task, with eventual effects on statistical power. Ex-Gaussian analysis captures the dynamic nature of RTisv, estimating normal and exponential components of the RT distribution, with specific phenomenological correlates. Here, we applied ex-Gaussian analysis to explore whether intra-individual variability of RTs agrees with the criteria proposed by Gottesman and Gould for endophenotypes. Specifically, we evaluated whether normal and/or exponential components of RTs may (a) present the stair-like distribution expected for endophenotypes (ADHD > siblings > typically developing (TD) children without familial history of ADHD) and (b) represent a phenotypic correlate for previously described genetic risk variants. This is a pilot study including 55 subjects (20 ADHD-discordant sibling pairs and 15 TD children), all aged between 8 and 13 years. Participants performed a visual Go/Nogo task with 10% Nogo probability. Ex-Gaussian distributions were fitted to individual RT data and compared among the three samples. In order to test whether intra-individual variability may represent a correlate for previously described genetic risk variants, VNTRs at DRD4 and SLC6A3 were identified in all sibling pairs following standard protocols. Groups were compared by adjusting independent general linear models for the exponential and normal components from the ex-Gaussian analysis. Identified trends were confirmed by the non-parametric Jonckheere-Terpstra test. Stair-like distributions were observed for μ (p = 0.036) and σ (p = 0.009). An additional DRD4-genotype × clinical status interaction was present for τ (p = 0.014), reflecting a possible severity factor. Thus, normal and exponential RTisv components are suitable as ADHD endophenotypes.
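
    Ex-Gaussian parameters of a reaction-time sample can be recovered with SciPy's exponentially modified normal distribution; note that its shape parameter is K = tau/sigma, with loc = mu and scale = sigma. The "true" values below are invented for illustration, not data from the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mu, sigma, tau = 450.0, 60.0, 120.0           # ms, illustrative "true" ex-Gaussian parameters
rt = rng.normal(mu, sigma, 800) + rng.exponential(tau, 800)   # synthetic reaction times

# SciPy parameterisation: shape K = tau/sigma, loc = mu, scale = sigma
K, loc, scale = stats.exponnorm.fit(rt)
mu_hat, sigma_hat, tau_hat = loc, scale, K * scale
print(f"mu = {mu_hat:.0f} ms, sigma = {sigma_hat:.0f} ms, tau = {tau_hat:.0f} ms")
```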

  19. Convolution-deconvolution in DIGES

    International Nuclear Information System (INIS)

    Philippacopoulos, A.J.; Simos, N.

    1995-01-01

    Convolution and deconvolution operations are by all means a very important aspect of SSI analysis since they influence the input to the seismic analysis. This paper documents some of the convolution/deconvolution procedures which have been implemented into the DIGES code. The 1-D propagation of shear and dilatational waves in typical layered configurations involving a stack of layers overlying a rock is treated by DIGES in a similar fashion to that of available codes, e.g. CARES, SHAKE. For certain configurations, however, there is no need to perform such analyses since the corresponding solutions can be obtained in analytic form. Typical cases involve deposits which can be modeled by a uniform halfspace or simple layered halfspaces. For such cases DIGES uses closed-form solutions. These solutions are given for one as well as two dimensional deconvolution. The type of waves considered include P, SV and SH waves. The non-vertical incidence is given special attention since deconvolution can be defined differently depending on the problem of interest. For all wave cases considered, corresponding transfer functions are presented in closed-form. Transient solutions are obtained in the frequency domain. Finally, a variety of forms are considered for representing the free field motion both in terms of deterministic as well as probabilistic representations. These include (a) acceleration time histories, (b) response spectra, (c) Fourier spectra and (d) cross-spectral densities
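
    A generic frequency-domain convolution/deconvolution pair, with water-level regularisation to keep the spectral division stable, illustrating the operation discussed; the wavelet and layer response below are toy signals, not DIGES transfer functions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, dt = 1024, 0.01
t = np.arange(n) * dt

# "rock-outcrop" motion (a simple wavelet) and a toy soil-column impulse response
rock = np.exp(-((t - 1.0) / 0.05) ** 2) * np.sin(2 * np.pi * 8 * t)
h = np.exp(-t / 0.3) * np.sin(2 * np.pi * 3 * t)                      # resonant layer response
surface = np.fft.irfft(np.fft.rfft(rock) * np.fft.rfft(h), n) * dt    # convolution

# deconvolution: recover the input motion from the surface motion and the transfer function
H = np.fft.rfft(h) * dt
water = 0.05 * np.abs(H).max()                                 # water-level regularisation
H_reg = np.where(np.abs(H) < water, water * np.exp(1j * np.angle(H)), H)
rock_rec = np.fft.irfft(np.fft.rfft(surface) / H_reg, n)

print("relative reconstruction error:",
      np.linalg.norm(rock_rec - rock) / np.linalg.norm(rock))
```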

  20. Convolutional coding techniques for data protection

    Science.gov (United States)

    Massey, J. L.

    1975-01-01

    Results of research on the use of convolutional codes in data communications are presented. Convolutional coding fundamentals are discussed along with modulation and coding interaction. Concatenated coding systems and data compression with convolutional codes are described.
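
    To make "convolutional code" concrete, here is a minimal rate-1/2, constraint-length-3 encoder with the classic (7, 5) octal generators; it is a generic textbook example, not taken from the report.

```python
def conv_encode(bits, g1=0b111, g2=0b101, K=3):
    """Rate-1/2 convolutional encoder: each output pair is the modulo-2 convolution
    of the input with the two generator sequences (7 and 5 in octal)."""
    state = 0
    out = []
    for b in bits + [0] * (K - 1):              # append zeros to flush the encoder
        state = ((state << 1) | b) & ((1 << K) - 1)
        out.append(bin(state & g1).count("1") % 2)
        out.append(bin(state & g2).count("1") % 2)
    return out

message = [1, 0, 1, 1, 0, 0, 1]
print(conv_encode(message))                     # 2 output bits per input bit (plus tail)
```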

  1. The convolution transform

    CERN Document Server

    Hirschman, Isidore Isaac

    2005-01-01

    In studies of general operators of the same nature, general convolution transforms are immediately encountered as the objects of inversion. The relation between differential operators and integral transforms is the basic theme of this work, which is geared toward upper-level undergraduates and graduate students. It may be read easily by anyone with a working knowledge of real and complex variable theory. Topics include the finite and non-finite kernels, variation diminishing transforms, asymptotic behavior of kernels, real inversion theory, representation theory, the Weierstrass transform, and

  2. Non-gaussian turbulence

    Energy Technology Data Exchange (ETDEWEB)

    Hoejstrup, J [NEG Micon Project Development A/S, Randers (Denmark); Hansen, K S [Denmarks Technical Univ., Dept. of Energy Engineering, Lyngby (Denmark); Pedersen, B J [VESTAS Wind Systems A/S, Lem (Denmark); Nielsen, M [Risoe National Lab., Wind Energy and Atmospheric Physics, Roskilde (Denmark)

    1999-03-01

    The pdfs of atmospheric turbulence have somewhat wider tails than a Gaussian, especially regarding accelerations, whereas velocities are close to Gaussian. This behaviour is being investigated using data from a large WEB-database in order to quantify the amount of non-Gaussianity. Models for non-Gaussian turbulence have been developed, by which artificial turbulence can be generated with specified distributions, spectra and cross-correlations. The artificial time series will then be used in load models and the resulting loads in the Gaussian and the non-Gaussian cases will be compared. (au)
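
    One common way to generate artificial non-Gaussian turbulence with a prescribed spectrum and marginal distribution is a translation process: synthesize a Gaussian series with the target spectrum, then map it through a memoryless transform to a heavier-tailed distribution. This is a generic sketch, not the specific models developed in the report; the spectrum shape and Student-t marginal are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, dt = 2 ** 14, 0.1

# Gaussian series with an assumed Kaimal-like target spectrum
f = np.fft.rfftfreq(n, dt)
S = 1.0 / (1.0 + (f / 0.05) ** (5.0 / 3.0))
S[0] = 0.0
phases = np.exp(2j * np.pi * rng.random(f.size))
u_gauss = np.fft.irfft(np.sqrt(S) * phases, n)
u_gauss /= u_gauss.std()

# translation to a heavier-tailed marginal (Student-t) while keeping the rank order,
# which approximately preserves the spectrum
target = stats.t(df=5)
u_nongauss = target.ppf(stats.norm.cdf(u_gauss))

print("kurtosis: Gaussian %.2f, non-Gaussian %.2f"
      % (stats.kurtosis(u_gauss, fisher=False), stats.kurtosis(u_nongauss, fisher=False)))
```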

  3. Separating Underdetermined Convolutive Speech Mixtures

    DEFF Research Database (Denmark)

    Pedersen, Michael Syskind; Wang, DeLiang; Larsen, Jan

    2006-01-01

    A method is presented for underdetermined blind source separation of convolutive mixtures. The proposed framework is applicable for separation of instantaneous as well as convolutive speech mixtures. It is possible to iteratively extract each speech signal from the mixture by combining blind source separation...

  4. Strongly-MDS convolutional codes

    NARCIS (Netherlands)

    Gluesing-Luerssen, H; Rosenthal, J; Smarandache, R

    Maximum-distance separable (MDS) convolutional codes have the property that their free distance is maximal among all codes of the same rate and the same degree. In this paper, a class of MDS convolutional codes is introduced whose column distances reach the generalized Singleton bound at the

  5. Consensus Convolutional Sparse Coding

    KAUST Repository

    Choudhury, Biswarup

    2017-12-01

    Convolutional sparse coding (CSC) is a promising direction for unsupervised learning in computer vision. In contrast to recent supervised methods, CSC allows for convolutional image representations to be learned that are equally useful for high-level vision tasks and low-level image reconstruction and can be applied to a wide range of tasks without problem-specific retraining. Due to their extreme memory requirements, however, existing CSC solvers have so far been limited to low-dimensional problems and datasets using a handful of low-resolution example images at a time. In this paper, we propose a new approach to solving CSC as a consensus optimization problem, which lifts these limitations. By learning CSC features from large-scale image datasets for the first time, we achieve significant quality improvements in a number of imaging tasks. Moreover, the proposed method enables new applications in high-dimensional feature learning that has been intractable using existing CSC methods. This is demonstrated for a variety of reconstruction problems across diverse problem domains, including 3D multispectral demosaicing and 4D light field view synthesis.

  6. Consensus Convolutional Sparse Coding

    KAUST Repository

    Choudhury, Biswarup

    2017-04-11

    Convolutional sparse coding (CSC) is a promising direction for unsupervised learning in computer vision. In contrast to recent supervised methods, CSC allows for convolutional image representations to be learned that are equally useful for high-level vision tasks and low-level image reconstruction and can be applied to a wide range of tasks without problem-specific retraining. Due to their extreme memory requirements, however, existing CSC solvers have so far been limited to low-dimensional problems and datasets using a handful of low-resolution example images at a time. In this paper, we propose a new approach to solving CSC as a consensus optimization problem, which lifts these limitations. By learning CSC features from large-scale image datasets for the first time, we achieve significant quality improvements in a number of imaging tasks. Moreover, the proposed method enables new applications in high-dimensional feature learning that has been intractable using existing CSC methods. This is demonstrated for a variety of reconstruction problems across diverse problem domains, including 3D multispectral demosaicking and 4D light field view synthesis.

  7. Consensus Convolutional Sparse Coding

    KAUST Repository

    Choudhury, Biswarup; Swanson, Robin; Heide, Felix; Wetzstein, Gordon; Heidrich, Wolfgang

    2017-01-01

    Convolutional sparse coding (CSC) is a promising direction for unsupervised learning in computer vision. In contrast to recent supervised methods, CSC allows for convolutional image representations to be learned that are equally useful for high-level vision tasks and low-level image reconstruction and can be applied to a wide range of tasks without problem-specific retraining. Due to their extreme memory requirements, however, existing CSC solvers have so far been limited to low-dimensional problems and datasets using a handful of low-resolution example images at a time. In this paper, we propose a new approach to solving CSC as a consensus optimization problem, which lifts these limitations. By learning CSC features from large-scale image datasets for the first time, we achieve significant quality improvements in a number of imaging tasks. Moreover, the proposed method enables new applications in high-dimensional feature learning that has been intractable using existing CSC methods. This is demonstrated for a variety of reconstruction problems across diverse problem domains, including 3D multispectral demosaicing and 4D light field view synthesis.

  8. Prediction error variance and expected response to selection, when selection is based on the best predictor - for Gaussian and threshold characters, traits following a Poisson mixed model and survival traits

    DEFF Research Database (Denmark)

    Andersen, Anders Holst; Korsgaard, Inge Riis; Jensen, Just

    2002-01-01

    In this paper, we consider selection based on the best predictor of animal additive genetic values in Gaussian linear mixed models, threshold models, Poisson mixed models, and log normal frailty models for survival data (including models with time-dependent covariates with associated fixed...... or random effects). In the different models, expressions are given (when these can be found - otherwise unbiased estimates are given) for prediction error variance, accuracy of selection and expected response to selection on the additive genetic scale and on the observed scale. The expressions given for non...... Gaussian traits are generalisations of the well-known formulas for Gaussian traits - and reflect, for Poisson mixed models and frailty models for survival data, the hierarchical structure of the models. In general the ratio of the additive genetic variance to the total variance in the Gaussian part...

  9. Dealiased convolutions for pseudospectral simulations

    International Nuclear Information System (INIS)

    Roberts, Malcolm; Bowman, John C

    2011-01-01

    Efficient algorithms have recently been developed for calculating dealiased linear convolution sums without the expense of conventional zero-padding or phase-shift techniques. For one-dimensional in-place convolutions, the memory requirements are identical with the zero-padding technique, with the important distinction that the additional work memory need not be contiguous with the input data. This decoupling of data and work arrays dramatically reduces the memory and computation time required to evaluate higher-dimensional in-place convolutions. The memory savings is achieved by computing the in-place Fourier transform of the data in blocks, rather than all at once. The technique also allows one to dealias the n-ary convolutions that arise on Fourier transforming cubic and higher powers. Implicitly dealiased convolutions can be built on top of state-of-the-art adaptive fast Fourier transform libraries like FFTW. Vectorized multidimensional implementations for the complex and centered Hermitian (pseudospectral) cases have already been implemented in the open-source software FFTW++. With the advent of this library, writing a high-performance dealiased pseudospectral code for solving nonlinear partial differential equations has now become a relatively straightforward exercise. New theoretical estimates of computational complexity and memory use are provided, including corrected timing results for 3D pruned convolutions and further consideration of higher-order convolutions.
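
    The implicit dealiasing implemented in FFTW++ is not reproduced here; as a point of reference, the following minimal sketch shows the conventional explicit zero-padding that the paper's approach improves upon, computing a dealiased (linear) convolution with NumPy FFTs.

        import numpy as np

        def dealiased_convolution(f, g):
            """Dealiased (linear) convolution of two length-n sequences via explicit
            2x zero-padding: the circular convolution returned by the FFT of the
            padded inputs contains no aliased terms in its first n entries."""
            n = len(f)
            F = np.fft.fft(f, 2 * n)
            G = np.fft.fft(g, 2 * n)
            return np.fft.ifft(F * G)[:n]

        rng = np.random.default_rng(1)
        f = rng.standard_normal(8) + 1j * rng.standard_normal(8)
        g = rng.standard_normal(8) + 1j * rng.standard_normal(8)

        # Agrees with the direct O(n^2) convolution sum
        direct = np.array([sum(f[k] * g[j - k] for k in range(j + 1)) for j in range(8)])
        assert np.allclose(dealiased_convolution(f, g), direct)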

  10. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs.

    Science.gov (United States)

    Chen, Liang-Chieh; Papandreou, George; Kokkinos, Iasonas; Murphy, Kevin; Yuille, Alan L

    2018-04-01

    In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First, we highlight convolution with upsampled filters, or 'atrous convolution', as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields of view, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but takes a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed "DeepLab" system sets a new state of the art on the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU on the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.
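
    As a small illustration of the atrous idea (not the DeepLab implementation, which operates on 2D feature maps inside a DCNN), the 1D sketch below applies the same three filter taps at increasing dilation rates, enlarging the field of view without adding parameters; it follows the cross-correlation convention used by deep-learning frameworks.

        import numpy as np

        def atrous_conv1d(x, w, rate):
            """'Same'-size 1D atrous convolution: taps of w are applied `rate` samples apart."""
            k = len(w)
            pad = rate * (k - 1) // 2
            xp = np.pad(x, pad)
            y = np.zeros(len(x))
            for i in range(len(x)):
                for j in range(k):
                    y[i] += w[j] * xp[i + j * rate]
            return y

        x = np.arange(10, dtype=float)
        w = np.array([1.0, 0.0, -1.0])          # simple difference filter

        print(atrous_conv1d(x, w, rate=1))      # field of view: 3 samples
        print(atrous_conv1d(x, w, rate=4))      # same 3 taps, field of view: 9 samples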

  11. Gaussian entanglement revisited

    Science.gov (United States)

    Lami, Ludovico; Serafini, Alessio; Adesso, Gerardo

    2018-02-01

    We present a novel approach to the separability problem for Gaussian quantum states of bosonic continuous variable systems. We derive a simplified necessary and sufficient separability criterion for arbitrary Gaussian states of m versus n modes, which relies on convex optimisation over marginal covariance matrices on one subsystem only. We further revisit the currently known results stating the equivalence between separability and positive partial transposition (PPT) for specific classes of Gaussian states. Using techniques based on matrix analysis, such as Schur complements and matrix means, we then provide a unified treatment and compact proofs of all these results. In particular, we recover the PPT-separability equivalence for: (i) Gaussian states of 1 versus n modes; and (ii) isotropic Gaussian states. In passing, we also retrieve (iii) the recently established equivalence between separability of a Gaussian state and its complete Gaussian extendability. Our techniques are then applied to progress beyond the state of the art. We prove that: (iv) Gaussian states that are invariant under partial transposition are necessarily separable; (v) the PPT criterion is necessary and sufficient for separability for Gaussian states of m versus n modes that are symmetric under the exchange of any two modes belonging to one of the parties; and (vi) Gaussian states which remain PPT under passive optical operations cannot be entangled by them either. This is not a foregone conclusion per se (since Gaussian bound entangled states do exist) and settles a question that had been left unanswered in the existing literature on the subject. This paper, enjoyable by both the quantum optics and the matrix analysis communities, overall delivers technical and conceptual advances which are likely to be useful for further applications in continuous variable quantum information theory, beyond the separability problem.

  12. A New Reverberator Based on Variable Sparsity Convolution

    DEFF Research Database (Denmark)

    Holm-Rasmussen, Bo; Lehtonen, Heidi-Maria; Välimäki, Vesa

    2013-01-01

    FIR filter coefficients are selected from a velvet noise sequence, which consists of ones, minus ones, and zeros only. In this application, it is perceptually sufficient to use very sparse velvet noise sequences having only about 0.1 to 0.2% non-zero elements, with increasing sparsity along...... the impulse response. The algorithm yields a parametric approximation of the late part of the impulse response, which is more than 100 times more efficient computationally than the direct convolution. The computational load of the proposed algorithm is comparable to that of FFT-based partitioned convolution...
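
    A minimal sketch of generating a velvet-noise FIR sequence (entries in {-1, 0, +1}) at a given pulse density; the increasing-sparsity profile and the partitioning used in the proposed reverberator are not reproduced, and the density below is an illustrative value only.

        import numpy as np

        def velvet_noise(length, density, fs=44100, rng=None):
            """Velvet-noise sequence: one +/-1 impulse at a jittered position per grid
            period, zeros elsewhere. `density` is the average pulse rate in pulses/s."""
            rng = rng or np.random.default_rng()
            seq = np.zeros(length)
            grid = fs / density                     # average impulse spacing in samples
            for m in range(int(length / grid)):
                pos = int(m * grid + rng.uniform(0, grid))
                if pos < length:
                    seq[pos] = rng.choice([-1.0, 1.0])
            return seq

        v = velvet_noise(44100, density=44.1)       # roughly 0.1% non-zero elements
        print(np.count_nonzero(v) / len(v))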

  13. A convolution method for predicting mean treatment dose including organ motion at imaging

    International Nuclear Information System (INIS)

    Booth, J.T.; Zavgorodni, S.F.; Royal Adelaide Hospital, SA

    2000-01-01

    Full text: The random treatment delivery errors (organ motion and set-up error) can be incorporated into the treatment planning software using a convolution method. Mean treatment dose is computed as the convolution of a static dose distribution with a variation kernel. Typically this variation kernel is Gaussian with variance equal to the sum of the organ motion and set-up error variances. We propose a novel variation kernel for the convolution technique that additionally considers the position of the mobile organ in the planning CT image. The systematic error of organ position in the planning CT image can be considered random for each patient over a population. Thus the variance of the variation kernel will equal the sum of treatment delivery variance and organ motion variance at planning for the population of treatments. The kernel is extended to deal with multiple pre-treatment CT scans to improve tumour localisation for planning. Mean treatment doses calculated with the convolution technique are compared to benchmark Monte Carlo (MC) computations. Calculations of mean treatment dose using the convolution technique agreed with MC results for all cases to better than ± 1 Gy in the planning treatment volume for a prescribed 60 Gy treatment. Convolution provides a quick method of incorporating random organ motion (captured in the planning CT image and during treatment delivery) and random set-up errors directly into the dose distribution. Copyright (2000) Australasian College of Physical Scientists and Engineers in Medicine
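
    A minimal sketch of the underlying convolution technique, assuming SciPy is available: the static dose grid is blurred with a Gaussian kernel whose variance is the sum of the organ-motion and set-up variances. The extended kernel proposed in the record (which also uses the organ position captured in the planning CT) is not reproduced, and all numbers are illustrative.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        # Static (planned) dose distribution on a toy 2D grid, in Gy
        dose_static = np.zeros((101, 101))
        dose_static[35:66, 35:66] = 60.0        # 60 Gy prescribed to a square "target"

        # Standard deviations (in grid units) of the independent random errors
        sigma_motion, sigma_setup = 2.0, 1.5
        sigma_kernel = np.sqrt(sigma_motion**2 + sigma_setup**2)   # variances add

        # Mean treatment dose = static dose convolved with the Gaussian variation kernel
        dose_mean = gaussian_filter(dose_static, sigma=sigma_kernel, mode="nearest")
        print(dose_mean[50, 50], dose_mean[35, 50])   # centre retains ~60 Gy; edges are blurred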

  14. Linear diffusion-wave channel routing using a discrete Hayami convolution method

    Science.gov (United States)

    Li Wang; Joan Q. Wu; William J. Elliot; Fritz R. Feidler; Sergey. Lapin

    2014-01-01

    The convolution of an input with a response function has been widely used in hydrology as a means to solve various problems analytically. Due to the high computation demand in solving the functions using numerical integration, it is often advantageous to use the discrete convolution instead of the integration of the continuous functions. This approach greatly reduces...
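
    A minimal sketch of discrete-convolution routing: the inflow hydrograph is convolved with a unit-response function of the reach. The gamma-shaped kernel below is a generic stand-in, not the discrete Hayami kernel derived in the record.

        import numpy as np

        dt = 1.0                                  # time step (h)
        t = np.arange(0.0, 48.0, dt)

        # Inflow hydrograph (m^3/s): a simple triangular flood wave
        inflow = np.interp(t, [0, 6, 24, 48], [0, 80, 0, 0])

        # Generic unit-response function of the reach, normalised to unit volume
        k, n = 3.0, 2.0
        u = t ** (n - 1) * np.exp(-t / k)
        u /= u.sum() * dt

        # Discrete convolution: outflow[j] = sum_i inflow[i] * u[j - i] * dt
        outflow = np.convolve(inflow, u)[: len(t)] * dt
        print(f"peak inflow {inflow.max():.1f}, peak outflow {outflow.max():.1f} (attenuated and delayed)")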

  15. Design of convolutional tornado code

    Science.gov (United States)

    Zhou, Hui; Yang, Yao; Gao, Hongmin; Tan, Lu

    2017-09-01

    As a linear block code, the traditional tornado (tTN) code is inefficient in burst-erasure environments and its multi-level structure may lead to high encoding/decoding complexity. This paper presents a convolutional tornado (cTN) code which is able to improve the burst-erasure protection capability by applying the convolution property to the tTN code, and reduce computational complexity by abrogating the multi-level structure. The simulation results show that the cTN code can provide better packet-loss protection performance with lower computational complexity than the tTN code.

  16. Eliminating the non-Gaussian spectral response of X-ray absorbers for transition-edge sensors

    Science.gov (United States)

    Yan, Daikang; Divan, Ralu; Gades, Lisa M.; Kenesei, Peter; Madden, Timothy J.; Miceli, Antonino; Park, Jun-Sang; Patel, Umeshkumar M.; Quaranta, Orlando; Sharma, Hemant; Bennett, Douglas A.; Doriese, William B.; Fowler, Joseph W.; Gard, Johnathon D.; Hays-Wehle, James P.; Morgan, Kelsey M.; Schmidt, Daniel R.; Swetz, Daniel S.; Ullom, Joel N.

    2017-11-01

    Transition-edge sensors (TESs) as microcalorimeters for high-energy-resolution X-ray spectroscopy are often fabricated with an absorber made of materials with high Z (for X-ray stopping power) and low heat capacity (for high resolving power). Bismuth represents one of the most compelling options. TESs with evaporated bismuth absorbers have shown spectra with undesirable and unexplained low-energy tails. We have developed TESs with electroplated bismuth absorbers over a gold layer that are not afflicted by this problem and that retain the other positive aspects of this material. To better understand these phenomena, we have studied a series of TESs with gold, gold/evaporated bismuth, and gold/electroplated bismuth absorbers, fabricated on the same die with identical thermal coupling. We show that the bismuth morphology is linked to the spectral response of X-ray TES microcalorimeters.

  17. Vortices in Gaussian beams

    CSIR Research Space (South Africa)

    Roux, FS

    2009-01-01

    Full Text Available (presentation slides; the extracted equation fragments are garbled, so only the recoverable notation is kept here). The talk concerns Gaussian beams carrying vortex dipoles. Gaussian beam in normalised coordinates: g(u, v, t) = exp(−(u² + v²)/(1 − it)), with u = x/ω₀, v = y/ω₀, t = z/ρ and ρ = πω₀²/λ, where ω₀ is the 1/e² beam waist radius and ρ the Rayleigh range. The propagation integral over the Gaussian beam is evaluated analytically once and for all; the effect of the vortex dipoles is then obtained by differentiation instead of integration.

  18. Solutions to Arithmetic Convolution Equations

    Czech Academy of Sciences Publication Activity Database

    Glöckner, H.; Lucht, L.G.; Porubský, Štefan

    2007-01-01

    Roč. 135, č. 6 (2007), s. 1619-1629 ISSN 0002-9939 R&D Projects: GA ČR GA201/04/0381 Institutional research plan: CEZ:AV0Z10300504 Keywords : arithmetic functions * Dirichlet convolution * polynomial equations * analytic equations * topological algebras * holomorphic functional calculus Subject RIV: BA - General Mathematics Impact factor: 0.520, year: 2007

  19. Gaussian operations and privacy

    International Nuclear Information System (INIS)

    Navascues, Miguel; Acin, Antonio

    2005-01-01

    We consider the possibilities offered by Gaussian states and operations for two honest parties, Alice and Bob, to obtain privacy against a third eavesdropping party, Eve. We first extend the security analysis of the protocol proposed in [Navascues et al. Phys. Rev. Lett. 94, 010502 (2005)]. Then, we prove that a generalized version of this protocol does not allow one to distill a secret key out of bound entangled Gaussian states

  20. Fast space-varying convolution using matrix source coding with applications to camera stray light reduction.

    Science.gov (United States)

    Wei, Jianing; Bouman, Charles A; Allebach, Jan P

    2014-05-01

    Many imaging applications require the implementation of space-varying convolution for accurate restoration and reconstruction of images. Here, we use the term space-varying convolution to refer to linear operators whose impulse response has slow spatial variation. In addition, these space-varying convolution operators are often dense, so direct implementation of the convolution operator is typically computationally impractical. One such example is the problem of stray light reduction in digital cameras, which requires the implementation of a dense space-varying deconvolution operator. However, other inverse problems, such as iterative tomographic reconstruction, can also depend on the implementation of dense space-varying convolution. While space-invariant convolution can be efficiently implemented with the fast Fourier transform, this approach does not work for space-varying operators. So direct convolution is often the only option for implementing space-varying convolution. In this paper, we develop a general approach to the efficient implementation of space-varying convolution, and demonstrate its use in the application of stray light reduction. Our approach, which we call matrix source coding, is based on lossy source coding of the dense space-varying convolution matrix. Importantly, by coding the transformation matrix, we not only reduce the memory required to store it; we also dramatically reduce the computation required to implement matrix-vector products. Our algorithm is able to reduce computation by approximately factoring the dense space-varying convolution operator into a product of sparse transforms. Experimental results show that our method can dramatically reduce the computation required for stray light reduction while maintaining high accuracy.

  1. Prediction error variance and expected response to selection, when selection is based on the best predictor – for Gaussian and threshold characters, traits following a Poisson mixed model and survival traits

    Directory of Open Access Journals (Sweden)

    Jensen Just

    2002-05-01

    Full Text Available Abstract In this paper, we consider selection based on the best predictor of animal additive genetic values in Gaussian linear mixed models, threshold models, Poisson mixed models, and log normal frailty models for survival data (including models with time-dependent covariates with associated fixed or random effects). In the different models, expressions are given (when these can be found – otherwise unbiased estimates are given) for prediction error variance, accuracy of selection and expected response to selection on the additive genetic scale and on the observed scale. The expressions given for non-Gaussian traits are generalisations of the well-known formulas for Gaussian traits – and reflect, for Poisson mixed models and frailty models for survival data, the hierarchical structure of the models. In general the ratio of the additive genetic variance to the total variance in the Gaussian part of the model (heritability on the normally distributed level of the model) or a generalised version of heritability plays a central role in these formulas.
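
    For orientation, the classical Gaussian-trait special case that these expressions generalise can be written as follows (standard quantitative-genetics identities, stated here for reference rather than quoted from the paper):

        % Expected response to selection with selection intensity i, accuracy r
        % (the correlation between the predictor and the additive genetic value)
        % and additive genetic standard deviation sigma_A; r is linked to the
        % prediction error variance (PEV), and h^2 is the heritability ratio.
        \Delta G = i \, r \, \sigma_A,
        \qquad
        r^2 = 1 - \frac{\mathrm{PEV}}{\sigma_A^2},
        \qquad
        h^2 = \frac{\sigma_A^2}{\sigma_A^2 + \sigma_E^2}.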

  2. Nonclassicality by Local Gaussian Unitary Operations for Gaussian States

    Directory of Open Access Journals (Sweden)

    Yangyang Wang

    2018-04-01

    Full Text Available A measure of nonclassicality N in terms of local Gaussian unitary operations for bipartite Gaussian states is introduced. N is a faithful quantum correlation measure for Gaussian states as product states have no such correlation and every non-product Gaussian state contains it. For any bipartite Gaussian state ρ_AB, we always have 0 ≤ N(ρ_AB) < 1, where the upper bound 1 is sharp. An explicit formula of N for (1+1)-mode Gaussian states and an estimate of N for (n+m)-mode Gaussian states are presented. A criterion of entanglement is established in terms of this correlation. The quantum correlation N is also compared with entanglement, Gaussian discord and Gaussian geometric discord.

  3. Generalized Gaussian Error Calculus

    CERN Document Server

    Grabe, Michael

    2010-01-01

    For the first time in 200 years Generalized Gaussian Error Calculus addresses a rigorous, complete and self-consistent revision of the Gaussian error calculus. Since experimentalists realized that measurements in general are burdened by unknown systematic errors, the classical, widely used evaluation procedures, which scrutinize the consequences of random errors alone, turned out to be obsolete. As a matter of course, the error calculus to-be, treating random and unknown systematic errors side by side, should ensure the consistency and traceability of physical units, physical constants and physical quantities at large. The generalized Gaussian error calculus considers unknown systematic errors to spawn biased estimators. Beyond that, random errors are asked to conform to the idea of what the author calls well-defined measuring conditions. The approach features the properties of a building kit: any overall uncertainty turns out to be the sum of a contribution due to random errors, to be taken from a confidence inter...

  4. Adaptive Graph Convolutional Neural Networks

    OpenAIRE

    Li, Ruoyu; Wang, Sheng; Zhu, Feiyun; Huang, Junzhou

    2018-01-01

    Graph Convolutional Neural Networks (Graph CNNs) are generalizations of classical CNNs to handle graph data such as molecular data, point clouds and social networks. Current filters in graph CNNs are built for fixed and shared graph structure. However, for most real data, the graph structure varies in both size and connectivity. The paper proposes a generalized and flexible graph CNN taking data of arbitrary graph structure as input. In that way a task-driven adaptive graph is learned for eac...

  5. Learning conditional Gaussian networks

    DEFF Research Database (Denmark)

    Bøttcher, Susanne Gammelgaard

    This paper considers conditional Gaussian networks. The parameters in the network are learned by using conjugate Bayesian analysis. As conjugate local priors, we apply the Dirichlet distribution for discrete variables and the Gaussian-inverse gamma distribution for continuous variables, given...... a configuration of the discrete parents. We assume parameter independence and complete data. Further, to learn the structure of the network, the network score is deduced. We then develop a local master prior procedure, for deriving parameter priors in these networks. This procedure satisfies parameter...... independence, parameter modularity and likelihood equivalence. Bayes factors to be used in model search are introduced. Finally the methods derived are illustrated by a simple example....

  6. Convolutional Dictionary Learning: Acceleration and Convergence

    Science.gov (United States)

    Chun, Il Yong; Fessler, Jeffrey A.

    2018-04-01

    Convolutional dictionary learning (CDL or sparsifying CDL) has many applications in image processing and computer vision. There has been growing interest in developing efficient algorithms for CDL, mostly relying on the augmented Lagrangian (AL) method or the variant alternating direction method of multipliers (ADMM). When their parameters are properly tuned, AL methods have shown fast convergence in CDL. However, the parameter tuning process is not trivial due to its data dependence and, in practice, the convergence of AL methods depends on the AL parameters for nonconvex CDL problems. To moderate these problems, this paper proposes a new practically feasible and convergent Block Proximal Gradient method using a Majorizer (BPG-M) for CDL. The BPG-M-based CDL is investigated with different block updating schemes and majorization matrix designs, and further accelerated by incorporating some momentum coefficient formulas and restarting techniques. All of the methods investigated incorporate a boundary artifacts removal (or, more generally, sampling) operator in the learning model. Numerical experiments show that, without needing any parameter tuning process, the proposed BPG-M approach converges more stably to desirable solutions of lower objective values than the existing state-of-the-art ADMM algorithm and its memory-efficient variant do. Compared to the ADMM approaches, the BPG-M method using a multi-block updating scheme is particularly useful in single-threaded CDL algorithm handling large datasets, due to its lower memory requirement and no polynomial computational complexity. Image denoising experiments show that, for relatively strong additive white Gaussian noise, the filters learned by BPG-M-based CDL outperform those trained by the ADMM approach.

  7. Convolution of Distribution-Valued Functions. Applications.

    OpenAIRE

    BARGETZ, CHRISTIAN

    2011-01-01

    In this article we examine products and convolutions of vector-valued functions. For nuclear normal spaces of distributions Proposition 25 in [31,p. 120] yields a vector-valued product or convolution if there is a continuous product or convolution mapping in the range of the vector-valued functions. For specific spaces, we generalize this result to hypocontinuous bilinear maps at the expense of generality with respect to the function space. We consider holomorphic, meromorphic and differentia...

  8. AUTONOMOUS GAUSSIAN DECOMPOSITION

    Energy Technology Data Exchange (ETDEWEB)

    Lindner, Robert R.; Vera-Ciro, Carlos; Murray, Claire E.; Stanimirović, Snežana; Babler, Brian [Department of Astronomy, University of Wisconsin, 475 North Charter Street, Madison, WI 53706 (United States); Heiles, Carl [Radio Astronomy Lab, UC Berkeley, 601 Campbell Hall, Berkeley, CA 94720 (United States); Hennebelle, Patrick [Laboratoire AIM, Paris-Saclay, CEA/IRFU/SAp-CNRS-Université Paris Diderot, F-91191 Gif-sur Yvette Cedex (France); Goss, W. M. [National Radio Astronomy Observatory, P.O. Box O, 1003 Lopezville, Socorro, NM 87801 (United States); Dickey, John, E-mail: rlindner@astro.wisc.edu [University of Tasmania, School of Maths and Physics, Private Bag 37, Hobart, TAS 7001 (Australia)

    2015-04-15

    We present a new algorithm, named Autonomous Gaussian Decomposition (AGD), for automatically decomposing spectra into Gaussian components. AGD uses derivative spectroscopy and machine learning to provide optimized guesses for the number of Gaussian components in the data, and also their locations, widths, and amplitudes. We test AGD and find that it produces results comparable to human-derived solutions on 21 cm absorption spectra from the 21 cm SPectral line Observations of Neutral Gas with the EVLA (21-SPONGE) survey. We use AGD with Monte Carlo methods to derive the H i line completeness as a function of peak optical depth and velocity width for the 21-SPONGE data, and also show that the results of AGD are stable against varying observational noise intensity. The autonomy and computational efficiency of the method over traditional manual Gaussian fits allow for truly unbiased comparisons between observations and simulations, and for the ability to scale up and interpret the very large data volumes from the upcoming Square Kilometer Array and pathfinder telescopes.
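
    AGD's initial guesses come from derivative spectroscopy and machine learning; the sketch below only illustrates the final stage, a least-squares fit of a sum of Gaussian components to a synthetic spectrum (SciPy assumed, with hand-supplied initial guesses).

        import numpy as np
        from scipy.optimize import curve_fit

        def gaussians(x, *params):
            """Sum of Gaussian components; params = (amplitude, centre, width) repeated."""
            y = np.zeros_like(x)
            for a, mu, sig in zip(params[0::3], params[1::3], params[2::3]):
                y += a * np.exp(-0.5 * ((x - mu) / sig) ** 2)
            return y

        x = np.linspace(-50, 50, 500)
        rng = np.random.default_rng(2)
        truth = [1.0, -10.0, 4.0, 0.6, 8.0, 10.0]       # two components
        spectrum = gaussians(x, *truth) + 0.03 * rng.standard_normal(x.size)

        # In AGD the number, locations, widths and amplitudes of the components are
        # guessed automatically; here the initial guess is supplied by hand.
        guess = [0.8, -12.0, 5.0, 0.5, 6.0, 8.0]
        popt, _ = curve_fit(gaussians, x, spectrum, p0=guess)
        print(np.round(popt, 2))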

  9. Bounded Gaussian process regression

    DEFF Research Database (Denmark)

    Jensen, Bjørn Sand; Nielsen, Jens Brehm; Larsen, Jan

    2013-01-01

    We extend the Gaussian process (GP) framework for bounded regression by introducing two bounded likelihood functions that model the noise on the dependent variable explicitly. This is fundamentally different from the implicit noise assumption in the previously suggested warped GP framework. We...... with the proposed explicit noise-model extension....

  10. AUTONOMOUS GAUSSIAN DECOMPOSITION

    International Nuclear Information System (INIS)

    Lindner, Robert R.; Vera-Ciro, Carlos; Murray, Claire E.; Stanimirović, Snežana; Babler, Brian; Heiles, Carl; Hennebelle, Patrick; Goss, W. M.; Dickey, John

    2015-01-01

    We present a new algorithm, named Autonomous Gaussian Decomposition (AGD), for automatically decomposing spectra into Gaussian components. AGD uses derivative spectroscopy and machine learning to provide optimized guesses for the number of Gaussian components in the data, and also their locations, widths, and amplitudes. We test AGD and find that it produces results comparable to human-derived solutions on 21 cm absorption spectra from the 21 cm SPectral line Observations of Neutral Gas with the EVLA (21-SPONGE) survey. We use AGD with Monte Carlo methods to derive the H i line completeness as a function of peak optical depth and velocity width for the 21-SPONGE data, and also show that the results of AGD are stable against varying observational noise intensity. The autonomy and computational efficiency of the method over traditional manual Gaussian fits allow for truly unbiased comparisons between observations and simulations, and for the ability to scale up and interpret the very large data volumes from the upcoming Square Kilometer Array and pathfinder telescopes

  11. Convolution product construction of interactions in probabilistic physical models

    International Nuclear Information System (INIS)

    Ratsimbarison, H.M.; Raboanary, R.

    2007-01-01

    This paper aims to give a probabilistic construction of interactions which may be relevant for building physical theories such as interacting quantum field theories. We start with the path integral definition of the partition function in quantum field theory, which reminds us of the probabilistic nature of this physical theory. From a Gaussian law considered as the free theory, an interacting theory is constructed by a nontrivial convolution product between the free theory and an interaction term which is also a probability law. The resulting theory, again a probability law, exhibits two properties already present in today's theories of interactions such as gauge theory: the interaction term does not depend on the free term, and two different free theories can be implemented with the same interaction.

  12. Quantum information with Gaussian states

    International Nuclear Information System (INIS)

    Wang Xiangbin; Hiroshima, Tohya; Tomita, Akihisa; Hayashi, Masahito

    2007-01-01

    Quantum optical Gaussian states are an important class of robust quantum states that can be manipulated with existing technologies. So far, most of the important quantum information experiments are done with such states, including bright Gaussian light and weak Gaussian light. Extending the existing results of quantum information with discrete quantum states to the case of continuous variable quantum states is an interesting theoretical task. The quantum Gaussian states play a central role in such a case. We review the properties and applications of Gaussian states in quantum information with emphasis on the fundamental concepts, the calculation techniques and the effects of imperfections of the real-life experimental setups. Topics here include the elementary properties of Gaussian states and relevant quantum information devices, entanglement-based quantum tasks such as quantum teleportation, quantum cryptography with weak and strong Gaussian states and the quantum channel capacity, mathematical theory of quantum entanglement and state estimation for Gaussian states.

  13. Gaussian discriminating strength

    Science.gov (United States)

    Rigovacca, L.; Farace, A.; De Pasquale, A.; Giovannetti, V.

    2015-10-01

    We present a quantifier of nonclassical correlations for bipartite, multimode Gaussian states. It is derived from the Discriminating Strength measure, introduced for finite-dimensional systems in Farace et al., [New J. Phys. 16, 073010 (2014), 10.1088/1367-2630/16/7/073010]. Like the latter, the new measure exploits the quantum Chernoff bound to gauge the susceptibility of the composite system with respect to local perturbations induced by unitary gates extracted from a suitable set of allowed transformations (the latter being identified by posing some general requirements). Closed expressions are provided for the case of two-mode Gaussian states obtained by squeezing or by linearly mixing via a beam splitter a factorized two-mode thermal state. For these density matrices, we study how nonclassical correlations are related to the entanglement present in the system and to its total photon number.

  14. Linear velocity fields in non-Gaussian models for large-scale structure

    Science.gov (United States)

    Scherrer, Robert J.

    1992-01-01

    Linear velocity fields in two types of physically motivated non-Gaussian models are examined for large-scale structure: seed models, in which the density field is a convolution of a density profile with a distribution of points, and local non-Gaussian fields, derived from a local nonlinear transformation on a Gaussian field. The distribution of a single component of the velocity is derived for seed models with randomly distributed seeds, and these results are applied to the seeded hot dark matter model and the global texture model with cold dark matter. An expression for the distribution of a single component of the velocity in arbitrary local non-Gaussian models is given, and these results are applied to such fields with chi-squared and lognormal distributions. It is shown that all seed models with randomly distributed seeds and all local non-Gaussian models have single-component velocity distributions with positive kurtosis.

  15. Incomplete convolutions in production and inventory models

    NARCIS (Netherlands)

    Houtum, van G.J.J.A.N.; Zijm, W.H.M.

    1997-01-01

    In this paper, we study incomplete convolutions of continuous distribution functions, as they appear in the analysis of (multi-stage) production and inventory systems. Three example systems are discussed where these incomplete convolutions naturally arise. We derive explicit, nonrecursive formulae

  16. A convolutional approach to reflection symmetry

    DEFF Research Database (Denmark)

    Cicconet, Marcelo; Birodkar, Vighnesh; Lund, Mads

    2017-01-01

    We present a convolutional approach to reflection symmetry detection in 2D. Our model, built on the products of complex-valued wavelet convolutions, simplifies previous edge-based pairwise methods. Being parameter-centered, as opposed to feature-centered, it has certain computational advantages w...

  17. Symbol synchronization in convolutionally coded systems

    Science.gov (United States)

    Baumert, L. D.; Mceliece, R. J.; Van Tilborg, H. C. A.

    1979-01-01

    Alternate symbol inversion is sometimes applied to the output of convolutional encoders to guarantee sufficient richness of symbol transition for the receiver symbol synchronizer. A bound is given for the length of the transition-free symbol stream in such systems, and those convolutional codes are characterized in which arbitrarily long transition free runs occur.

  18. The general theory of convolutional codes

    Science.gov (United States)

    Mceliece, R. J.; Stanley, R. P.

    1993-01-01

    This article presents a self-contained introduction to the algebraic theory of convolutional codes. This introduction is partly a tutorial, but at the same time contains a number of new results which will prove useful for designers of advanced telecommunication systems. Among the new concepts introduced here are the Hilbert series for a convolutional code and the class of compact codes.

  19. Model structure selection in convolutive mixtures

    DEFF Research Database (Denmark)

    Dyrholm, Mads; Makeig, S.; Hansen, Lars Kai

    2006-01-01

    The CICAAR algorithm (convolutive independent component analysis with an auto-regressive inverse model) allows separation of white (i.i.d) source signals from convolutive mixtures. We introduce a source color model as a simple extension to the CICAAR which allows for a more parsimonious representation in many practical mixtures. The new filter-CICAAR allows Bayesian model selection and can help answer questions like: ’Are we actually dealing with a convolutive mixture?’. We try to answer this question for EEG data....

  20. Convolutions

    Indian Academy of Sciences (India)

    President's Address to the Association of Mathematics Teachers of India, December 2011. I am expected to tell you, in 25 minutes, something that should interest you, excite you, pique your curiosity, and make you look for more. It is a tall order, but I will try. The word 'interactive' is in fashion these days. So I will leave a few ...

  1. Perfusion Quantification Using Gaussian Process Deconvolution

    DEFF Research Database (Denmark)

    Andersen, Irene Klærke; Have, Anna Szynkowiak; Rasmussen, Carl Edward

    2002-01-01

    The quantification of perfusion using dynamic susceptibility contrast MRI (DSC-MRI) requires deconvolution to obtain the residual impulse response function (IRF). In this work, a method using the Gaussian process for deconvolution (GPD) is proposed. The fact that the IRF is smooth is incorporated...

  2. A Hierarchical Convolutional Neural Network for vesicle fusion event classification.

    Science.gov (United States)

    Li, Haohan; Mao, Yunxiang; Yin, Zhaozheng; Xu, Yingke

    2017-09-01

    Quantitative analysis of vesicle exocytosis and classification of different modes of vesicle fusion from fluorescence microscopy are of primary importance for biomedical research. In this paper, we propose a novel Hierarchical Convolutional Neural Network (HCNN) method to automatically identify vesicle fusion events in time-lapse Total Internal Reflection Fluorescence Microscopy (TIRFM) image sequences. Firstly, a detection and tracking method is developed to extract image patch sequences containing potential fusion events. Then, a Gaussian Mixture Model (GMM) is applied on each image patch of the patch sequence with outliers rejected for robust Gaussian fitting. By utilizing the high-level time-series intensity change features introduced by GMM and the visual appearance features embedded in some key moments of the fusion process, the proposed HCNN architecture is able to classify each candidate patch sequence into three classes: full fusion event, partial fusion event and non-fusion event. Finally, we validate the performance of our method on 9 challenging datasets that have been annotated by cell biologists, and our method achieves better performance than three previous methods. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. The application of convolution-based statistical model on the electrical breakdown time delay distributions in neon

    International Nuclear Information System (INIS)

    Maluckov, Cedomir A.; Karamarkovic, Jugoslav P.; Radovic, Miodrag K.; Pejovic, Momcilo M.

    2004-01-01

    The convolution-based model of the electrical breakdown time delay distribution is applied for statistical analysis of experimental results obtained in neon-filled diode tube at 6.5 mbar. At first, the numerical breakdown time delay density distributions are obtained by stochastic modeling as the sum of two independent random variables, the electrical breakdown statistical time delay with exponential, and discharge formative time with Gaussian distribution. Then, the single characteristic breakdown time delay distribution is obtained as the convolution of these two random variables with previously determined parameters. These distributions show good correspondence with the experimental distributions, obtained on the basis of 1000 successive and independent measurements. The shape of distributions is investigated, and corresponding skewness and kurtosis are plotted, in order to follow the transition from Gaussian to exponential distribution
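
    A minimal sketch of the convolution-based model: the total time delay is simulated as the sum of an exponentially distributed statistical delay and a Gaussian formative time, and compared with the analytic convolution of the two densities (the exponentially modified Gaussian). Parameter values are illustrative, not the fitted values for the neon measurements.

        import numpy as np
        from scipy.special import erfc

        rng = np.random.default_rng(3)

        lam = 1 / 50.0        # rate of the exponential statistical time delay
        mu, sig = 20.0, 3.0   # mean and sd of the Gaussian formative time

        # Monte Carlo: t_d = t_statistical + t_formative
        samples = rng.exponential(1 / lam, 100_000) + rng.normal(mu, sig, 100_000)

        def exgauss_pdf(t):
            """Convolution of the exponential and Gaussian densities (exponentially modified Gaussian)."""
            return (lam / 2.0) * np.exp(lam * (mu - t) + 0.5 * (lam * sig) ** 2) \
                * erfc((mu - t + lam * sig ** 2) / (np.sqrt(2.0) * sig))

        hist, edges = np.histogram(samples, bins=80, density=True)
        centres = 0.5 * (edges[:-1] + edges[1:])
        print("sample mean:", samples.mean(), " model mean:", mu + 1 / lam)
        print("max |histogram - analytic pdf|:", np.abs(hist - exgauss_pdf(centres)).max())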

  4. Interconversion of pure Gaussian states requiring non-Gaussian operations

    Science.gov (United States)

    Jabbour, Michael G.; García-Patrón, Raúl; Cerf, Nicolas J.

    2015-01-01

    We analyze the conditions under which local operations and classical communication enable entanglement transformations between bipartite pure Gaussian states. A set of necessary and sufficient conditions had been found [G. Giedke et al., Quant. Inf. Comput. 3, 211 (2003)] for the interconversion between such states that is restricted to Gaussian local operations and classical communication. Here, we exploit majorization theory in order to derive more general (sufficient) conditions for the interconversion between bipartite pure Gaussian states that go beyond Gaussian local operations. While our technique is applicable to an arbitrary number of modes for each party, it allows us to exhibit surprisingly simple examples of 2 × 2 Gaussian states that necessarily require non-Gaussian local operations to be transformed into each other.

  5. Sums and Gaussian vectors

    CERN Document Server

    Yurinsky, Vadim Vladimirovich

    1995-01-01

    Surveys the methods currently applied to study sums of infinite-dimensional independent random vectors in situations where their distributions resemble Gaussian laws. Covers probabilities of large deviations, Chebyshev-type inequalities for seminorms of sums, a method of constructing Edgeworth-type expansions, estimates of characteristic functions for random vectors obtained by smooth mappings of infinite-dimensional sums to Euclidean spaces. A self-contained exposition of the modern research apparatus around CLT, the book is accessible to new graduate students, and can be a useful reference for researchers and teachers of the subject.

  6. Adaptive decoding of convolutional codes

    Directory of Open Access Journals (Sweden)

    K. Hueske

    2007-06-01

    Full Text Available Convolutional codes, which are frequently used as error correction codes in digital transmission systems, are generally decoded using the Viterbi Decoder. On the one hand the Viterbi Decoder is an optimum maximum likelihood decoder, i.e. the most probable transmitted code sequence is obtained. On the other hand the mathematical complexity of the algorithm only depends on the used code, not on the number of transmission errors. To reduce the complexity of the decoding process for good transmission conditions, an alternative syndrome based decoder is presented. The reduction of complexity is realized by two different approaches, the syndrome zero sequence deactivation and the path metric equalization. The two approaches enable an easy adaptation of the decoding complexity for different transmission conditions, which results in a trade-off between decoding complexity and error correction performance.

  7. Adaptive decoding of convolutional codes

    Science.gov (United States)

    Hueske, K.; Geldmacher, J.; Götze, J.

    2007-06-01

    Convolutional codes, which are frequently used as error correction codes in digital transmission systems, are generally decoded using the Viterbi Decoder. On the one hand the Viterbi Decoder is an optimum maximum likelihood decoder, i.e. the most probable transmitted code sequence is obtained. On the other hand the mathematical complexity of the algorithm only depends on the used code, not on the number of transmission errors. To reduce the complexity of the decoding process for good transmission conditions, an alternative syndrome based decoder is presented. The reduction of complexity is realized by two different approaches, the syndrome zero sequence deactivation and the path metric equalization. The two approaches enable an easy adaptation of the decoding complexity for different transmission conditions, which results in a trade-off between decoding complexity and error correction performance.
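
    To make the decoding discussion concrete, here is a minimal hard-decision Viterbi decoder for the standard rate-1/2, constraint-length-3 (7,5) convolutional code. It is a generic textbook sketch, not the syndrome-based adaptive decoder proposed in the record.

        G = (0b111, 0b101)        # generator polynomials (7, 5) octal, constraint length 3
        N_STATES = 4              # 2^(K-1) encoder states

        def encode(bits):
            state, out = 0, []
            for b in bits:
                reg = (b << 2) | state
                out += [bin(reg & g).count("1") & 1 for g in G]   # two parity bits per input bit
                state = reg >> 1
            return out

        def viterbi_decode(received, n_bits):
            INF = 10 ** 9
            metric = [0] + [INF] * (N_STATES - 1)     # encoder starts in the all-zero state
            paths = [[] for _ in range(N_STATES)]
            for i in range(n_bits):
                r = received[2 * i:2 * i + 2]
                new_metric = [INF] * N_STATES
                new_paths = [None] * N_STATES
                for state in range(N_STATES):
                    for b in (0, 1):
                        reg = (b << 2) | state
                        branch = [bin(reg & g).count("1") & 1 for g in G]
                        nxt = reg >> 1
                        m = metric[state] + sum(x != y for x, y in zip(branch, r))
                        if m < new_metric[nxt]:           # keep the survivor path into each state
                            new_metric[nxt] = m
                            new_paths[nxt] = paths[state] + [b]
                metric, paths = new_metric, new_paths
            best = min(range(N_STATES), key=lambda s: metric[s])
            return paths[best]

        msg = [1, 0, 1, 1, 0, 0, 1, 0]
        rx = encode(msg)
        rx[3] ^= 1                                        # flip one channel bit
        assert viterbi_decode(rx, len(msg)) == msg        # the single error is corrected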

  8. The quick convolution of galaxy profiles, with application to power-law intensity distributions

    International Nuclear Information System (INIS)

    Bailey, M.E.; Sparks, W.B.

    1983-01-01

    The two-dimensional convolution of a circularly symmetric galaxy model with a Gaussian point-spread function of dispersion σ reduces to a single integral. This is solved analytically for models with power-law intensity distributions and results are given which relate the apparent core radius to σ and the power-law index k. The convolution integral is also simplified for the case of a point-spread function corresponding to a circular aperture. Models of galactic nuclei with stellar density cusps can only be distinguished from alternatives with small core radii if both the brightness and seeing profiles are measured accurately. The results are applied to data on the light distribution at the Galactic Centre. (author)
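
    A numerical sketch of the reduction stated above, assuming SciPy: the 2D convolution of a circularly symmetric profile I(r) with a Gaussian PSF of dispersion σ collapses to a single radial integral involving the modified Bessel function I0 (written here with the exponentially scaled i0e for numerical stability). The softened power-law profile is an illustrative choice, and the paper's closed-form power-law results are not reproduced.

        import numpy as np
        from scipy.special import i0e
        from scipy.integrate import quad

        def convolve_profile(profile, r, sigma):
            """I_obs(r) = (1/sigma^2) * int_0^inf I(rp) exp(-(r^2+rp^2)/(2 sigma^2)) I0(r rp/sigma^2) rp drp,
            evaluated with the exponentially scaled Bessel function i0e."""
            def integrand(rp):
                return (profile(rp) * i0e(r * rp / sigma**2)
                        * np.exp(-((r - rp) ** 2) / (2 * sigma**2)) * rp / sigma**2)
            val, _ = quad(integrand, 0.0, r + 10.0 * sigma, limit=200)
            return val

        # Softened power law with core radius a: I(r) = (1 + (r/a)^2)^(-k/2)
        a, k, sigma = 0.5, 2.0, 1.0
        profile = lambda r: (1.0 + (r / a) ** 2) ** (-k / 2)

        for r in (0.0, 1.0, 3.0):
            print(r, convolve_profile(profile, r, sigma))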

  9. Convolutional Neural Network for Image Recognition

    CERN Document Server

    Seifnashri, Sahand

    2015-01-01

    The aim of this project is to use machine learning techniques, especially Convolutional Neural Networks, for image processing. These techniques can be used for Quark-Gluon discrimination using calorimeter data, but unfortunately I didn’t manage to get the calorimeter data and I just used the Jet data from miniaodsim (ak4 chs). The Jet data was not good enough for a Convolutional Neural Network, which is designed for ’image’ recognition. This report is made of two main parts: part one is mainly about implementing a Convolutional Neural Network on unphysical data such as MNIST digits and the CIFAR-10 dataset, and part two is about the Jet data.

  10. Rotating quantum Gaussian packets

    International Nuclear Information System (INIS)

    Dodonov, V V

    2015-01-01

    We study two-dimensional quantum Gaussian packets with a fixed value of mean angular momentum. This value is the sum of two independent parts: the ‘external’ momentum related to the motion of the packet center and the ‘internal’ momentum due to quantum fluctuations. The packets minimizing the mean energy of an isotropic oscillator with the fixed mean angular momentum are found. They exist for ‘co-rotating’ external and internal motions, and they have nonzero correlation coefficients between coordinates and momenta, together with some (moderate) amount of quadrature squeezing. Variances of angular momentum and energy are calculated, too. Differences in the behavior of ‘co-rotating’ and ‘anti-rotating’ packets are shown. The time evolution of rotating Gaussian packets is analyzed, including the cases of a charge in a homogeneous magnetic field and a free particle. In the latter case, the effect of initial shrinking of packets with big enough coordinate-momentum correlation coefficients (followed by the well known expansion) is discovered. This happens due to a competition of ‘focusing’ and ‘de-focusing’ in the orthogonal directions. (paper)

  11. A Note on Cubic Convolution Interpolation

    OpenAIRE

    Meijering, E.; Unser, M.

    2003-01-01

    We establish a link between classical osculatory interpolation and modern convolution-based interpolation and use it to show that two well-known cubic convolution schemes are formally equivalent to two osculatory interpolation schemes proposed in the actuarial literature about a century ago. We also discuss computational differences and give examples of other cubic interpolation schemes not previously studied in signal and image processing.

  12. Convolution-based classification of audio and symbolic representations of music

    DEFF Research Database (Denmark)

    Velarde, Gissel; Cancino Chacón, Carlos; Meredith, David

    2018-01-01

    We present a novel convolution-based method for classification of audio and symbolic representations of music, which we apply to classification of music by style. Pieces of music are first sampled to pitch–time representations (piano-rolls or spectrograms) and then convolved with a Gaussian filter......-class composer identification, methods specialised for classifying symbolic representations of music are more effective. We also performed experiments on symbolic representations, synthetic audio and two different recordings of The Well-Tempered Clavier by J. S. Bach to study the method’s capacity to distinguish...
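
    A minimal sketch of the pre-processing step described above, assuming SciPy: a toy piano-roll is convolved with a Gaussian filter before classification. The note pattern and filter widths are placeholders, not the paper's settings.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        # Toy piano-roll: 128 MIDI pitches x 200 time steps, with three "notes" switched on
        roll = np.zeros((128, 200))
        roll[60, 10:50] = 1.0     # C4
        roll[64, 50:90] = 1.0     # E4
        roll[67, 90:130] = 1.0    # G4

        # Gaussian smoothing along the pitch and time axes prior to classification
        smoothed = gaussian_filter(roll, sigma=(1.0, 2.0))
        print(smoothed.shape, round(float(smoothed.max()), 3))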

  13. Holographic non-Gaussianity

    International Nuclear Information System (INIS)

    McFadden, Paul; Skenderis, Kostas

    2011-01-01

    We investigate the non-Gaussianity of primordial cosmological perturbations within our recently proposed holographic description of inflationary universes. We derive a holographic formula that determines the bispectrum of cosmological curvature perturbations in terms of correlation functions of a holographically dual three-dimensional non-gravitational quantum field theory (QFT). This allows us to compute the primordial bispectrum for a universe which started in a non-geometric holographic phase, using perturbative QFT calculations. Strikingly, for a class of models specified by a three-dimensional super-renormalisable QFT, the primordial bispectrum is of exactly the factorisable equilateral form with f NL equil. = 5/36, irrespective of the details of the dual QFT. A by-product of this investigation is a holographic formula for the three-point function of the trace of the stress-energy tensor along general holographic RG flows, which should have applications outside the remit of this work

  14. Palm distributions for log Gaussian Cox processes

    DEFF Research Database (Denmark)

    Coeurjolly, Jean-Francois; Møller, Jesper; Waagepetersen, Rasmus Plenge

    2017-01-01

    This paper establishes a remarkable result regarding Palm distributions for a log Gaussian Cox process: the reduced Palm distribution for a log Gaussian Cox process is itself a log Gaussian Cox process that only differs from the original log Gaussian Cox process in the intensity function. This new...... result is used to study functional summaries for log Gaussian Cox processes....

  15. Geometry of Gaussian quantum states

    International Nuclear Information System (INIS)

    Link, Valentin; Strunz, Walter T

    2015-01-01

    We study the Hilbert–Schmidt measure on the manifold of mixed Gaussian states in multi-mode continuous variable quantum systems. An analytical expression for the Hilbert–Schmidt volume element is derived. Its corresponding probability measure can be used to study typical properties of Gaussian states. It turns out that although the manifold of Gaussian states is unbounded, an ensemble of Gaussian states distributed according to this measure still has a normalizable distribution of symplectic eigenvalues, from which unitarily invariant properties can be obtained. By contrast, we find that for an ensemble of one-mode Gaussian states based on the Bures measure the corresponding distribution cannot be normalized. As important applications, we determine the distribution and the mean value of von Neumann entropy and purity for the Hilbert–Schmidt measure. (paper)

  16. New deconvolution method for microscopic images based on the continuous Gaussian radial basis function interpolation model.

    Science.gov (United States)

    Chen, Zhaoxue; Chen, Hao

    2014-01-01

    A deconvolution method based on the Gaussian radial basis function (GRBF) interpolation is proposed. Both the original image and the Gaussian point spread function are expressed in the same continuous GRBF model; thus image degradation is simplified to the convolution of two continuous Gaussian functions, and image deconvolution is converted to calculating the weighted coefficients of two-dimensional control points. Compared with the Wiener filter and the Lucy-Richardson algorithm, the GRBF method has an obvious advantage in the quality of the restored images. To overcome the drawback of long computation times, either graphics-processing-unit multithreading or an increased spacing of the control points is adopted to speed up the implementation of the GRBF method. The experiments show that based on the continuous GRBF model, the image deconvolution can be efficiently implemented by the method, which also has a considerable reference value for the study of three-dimensional microscopic image deconvolution.
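
    The closed-form identity that makes the GRBF model convenient is the standard fact that the convolution of two isotropic Gaussians is again a Gaussian whose variances add (stated here for reference, in two dimensions):

        % Convolution of two normalised isotropic 2D Gaussians:
        G_{\sigma}(\mathbf{x}) = \frac{1}{2\pi\sigma^{2}}
            \exp\!\left(-\frac{\lVert\mathbf{x}\rVert^{2}}{2\sigma^{2}}\right),
        \qquad
        \left(G_{\sigma_1} * G_{\sigma_2}\right)(\mathbf{x})
            = G_{\sqrt{\sigma_1^{2}+\sigma_2^{2}}}(\mathbf{x}).

    Hence a GRBF-expanded image blurred by a Gaussian PSF remains a sum of Gaussians, and deconvolution reduces to solving for the control-point weights.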

  17. Resource theory of non-Gaussian operations

    Science.gov (United States)

    Zhuang, Quntao; Shor, Peter W.; Shapiro, Jeffrey H.

    2018-05-01

    Non-Gaussian states and operations are crucial for various continuous-variable quantum information processing tasks. To quantitatively understand non-Gaussianity beyond states, we establish a resource theory for non-Gaussian operations. In our framework, we consider Gaussian operations as free operations, and non-Gaussian operations as resources. We define entanglement-assisted non-Gaussianity generating power and show that it is a monotone that is nonincreasing under the set of free superoperations, i.e., concatenation and tensoring with Gaussian channels. For conditional unitary maps, this monotone can be analytically calculated. As examples, we show that the non-Gaussianity of ideal photon-number subtraction and photon-number addition equal the non-Gaussianity of the single-photon Fock state. Based on our non-Gaussianity monotone, we divide non-Gaussian operations into two classes: (i) the finite non-Gaussianity class, e.g., photon-number subtraction, photon-number addition, and all Gaussian-dilatable non-Gaussian channels; and (ii) the diverging non-Gaussianity class, e.g., the binary phase-shift channel and the Kerr nonlinearity. This classification also implies that not all non-Gaussian channels are exactly Gaussian dilatable. Our resource theory enables a quantitative characterization and a first classification of non-Gaussian operations, paving the way towards the full understanding of non-Gaussianity.

  18. Handbook of Gaussian basis sets

    International Nuclear Information System (INIS)

    Poirier, R.; Kari, R.; Csizmadia, I.G.

    1985-01-01

    A collection of a large body of information is presented useful for chemists involved in molecular Gaussian computations. Every effort has been made by the authors to collect all available data for cartesian Gaussian as found in the literature up to July of 1984. The data in this text includes a large collection of polarization function exponents but in this case the collection is not complete. Exponents for Slater type orbitals (STO) were included for completeness. This text offers a collection of Gaussian exponents primarily without criticism. (Auth.)

  19. A staggered-grid convolutional differentiator for elastic wave modelling

    Science.gov (United States)

    Sun, Weijia; Zhou, Binzhong; Fu, Li-Yun

    2015-11-01

    The computation of derivatives in governing partial differential equations is one of the most investigated subjects in the numerical simulation of physical wave propagation. An analytical staggered-grid convolutional differentiator (CD) for first-order velocity-stress elastic wave equations is derived in this paper by inverse Fourier transformation of the band-limited spectrum of a first derivative operator. A taper window function is used to truncate the infinite staggered-grid CD stencil. The truncated CD operator is almost as accurate as the analytical solution, and as efficient as the finite-difference (FD) method. The selection of window functions will influence the accuracy of the CD operator in wave simulation. We search for the optimal Gaussian windows for different order CDs by minimizing the spectral error of the derivative and comparing the windows with the normal Hanning window function for tapering the CD operators. It is found that the optimal Gaussian window appears to be similar to the Hanning window function for tapering the same CD operator. We investigate the accuracy of the windowed CD operator and the staggered-grid FD method with different orders. Compared to the conventional staggered-grid FD method, a short staggered-grid CD operator achieves an accuracy equivalent to that of a long FD operator, with lower computational costs. For example, an 8th order staggered-grid CD operator can achieve the same accuracy of a 16th order staggered-grid FD algorithm but with half of the computational resources and time required. Numerical examples from a homogeneous model and a crustal waveguide model are used to illustrate the superiority of the CD operators over the conventional staggered-grid FD operators for the simulation of wave propagations.
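
    A collocated-grid sketch of the construction described above (the record derives a staggered-grid operator, which is not reproduced here): the impulse response of the ideal band-limited first-derivative operator is tapered by a window function, here either a Gaussian or a Hann ("Hanning") window with an illustrative width, and applied by convolution.

        import numpy as np

        def cd_first_derivative(M, dx, window="gauss", alpha=3.0):
            """Tapered convolutional differentiator (CD): impulse response of the ideal
            band-limited first-derivative operator, truncated by a window function.
            Collocated-grid version for illustration only."""
            n = np.arange(-M, M + 1)
            h = np.zeros(2 * M + 1)
            nz = n != 0
            h[nz] = (-1.0) ** n[nz] / (n[nz] * dx)       # ideal band-limited differentiator
            if window == "gauss":
                w = np.exp(-(alpha * n / M) ** 2)        # Gaussian taper (alpha sets the width)
            else:
                w = 0.5 * (1.0 + np.cos(np.pi * n / M))  # Hann ("Hanning") taper
            return h * w

        M, N = 8, 64
        dx = 2.0 * np.pi / N
        x = np.arange(N) * dx
        f = np.sin(x)
        d = cd_first_derivative(M, dx)

        # Apply by circular convolution (sin is periodic on this grid) and compare with cos(x)
        df = np.array([sum(d[M + k] * f[(i - k) % N] for k in range(-M, M + 1)) for i in range(N)])
        print("max error vs cos(x):", np.abs(df - np.cos(x)).max())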

  20. Identification and estimation of non-Gaussian structural vector autoregressions

    DEFF Research Database (Denmark)

    Lanne, Markku; Meitz, Mika; Saikkonen, Pentti

    A structural vector autoregression (SVAR) with non-Gaussian components is, without any additional restrictions, identified and leads to (essentially) unique impulse responses. We also introduce an identification scheme under which the maximum likelihood estimator of the non-Gaussian SVAR model is consistent and asymptotically normally distributed. As a consequence, additional economic identifying restrictions can be tested. In an empirical application, we find a negative impact of a contractionary monetary policy shock on financial markets, and clearly reject the commonly employed recursive identifying restrictions.

  1. Semantic segmentation of bioimages using convolutional neural networks

    CSIR Research Space (South Africa)

    Wiehman, S

    2016-07-01

    Convolutional neural networks have shown great promise in both general image segmentation problems as well as bioimage segmentation. In this paper, the application of different convolutional network architectures is explored on the C. elegans live...

  2. One weird trick for parallelizing convolutional neural networks

    OpenAIRE

    Krizhevsky, Alex

    2014-01-01

    I present a new way to parallelize the training of convolutional neural networks across multiple GPUs. The method scales significantly better than all alternatives when applied to modern convolutional neural networks.

  3. Human Face Recognition Using Convolutional Neural Networks

    Directory of Open Access Journals (Sweden)

    Răzvan-Daniel Albu

    2009-10-01

    In this paper, I present a novel hybrid face recognition approach based on a convolutional neural architecture, designed to robustly detect highly variable face patterns. The convolutional network extracts successively larger features in a hierarchical set of layers. The weights of the trained neural networks are used to create kernel windows for feature extraction in a 3-stage algorithm. I present experimental results illustrating the efficiency of the proposed approach. I use a database of 796 images of 159 individuals from Reims University which contains quite a high degree of variability in expression, pose, and facial details.

  4. CMOS Compressed Imaging by Random Convolution

    OpenAIRE

    Jacques, Laurent; Vandergheynst, Pierre; Bibet, Alexandre; Majidzadeh, Vahid; Schmid, Alexandre; Leblebici, Yusuf

    2009-01-01

    We present a CMOS imager with built-in capability to perform compressed sensing. The adopted sensing strategy is the random convolution due to J. Romberg. It is achieved by a shift register set in a pseudo-random configuration. It acts as a convolutive filter on the imager focal plane, the current issued from each CMOS pixel undergoing a pseudo-random redirection controlled by each component of the filter sequence. A pseudo-random triggering of the ADC reading is finally applied to comp...
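
    A software analogue of the random-convolution measurement can be sketched as follows, with a random ±1 filter sequence and FFT-based circular convolution standing in for the shift-register hardware; sizes, seeds and names are illustrative.

      import numpy as np

      rng = np.random.default_rng(0)

      def random_convolution_measure(x, m):
          # Circularly convolve the signal with a pseudo-random +/-1 sequence
          # (via FFT), then keep m pseudo-randomly chosen samples.
          n = x.size
          h = rng.choice([-1.0, 1.0], size=n)                 # pseudo-random filter sequence
          y_full = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))
          keep = rng.choice(n, size=m, replace=False)         # pseudo-random subsampling
          return y_full[keep], h, keep

      # 64 compressive measurements of a length-256 sparse signal.
      x = np.zeros(256)
      x[[10, 70, 200]] = [1.0, -0.5, 2.0]
      y, h, keep = random_convolution_measure(x, m=64)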

  5. Feedback equivalence of convolutional codes over finite rings

    Directory of Open Access Journals (Sweden)

    DeCastro-García Noemí

    2017-12-01

    The approach to convolutional codes from the linear-systems point of view provides effective tools for constructing convolutional codes with adequate properties for many applications. In this work, we generalize feedback equivalence between families of convolutional codes and linear systems over certain rings, and we show that every locally Brunovsky linear system may be considered as a representation of a code under feedback convolutional equivalence.

  6. An improved multi-domain convolution tracking algorithm

    Science.gov (United States)

    Sun, Xin; Wang, Haiying; Zeng, Yingsen

    2018-04-01

    Along with the wide application of deep learning in the field of computer vision, deep learning has become a mainstream direction in the field of object tracking. The tracking algorithm in this paper is based on an improved multi-domain convolutional neural network, which is pre-trained on the VOT video set with a multi-domain training strategy. During online tracking, the network evaluates candidate targets sampled with a Gaussian distribution from the vicinity of the target predicted in the previous frame, and the candidate with the highest score is taken as the predicted target of the current frame. A bounding-box regression model is introduced to bring the predicted target closer to the ground-truth box. A grouping-update strategy extracts and selects useful update samples in each frame, which effectively prevents over-fitting and adapts to changes in both the target and the environment. To improve the speed of the algorithm while maintaining performance, the number of candidate targets is adjusted dynamically by a self-adaptive parameter strategy. Finally, the algorithm is tested on the OTB set and compared with other high-performance tracking algorithms; the success-rate and precision plots illustrate the outstanding performance of the proposed tracker.
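
    The Gaussian candidate-sampling step can be pictured with the simplified sketch below; the box parameterization, the sigmas and the scoring hook are illustrative stand-ins, not the network or settings used in the paper.

      import numpy as np

      rng = np.random.default_rng(1)

      def sample_candidates(prev_box, n=256, trans_sigma=5.0, scale_sigma=0.05):
          # Gaussian perturbations of position and (log-)scale around the
          # previous frame's target box (x, y, w, h).
          x, y, w, h = prev_box
          dx = rng.normal(0.0, trans_sigma, n)
          dy = rng.normal(0.0, trans_sigma, n)
          ds = np.exp(rng.normal(0.0, scale_sigma, n))
          return np.stack([x + dx, y + dy, w * ds, h * ds], axis=1)

      def pick_target(candidates, score_fn):
          # The candidate with the highest score becomes this frame's target.
          scores = np.array([score_fn(c) for c in candidates])
          return candidates[np.argmax(scores)], scores.max()

      boxes = sample_candidates((50.0, 80.0, 32.0, 32.0))
      best, best_score = pick_target(boxes, lambda b: -abs(b[0] - 50.0))  # dummy scorer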

  7. Discrete convolution-operators and radioactive disintegration. [Numerical solution

    Energy Technology Data Exchange (ETDEWEB)

    Kalla, S.L.; Valentinuzzi, M.E. [Universidad Nacional de Tucuman (Argentina). Facultad de Ciencias Exactas y Tecnologia]

    1975-08-01

    The basic concepts of discrete convolution and discrete convolution operators are briefly described. Then, using the discrete convolution operators, the differential equations associated with the process of radioactive disintegration are solved numerically. The importance of the method for the numerical solution of differential and integral equations is emphasized.
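
    As a concrete illustration of solving a disintegration equation by discrete convolution, the sketch below treats a toy two-member decay chain with arbitrary decay constants (not an example from the paper): the daughter population is the discrete convolution of the parent decay rate with the daughter's exponential decay kernel.

      import numpy as np

      # Toy two-member decay chain solved by discrete convolution.
      dt = 0.01
      t = np.arange(0.0, 50.0, dt)
      lam1, lam2 = 0.2, 0.05
      N1 = np.exp(-lam1 * t)                              # parent population
      source = lam1 * N1                                  # parent decay rate feeding the daughter
      kernel = np.exp(-lam2 * t)                          # daughter decay kernel
      N2 = np.convolve(source, kernel)[:t.size] * dt      # discrete convolution, O(dt) accurate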

  8. On the Shaker Simulation of Wind-Induced Non-Gaussian Random Vibration

    Directory of Open Access Journals (Sweden)

    Fei Xu

    2016-01-01

    Gaussian signal is produced by ordinary random vibration controllers to test the products in the laboratory, while the field data is usually non-Gaussian. Two methodologies are presented in this paper for shaker simulation of wind-induced non-Gaussian vibration. The first methodology synthesizes the non-Gaussian signal offline and replicates it on the shaker in the Time Waveform Replication (TWR) mode. A new synthesis method is used to model the non-Gaussian signal as a Gaussian signal multiplied by an amplitude modulation function (AMF). A case study is presented to show that the synthesized non-Gaussian signal has the same power spectral density (PSD), probability density function (PDF), and loading cycle distribution (LCD) as the field data. The second methodology derives a damage equivalent Gaussian signal from the non-Gaussian signal based on the fatigue damage spectrum (FDS) and the extreme response spectrum (ERS) and reproduces it on the shaker in the closed-loop frequency domain control mode. The PSD level and the duration time of the derived Gaussian signal can be manipulated for accelerated testing purpose. A case study is presented to show that the derived PSD matches the damage potential of the non-Gaussian environment for both fatigue and peak response.
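
    A minimal sketch of the synthesis step of the first methodology, assuming a flat-PSD Gaussian carrier and a simple periodic amplitude modulation function; all parameter names and values are illustrative, not those of the paper.

      import numpy as np

      rng = np.random.default_rng(2)

      def synthesize_non_gaussian(n, fs, psd_level, amf_period, amf_depth):
          # White Gaussian carrier multiplied by a slowly varying amplitude
          # modulation function (AMF) to raise the kurtosis.
          g = rng.normal(scale=np.sqrt(psd_level * fs / 2.0), size=n)
          t = np.arange(n) / fs
          amf = 1.0 + amf_depth * np.abs(np.sin(2.0 * np.pi * t / amf_period))
          return g * amf

      x = synthesize_non_gaussian(n=2**16, fs=2048.0, psd_level=1e-3,
                                  amf_period=5.0, amf_depth=3.0)
      kurtosis = np.mean((x - x.mean())**4) / np.var(x)**2   # > 3 signals non-Gaussianity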

  9. Convolute laminations — a theoretical analysis: example of a Pennsylvanian sandstone

    Science.gov (United States)

    Visher, Glenn S.; Cunningham, Russ D.

    1981-03-01

    Data from an outcropping laminated interval were collected and analyzed to test the applicability of a theoretical model describing instability of layered systems. Rayleigh-Taylor wave perturbations result at the interface between fluids of contrasting density, viscosity, and thickness. In the special case where reverse density and viscosity interlaminations are developed, the deformation response produces a single wave with predictable amplitudes, wavelengths, and amplification rates. Physical measurements from both the outcropping section and modern sediments suggest the usefulness of the model for the interpretation of convolute laminations. Internal characteristics of the stratigraphic interval, and the developmental sequence of convoluted beds, are used to document the developmental history of these structures.

  10. Dynamic heterogeneity and conditional statistics of non-Gaussian temperature fluctuations in turbulent thermal convection

    Science.gov (United States)

    He, Xiaozhou; Wang, Yin; Tong, Penger

    2018-05-01

    Non-Gaussian fluctuations with an exponential tail in their probability density function (PDF) are often observed in nonequilibrium steady states (NESSs), and one does not understand why they appear so often. Turbulent Rayleigh-Bénard convection (RBC) is an example of such a NESS, in which the measured PDF P(δT) of temperature fluctuations δT in the central region of the flow has a long exponential tail. Here we show that because of the dynamic heterogeneity in RBC, the exponential PDF is generated by a convolution of a set of dynamic modes conditioned on a constant local thermal dissipation rate ε. The conditional PDF G(δT|ε) of δT under a constant ε is found to be of Gaussian form and its variance σ_T² for different values of ε follows an exponential distribution. The convolution of the two distribution functions gives rise to the exponential PDF P(δT). This work thus provides a physical mechanism of the observed exponential distribution of δT in RBC and also sheds light on the origin of non-Gaussian fluctuations in other NESSs.
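
    The mechanism described above can be illustrated numerically: if the conditional distribution of δT at fixed ε is Gaussian and the conditional variance is exponentially distributed, the marginal PDF acquires exponential tails (such a Gaussian scale mixture is in fact a Laplace distribution). A small Monte Carlo sketch with arbitrary parameters:

      import numpy as np

      rng = np.random.default_rng(3)

      # Draw the conditional variance from an exponential distribution, then draw
      # the fluctuation from a zero-mean Gaussian with that variance.
      n = 200_000
      var = rng.exponential(scale=1.0, size=n)        # sigma_T^2 ~ exponential
      delta_T = rng.normal(scale=np.sqrt(var))        # delta_T | sigma_T^2 ~ Gaussian
      hist, edges = np.histogram(delta_T, bins=200, density=True)
      centres = 0.5 * (edges[1:] + edges[:-1])
      # log(hist) decays roughly linearly in |delta_T|, i.e. an exponential tail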

  11. A brain MRI bias field correction method created in the Gaussian multi-scale space

    Science.gov (United States)

    Chen, Mingsheng; Qin, Mingxin

    2017-07-01

    A pre-processing step is needed to correct for the bias field signal before submitting corrupted MR images to image-processing algorithms. This study presents a new bias field correction method. The method creates a Gaussian multi-scale space by the convolution of the inhomogeneous MR image with a two-dimensional Gaussian function. In the multi-scale Gaussian space, the method retrieves the image details from the difference between the original image and the convolved image. Then, it obtains an image whose inhomogeneity is eliminated by the weighted sum of the image details in each layer of the space. Next, the bias field-corrected MR image is retrieved after a gamma correction, which enhances the contrast and brightness of the inhomogeneity-eliminated MR image. We have tested the approach on T1 MRI and T2 MRI with varying bias field levels and have achieved satisfactory results. Comparison experiments with popular software have demonstrated superior performance of the proposed method in terms of quantitative indices, especially an improvement in subsequent image segmentation.
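
    A rough sketch of the multi-scale idea, assuming isotropic Gaussian smoothing from scipy and placeholder scales, weights and gamma value rather than the published parameterization.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def multiscale_bias_correct(img, sigmas=(2, 4, 8, 16), weights=None, gamma=0.8):
          # Per-scale detail = original image minus its Gaussian-smoothed version;
          # the weighted sum of details suppresses the slowly varying bias field,
          # and a gamma correction restores contrast and brightness.
          img = img.astype(float)
          if weights is None:
              weights = np.ones(len(sigmas)) / len(sigmas)
          details = [img - gaussian_filter(img, s) for s in sigmas]
          corrected = sum(w * d for w, d in zip(weights, details))
          corrected -= corrected.min()
          corrected /= corrected.max() + 1e-12            # normalise to [0, 1]
          return corrected ** gamma                       # gamma correction

      example = multiscale_bias_correct(np.random.default_rng(0).random((128, 128)))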

  12. Deformable image registration using convolutional neural networks

    NARCIS (Netherlands)

    Eppenhof, Koen A.J.; Lafarge, Maxime W.; Moeskops, Pim; Veta, Mitko; Pluim, Josien P.W.

    2018-01-01

    Deformable image registration can be time-consuming and often needs extensive parameterization to perform well on a specific application. We present a step towards a registration framework based on a three-dimensional convolutional neural network. The network directly learns transformations between

  13. Epileptiform spike detection via convolutional neural networks

    DEFF Research Database (Denmark)

    Johansen, Alexander Rosenberg; Jin, Jing; Maszczyk, Tomasz

    2016-01-01

    The EEG of epileptic patients often contains sharp waveforms called "spikes", occurring between seizures. Detecting such spikes is crucial for diagnosing epilepsy. In this paper, we develop a convolutional neural network (CNN) for detecting spikes in EEG of epileptic patients in an automated...

  14. Convolutional Neural Networks for SAR Image Segmentation

    DEFF Research Database (Denmark)

    Malmgren-Hansen, David; Nobel-Jørgensen, Morten

    2015-01-01

    Segmentation of Synthetic Aperture Radar (SAR) images has several uses, but it is a difficult task due to a number of properties related to SAR images. In this article we show how Convolutional Neural Networks (CNNs) can easily be trained for SAR image segmentation with good results. Besides...

  15. Convolutional Neural Networks - Generalizability and Interpretations

    DEFF Research Database (Denmark)

    Malmgren-Hansen, David

    from data despite it being limited in amount or context representation. Within Machine Learning this thesis focuses on Convolutional Neural Networks for Computer Vision. The research aims to answer how to explore a model's generalizability to the whole population of data samples and how to interpret...

  16. Towards dropout training for convolutional neural networks.

    Science.gov (United States)

    Wu, Haibing; Gu, Xiaodong

    2015-11-01

    Recently, dropout has seen increasing use in deep learning. For deep convolutional neural networks, dropout is known to work well in fully-connected layers. However, its effect in convolutional and pooling layers is still not clear. This paper demonstrates that max-pooling dropout is equivalent to randomly picking activation based on a multinomial distribution at training time. In light of this insight, we advocate employing our proposed probabilistic weighted pooling, instead of commonly used max-pooling, to act as model averaging at test time. Empirical evidence validates the superiority of probabilistic weighted pooling. We also empirically show that the effect of convolutional dropout is not trivial, despite the dramatically reduced possibility of over-fitting due to the convolutional architecture. Elaborately designing dropout training simultaneously in max-pooling and fully-connected layers, we achieve state-of-the-art performance on MNIST, and very competitive results on CIFAR-10 and CIFAR-100, relative to other approaches without data augmentation. Finally, we compare max-pooling dropout and stochastic pooling, both of which introduce stochasticity based on multinomial distributions at pooling stage.
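
    The test-time probabilistic weighted pooling described above can be sketched for a single pooling region as follows; the retention probability and the example activations are arbitrary illustrative values.

      import numpy as np

      def prob_weighted_pool(region, retain_p=0.5):
          # Sort activations in descending order; the j-th largest is selected by
          # max-pooling dropout only if the j-1 larger units were all dropped and
          # it was retained, i.e. with probability p * (1 - p)**(j - 1).  The
          # pooled output is the corresponding expectation (model averaging).
          a = np.sort(np.asarray(region, dtype=float))[::-1]
          j = np.arange(a.size)
          probs = retain_p * (1.0 - retain_p) ** j
          return np.sum(probs * a)

      print(prob_weighted_pool([0.2, 1.5, 0.7, 0.9], retain_p=0.5))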

  17. A locality aware convolutional neural networks accelerator

    NARCIS (Netherlands)

    Shi, R.; Xu, Z.; Sun, Z.; Peemen, M.C.J.; Li, A.; Corporaal, H.; Wu, D.

    2015-01-01

    The advantages of Convolutional Neural Networks (CNNs) with respect to traditional methods for visual pattern recognition have changed the field of machine vision. The main issue that hinders broad adoption of this technique is the massive computing workload in CNN that prevents real-time

  18. Information geometry of Gaussian channels

    International Nuclear Information System (INIS)

    Monras, Alex; Illuminati, Fabrizio

    2010-01-01

    We define a local Riemannian metric tensor in the manifold of Gaussian channels and the distance that it induces. We adopt an information-geometric approach and define a metric derived from the Bures-Fisher metric for quantum states. The resulting metric inherits several desirable properties from the Bures-Fisher metric and is operationally motivated by distinguishability considerations: It serves as an upper bound to the attainable quantum Fisher information for the channel parameters using Gaussian states, under generic constraints on the physically available resources. Our approach naturally includes the use of entangled Gaussian probe states. We prove that the metric enjoys some desirable properties like stability and covariance. As a by-product, we also obtain some general results in Gaussian channel estimation that are the continuous-variable analogs of previously known results in finite dimensions. We prove that optimal probe states are always pure and bounded in the number of ancillary modes, even in the presence of constraints on the reduced state input in the channel. This has experimental and computational implications. It limits the complexity of optimal experimental setups for channel estimation and reduces the computational requirements for the evaluation of the metric: Indeed, we construct a converging algorithm for its computation. We provide explicit formulas for computing the multiparametric quantum Fisher information for dissipative channels probed with arbitrary Gaussian states and provide the optimal observables for the estimation of the channel parameters (e.g., bath couplings, squeezing, and temperature).

  19. Frozen Gaussian approximation for 3D seismic tomography

    Science.gov (United States)

    Chai, Lihui; Tong, Ping; Yang, Xu

    2018-05-01

    Three-dimensional (3D) wave-equation-based seismic tomography is computationally challenging at large scales and in the high-frequency regime. In this paper, we apply the frozen Gaussian approximation (FGA) method to compute 3D sensitivity kernels and high-frequency seismic tomography. Rather than the standard ray theory used in seismic inversion (e.g. Kirchhoff migration and Gaussian beam migration), FGA is used to compute the 3D high-frequency sensitivity kernels for travel-time or full waveform inversions. Specifically, we reformulate the equations of the forward and adjoint wavefields for convenience in applying FGA, and with this reformulation one can efficiently compute the Green's functions whose convolutions with the source time function produce the wavefields needed for the construction of 3D kernels. Moreover, a fast summation method is proposed based on a local fast Fourier transform, which greatly improves the speed of reconstruction as the last step of the FGA algorithm. We apply FGA to both travel-time adjoint tomography and full waveform inversion (FWI) on synthetic crosswell seismic data with dominant frequencies as high as those of real crosswell data, and confirm again that FWI requires a more sophisticated initial velocity model for convergence than travel-time adjoint tomography. We also numerically test the accuracy of applying FGA to local earthquake tomography. This study paves the way to directly applying wave-equation-based seismic tomography methods to real data around their dominant frequencies.

  20. Gaussian entanglement distribution via satellite

    Science.gov (United States)

    Hosseinidehaj, Nedasadat; Malaney, Robert

    2015-02-01

    In this work we analyze three quantum communication schemes for the generation of Gaussian entanglement between two ground stations. Communication occurs via a satellite over two independent atmospheric fading channels dominated by turbulence-induced beam wander. In our first scheme, the engineering complexity remains largely on the ground transceivers, with the satellite acting simply as a reflector. Although the channel state information of the two atmospheric channels remains unknown in this scheme, the Gaussian entanglement generation between the ground stations can still be determined. On the ground, distillation and Gaussification procedures can be applied, leading to a refined Gaussian entanglement generation rate between the ground stations. We compare the rates produced by this first scheme with two competing schemes in which quantum complexity is added to the satellite, thereby illustrating the tradeoff between space-based engineering complexity and the rate of ground-station entanglement generation.

  1. Tachyon mediated non-Gaussianity

    International Nuclear Information System (INIS)

    Dutta, Bhaskar; Leblond, Louis; Kumar, Jason

    2008-01-01

    We describe a general scenario where primordial non-Gaussian curvature perturbations are generated in models with extra scalar fields. The extra scalars communicate to the inflaton sector mainly through the tachyonic (waterfall) field condensing at the end of hybrid inflation. These models can yield significant non-Gaussianity of the local shape, and both signs of the bispectrum can be obtained. These models have cosmic strings and a nearly flat power spectrum, which together have been recently shown to be a good fit to WMAP data. We illustrate with a model of inflation inspired from intersecting brane models.

  2. The Multivariate Gaussian Probability Distribution

    DEFF Research Database (Denmark)

    Ahrendt, Peter

    2005-01-01

    This technical report intends to gather information about the multivariate gaussian distribution, that was previously not (at least to my knowledge) to be found in one place and written as a reference manual. Additionally, some useful tips and tricks are collected that may be useful in practical ...

  3. On Gaussian conditional independence structures

    Czech Academy of Sciences Publication Activity Database

    Lněnička, Radim; Matúš, František

    2007-01-01

    Vol. 43, No. 3 (2007), pp. 327-342 ISSN 0023-5954 R&D Projects: GA AV ČR IAA100750603 Institutional research plan: CEZ:AV0Z10750506 Keywords: multivariate Gaussian distribution * positive definite matrices * determinants * gaussoids * covariance selection models * Markov perfectness Subject RIV: BA - General Mathematics Impact factor: 0.552, year: 2007

  4. Gaussian processes for machine learning.

    Science.gov (United States)

    Seeger, Matthias

    2004-04-01

    Gaussian processes (GPs) are natural generalisations of multivariate Gaussian random variables to infinite (countably infinite or continuous) index sets. GPs have been applied in a large number of fields to a diverse range of ends, and very many deep theoretical analyses of various properties are available. This paper gives an introduction to Gaussian processes on a fairly elementary level with special emphasis on characteristics relevant in machine learning. It draws explicit connections to branches such as spline smoothing models and support vector machines in which similar ideas have been investigated. Gaussian process models are routinely used to solve hard machine learning problems. They are attractive because of their flexible non-parametric nature and computational simplicity. Treated within a Bayesian framework, very powerful statistical methods can be implemented which offer valid estimates of uncertainties in our predictions and generic model selection procedures cast as nonlinear optimization problems. Their main drawback of heavy computational scaling has recently been alleviated by the introduction of generic sparse approximations [13, 78, 31]. The mathematical literature on GPs is large and often uses deep concepts which are not required to fully understand most machine learning applications. In this tutorial paper, we aim to present characteristics of GPs relevant to machine learning and to show precise connections to other "kernel machines" popular in the community. Our focus is on a simple presentation, but references to more detailed sources are provided.
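
    A textbook Gaussian-process regression example in the spirit of the tutorial, using a squared-exponential kernel and a Cholesky solve; the hyperparameters, noise level and toy data are arbitrary choices.

      import numpy as np

      def rbf_kernel(a, b, length=1.0, amp=1.0):
          # Squared-exponential covariance between two sets of 1-D inputs.
          d2 = (a[:, None] - b[None, :]) ** 2
          return amp * np.exp(-0.5 * d2 / length**2)

      def gp_posterior(x_train, y_train, x_test, noise=1e-2):
          # Standard GP regression: posterior mean and variance at the test inputs.
          K = rbf_kernel(x_train, x_train) + noise * np.eye(x_train.size)
          Ks = rbf_kernel(x_train, x_test)
          Kss = rbf_kernel(x_test, x_test)
          L = np.linalg.cholesky(K)
          alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
          mean = Ks.T @ alpha
          v = np.linalg.solve(L, Ks)
          var = np.diag(Kss) - np.sum(v**2, axis=0)
          return mean, var

      x = np.linspace(0.0, 5.0, 8)
      mu, var = gp_posterior(x, np.sin(x), np.linspace(0.0, 5.0, 100))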

  5. Gas Classification Using Deep Convolutional Neural Networks

    Science.gov (United States)

    Peng, Pai; Zhao, Xiaojin; Pan, Xiaofang; Ye, Wenbin

    2018-01-01

    In this work, we propose a novel Deep Convolutional Neural Network (DCNN) tailored for gas classification. Inspired by the great success of DCNNs in the field of computer vision, we designed a DCNN with up to 38 layers. In general, the proposed gas neural network, named GasNet, consists of: six convolutional blocks, each consisting of six layers; a pooling layer; and a fully-connected layer. Together, these various layers make up a powerful deep model for gas classification. Experimental results show that the proposed DCNN method is an effective technique for classifying electronic nose data. We also demonstrate that the DCNN method can provide higher classification accuracy than comparable Support Vector Machine (SVM) methods and Multiple Layer Perceptron (MLP). PMID:29316723

  6. A convolutional neural network neutrino event classifier

    International Nuclear Information System (INIS)

    Aurisano, A.; Sousa, A.; Radovic, A.; Vahle, P.; Rocco, D.; Pawloski, G.; Himmel, A.; Niner, E.; Messier, M.D.; Psihas, F.

    2016-01-01

    Convolutional neural networks (CNNs) have been widely applied in the computer vision community to solve complex problems in image recognition and analysis. We describe an application of the CNN technology to the problem of identifying particle interactions in sampling calorimeters used commonly in high energy physics and high energy neutrino physics in particular. Following a discussion of the core concepts of CNNs and recent innovations in CNN architectures related to the field of deep learning, we outline a specific application to the NOvA neutrino detector. This algorithm, CVN (Convolutional Visual Network) identifies neutrino interactions based on their topology without the need for detailed reconstruction and outperforms algorithms currently in use by the NOvA collaboration.

  7. Gas Classification Using Deep Convolutional Neural Networks.

    Science.gov (United States)

    Peng, Pai; Zhao, Xiaojin; Pan, Xiaofang; Ye, Wenbin

    2018-01-08

    In this work, we propose a novel Deep Convolutional Neural Network (DCNN) tailored for gas classification. Inspired by the great success of DCNNs in the field of computer vision, we designed a DCNN with up to 38 layers. In general, the proposed gas neural network, named GasNet, consists of: six convolutional blocks, each consisting of six layers; a pooling layer; and a fully-connected layer. Together, these various layers make up a powerful deep model for gas classification. Experimental results show that the proposed DCNN method is an effective technique for classifying electronic nose data. We also demonstrate that the DCNN method can provide higher classification accuracy than comparable Support Vector Machine (SVM) methods and Multiple Layer Perceptron (MLP).

  8. Quasi-cyclic unit memory convolutional codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Paaske, Erik; Ballan, Mark

    1990-01-01

    Unit memory convolutional codes with generator matrices, which are composed of circulant submatrices, are introduced. This structure facilitates the analysis of, and the efficient search for, good codes. Equivalences among such codes and some of the basic structural properties are discussed. In particular, catastrophic encoders and minimal encoders are characterized and dual codes are treated. Further, various distance measures are discussed, and a number of good codes, some of which result from efficient computer search and some of which result from known block codes, are presented...

  9. Digital Tomosynthesis System Geometry Analysis Using Convolution-Based Blur-and-Add (BAA) Model.

    Science.gov (United States)

    Wu, Meng; Yoon, Sungwon; Solomon, Edward G; Star-Lack, Josh; Pelc, Norbert; Fahrig, Rebecca

    2016-01-01

    Digital tomosynthesis is a three-dimensional imaging technique with a lower radiation dose than computed tomography (CT). Due to the missing data in tomosynthesis systems, out-of-plane structures in the depth direction cannot be completely removed by the reconstruction algorithms. In this work, we analyzed the impulse responses of common tomosynthesis systems on a plane-to-plane basis and propose a fast and accurate convolution-based blur-and-add (BAA) model to simulate the backprojected images. In addition, the analysis formalism describing the impulse response of out-of-plane structures can be generalized to both rotating and parallel gantries. We implemented a ray tracing forward projection and backprojection (ray-based model) algorithm and the convolution-based BAA model to simulate the shift-and-add (backproject) tomosynthesis reconstructions. The convolution-based BAA model with proper geometry distortion correction provides reasonably accurate estimates of the tomosynthesis reconstruction. A numerical comparison indicates that the simulated images using the two models differ by less than 6% in terms of the root-mean-squared error. This convolution-based BAA model can be used in efficient system geometry analysis, reconstruction algorithm design, out-of-plane artifacts suppression, and CT-tomosynthesis registration.
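
    As a rough illustration of the blur-and-add idea, the toy sketch below builds each reconstructed plane as the sum of all input planes blurred by a kernel whose width grows with plane separation; the Gaussian kernel, the linear width law, and all names and values are placeholders for the true plane-to-plane impulse responses analyzed in the paper.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def blur_and_add(planes, z_spacing=1.0, blur_per_plane=0.5):
          # Each reconstructed plane = sum of every input plane blurred by a kernel
          # whose width grows with the plane separation (in-focus plane unblurred).
          recon = []
          for i in range(len(planes)):
              acc = np.zeros_like(planes[0], dtype=float)
              for j, plane in enumerate(planes):
                  sigma = blur_per_plane * abs(i - j) * z_spacing
                  if sigma > 0:
                      acc += gaussian_filter(plane.astype(float), sigma)
                  else:
                      acc += plane.astype(float)
              recon.append(acc)
          return recon

      phantom = [np.random.default_rng(k).random((64, 64)) for k in range(5)]
      slices = blur_and_add(phantom)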

  10. Applying Gradient Descent in Convolutional Neural Networks

    Science.gov (United States)

    Cui, Nan

    2018-04-01

    With the development of integrated circuits and computer science, people care more and more about solving practical issues via information technologies. Along with that, a new subject called Artificial Intelligence (AI) has emerged. One popular research interest within AI is recognition algorithms. In this paper, one of the most common algorithms, the Convolutional Neural Network (CNN), is introduced for image recognition. Understanding its theory and structure is of great significance for every scholar interested in this field. A convolutional neural network is an artificial neural network that combines the mathematical operation of convolution with a neural network. The hierarchical structure of a CNN gives it reliable computing speed and a reasonable error rate. The most significant characteristics of CNNs are feature extraction, weight sharing and dimension reduction. Combined with the back-propagation (BP) mechanism and the gradient descent (GD) method, CNNs have the ability to learn on their own and to perform deep learning. Basically, BP provides backward feedback for enhancing reliability, and GD is used for the self-training process. This paper mainly discusses the CNN and the related BP and GD algorithms, including the basic structure and function of the CNN, details of each layer, the principles and features of BP and GD, and some examples in practice, with a summary at the end.
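
    To make the gradient-descent step concrete, here is a minimal sketch for a single linear neuron trained with a mean-squared-error loss; the data, learning rate and iteration count are arbitrary, and a CNN applies the same kind of update to every layer's weights, with the gradients supplied by back-propagation.

      import numpy as np

      rng = np.random.default_rng(4)

      # Gradient descent for a single linear neuron with a mean-squared-error loss:
      # repeatedly step the weights against the gradient of the loss.
      X = rng.normal(size=(64, 5))
      true_w = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
      y = X @ true_w
      w = np.zeros(5)
      lr = 0.1
      for _ in range(200):
          grad = 2.0 / len(X) * X.T @ (X @ w - y)     # d(MSE)/dw (one-layer "back-propagation")
          w -= lr * grad                              # gradient-descent update
      # w now approximates true_w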

  11. Phylogenetic convolutional neural networks in metagenomics.

    Science.gov (United States)

    Fioravanti, Diego; Giarratano, Ylenia; Maggio, Valerio; Agostinelli, Claudio; Chierici, Marco; Jurman, Giuseppe; Furlanello, Cesare

    2018-03-08

    Convolutional Neural Networks can be effectively used only when data are endowed with an intrinsic concept of neighbourhood in the input space, as is the case of pixels in images. We introduce here Ph-CNN, a novel deep learning architecture for the classification of metagenomics data based on the Convolutional Neural Networks, with the patristic distance defined on the phylogenetic tree being used as the proximity measure. The patristic distance between variables is used together with a sparsified version of MultiDimensional Scaling to embed the phylogenetic tree in a Euclidean space. Ph-CNN is tested with a domain adaptation approach on synthetic data and on a metagenomics collection of gut microbiota of 38 healthy subjects and 222 Inflammatory Bowel Disease patients, divided in 6 subclasses. Classification performance is promising when compared to classical algorithms like Support Vector Machines and Random Forest and a baseline fully connected neural network, e.g. the Multi-Layer Perceptron. Ph-CNN represents a novel deep learning approach for the classification of metagenomics data. Operatively, the algorithm has been implemented as a custom Keras layer taking care of passing to the following convolutional layer not only the data but also the ranked list of neighbourhood of each sample, thus mimicking the case of image data, transparently to the user.

  12. Image quality assessment using deep convolutional networks

    Science.gov (United States)

    Li, Yezhou; Ye, Xiang; Li, Yong

    2017-12-01

    This paper proposes a method of accurately assessing image quality without a reference image by using a deep convolutional neural network. Existing training based methods usually utilize a compact set of linear filters for learning features of images captured by different sensors to assess their quality. These methods may not be able to learn the semantic features that are intimately related with the features used in human subject assessment. Observing this drawback, this work proposes training a deep convolutional neural network (CNN) with labelled images for image quality assessment. The ReLU in the CNN allows non-linear transformations for extracting high-level image features, providing a more reliable assessment of image quality than linear filters. To enable the neural network to take images of any arbitrary size as input, the spatial pyramid pooling (SPP) is introduced connecting the top convolutional layer and the fully-connected layer. In addition, the SPP makes the CNN robust to object deformations to a certain extent. The proposed method taking an image as input carries out an end-to-end learning process, and outputs the quality of the image. It is tested on public datasets. Experimental results show that it outperforms existing methods by a large margin and can accurately assess the image quality on images taken by different sensors of varying sizes.

  13. Handling of computational in vitro/in vivo correlation problems by Microsoft Excel: III. Convolution and deconvolution.

    Science.gov (United States)

    Langenbucher, Frieder

    2003-11-01

    Convolution and deconvolution are the classical in-vitro-in-vivo correlation tools to describe the relationship between input and weighting/response in a linear system, where the input represents the drug release in vitro and the weighting/response any body response in vivo. While functional treatment, e.g. in terms of a polyexponential or Weibull distribution, is more appropriate for a general survey or prediction, numerical algorithms are useful for treating actual experimental data. Deconvolution is not considered an algorithm of its own, but the inversion of a corresponding convolution. MS Excel is shown to be a useful tool for all these applications.
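
    The convolution step can be reproduced numerically in a few lines; the zero-order input and mono-exponential weighting function below are illustrative stand-ins, and the same discrete convolution sum is what a spreadsheet implementation tabulates column by column.

      import numpy as np

      def convolve_response(input_rate, weighting, dt):
          # Discrete convolution of the in-vitro input rate with the weighting
          # (unit-impulse response) function; dt scales the Riemann sum.
          return np.convolve(input_rate, weighting)[:input_rate.size] * dt

      dt = 0.25                                    # h
      t = np.arange(0.0, 24.0, dt)
      input_rate = np.where(t < 2.0, 0.5, 0.0)     # zero-order release for 2 h
      weighting = np.exp(-0.3 * t)                 # mono-exponential disposition
      response = convolve_response(input_rate, weighting, dt)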

  14. Laguerre Gaussian beam multiplexing through turbulence

    CSIR Research Space (South Africa)

    Trichili, A

    2014-08-17

    We analyze the effect of atmospheric turbulence on the propagation of multiplexed Laguerre Gaussian modes. We present a method to multiplex Laguerre Gaussian modes using digital holograms and decompose the resulting field after encountering a...

  15. Generation of Quasi-Gaussian Pulses Based on Correlation Techniques

    Directory of Open Access Journals (Sweden)

    POHOATA, S.

    2012-02-01

    The Gaussian pulses have been mostly used within communications, where some applications can be emphasized: mobile telephony (GSM), where GMSK signals are used, as well as UWB communications, where short-period pulses based on the Gaussian waveform are generated. Since the Gaussian function is a theoretical concept that cannot be realized physically, it must be approximated by functions that admit physical implementations. New techniques for generating Gaussian pulse responses with good precision are proposed and investigated in this paper. The second- and third-order derivatives of the Gaussian pulse response are accurately generated. The third-order derivative is composed of four individual rectangular pulses of fixed amplitudes, which are easily generated by standard techniques. In order to generate pulses that satisfy the spectral mask requirements, an adequate filter must be applied. This paper presents a comparative analysis based on the relative error and the energy spectra of the proposed pulses.

  16. Analytic matrix elements with shifted correlated Gaussians

    DEFF Research Database (Denmark)

    Fedorov, D. V.

    2017-01-01

    Matrix elements between shifted correlated Gaussians of various potentials with several form-factors are calculated analytically. Analytic matrix elements are of importance for the correlated Gaussian method in quantum few-body physics.

  17. Gaussian statistics for palaeomagnetic vectors

    Science.gov (United States)

    Love, J.J.; Constable, C.G.

    2003-01-01

    With the aim of treating the statistics of palaeomagnetic directions and intensities jointly and consistently, we represent the mean and the variance of palaeomagnetic vectors, at a particular site and of a particular polarity, by a probability density function in a Cartesian three-space of orthogonal magnetic-field components consisting of a single (unimodal) non-zero mean, spherically-symmetrical (isotropic) Gaussian function. For palaeomagnetic data of mixed polarities, we consider a bimodal distribution consisting of a pair of such symmetrical Gaussian functions, with equal, but opposite, means and equal variances. For both the Gaussian and bi-Gaussian distributions, and in the spherical three-space of intensity, inclination, and declination, we obtain analytical expressions for the marginal density functions, the cumulative distributions, and the expected values and variances for each spherical coordinate (including the angle with respect to the axis of symmetry of the distributions). The mathematical expressions for the intensity and off-axis angle are closed-form and especially manageable, with the intensity distribution being Rayleigh-Rician. In the limit of small relative vectorial dispersion, the Gaussian (bi-Gaussian) directional distribution approaches a Fisher (Bingham) distribution and the intensity distribution approaches a normal distribution. In the opposite limit of large relative vectorial dispersion, the directional distributions approach a spherically-uniform distribution and the intensity distribution approaches a Maxwell distribution. We quantify biases in estimating the properties of the vector field resulting from the use of simple arithmetic averages, such as estimates of the intensity or the inclination of the mean vector, or the variances of these quantities. With the statistical framework developed here and using the maximum-likelihood method, which gives unbiased estimates in the limit of large data numbers, we demonstrate how to

  18. Gaussian statistics for palaeomagnetic vectors

    Science.gov (United States)

    Love, J. J.; Constable, C. G.

    2003-03-01

    With the aim of treating the statistics of palaeomagnetic directions and intensities jointly and consistently, we represent the mean and the variance of palaeomagnetic vectors, at a particular site and of a particular polarity, by a probability density function in a Cartesian three-space of orthogonal magnetic-field components consisting of a single (unimodal) non-zero mean, spherically-symmetrical (isotropic) Gaussian function. For palaeomagnetic data of mixed polarities, we consider a bimodal distribution consisting of a pair of such symmetrical Gaussian functions, with equal, but opposite, means and equal variances. For both the Gaussian and bi-Gaussian distributions, and in the spherical three-space of intensity, inclination, and declination, we obtain analytical expressions for the marginal density functions, the cumulative distributions, and the expected values and variances for each spherical coordinate (including the angle with respect to the axis of symmetry of the distributions). The mathematical expressions for the intensity and off-axis angle are closed-form and especially manageable, with the intensity distribution being Rayleigh-Rician. In the limit of small relative vectorial dispersion, the Gaussian (bi-Gaussian) directional distribution approaches a Fisher (Bingham) distribution and the intensity distribution approaches a normal distribution. In the opposite limit of large relative vectorial dispersion, the directional distributions approach a spherically-uniform distribution and the intensity distribution approaches a Maxwell distribution. We quantify biases in estimating the properties of the vector field resulting from the use of simple arithmetic averages, such as estimates of the intensity or the inclination of the mean vector, or the variances of these quantities. With the statistical framework developed here and using the maximum-likelihood method, which gives unbiased estimates in the limit of large data numbers, we demonstrate how to

  19. Reproducing kernel Hilbert spaces of Gaussian priors

    NARCIS (Netherlands)

    Vaart, van der A.W.; Zanten, van J.H.; Clarke, B.; Ghosal, S.

    2008-01-01

    We review definitions and properties of reproducing kernel Hilbert spaces attached to Gaussian variables and processes, with a view to applications in nonparametric Bayesian statistics using Gaussian priors. The rate of contraction of posterior distributions based on Gaussian priors can be described

  20. An Algorithm for the Convolution of Legendre Series

    KAUST Repository

    Hale, Nicholas; Townsend, Alex

    2014-01-01

    An O(N²) algorithm for the convolution of compactly supported Legendre series is described. The algorithm is derived from the convolution theorem for Legendre polynomials and the recurrence relation satisfied by spherical Bessel functions. Combining with previous work yields an O(N²) algorithm for the convolution of Chebyshev series. Numerical results are presented to demonstrate the improved efficiency over the existing algorithm.

  1. The Urbanik generalized convolutions in the non-commutative ...

    Indian Academy of Sciences (India)

    −s ν(dx) < ∞. Now we apply this construction to the Kendall convolution case, starting with the weakly stable measure δ_1. Example 1. Let △ be the Kendall convolution, i.e. the generalized convolution with the probability kernel δ_1 △ δ_a = (1 − a) δ_1 + a π_2 for a ∈ [0, 1], where π_2 is the Pareto distribution with the density π_2(dx) =.

  2. Inflation in random Gaussian landscapes

    Energy Technology Data Exchange (ETDEWEB)

    Masoumi, Ali; Vilenkin, Alexander; Yamada, Masaki, E-mail: ali@cosmos.phy.tufts.edu, E-mail: vilenkin@cosmos.phy.tufts.edu, E-mail: Masaki.Yamada@tufts.edu [Institute of Cosmology, Department of Physics and Astronomy, Tufts University, Medford, MA 02155 (United States)

    2017-05-01

    We develop analytic and numerical techniques for studying the statistics of slow-roll inflation in random Gaussian landscapes. As an illustration of these techniques, we analyze small-field inflation in a one-dimensional landscape. We calculate the probability distributions for the maximal number of e-folds and for the spectral index of density fluctuations n_s and its running α_s. These distributions have a universal form, insensitive to the correlation function of the Gaussian ensemble. We outline possible extensions of our methods to a large number of fields and to models of large-field inflation. These methods do not suffer from potential inconsistencies inherent in the Brownian motion technique, which has been used in most of the earlier treatments.

  3. On a Generalized Hankel Type Convolution of Generalized Functions

    Indian Academy of Sciences (India)

    Generalized Hankel type transformation; Parseval relation; generalized ... The classical generalized Hankel type convolutions are defined and extended to a class of generalized functions. ... Proceedings – Mathematical Sciences.

  4. Enhanced online convolutional neural networks for object tracking

    Science.gov (United States)

    Zhang, Dengzhuo; Gao, Yun; Zhou, Hao; Li, Tianwen

    2018-04-01

    In recent years, object tracking based on convolutional neural networks has gained more and more attention. The initialization and updating of the convolution filters directly affect the precision of object tracking. In this paper, a novel object tracker based on an enhanced online convolutional neural network without offline training is proposed, which initializes the convolution filters with a k-means++ algorithm and updates them by error back-propagation. Comparative experiments with 7 trackers on 15 challenging sequences showed that our tracker performs better than the others in terms of AUC and precision.

  5. General Galilei Covariant Gaussian Maps

    Science.gov (United States)

    Gasbarri, Giulio; Toroš, Marko; Bassi, Angelo

    2017-09-01

    We characterize general non-Markovian Gaussian maps which are covariant under Galilean transformations. In particular, we consider translational and Galilean covariant maps and show that they reduce to the known Holevo result in the Markovian limit. We apply the results to discuss measures of macroscopicity based on classicalization maps, specifically addressing dissipation, Galilean covariance and non-Markovianity. We further suggest a possible generalization of the macroscopicity measure defined by Nimmrichter and Hornberger [Phys. Rev. Lett. 110, 16 (2013)].

  6. Gaussian Embeddings for Collaborative Filtering

    OpenAIRE

    Dos Santos, Ludovic; Piwowarski, Benjamin; Gallinari, Patrick

    2017-01-01

    Most collaborative filtering systems, such as matrix factorization, use vector representations for items and users. Those representations are deterministic and do not allow modeling the uncertainty of the learned representation, which can be useful when a user has a small number of rated items (cold start), or when there is conflicting information about the behavior of a user or the ratings of an item. In this paper, we leverage recent works in learning Gaussian embeddings...

  7. Rolling bearing fault feature learning using improved convolutional deep belief network with compressed sensing

    Science.gov (United States)

    Shao, Haidong; Jiang, Hongkai; Zhang, Haizhou; Duan, Wenjing; Liang, Tianchen; Wu, Shuaipeng

    2018-02-01

    The vibration signals collected from rolling bearings are usually complex and non-stationary with heavy background noise. Therefore, it is a great challenge to efficiently learn the representative fault features of the collected vibration signals. In this paper, a novel method called improved convolutional deep belief network (CDBN) with compressed sensing (CS) is developed for feature learning and fault diagnosis of rolling bearings. Firstly, CS is adopted to reduce the amount of vibration data and improve analysis efficiency. Secondly, a new CDBN model is constructed with Gaussian visible units to enhance the feature learning ability for the compressed data. Finally, the exponential moving average (EMA) technique is employed to improve the generalization performance of the constructed deep model. The developed method is applied to analyze the experimental rolling bearing vibration signals. The results confirm that the developed method is more effective than the traditional methods.

  8. Performance of BICM-T transceivers over Gaussian mixture noise channels

    KAUST Repository

    Malik, Muhammad Talha

    2014-04-01

    Experimental measurements have shown that the noise in many communication channels is non-Gaussian. Bit interleaved coded modulation (BICM) is very popular for spectrally efficient transmission. Recent results have shown that the performance of BICM using convolutional codes in non-fading channels can be significantly improved if the coded bits are not interleaved at all. This particular BICM design is called BICM trivial (BICM-T). In this paper, we analyze the performance of a generalized BICM-T design for communication over Gaussian mixture noise (GMN) channels. The results disclose that for an optimal bit error rate (BER) performance, the use of an interleaver in BICM for GMN channels depends upon the strength of the impulsive noise components in the Gaussian mixture. The results presented for 16-QAM show that BICM-T can result in gains up to 1.5 dB for a target BER of 10⁻⁶ if the impulsive noise in the Gaussian mixture is below a certain threshold level. The simulation results verify the tightness of the developed union bound (UB) on the BER performance.
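
    For concreteness, impulsive Gaussian mixture noise of the kind considered above is often modeled as a two-term epsilon-mixture; the following sketch simulates such noise (the parameter names and values, and the two-term form, are illustrative assumptions rather than the paper's exact model).

      import numpy as np

      rng = np.random.default_rng(5)

      def gaussian_mixture_noise(n, eps=0.05, sigma_bg=1.0, kappa=100.0):
          # With probability eps a sample is drawn from an "impulsive" Gaussian whose
          # variance is kappa times the background variance; otherwise from the
          # background Gaussian.
          impulsive = rng.random(n) < eps
          scale = np.where(impulsive, np.sqrt(kappa) * sigma_bg, sigma_bg)
          return rng.normal(scale=scale)

      noise = gaussian_mixture_noise(10_000)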

  9. Plane-wave decomposition by spherical-convolution microphone array

    Science.gov (United States)

    Rafaely, Boaz; Park, Munhum

    2004-05-01

    Reverberant sound fields are widely studied, as they have a significant influence on the acoustic performance of enclosures in a variety of applications. For example, the intelligibility of speech in lecture rooms, the quality of music in auditoria, the noise level in offices, and the production of 3D sound in living rooms are all affected by the enclosed sound field. These sound fields are typically studied through frequency response measurements or statistical measures such as reverberation time, which do not provide detailed spatial information. The aim of the work presented in this seminar is the detailed analysis of reverberant sound fields. A measurement and analysis system based on acoustic theory and signal processing, designed around a spherical microphone array, is presented. Detailed analysis is achieved by decomposition of the sound field into waves, using spherical Fourier transform and spherical convolution. The presentation will include theoretical review, simulation studies, and initial experimental results.

  10. Recurrent Convolutional Neural Networks: A Better Model of Biological Object Recognition.

    Science.gov (United States)

    Spoerer, Courtney J; McClure, Patrick; Kriegeskorte, Nikolaus

    2017-01-01

    Feedforward neural networks provide the dominant model of how the brain performs visual object recognition. However, these networks lack the lateral and feedback connections, and the resulting recurrent neuronal dynamics, of the ventral visual pathway in the human and non-human primate brain. Here we investigate recurrent convolutional neural networks with bottom-up (B), lateral (L), and top-down (T) connections. Combining these types of connections yields four architectures (B, BT, BL, and BLT), which we systematically test and compare. We hypothesized that recurrent dynamics might improve recognition performance in the challenging scenario of partial occlusion. We introduce two novel occluded object recognition tasks to test the efficacy of the models, digit clutter (where multiple target digits occlude one another) and digit debris (where target digits are occluded by digit fragments). We find that recurrent neural networks outperform feedforward control models (approximately matched in parametric complexity) at recognizing objects, both in the absence of occlusion and in all occlusion conditions. Recurrent networks were also found to be more robust to the inclusion of additive Gaussian noise. Recurrent neural networks are better in two respects: (1) they are more neurobiologically realistic than their feedforward counterparts; (2) they are better in terms of their ability to recognize objects, especially under challenging conditions. This work shows that computer vision can benefit from using recurrent convolutional architectures and suggests that the ubiquitous recurrent connections in biological brains are essential for task performance.

  11. Dispersion-convolution model for simulating peaks in a flow injection system.

    Science.gov (United States)

    Pai, Su-Cheng; Lai, Yee-Hwong; Chiao, Ling-Yun; Yu, Tiing

    2007-01-12

    A dispersion-convolution model is proposed for simulating peak shapes in a single-line flow injection system. It is based on the assumption that an injected sample plug is expanded due to a "bulk" dispersion mechanism along the length coordinate, and that after traveling over a distance or a period of time, the sample zone will develop into a Gaussian-like distribution. This spatial pattern is further transformed to a temporal coordinate by a convolution process, and finally a temporal peak image is generated. The feasibility of the proposed model has been examined by experiments with various coil lengths, sample sizes and pumping rates. An empirical dispersion coefficient (D*) can be estimated by using the observed peak position, height and area (t_p*, h* and A_t*) from a recorder. An empirical temporal shift (Φ*) can be further approximated by Φ* = D*/u², which becomes an important parameter in the restoration of experimental peaks. Also, the dispersion coefficient can be expressed as a second-order polynomial function of the pumping rate Q, for which D*(Q) = δ₀ + δ₁Q + δ₂Q². The optimal dispersion occurs at a pumping rate of Q_opt = √(δ₀/δ₂). This explains the interesting "Nike-swoosh" relationship between the peak height and pumping rate. The excellent coherence of theoretical and experimental peak shapes confirms that the temporal distortion effect is the dominating reason to explain the peak asymmetry in flow injection analysis.

  12. Multineuron spike train analysis with R-convolution linear combination kernel.

    Science.gov (United States)

    Tezuka, Taro

    2018-06-01

    A spike train kernel provides an effective way of decoding information represented by a spike train. Some spike train kernels have been extended to multineuron spike trains, which are simultaneously recorded spike trains obtained from multiple neurons. However, most of these multineuron extensions were carried out in a kernel-specific manner. In this paper, a general framework is proposed for extending any single-neuron spike train kernel to multineuron spike trains, based on the R-convolution kernel. Special subclasses of the proposed R-convolution linear combination kernel are explored. These subclasses have a smaller number of parameters and make optimization tractable when the size of data is limited. The proposed kernel was evaluated using Gaussian process regression for multineuron spike trains recorded from an animal brain. It was compared with the sum kernel and the population Spikernel, which are existing ways of decoding multineuron spike trains using kernels. The results showed that the proposed approach performs better than these kernels and also other commonly used neural decoding methods.

  13. Convolutional neural networks and face recognition task

    Science.gov (United States)

    Sochenkova, A.; Sochenkov, I.; Makovetskii, A.; Vokhmintsev, A.; Melnikov, A.

    2017-09-01

    Computer vision tasks have remained very important for the last couple of years. One of the most complicated problems in computer vision is face recognition, which can be used in security systems to provide safety and to identify a person among others. There is a variety of different approaches to solving this task, but there is still no universal solution that gives adequate results in every case. The current paper presents the following approach: first, we extract the area containing the face, and then we apply a Canny edge detector. At the next stage we use convolutional neural networks (CNNs) to solve the face recognition and person identification task.

  14. Decoding LDPC Convolutional Codes on Markov Channels

    Directory of Open Access Journals (Sweden)

    Kashyap Manohar

    2008-01-01

    This paper describes a pipelined iterative technique for joint decoding and channel state estimation of LDPC convolutional codes over Markov channels. Example designs are presented for the Gilbert-Elliott discrete channel model. We also compare the performance and complexity of our algorithm against joint decoding and state estimation of conventional LDPC block codes. Complexity analysis reveals that our pipelined algorithm reduces the number of operations per time step compared to LDPC block codes, at the expense of increased memory and latency. This tradeoff is favorable for low-power applications.

  15. Decoding LDPC Convolutional Codes on Markov Channels

    Directory of Open Access Journals (Sweden)

    Chris Winstead

    2008-04-01

    This paper describes a pipelined iterative technique for joint decoding and channel state estimation of LDPC convolutional codes over Markov channels. Example designs are presented for the Gilbert-Elliott discrete channel model. We also compare the performance and complexity of our algorithm against joint decoding and state estimation of conventional LDPC block codes. Complexity analysis reveals that our pipelined algorithm reduces the number of operations per time step compared to LDPC block codes, at the expense of increased memory and latency. This tradeoff is favorable for low-power applications.

  16. Fourier transforms and convolutions for the experimentalist

    CERN Document Server

    Jennison, RC

    1961-01-01

    Fourier Transforms and Convolutions for the Experimentalist provides the experimentalist with a guide to the principles and practical uses of the Fourier transformation. It aims to bridge the gap between the more abstract account of a purely mathematical approach and the rule of thumb calculation and intuition of the practical worker. The monograph springs from a lecture course which the author has given in recent years and for which he has drawn upon a number of sources, including a set of notes compiled by the late Dr. I. C. Browne from a series of lectures given by Mr. J. A. Ratcliffe of t

  17. Target recognition based on convolutional neural network

    Science.gov (United States)

    Wang, Liqiang; Wang, Xin; Xi, Fubiao; Dong, Jian

    2017-11-01

    One of the important parts of object target recognition is feature extraction, which can be divided into manual (hand-crafted) feature extraction and automatic feature extraction. The traditional neural network is one of the automatic feature extraction methods, but its globally connected structure makes it prone to over-fitting. The deep learning algorithm used in this paper is a hierarchical automatic feature extraction method, trained with a layer-by-layer convolutional neural network (CNN), which extracts features from lower layers to higher layers. The resulting features are more discriminative, which benefits object target recognition.

  18. QCDNUM: Fast QCD evolution and convolution

    Science.gov (United States)

    Botje, M.

    2011-02-01

    The QCDNUM program numerically solves the evolution equations for parton densities and fragmentation functions in perturbative QCD. Un-polarised parton densities can be evolved up to next-to-next-to-leading order in powers of the strong coupling constant, while polarised densities or fragmentation functions can be evolved up to next-to-leading order. Other types of evolution can be accessed by feeding alternative sets of evolution kernels into the program. A versatile convolution engine provides tools to compute parton luminosities, cross-sections in hadron-hadron scattering, and deep inelastic structure functions in the zero-mass scheme or in generalised mass schemes. Input to these calculations are either the QCDNUM evolved densities, or those read in from an external parton density repository. Included in the software distribution are packages to calculate zero-mass structure functions in un-polarised deep inelastic scattering, and heavy flavour contributions to these structure functions in the fixed flavour number scheme.
    Program summary. Program title: QCDNUM; Version: 17.00; Catalogue identifier: AEHV_v1_0; Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHV_v1_0.html; Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland; Licensing provisions: GNU Public Licence; No. of lines in distributed program, including test data, etc.: 45 736; No. of bytes in distributed program, including test data, etc.: 911 569; Distribution format: tar.gz; Programming language: Fortran-77; Computer: All; Operating system: All; RAM: typically 3 Mbytes; Classification: 11.5. Nature of problem: evolution of the strong coupling constant and parton densities, up to next-to-next-to-leading order in perturbative QCD; computation of observable quantities by Mellin convolution of the evolved densities with partonic cross-sections. Solution method: parametrisation of the parton densities as linear or quadratic splines on a discrete grid, and evolution of the spline…

  19. Convolutive ICA for Spatio-Temporal Analysis of EEG

    DEFF Research Database (Denmark)

    Dyrholm, Mads; Makeig, Scott; Hansen, Lars Kai

    2007-01-01

    …in the convolutive model can be correctly detected using Bayesian model selection. We demonstrate a framework for deconvolving an EEG ICA subspace. Initial results suggest that in some cases convolutive mixing may be a more realistic model for EEG signals than the instantaneous ICA model…

  20. Modified Stieltjes Transform and Generalized Convolutions of Probability Distributions

    Directory of Open Access Journals (Sweden)

    Lev B. Klebanov

    2018-01-01

    Full Text Available The classical Stieltjes transform is modified in such a way as to generalize both Stieltjes and Fourier transforms. This transform allows the introduction of new classes of commutative and non-commutative generalized convolutions. A particular case of such a convolution for degenerate distributions appears to be the Wigner semicircle distribution.

  1. Nuclear norm regularized convolutional Max Pos@Top machine

    KAUST Repository

    Li, Qinfeng; Zhou, Xiaofeng; Gu, Aihua; Li, Zonghua; Liang, Ru-Ze

    2016-01-01

    …named Pos@Top. Our proposed classification model has a convolutional structure composed of four layers, i.e., the convolutional layer, the activation layer, the max-pooling layer and the fully connected layer. In this paper, we propose…

  2. Spherical convolutions and their application in molecular modelling

    DEFF Research Database (Denmark)

    Boomsma, Wouter; Frellsen, Jes

    2017-01-01

    Convolutional neural networks are increasingly used outside the domain of image analysis, in particular in various areas of the natural sciences concerned with spatial data. Such networks often work out of the box, and in some cases entire model architectures from image analysis can be carried over … to other problem domains almost unaltered. Unfortunately, this convenience does not trivially extend to data in non-Euclidean spaces, such as spherical data. In this paper, we introduce two strategies for conducting convolutions on the sphere, using either a spherical-polar grid or a grid based … of spherical convolutions in the context of molecular modelling, by considering structural environments within proteins. We show that the models are capable of learning non-trivial functions in these molecular environments, and that our spherical convolutions generally outperform standard 3D convolutions…

  3. Detecting periodicities with Gaussian processes

    Directory of Open Access Journals (Sweden)

    Nicolas Durrande

    2016-04-01

    Full Text Available We consider the problem of detecting and quantifying the periodic component of a function given noise-corrupted observations of a limited number of input/output tuples. Our approach is based on Gaussian process regression, which provides a flexible non-parametric framework for modelling periodic data. We introduce a novel decomposition of the covariance function as the sum of periodic and aperiodic kernels. This decomposition allows for the creation of sub-models which capture the periodic nature of the signal and its complement. To quantify the periodicity of the signal, we derive a periodicity ratio which reflects the uncertainty in the fitted sub-models. Although the method can be applied to many kernels, we give a special emphasis to the Matérn family, from the expression of the reproducing kernel Hilbert space inner product to the implementation of the associated periodic kernels in a Gaussian process toolkit. The proposed method is illustrated by considering the detection of periodically expressed genes in the arabidopsis genome.
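
    A minimal sketch of the periodic-plus-aperiodic decomposition is shown below using scikit-learn's Gaussian process tools, with an ExpSineSquared periodic term and an RBF aperiodic term rather than the Matérn-based kernels emphasised in the paper; the data are synthetic.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ExpSineSquared, WhiteKernel

    rng = np.random.default_rng(1)
    X = np.linspace(0.0, 10.0, 80)[:, None]
    y = np.sin(2 * np.pi * X[:, 0]) + 0.3 * X[:, 0] + rng.normal(0.0, 0.2, 80)

    kernel = (1.0 * ExpSineSquared(length_scale=1.0, periodicity=1.0)  # periodic sub-model
              + 1.0 * RBF(length_scale=5.0)                            # aperiodic sub-model
              + WhiteKernel(noise_level=0.05))                         # observation noise
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

    print(gp.kernel_)   # fitted hyperparameters of the periodic and aperiodic parts
    ```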

  4. Monogamy inequality for distributed gaussian entanglement.

    Science.gov (United States)

    Hiroshima, Tohya; Adesso, Gerardo; Illuminati, Fabrizio

    2007-02-02

    We show that for all n-mode Gaussian states of continuous variable systems, the entanglement shared among n parties exhibits the fundamental monogamy property. The monogamy inequality is proven by introducing the Gaussian tangle, an entanglement monotone under Gaussian local operations and classical communication, which is defined in terms of the squared negativity in complete analogy with the case of n-qubit systems. Our results elucidate the structure of quantum correlations in many-body harmonic lattice systems.

  5. Electroencephalography Based Fusion Two-Dimensional (2D)-Convolution Neural Networks (CNN) Model for Emotion Recognition System

    Directory of Open Access Journals (Sweden)

    Yea-Hoon Kwon

    2018-04-01

    Full Text Available The purpose of this study is to improve human emotional classification accuracy using a convolution neural network (CNN) model and to suggest an overall method to classify emotion based on multimodal data. We improved classification performance by combining electroencephalogram (EEG) and galvanic skin response (GSR) signals. GSR signals are preprocessed using the zero-crossing rate. Sufficient EEG feature extraction can be obtained through CNN. Therefore, we propose a suitable CNN model for feature extraction by tuning hyperparameters in the convolution filters. The EEG signal is preprocessed prior to convolution by a wavelet transform while considering time and frequency simultaneously. We use a database for emotion analysis using the physiological signals open dataset to verify the proposed process, achieving 73.4% accuracy, a significant performance improvement over the current best-practice models.

  6. Breaking Gaussian incompatibility on continuous variable quantum systems

    Energy Technology Data Exchange (ETDEWEB)

    Heinosaari, Teiko, E-mail: teiko.heinosaari@utu.fi [Turku Centre for Quantum Physics, Department of Physics and Astronomy, University of Turku, FI-20014 Turku (Finland); Kiukas, Jukka, E-mail: jukka.kiukas@aber.ac.uk [Department of Mathematics, Aberystwyth University, Penglais, Aberystwyth, SY23 3BZ (United Kingdom); Schultz, Jussi, E-mail: jussi.schultz@gmail.com [Turku Centre for Quantum Physics, Department of Physics and Astronomy, University of Turku, FI-20014 Turku (Finland); Dipartimento di Matematica, Politecnico di Milano, Piazza Leonardo da Vinci 32, I-20133 Milano (Italy)

    2015-08-15

    We characterise Gaussian quantum channels that are Gaussian incompatibility breaking, that is, transform every set of Gaussian measurements into a set obtainable from a joint Gaussian observable via Gaussian postprocessing. Such channels represent local noise which renders measurements useless for Gaussian EPR-steering, providing the appropriate generalisation of entanglement breaking channels for this scenario. Understanding the structure of Gaussian incompatibility breaking channels contributes to the resource theory of noisy continuous variable quantum information protocols.

  7. Deformable image registration using convolutional neural networks

    Science.gov (United States)

    Eppenhof, Koen A. J.; Lafarge, Maxime W.; Moeskops, Pim; Veta, Mitko; Pluim, Josien P. W.

    2018-03-01

    Deformable image registration can be time-consuming and often needs extensive parameterization to perform well on a specific application. We present a step towards a registration framework based on a three-dimensional convolutional neural network. The network directly learns transformations between pairs of three-dimensional images. The outputs of the network are three maps for the x, y, and z components of a thin plate spline transformation grid. The network is trained on synthetic random transformations, which are applied to a small set of representative images for the desired application. Training therefore does not require manually annotated ground truth deformation information. The methodology is demonstrated on public data sets of inspiration-expiration lung CT image pairs, which come with annotated corresponding landmarks for evaluation of the registration accuracy. Advantages of this methodology are its fast registration times and its minimal parameterization.

  8. Codeword Structure Analysis for LDPC Convolutional Codes

    Directory of Open Access Journals (Sweden)

    Hua Zhou

    2015-12-01

    Full Text Available The codewords of a low-density parity-check (LDPC) convolutional code (LDPC-CC) are characterised into structured and non-structured. The number of structured codewords is dominated by the size of the polynomial syndrome former matrix H^T(D), while the number of non-structured ones depends on the particular monomials or polynomials in H^T(D). By evaluating the relationship of the codewords between the mother code and its super codes, the low-weight non-structured codewords in the super codes can be eliminated by appropriately choosing the monomials or polynomials in H^T(D), resulting in an improved distance spectrum of the mother code.

  9. A Novel Method for Generating Non-Stationary Gaussian Processes for Use in Digital Radar Simulators

    National Research Council Canada - National Science Library

    Boehm, James A; Debroux, Patrick S

    2007-01-01

    This report presents a novel and simple way to determine the transient response of the output of any linear system, described in the s-domain by an nth order polynomial, subjected to white Gaussian noise...

  10. Processing of chromatic information in a deep convolutional neural network.

    Science.gov (United States)

    Flachot, Alban; Gegenfurtner, Karl R

    2018-04-01

    Deep convolutional neural networks are a class of machine-learning algorithms capable of solving non-trivial tasks, such as object recognition, with human-like performance. Little is known about the exact computations that deep neural networks learn, and to what extent these computations are similar to the ones performed by the primate brain. Here, we investigate how color information is processed in the different layers of the AlexNet deep neural network, originally trained on object classification of over 1.2M images of objects in their natural contexts. We found that the color-responsive units in the first layer of AlexNet learned linear features and were broadly tuned to two directions in color space, analogously to what is known of color responsive cells in the primate thalamus. Moreover, these directions are decorrelated and lead to statistically efficient representations, similar to the cardinal directions of the second-stage color mechanisms in primates. We also found, in analogy to the early stages of the primate visual system, that chromatic and achromatic information were segregated in the early layers of the network. Units in the higher layers of AlexNet exhibit on average a lower responsivity for color than units at earlier stages.

  11. Fitting the Fractional Polynomial Model to Non-Gaussian Longitudinal Data

    Directory of Open Access Journals (Sweden)

    Ji Hoon Ryoo

    2017-08-01

    Full Text Available As in cross sectional studies, longitudinal studies involve non-Gaussian data such as binomial, Poisson, gamma, and inverse-Gaussian distributions, and multivariate exponential families. A number of statistical tools have thus been developed to deal with non-Gaussian longitudinal data, including analytic techniques to estimate parameters in both fixed and random effects models. However, as yet growth modeling with non-Gaussian data is somewhat limited when considering the transformed expectation of the response via a linear predictor as a functional form of explanatory variables. In this study, we introduce a fractional polynomial model (FPM that can be applied to model non-linear growth with non-Gaussian longitudinal data and demonstrate its use by fitting two empirical binary and count data models. The results clearly show the efficiency and flexibility of the FPM for such applications.
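
    As a rough illustration of the fractional-polynomial idea for count outcomes, the sketch below fits a Poisson generalized linear model whose linear predictor uses fractional powers (0.5 and 1) of time. It is a population-averaged toy example on synthetic data and omits the random effects that a longitudinal analysis would include.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    t = np.linspace(0.5, 5.0, 200)                             # measurement occasions
    y = rng.poisson(np.exp(0.3 + 0.8 * np.sqrt(t) - 0.2 * t))  # synthetic count outcome

    # Fractional-polynomial design matrix: intercept plus powers t^0.5 and t^1.
    X = sm.add_constant(np.column_stack([np.sqrt(t), t]))
    fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
    print(fit.params)
    ```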

  12. When non-Gaussian states are Gaussian: Generalization of nonseparability criterion for continuous variables

    International Nuclear Information System (INIS)

    McHugh, Derek; Buzek, Vladimir; Ziman, Mario

    2006-01-01

    We present a class of non-Gaussian two-mode continuous-variable states for which the separability criterion for Gaussian states can be employed to detect whether they are separable or not. These states reduce to the two-mode Gaussian states as a special case.

  13. Quantum beamstrahlung from gaussian bunches

    International Nuclear Information System (INIS)

    Chen, P.

    1987-08-01

    The method of Baier and Katkov is applied to calculate the correction terms to the Sokolov-Ternov radiation formula due to the variation of the magnetic field strength along the trajectory of a radiating particle. We carry the calculation up to second order in the power expansion of B tau/B, where tau is the formation time of radiation. The expression is then used to estimate the quantum beamstrahlung average energy loss from e+e- bunches with a Gaussian distribution in bunch currents. We show that the effect of the field variation is to reduce the average energy loss relative to previous calculations based on the Sokolov-Ternov formula or its equivalent. Due to the limitation of our method, only an upper bound on the reduction is obtained. 18 refs

  14. An Improved Convolutional Neural Network on Crowd Density Estimation

    Directory of Open Access Journals (Sweden)

    Pan Shao-Yun

    2016-01-01

    Full Text Available In this paper, a new method is proposed for crowd density estimation. An improved convolutional neural network is combined with a traditional texture feature. The data calculated by the convolutional layer can be treated as a new kind of feature, so more useful information can be extracted from images by combining different features. In the meantime, the size of the image has little effect on the result of the convolutional neural network. Experimental results indicate that our scheme has adequate performance to allow for its use in real-world applications.

  15. Limit theorems for functionals of Gaussian vectors

    Institute of Scientific and Technical Information of China (English)

    Hongshuai DAI; Guangjun SHEN; Lingtao KONG

    2017-01-01

    Operator self-similar processes, as an extension of self-similar processes, have been studied extensively. In this work, we study limit theorems for functionals of Gaussian vectors. Under some conditions, we determine that the limit of partial sums of functionals of a stationary Gaussian sequence of random vectors is an operator self-similar process.

  16. Palm distributions for log Gaussian Cox processes

    DEFF Research Database (Denmark)

    Coeurjolly, Jean-Francois; Møller, Jesper; Waagepetersen, Rasmus

    This paper reviews useful results related to Palm distributions of spatial point processes and provides a new result regarding the characterization of Palm distributions for the class of log Gaussian Cox processes. This result is used to study functional summary statistics for a log Gaussian Cox...

  17. Gaussian limit of compact spin systems

    International Nuclear Information System (INIS)

    Bellissard, J.; Angelis, G.F. de

    1981-01-01

    It is shown that the Wilson and Wilson-Villain U(1) models reproduce, in the low coupling limit, the Gaussian lattice approximation of the Euclidean electromagnetic field. By the same methods it is also possible to prove that the plane rotator and the Villain model share a common Gaussian behaviour in the low temperature limit. (Auth.)

  18. On the dependence structure of Gaussian queues

    NARCIS (Netherlands)

    Es-Saghouani, A.; Mandjes, M.R.H.

    2009-01-01

    In this article we study Gaussian queues (that is, queues fed by Gaussian processes, such as fractional Brownian motion (fBm) and the integrated Ornstein-Uhlenbeck (iOU) process), with a focus on the dependence structure of the workload process. The main question is to what extent does the workload

  19. Shedding new light on Gaussian harmonic analysis

    NARCIS (Netherlands)

    Teuwen, J.J.B.

    2016-01-01

    This dissertation consists of two rather disjoint parts. One part concerns some results on Gaussian harmonic analysis and the other an optimization problem in optics. In the first part we study the Ornstein-Uhlenbeck process with respect to the Gaussian measure. We focus on two areas. One is…

  20. Entanglement in Gaussian matrix-product states

    International Nuclear Information System (INIS)

    Adesso, Gerardo; Ericsson, Marie

    2006-01-01

    Gaussian matrix-product states are obtained as the outputs of projection operations from an ancillary space of M infinitely entangled bonds connecting neighboring sites, applied at each of N sites of a harmonic chain. Replacing the projections by associated Gaussian states, the building blocks, we show that the entanglement range in translationally invariant Gaussian matrix-product states depends on how entangled the building blocks are. In particular, infinite entanglement in the building blocks produces fully symmetric Gaussian states with maximum entanglement range. From their peculiar properties of entanglement sharing, a basic difference with spin chains is revealed: Gaussian matrix-product states can possess unlimited, long-range entanglement even with minimum number of ancillary bonds (M=1). Finally we discuss how these states can be experimentally engineered from N copies of a three-mode building block and N two-mode finitely squeezed states

  1. Gaussian vs non-Gaussian turbulence: impact on wind turbine loads

    DEFF Research Database (Denmark)

    Berg, Jacob; Natarajan, Anand; Mann, Jakob

    2016-01-01

    From large-eddy simulations of atmospheric turbulence, a representation of Gaussian turbulence is constructed by randomizing the phases of the individual modes of variability. Time series of Gaussian turbulence are constructed and compared with its non-Gaussian counterpart. Time series from the two … taking into account the safety factor for extreme moments. Other extreme load moments as well as the fatigue loads are not affected by the use of non-Gaussian turbulent inflow. It is suggested that the turbine thus acts like a low-pass filter that averages out the non-Gaussian behaviour, which…
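
    The phase-randomization step described above can be sketched in a few lines of numpy: keep the Fourier amplitudes (and hence the power spectrum) of the simulated series but draw the phases at random, which yields a Gaussian surrogate. The input series here is an arbitrary stand-in for the large-eddy simulation output.

    ```python
    import numpy as np

    def gaussianize(series, seed=0):
        """Phase-randomized surrogate: same power spectrum, near-Gaussian one-point statistics."""
        rng = np.random.default_rng(seed)
        spec = np.fft.rfft(series)
        phases = rng.uniform(0.0, 2.0 * np.pi, spec.size)
        phases[0] = 0.0                                # keep the zero-frequency (mean) component real
        surrogate = np.abs(spec) * np.exp(1j * phases)
        return np.fft.irfft(surrogate, len(series))

    u = np.random.default_rng(1).gamma(2.0, 1.0, 4096)   # stand-in for a non-Gaussian velocity series
    u_gauss = gaussianize(u)
    print(round(u.std(), 3), round(u_gauss.std(), 3))     # comparable standard deviations
    ```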

  2. High Order Tensor Formulation for Convolutional Sparse Coding

    KAUST Repository

    Bibi, Adel Aamer; Ghanem, Bernard

    2017-01-01

    Convolutional sparse coding (CSC) has gained attention for its successful role as a reconstruction and a classification tool in the computer vision and machine learning community. Current CSC methods can only reconstruct single-feature 2D images…

  3. Adversarial training and dilated convolutions for brain MRI segmentation

    NARCIS (Netherlands)

    Moeskops, P.; Veta, M.; Lafarge, M.W.; Eppenhof, K.A.J.; Pluim, J.P.W.

    2017-01-01

    Convolutional neural networks (CNNs) have been applied to various automatic image segmentation tasks in medical image analysis, including brain MRI segmentation. Generative adversarial networks have recently gained popularity because of their power in generating images that are difficult to

  4. Classification of urine sediment based on convolution neural network

    Science.gov (United States)

    Pan, Jingjing; Jiang, Cunbo; Zhu, Tiantian

    2018-04-01

    By designing a new convolutional neural network framework, this paper removes the constraints of the original framework, which requires large numbers of training samples, all of the same size. The input images are shifted and cropped to generate sub-images of equal size, and dropout is then applied to the generated sub-images, increasing the diversity of the samples and preventing over-fitting. Proper subsets of the sub-image set are selected at random such that each contains the same number of elements but no two subsets are identical. These subsets are used as input to the convolutional neural network. Through the convolutional layers, pooling, the fully connected layer and the output layer, the classification loss rates of the test and training sets are obtained. In a classification experiment on red blood cells, white blood cells and calcium oxalate crystals, a classification accuracy of 97% or more is achieved.
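
    The crop-and-subset step can be illustrated with a few lines of numpy: fixed-size sub-images are cut at random offsets from each input image, so equally sized training examples are obtained regardless of the original image size. The crop size and counts below are arbitrary choices, not those of the paper.

    ```python
    import numpy as np

    def random_crops(image, size=32, n_crops=8, seed=None):
        """Cut n_crops random size x size sub-images from a larger 2D image."""
        rng = np.random.default_rng(seed)
        h, w = image.shape
        crops = [image[top:top + size, left:left + size]
                 for top, left in zip(rng.integers(0, h - size + 1, n_crops),
                                       rng.integers(0, w - size + 1, n_crops))]
        return np.stack(crops)                                 # shape: (n_crops, size, size)

    cell_image = np.random.default_rng(0).random((120, 160))   # stand-in for a sediment image
    print(random_crops(cell_image, seed=1).shape)              # (8, 32, 32)
    ```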

  5. Convolution of second order linear recursive sequences II.

    Directory of Open Access Journals (Sweden)

    Szakács Tamás

    2017-12-01

    Full Text Available We continue the investigation of convolutions of second order linear recursive sequences (see the first part in [1]). In this paper, we focus on the case when the characteristic polynomials of the sequences have a common root.

  6. FPGA-based digital convolution for wireless applications

    CERN Document Server

    Guan, Lei

    2017-01-01

    This book presents essential perspectives on digital convolutions in wireless communications systems and illustrates their corresponding efficient real-time field-programmable gate array (FPGA) implementations. Covering these digital convolutions from basic concept to vivid simulation/illustration, the book is also supplemented with MS PowerPoint presentations to aid in comprehension. FPGAs or generic all programmable devices will soon become widespread, serving as the “brains” of all types of real-time smart signal processing systems, like smart networks, smart homes and smart cities. The book examines digital convolution by bringing together the following main elements: the fundamental theory behind the mathematical formulae together with corresponding physical phenomena; virtualized algorithm simulation together with benchmark real-time FPGA implementations; and detailed, state-of-the-art case studies on wireless applications, including popular linear convolution in digital front ends (DFEs); nonlinear...

  7. Deep Recurrent Convolutional Neural Network: Improving Performance For Speech Recognition

    OpenAIRE

    Zhang, Zewang; Sun, Zheng; Liu, Jiaqi; Chen, Jingwen; Huo, Zhao; Zhang, Xiao

    2016-01-01

    A deep learning approach has been widely applied in sequence modeling problems. In terms of automatic speech recognition (ASR), its performance has significantly been improved by increasing large speech corpus and deeper neural network. Especially, recurrent neural network and deep convolutional neural network have been applied in ASR successfully. Given the arising problem of training speed, we build a novel deep recurrent convolutional network for acoustic modeling and then apply deep resid...

  8. Traffic sign recognition with deep convolutional neural networks

    OpenAIRE

    Karamatić, Boris

    2016-01-01

    The problem of detection and recognition of traffic signs is becoming an important problem when it comes to the development of self driving cars and advanced driver assistance systems. In this thesis we will develop a system for detection and recognition of traffic signs. For the problem of detection we will use aggregate channel features and for the problem of recognition we will use a deep convolutional neural network. We will describe how convolutional neural networks work, how they are co...

  9. Convolutional Codes with Maximum Column Sum Rank for Network Streaming

    OpenAIRE

    Mahmood, Rafid; Badr, Ahmed; Khisti, Ashish

    2015-01-01

    The column Hamming distance of a convolutional code determines the error correction capability when streaming over a class of packet erasure channels. We introduce a metric known as the column sum rank, that parallels column Hamming distance when streaming over a network with link failures. We prove rank analogues of several known column Hamming distance properties and introduce a new family of convolutional codes that maximize the column sum rank up to the code memory. Our construction invol...

  10. Efficient and Invariant Convolutional Neural Networks for Dense Prediction

    OpenAIRE

    Gao, Hongyang; Ji, Shuiwang

    2017-01-01

    Convolutional neural networks have shown great success on feature extraction from raw input data such as images. Although convolutional neural networks are invariant to translations on the inputs, they are not invariant to other transformations, including rotation and flip. Recent attempts have been made to incorporate more invariance in image recognition applications, but they are not applicable to dense prediction tasks, such as image segmentation. In this paper, we propose a set of methods...

  11. Prediction of Electricity Usage Using Convolutional Neural Networks

    OpenAIRE

    Hansen, Martin

    2017-01-01

    Master's thesis, Information and Communication Technology IKT590, University of Agder, 2017. Convolutional Neural Networks are overwhelmingly accurate when attempting to predict numbers using the famous MNIST dataset. In this paper, we attempt to transcend these results for time-series forecasting and compare them with several regression models. The Convolutional Neural Network model predicted the same value through the entire time lapse, in contrast with the other ...

  12. Research of convolutional neural networks for traffic sign recognition

    OpenAIRE

    Stadalnikas, Kasparas

    2017-01-01

    In this thesis the convolutional neural networks application for traffic sign recognition is analyzed. Thesis describes the basic operations, techniques that are commonly used to apply in the image classification using convolutional neural networks. Also, this paper describes the data sets used for traffic sign recognition, their problems affecting the final training results. The paper reviews most popular existing technologies – frameworks for developing the solution for traffic sign recogni...

  13. On the Fresnel sine integral and the convolution

    Directory of Open Access Journals (Sweden)

    Adem Kılıçman

    2003-01-01

    Full Text Available The Fresnel sine integral S(x), the Fresnel cosine integral C(x), and the associated functions S+(x), S−(x), C+(x), and C−(x) are defined as locally summable functions on the real line. Some convolutions and neutrix convolutions of the Fresnel sine integral and its associated functions with x_+^r and x^r are evaluated.
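
    For orientation, one common convention for the Fresnel integrals and for the classical convolution is written out below; note that normalisations differ between authors (some include a factor of pi/2 in the argument), so this is only a reference form, not necessarily the exact definition used in the paper.

    ```latex
    S(x) = \int_0^x \sin(t^2)\,\mathrm{d}t, \qquad
    C(x) = \int_0^x \cos(t^2)\,\mathrm{d}t, \qquad
    (f * g)(x) = \int_{-\infty}^{\infty} f(t)\, g(x - t)\,\mathrm{d}t .
    ```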

  14. Increasing Entanglement between Gaussian States by Coherent Photon Subtraction

    DEFF Research Database (Denmark)

    Ourjoumtsev, Alexei; Dantan, Aurelien Romain; Tualle Brouri, Rosa

    2007-01-01

    We experimentally demonstrate that the entanglement between Gaussian entangled states can be increased by non-Gaussian operations. Coherent subtraction of single photons from Gaussian quadrature-entangled light pulses, created by a nondegenerate parametric amplifier, produces delocalized states...

  15. Representation of Gaussian semimartingales with applications to the covariance function

    DEFF Research Database (Denmark)

    Basse-O'Connor, Andreas

    2010-01-01

    stationary Gaussian semimartingales and their canonical decomposition. Thirdly, we give a new characterization of the covariance function of Gaussian semimartingales, which enable us to characterize the class of martingales and the processes of bounded variation among the Gaussian semimartingales. We...

  16. Radial Structure Scaffolds Convolution Patterns of Developing Cerebral Cortex

    Directory of Open Access Journals (Sweden)

    Mir Jalil Razavi

    2017-08-01

    Full Text Available Commonly-preserved radial convolution is a prominent characteristic of the mammalian cerebral cortex. Endeavors from multiple disciplines have been devoted for decades to explore the causes for this enigmatic structure. However, the underlying mechanisms that lead to consistent cortical convolution patterns still remain poorly understood. In this work, inspired by prior studies, we propose and evaluate a plausible theory that radial convolution during the early development of the brain is sculptured by radial structures consisting of radial glial cells (RGCs) and maturing axons. Specifically, the regionally heterogeneous development and distribution of RGCs controlled by Trnp1 regulate the convex and concave convolution patterns (gyri and sulci) in the radial direction, while the interplay of RGCs' effects on convolution and axons regulates the convex (gyral) convolution patterns. This theory is assessed by observations and measurements in literature from multiple disciplines such as neurobiology, genetics, biomechanics, etc., at multiple scales to date. Particularly, this theory is further validated by multimodal imaging data analysis and computational simulations in this study. We offer a versatile and descriptive study model that can provide reasonable explanations of observations, experiments, and simulations of the characteristic mammalian cortical folding.

  17. Deep convolutional neural networks for automatic classification of gastric carcinoma using whole slide images in digital histopathology.

    Science.gov (United States)

    Sharma, Harshita; Zerbe, Norman; Klempert, Iris; Hellwich, Olaf; Hufnagl, Peter

    2017-11-01

    Deep learning using convolutional neural networks is an actively emerging field in histological image analysis. This study explores deep learning methods for computer-aided classification in H&E stained histopathological whole slide images of gastric carcinoma. An introductory convolutional neural network architecture is proposed for two computerized applications, namely, cancer classification based on immunohistochemical response and necrosis detection based on the existence of tumor necrosis in the tissue. Classification performance of the developed deep learning approach is quantitatively compared with traditional image analysis methods in digital histopathology requiring prior computation of handcrafted features, such as statistical measures using gray level co-occurrence matrix, Gabor filter-bank responses, LBP histograms, gray histograms, HSV histograms and RGB histograms, followed by random forest machine learning. Additionally, the widely known AlexNet deep convolutional framework is comparatively analyzed for the corresponding classification problems. The proposed convolutional neural network architecture reports favorable results, with an overall classification accuracy of 0.6990 for cancer classification and 0.8144 for necrosis detection. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Metaheuristic Algorithms for Convolution Neural Network.

    Science.gov (United States)

    Rere, L M Rasdi; Fanany, Mohamad Ivan; Arymurthy, Aniati Murni

    2016-01-01

    A typical modern optimization technique is usually either heuristic or metaheuristic. This technique has managed to solve some optimization problems in the research area of science, engineering, and industry. However, implementation strategy of metaheuristic for accuracy improvement on convolution neural networks (CNN), a famous deep learning method, is still rarely investigated. Deep learning relates to a type of machine learning technique, where its aim is to move closer to the goal of artificial intelligence of creating a machine that could successfully perform any intellectual tasks that can be carried out by a human. In this paper, we propose the implementation strategy of three popular metaheuristic approaches, that is, simulated annealing, differential evolution, and harmony search, to optimize CNN. The performances of these metaheuristic methods in optimizing CNN on classifying MNIST and CIFAR dataset were evaluated and compared. Furthermore, the proposed methods are also compared with the original CNN. Although the proposed methods show an increase in the computation time, their accuracy has also been improved (up to 7.14 percent).
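
    A minimal sketch of one of the three approaches, simulated annealing over two CNN hyperparameters, is given below. The objective evaluate_cnn is a hypothetical placeholder standing in for training and validating a network, and the neighbourhood moves and cooling schedule are illustrative choices, not the paper's exact strategy.

    ```python
    import math
    import random

    def evaluate_cnn(params):
        """Hypothetical placeholder: train/validate a CNN with these hyperparameters and
        return the validation error. A cheap analytic surrogate is used here instead."""
        lr, n_filters = params
        return (math.log10(lr) + 2.5) ** 2 + 0.001 * abs(n_filters - 48)

    def simulated_annealing(start, t0=1.0, cooling=0.95, steps=100, seed=0):
        random.seed(seed)
        current, current_err = start, evaluate_cnn(start)
        best, best_err = current, current_err
        t = t0
        for _ in range(steps):
            lr, nf = current
            candidate = (lr * 10 ** random.uniform(-0.3, 0.3),    # perturb learning rate
                         max(8, nf + random.choice([-8, 8])))     # perturb filter count
            err = evaluate_cnn(candidate)
            if err < current_err or random.random() < math.exp((current_err - err) / t):
                current, current_err = candidate, err
                if err < best_err:
                    best, best_err = candidate, err
            t *= cooling
        return best, best_err

    print(simulated_annealing((1e-2, 32)))
    ```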

  19. Metaheuristic Algorithms for Convolution Neural Network

    Directory of Open Access Journals (Sweden)

    L. M. Rasdi Rere

    2016-01-01

    Full Text Available A typical modern optimization technique is usually either heuristic or metaheuristic. This technique has managed to solve some optimization problems in the research area of science, engineering, and industry. However, implementation strategy of metaheuristic for accuracy improvement on convolution neural networks (CNN), a famous deep learning method, is still rarely investigated. Deep learning relates to a type of machine learning technique, where its aim is to move closer to the goal of artificial intelligence of creating a machine that could successfully perform any intellectual tasks that can be carried out by a human. In this paper, we propose the implementation strategy of three popular metaheuristic approaches, that is, simulated annealing, differential evolution, and harmony search, to optimize CNN. The performances of these metaheuristic methods in optimizing CNN on classifying MNIST and CIFAR dataset were evaluated and compared. Furthermore, the proposed methods are also compared with the original CNN. Although the proposed methods show an increase in the computation time, their accuracy has also been improved (up to 7.14 percent).

  20. Do Convolutional Neural Networks Learn Class Hierarchy?

    Science.gov (United States)

    Bilal, Alsallakh; Jourabloo, Amin; Ye, Mao; Liu, Xiaoming; Ren, Liu

    2018-01-01

    Convolutional Neural Networks (CNNs) currently achieve state-of-the-art accuracy in image classification. With a growing number of classes, the accuracy usually drops as the possibilities of confusion increase. Interestingly, the class confusion patterns follow a hierarchical structure over the classes. We present visual-analytics methods to reveal and analyze this hierarchy of similar classes in relation with CNN-internal data. We found that this hierarchy not only dictates the confusion patterns between the classes, it furthermore dictates the learning behavior of CNNs. In particular, the early layers in these networks develop feature detectors that can separate high-level groups of classes quite well, even after a few training epochs. In contrast, the latter layers require substantially more epochs to develop specialized feature detectors that can separate individual classes. We demonstrate how these insights are key to significant improvement in accuracy by designing hierarchy-aware CNNs that accelerate model convergence and alleviate overfitting. We further demonstrate how our methods help in identifying various quality issues in the training data.

  1. Microaneurysm detection using fully convolutional neural networks.

    Science.gov (United States)

    Chudzik, Piotr; Majumdar, Somshubra; Calivá, Francesco; Al-Diri, Bashir; Hunter, Andrew

    2018-05-01

    Diabetic retinopathy is a microvascular complication of diabetes that can lead to sight loss if not treated early enough. Microaneurysms are the earliest clinical signs of diabetic retinopathy. This paper presents an automatic method for detecting microaneurysms in fundus photographs. A novel patch-based fully convolutional neural network with batch normalization layers and Dice loss function is proposed. Compared to other methods that require up to five processing stages, it requires only three. Furthermore, to the best of the authors' knowledge, this is the first paper that shows how to successfully transfer knowledge between datasets in the microaneurysm detection domain. The proposed method was evaluated using three publicly available and widely used datasets: E-Ophtha, DIARETDB1, and ROC. It achieved better results than state-of-the-art methods using the FROC metric. The proposed algorithm achieved the highest sensitivities for low false positive rates, which is particularly important for screening purposes. The performance, simplicity, and robustness of the proposed method demonstrate its suitability for diabetic retinopathy screening applications. Copyright © 2018 Elsevier B.V. All rights reserved.

  2. Multiscale Convolutional Neural Networks for Hand Detection

    Directory of Open Access Journals (Sweden)

    Shiyang Yan

    2017-01-01

    Full Text Available Unconstrained hand detection in still images plays an important role in many hand-related vision problems, for example, hand tracking, gesture analysis, human action recognition and human-machine interaction, and sign language recognition. Although hand detection has been extensively studied for decades, it is still a challenging task with many problems to be tackled. The contributing factors for this complexity include heavy occlusion, low resolution, varying illumination conditions, different hand gestures, and the complex interactions between hands and objects or other hands. In this paper, we propose a multiscale deep learning model for unconstrained hand detection in still images. Deep learning models, and deep convolutional neural networks (CNNs) in particular, have achieved state-of-the-art performances in many vision benchmarks. Developed from the region-based CNN (R-CNN) model, we propose a hand detection scheme based on candidate regions generated by a generic region proposal algorithm, followed by multiscale information fusion from the popular VGG16 model. Two benchmark datasets were applied to validate the proposed method, namely, the Oxford Hand Detection Dataset and the VIVA Hand Detection Challenge. We achieved state-of-the-art results on the Oxford Hand Detection Dataset and had satisfactory performance in the VIVA Hand Detection Challenge.

  3. Some continual integrals from gaussian forms

    International Nuclear Information System (INIS)

    Mazmanishvili, A.S.

    1985-01-01

    A summary of results for continual integrals of Gaussian functional type is given. The summary contains 124 continual integrals, which are the mathematical expectations of the corresponding Gaussian forms over a continuum of random trajectories of four types: the real-valued Ornstein-Uhlenbeck process, the Wiener process, the complex-valued Ornstein-Uhlenbeck process, and the stochastic harmonic one. The summary includes both known continual integrals and previously unpublished ones. The mathematical results of the continual integration carried out in this work may be applied to problems in the theory of stochastic processes that reduce to finding the means of Gaussian forms with respect to measures generated by the above stochastic processes.

  4. Loop corrections to primordial non-Gaussianity

    Science.gov (United States)

    Boran, Sibel; Kahya, E. O.

    2018-02-01

    We discuss quantum gravitational loop effects to observable quantities such as curvature power spectrum and primordial non-Gaussianity of cosmic microwave background (CMB) radiation. We first review the previously shown case where one gets a time dependence for zeta-zeta correlator due to loop corrections. Then we investigate the effect of loop corrections to primordial non-Gaussianity of CMB. We conclude that, even with a single scalar inflaton, one might get a huge value for non-Gaussianity which would exceed the observed value by at least 30 orders of magnitude. Finally we discuss the consequences of this result for scalar driven inflationary models.

  5. Gaussian Mixture Model of Heart Rate Variability

    Science.gov (United States)

    Costa, Tommaso; Boccignone, Giuseppe; Ferraro, Mario

    2012-01-01

    Heart rate variability (HRV) is an important measure of sympathetic and parasympathetic functions of the autonomic nervous system and a key indicator of cardiovascular condition. This paper proposes a novel method to investigate HRV, namely by modelling it as a linear combination of Gaussians. Results show that three Gaussians are enough to describe the stationary statistics of heart variability and to provide a straightforward interpretation of the HRV power spectrum. Comparisons have been made also with synthetic data generated from different physiologically based models showing the plausibility of the Gaussian mixture parameters. PMID:22666386
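
    A minimal scikit-learn sketch of the modelling idea: fit a three-component Gaussian mixture to a series of RR intervals. The synthetic intervals below merely stand in for a real HRV recording.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    # Synthetic RR intervals (seconds): three overlapping regimes as a stand-in for real HRV data.
    rr = np.concatenate([rng.normal(0.80, 0.03, 500),
                         rng.normal(0.90, 0.05, 300),
                         rng.normal(1.05, 0.04, 200)])[:, None]

    gmm = GaussianMixture(n_components=3, random_state=0).fit(rr)
    print(gmm.means_.ravel())   # component means (s)
    print(gmm.weights_)         # mixing proportions
    ```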

  6. Non-Gaussianity from isocurvature perturbations

    Energy Technology Data Exchange (ETDEWEB)

    Kawasaki, Masahiro; Nakayama, Kazunori; Sekiguchi, Toyokazu; Suyama, Teruaki [Institute for Cosmic Ray Research, University of Tokyo, Kashiwa 277-8582 (Japan); Takahashi, Fuminobu, E-mail: kawasaki@icrr.u-tokyo.ac.jp, E-mail: nakayama@icrr.u-tokyo.ac.jp, E-mail: sekiguti@icrr.u-tokyo.ac.jp, E-mail: suyama@icrr.u-tokyo.ac.jp, E-mail: fuminobu.takahashi@ipmu.jp [Institute for the Physics and Mathematics of the Universe, University of Tokyo, Kashiwa 277-8568 (Japan)

    2008-11-15

    We develop a formalism for studying non-Gaussianity in both curvature and isocurvature perturbations. It is shown that non-Gaussianity in the isocurvature perturbation between dark matter and photons leaves distinct signatures in the cosmic microwave background temperature fluctuations, which may be confirmed in future experiments, or possibly even in the currently available observational data. As an explicit example, we consider the quantum chromodynamics axion and show that it can actually induce sizable non-Gaussianity for an inflationary scale of H_inf = O(10^9-10^11) GeV.

  7. Gaussian measures of entanglement versus negativities: Ordering of two-mode Gaussian states

    International Nuclear Information System (INIS)

    Adesso, Gerardo; Illuminati, Fabrizio

    2005-01-01

    We study the entanglement of general (pure or mixed) two-mode Gaussian states of continuous-variable systems by comparing the two available classes of computable measures of entanglement: entropy-inspired Gaussian convex-roof measures and positive partial transposition-inspired measures (negativity and logarithmic negativity). We first review the formalism of Gaussian measures of entanglement, adopting the framework introduced in M. M. Wolf et al., Phys. Rev. A 69, 052320 (2004), where the Gaussian entanglement of formation was defined. We compute explicitly Gaussian measures of entanglement for two important families of nonsymmetric two-mode Gaussian state: namely, the states of extremal (maximal and minimal) negativities at fixed global and local purities, introduced in G. Adesso et al., Phys. Rev. Lett. 92, 087901 (2004). This analysis allows us to compare the different orderings induced on the set of entangled two-mode Gaussian states by the negativities and by the Gaussian measures of entanglement. We find that in a certain range of values of the global and local purities (characterizing the covariance matrix of the corresponding extremal states), states of minimum negativity can have more Gaussian entanglement of formation than states of maximum negativity. Consequently, Gaussian measures and negativities are definitely inequivalent measures of entanglement on nonsymmetric two-mode Gaussian states, even when restricted to a class of extremal states. On the other hand, the two families of entanglement measures are completely equivalent on symmetric states, for which the Gaussian entanglement of formation coincides with the true entanglement of formation. Finally, we show that the inequivalence between the two families of continuous-variable entanglement measures is somehow limited. Namely, we rigorously prove that, at fixed negativities, the Gaussian measures of entanglement are bounded from below. Moreover, we provide some strong evidence suggesting that they

  8. Real-time hybrid simulation using the convolution integral method

    International Nuclear Information System (INIS)

    Kim, Sung Jig; Christenson, Richard E; Wojtkiewicz, Steven F; Johnson, Erik A

    2011-01-01

    This paper proposes a real-time hybrid simulation method that will allow complex systems to be tested within the hybrid test framework by employing the convolution integral (CI) method. The proposed CI method is potentially transformative for real-time hybrid simulation. The CI method can allow real-time hybrid simulation to be conducted regardless of the size and complexity of the numerical model and for numerical stability to be ensured in the presence of high frequency responses in the simulation. This paper presents the general theory behind the proposed CI method and provides experimental verification of the proposed method by comparing the CI method to the current integration time-stepping (ITS) method. Real-time hybrid simulation is conducted in the Advanced Hazard Mitigation Laboratory at the University of Connecticut. A seismically excited two-story shear frame building with a magneto-rheological (MR) fluid damper is selected as the test structure to experimentally validate the proposed method. The building structure is numerically modeled and simulated, while the MR damper is physically tested. Real-time hybrid simulation using the proposed CI method is shown to provide accurate results
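
    At its core the CI method evaluates the analytical substructure's response as a Duhamel (convolution) integral of its impulse response with the force history, which can be sketched numerically as below. The single-degree-of-freedom parameters and the excitation are arbitrary illustrations, not the two-story test structure of the paper.

    ```python
    import numpy as np

    dt = 0.001                                  # time step (s)
    t = np.arange(0.0, 5.0, dt)

    # Impulse response of an underdamped SDOF oscillator (illustrative parameters).
    m, zeta, wn = 100.0, 0.05, 2 * np.pi * 2.0  # mass (kg), damping ratio, natural frequency (rad/s)
    wd = wn * np.sqrt(1 - zeta**2)
    h = np.exp(-zeta * wn * t) * np.sin(wd * t) / (m * wd)

    # Arbitrary excitation force history (N).
    f = 500.0 * np.sin(2 * np.pi * 1.5 * t) * np.exp(-t)

    # Convolution (Duhamel) integral evaluated as a discrete convolution.
    x = np.convolve(f, h)[:len(t)] * dt         # displacement response (m)
    print(x.max())
    ```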

  9. Non-Gaussian information from weak lensing data via deep learning

    Science.gov (United States)

    Gupta, Arushi; Matilla, José Manuel Zorrilla; Hsu, Daniel; Haiman, Zoltán

    2018-05-01

    Weak lensing maps contain information beyond two-point statistics on small scales. Much recent work has tried to extract this information through a range of different observables or via nonlinear transformations of the lensing field. Here we train and apply a two-dimensional convolutional neural network to simulated noiseless lensing maps covering 96 different cosmological models over a range of {Ωm,σ8} . Using the area of the confidence contour in the {Ωm,σ8} plane as a figure of merit, derived from simulated convergence maps smoothed on a scale of 1.0 arcmin, we show that the neural network yields ≈5 × tighter constraints than the power spectrum, and ≈4 × tighter than the lensing peaks. Such gains illustrate the extent to which weak lensing data encode cosmological information not accessible to the power spectrum or even other, non-Gaussian statistics such as lensing peaks.
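
    A toy PyTorch architecture in the spirit of such a map-to-parameters regression network is sketched below; the layer sizes, input resolution, and batch of random maps are assumptions for illustration only, not the network trained in the paper.

    ```python
    import torch
    import torch.nn as nn

    # Toy convolutional regressor mapping a single-channel convergence map to (Omega_m, sigma_8).
    model = nn.Sequential(
        nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(32, 2),                 # two cosmological parameters
    )

    maps = torch.randn(4, 1, 64, 64)      # placeholder batch of simulated lensing maps
    print(model(maps).shape)              # torch.Size([4, 2])
    ```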

  10. Matching of Remote Sensing Images with Complex Background Variations via Siamese Convolutional Neural Network

    Directory of Open Access Journals (Sweden)

    Haiqing He

    2018-02-01

    Full Text Available Feature-based matching methods have been widely used in remote sensing image matching given their capability to achieve excellent performance despite image geometric and radiometric distortions. However, most of the feature-based methods are unreliable for complex background variations, because the gradient or other image grayscale information used to construct the feature descriptor is sensitive to image background variations. Recently, deep learning-based methods have been proven suitable for high-level feature representation and comparison in image matching. Inspired by the progresses made in deep learning, a new technical framework for remote sensing image matching based on the Siamese convolutional neural network is presented in this paper. First, a Siamese-type network architecture is designed to simultaneously learn the features and the corresponding similarity metric from labeled training examples of matching and non-matching true-color patch pairs. In the proposed network, two streams of convolutional and pooling layers sharing identical weights are arranged without the manually designed features. The number of convolutional layers is determined based on the factors that affect image matching. The sigmoid function is employed to compute the matching and non-matching probabilities in the output layer. Second, a gridding sub-pixel Harris algorithm is used to obtain the accurate localization of candidate matches. Third, a Gaussian pyramid coupling quadtree is adopted to gradually narrow down the searching space of the candidate matches, and multiscale patches are compared synchronously. Subsequently, a similarity measure based on the output of the sigmoid is adopted to find the initial matches. Finally, the random sample consensus algorithm and the whole-to-local quadratic polynomial constraints are used to remove false matches. In the experiments, different types of satellite datasets, such as ZY3, GF1, IKONOS, and Google Earth images

  11. FEATURE DESCRIPTOR BY CONVOLUTION AND POOLING AUTOENCODERS

    Directory of Open Access Journals (Sweden)

    L. Chen

    2015-03-01

    Full Text Available In this paper we present several descriptors for feature-based matching based on autoencoders, and we evaluate the performance of these descriptors. In a training phase, we learn autoencoders from image patches extracted in local windows surrounding key points determined by the Difference of Gaussian extractor. In the matching phase, we construct key point descriptors based on the learned autoencoders, and we use these descriptors as the basis for local keypoint descriptor matching. Three types of descriptors based on autoencoders are presented. To evaluate the performance of these descriptors, recall and 1-precision curves are generated for different kinds of transformations, e.g. zoom and rotation, viewpoint change, using a standard benchmark data set. We compare the performance of these descriptors with the one achieved for SIFT. Early results presented in this paper show that, whereas SIFT in general performs better than the new descriptors, the descriptors based on autoencoders show some potential for feature based matching.
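
    One way to realise such a descriptor is an autoencoder whose bottleneck code is used as the keypoint descriptor. The sketch below, in PyTorch, uses a simplified fully connected variant rather than the convolutional and pooling autoencoders of the paper, with assumed patch and code sizes and random placeholder patches.

    ```python
    import torch
    import torch.nn as nn

    PATCH = 16 * 16    # flattened grey-level patch around a keypoint (assumed size)
    CODE = 32          # descriptor length (assumed)

    # Minimal autoencoder; the bottleneck code serves as the patch descriptor.
    encoder = nn.Sequential(nn.Linear(PATCH, 128), nn.ReLU(), nn.Linear(128, CODE))
    decoder = nn.Sequential(nn.Linear(CODE, 128), nn.ReLU(), nn.Linear(128, PATCH))

    patches = torch.rand(256, PATCH)        # placeholder training patches in [0, 1]
    optim = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
    for _ in range(5):                      # a few illustrative training steps
        recon = decoder(encoder(patches))
        loss = nn.functional.mse_loss(recon, patches)
        optim.zero_grad()
        loss.backward()
        optim.step()

    descriptors = encoder(patches)          # one CODE-dimensional descriptor per patch
    print(descriptors.shape)                # torch.Size([256, 32])
    ```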

  12. Optimal unitary dilation for bosonic Gaussian channels

    International Nuclear Information System (INIS)

    Caruso, Filippo; Eisert, Jens; Giovannetti, Vittorio; Holevo, Alexander S.

    2011-01-01

    A general quantum channel can be represented in terms of a unitary interaction between the information-carrying system and a noisy environment. In this paper the minimal number of quantum Gaussian environmental modes required to provide a unitary dilation of a multimode bosonic Gaussian channel is analyzed for both pure and mixed environments. We compute this quantity in the case of pure environment corresponding to the Stinespring representation and give an improved estimate in the case of mixed environment. The computations rely, on one hand, on the properties of the generalized Choi-Jamiolkowski state and, on the other hand, on an explicit construction of the minimal dilation for arbitrary bosonic Gaussian channel. These results introduce a new quantity reflecting ''noisiness'' of bosonic Gaussian channels and can be applied to address some issues concerning transmission of information in continuous variables systems.

  13. Phase statistics in non-Gaussian scattering

    International Nuclear Information System (INIS)

    Watson, Stephen M; Jakeman, Eric; Ridley, Kevin D

    2006-01-01

    Amplitude weighting can improve the accuracy of frequency measurements in signals corrupted by multiplicative speckle noise. When the speckle field constitutes a circular complex Gaussian process, the optimal function of amplitude weighting is provided by the field intensity, corresponding to the intensity-weighted phase derivative statistic. In this paper, we investigate the phase derivative and intensity-weighted phase derivative returned from a two-dimensional random walk, which constitutes a generic scattering model capable of producing both Gaussian and non-Gaussian fluctuations. Analytical results are developed for the correlation properties of the intensity-weighted phase derivative, as well as limiting probability densities of the scattered field. Numerical simulation is used to generate further probability densities and determine optimal weighting criteria from non-Gaussian fields. The results are relevant to frequency retrieval in radiation scattered from random media

  14. Galaxy bias and primordial non-Gaussianity

    Energy Technology Data Exchange (ETDEWEB)

    Assassi, Valentin; Baumann, Daniel [DAMTP, Cambridge University, Wilberforce Road, Cambridge CB3 0WA (United Kingdom); Schmidt, Fabian, E-mail: assassi@ias.edu, E-mail: D.D.Baumann@uva.nl, E-mail: fabians@MPA-Garching.MPG.DE [Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Str. 1, 85748 Garching (Germany)

    2015-12-01

    We present a systematic study of galaxy biasing in the presence of primordial non-Gaussianity. For a large class of non-Gaussian initial conditions, we define a general bias expansion and prove that it is closed under renormalization, thereby showing that the basis of operators in the expansion is complete. We then study the effects of primordial non-Gaussianity on the statistics of galaxies. We show that the equivalence principle enforces a relation between the scale-dependent bias in the galaxy power spectrum and that in the dipolar part of the bispectrum. This provides a powerful consistency check to confirm the primordial origin of any observed scale-dependent bias. Finally, we also discuss the imprints of anisotropic non-Gaussianity as motivated by recent studies of higher-spin fields during inflation.

  15. Optimal cloning of mixed Gaussian states

    International Nuclear Information System (INIS)

    Guta, Madalin; Matsumoto, Keiji

    2006-01-01

    We construct the optimal one to two cloning transformation for the family of displaced thermal equilibrium states of a harmonic oscillator, with a fixed and known temperature. The transformation is Gaussian and it is optimal with respect to the figure of merit based on the joint output state and norm distance. The proof of the result is based on the equivalence between the optimal cloning problem and that of optimal amplification of Gaussian states which is then reduced to an optimization problem for diagonal states of a quantum oscillator. A key concept in finding the optimum is that of stochastic ordering which plays a similar role in the purely classical problem of Gaussian cloning. The result is then extended to the case of n to m cloning of mixed Gaussian states

  16. Encoding information using laguerre gaussian modes

    CSIR Research Space (South Africa)

    Trichili, A

    2015-08-01

    Full Text Available The authors experimentally demonstrate an information encoding protocol using the two degrees of freedom of Laguerre Gaussian modes having different radial and azimuthal components. A novel method, based on digital holography, for information...

  17. Interweave Cognitive Radio with Improper Gaussian Signaling

    KAUST Repository

    Hedhly, Wafa; Amin, Osama; Alouini, Mohamed-Slim

    2018-01-01

    Improper Gaussian signaling (IGS) has proven its ability to improve the performance of underlay and overlay cognitive radio paradigms. In this paper, the interweave cognitive radio paradigm is studied when the cognitive user employs IGS.

  18. Galaxy bias and primordial non-Gaussianity

    International Nuclear Information System (INIS)

    Assassi, Valentin; Baumann, Daniel; Schmidt, Fabian

    2015-01-01

    We present a systematic study of galaxy biasing in the presence of primordial non-Gaussianity. For a large class of non-Gaussian initial conditions, we define a general bias expansion and prove that it is closed under renormalization, thereby showing that the basis of operators in the expansion is complete. We then study the effects of primordial non-Gaussianity on the statistics of galaxies. We show that the equivalence principle enforces a relation between the scale-dependent bias in the galaxy power spectrum and that in the dipolar part of the bispectrum. This provides a powerful consistency check to confirm the primordial origin of any observed scale-dependent bias. Finally, we also discuss the imprints of anisotropic non-Gaussianity as motivated by recent studies of higher-spin fields during inflation

  19. Statistically tuned Gaussian background subtraction technique for ...

    Indian Academy of Sciences (India)

    temporal median method and mixture of Gaussian model and performance evaluation ... to process the videos captured by unmanned aerial vehicle (UAV). ..... The output is obtained by simulation using MATLAB 2010 in a standalone PC with ...

  20. A non-Gaussian multivariate distribution with all lower-dimensional Gaussians and related families

    KAUST Repository

    Dutta, Subhajit

    2014-07-28

    Several fascinating examples of non-Gaussian bivariate distributions whose marginal distribution functions are Gaussian have been proposed in the literature. These examples often clarify several properties associated with the normal distribution. In this paper, we generalize this result in the sense that we construct a p-dimensional distribution for which any proper subset of its components has the Gaussian distribution. However, the joint p-dimensional distribution is inconsistent with the distribution of these subsets because it is not Gaussian. We study the probabilistic properties of this non-Gaussian multivariate distribution in detail. Interestingly, several popular tests of multivariate normality fail to identify this p-dimensional distribution as non-Gaussian. We further extend our construction to a class of elliptically contoured distributions as well as skewed distributions arising from selections, for instance the multivariate skew-normal distribution.

  1. A non-Gaussian multivariate distribution with all lower-dimensional Gaussians and related families

    KAUST Repository

    Dutta, Subhajit; Genton, Marc G.

    2014-01-01

    Several fascinating examples of non-Gaussian bivariate distributions whose marginal distribution functions are Gaussian have been proposed in the literature. These examples often clarify several properties associated with the normal distribution. In this paper, we generalize this result in the sense that we construct a p-dimensional distribution for which any proper subset of its components has the Gaussian distribution. However, the joint p-dimensional distribution is inconsistent with the distribution of these subsets because it is not Gaussian. We study the probabilistic properties of this non-Gaussian multivariate distribution in detail. Interestingly, several popular tests of multivariate normality fail to identify this p-dimensional distribution as non-Gaussian. We further extend our construction to a class of elliptically contoured distributions as well as skewed distributions arising from selections, for instance the multivariate skew-normal distribution.

  2. A Decentralized Receiver in Gaussian Interference

    Directory of Open Access Journals (Sweden)

    Christian D. Chapman

    2018-04-01

    Full Text Available Bounds are developed on the maximum communications rate between a transmitter and a fusion node aided by a cluster of distributed receivers with limited resources for cooperation, all in the presence of an additive Gaussian interferer. The receivers cannot communicate with one another and can only convey processed versions of their observations to the fusion center through a Local Array Network (LAN with limited total throughput. The effectiveness of each bound’s approach for mitigating a strong interferer is assessed over a wide range of channels. It is seen that, if resources are shared effectively, even a simple quantize-and-forward strategy can mitigate an interferer 20 dB stronger than the signal in a diverse range of spatially Ricean channels. Monte-Carlo experiments for the bounds reveal that, while achievable rates are stable when varying the receiver’s observed scattered-path to line-of-sight signal power, the receivers must adapt how they share resources in response to this change. The bounds analyzed are proven to be achievable and are seen to be tight with capacity when LAN resources are either ample or limited.

  3. A novel construction of complex-valued Gaussian processes with arbitrary spectral densities and its application to excitation energy transfer.

    Science.gov (United States)

    Chen, Xin; Cao, Jianshu; Silbey, Robert J

    2013-06-14

    The recent experimental discoveries about excitation energy transfer (EET) in light harvesting antenna (LHA) have attracted a lot of interest. As an open non-equilibrium quantum system, the EET demands a more rigorous theoretical framework to understand the interaction between system and environment and therein the evolution of the reduced density matrix. A phonon bath is often used to model the fluctuating environment and convolutes the reduced quantum system temporally. In this paper, we propose a novel way to construct complex-valued Gaussian processes to describe a thermal quantum phonon bath exactly by converting the convolution of the influence functional into the time correlation of a complex Gaussian random field. Based on the construction, we propose a rigorous and efficient computational method, the covariance decomposition and conditional propagation scheme, to simulate the temporally entangled reduced system. The new method allows us to study the non-Markovian effect without perturbation under the influence of different spectral densities of the linear system-phonon coupling coefficients. Its application in the study of EET in the Fenna-Matthews-Olson model Hamiltonian under four different spectral densities is discussed. Since the scaling of our algorithm is linear due to its Monte Carlo nature, the future application of the method for large LHA systems is attractive. In addition, this method can be used to study the effect of correlated initial conditions on the reduced dynamics in the future.
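
    A minimal sketch of the core construction idea is given below: draw a complex-valued Gaussian process with a prescribed two-time correlation by decomposing its covariance matrix. The exponential-cosine correlation and all parameter values are placeholders, not the bath correlation function of the paper.

      import numpy as np

      rng = np.random.default_rng(1)
      n, dt = 256, 0.05
      t = np.arange(n) * dt

      # Target two-time correlation C(t - s) (Hermitian, positive semi-definite).
      def corr(tau, gamma=1.0, omega=3.0):
          return np.exp(-gamma * np.abs(tau)) * np.exp(-1j * omega * tau)

      C = corr(t[:, None] - t[None, :])

      # Covariance decomposition C = L L^H (Cholesky with a small jitter for stability).
      L = np.linalg.cholesky(C + 1e-10 * np.eye(n))

      # Draw complex Gaussian samples xi(t) with E[xi(t) xi*(s)] = C(t - s).
      n_traj = 5000
      white = (rng.standard_normal((n, n_traj)) + 1j * rng.standard_normal((n, n_traj))) / np.sqrt(2)
      xi = L @ white

      # Empirical check of the correlation against the target.
      emp = (xi * np.conj(xi[0:1, :])).mean(axis=1)
      print(np.max(np.abs(emp - C[:, 0])))     # should be small (Monte Carlo error only)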

  4. Gaussian sum rules for optical functions

    International Nuclear Information System (INIS)

    Kimel, I.

    1981-12-01

    A new (Gaussian) type of sum rules (GSR) for several optical functions is presented. The functions considered are: dielectric permeability, refractive index, energy loss function, rotatory power and ellipticity (circular dichroism). While reducing to the usual type of sum rules in a certain limit, the GSR contain in general a Gaussian factor that serves to improve convergence. GSR might be useful in analysing experimental data. (Author)

  5. Gaussian maximally multipartite-entangled states

    Science.gov (United States)

    Facchi, Paolo; Florio, Giuseppe; Lupo, Cosmo; Mancini, Stefano; Pascazio, Saverio

    2009-12-01

    We study maximally multipartite-entangled states in the context of Gaussian continuous variable quantum systems. By considering multimode Gaussian states with constrained energy, we show that perfect maximally multipartite-entangled states, which exhibit the maximum amount of bipartite entanglement for all bipartitions, only exist for systems containing n=2 or 3 modes. We further numerically investigate the structure of these states and their frustration for n≤7.

  6. Gaussian maximally multipartite-entangled states

    International Nuclear Information System (INIS)

    Facchi, Paolo; Florio, Giuseppe; Pascazio, Saverio; Lupo, Cosmo; Mancini, Stefano

    2009-01-01

    We study maximally multipartite-entangled states in the context of Gaussian continuous variable quantum systems. By considering multimode Gaussian states with constrained energy, we show that perfect maximally multipartite-entangled states, which exhibit the maximum amount of bipartite entanglement for all bipartitions, only exist for systems containing n=2 or 3 modes. We further numerically investigate the structure of these states and their frustration for n≤7.

  7. Non-Gaussian halo assembly bias

    International Nuclear Information System (INIS)

    Reid, Beth A.; Verde, Licia; Dolag, Klaus; Matarrese, Sabino; Moscardini, Lauro

    2010-01-01

    The strong dependence of the large-scale dark matter halo bias on the (local) non-Gaussianity parameter, f_NL, offers a promising avenue towards constraining primordial non-Gaussianity with large-scale structure surveys. In this paper, we present the first detection of the dependence of the non-Gaussian halo bias on halo formation history using N-body simulations. We also present an analytic derivation of the expected signal based on the extended Press-Schechter formalism. In excellent agreement with our analytic prediction, we find that the halo formation history-dependent contribution to the non-Gaussian halo bias (which we call non-Gaussian halo assembly bias) can be factorized in a form approximately independent of redshift and halo mass. The correction to the non-Gaussian halo bias due to the halo formation history can be as large as 100%, with a suppression of the signal for recently formed halos and enhancement for old halos. This could in principle be a problem for realistic galaxy surveys if observational selection effects were to pick galaxies occupying only recently formed halos. Current semi-analytic galaxy formation models, for example, imply an enhancement in the expected signal of ∼ 23% and ∼ 48% for galaxies at z = 1 selected by stellar mass and star formation rate, respectively.

  8. Adaptive Laguerre-Gaussian variant of the Gaussian beam expansion method.

    Science.gov (United States)

    Cagniot, Emmanuel; Fromager, Michael; Ait-Ameur, Kamel

    2009-11-01

    A variant of the Gaussian beam expansion method consists in expanding the Bessel function J0 appearing in the Fresnel-Kirchhoff integral into a finite sum of complex Gaussian functions to derive an analytical expression for a Laguerre-Gaussian beam diffracted through a hard-edge aperture. However, the validity range of the approximation depends on the number of expansion coefficients, which are obtained directly by numerical optimization. We propose another solution, consisting of expanding J0 onto a set of collimated Laguerre-Gaussian functions whose waist depends on their number and then, depending on its argument, predicting the suitable number of expansion functions to calculate the integral recursively.
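
    The expansion idea can be illustrated with a toy calculation: fix a grid of complex Gaussian widths and obtain the amplitudes by linear least squares. The grid, the interval and the number of terms below are arbitrary; the paper instead optimizes the coefficients or expands onto Laguerre-Gaussian functions.

      import numpy as np
      from scipy.special import j0

      # Toy illustration of expanding J0(x) into a finite sum of complex Gaussians
      # A_k * exp(-B_k x^2).  The complex widths B_k are fixed on an arbitrary grid
      # and only the amplitudes A_k are obtained by linear least squares.
      x = np.linspace(0.0, 8.0, 400)
      B = (np.geomspace(0.05, 1.0, 4)[:, None] + 1j * np.linspace(0.1, 2.5, 5)[None, :]).ravel()
      basis = np.exp(-np.outer(x**2, B))                 # shape (len(x), len(B))
      A, *_ = np.linalg.lstsq(basis, j0(x).astype(complex), rcond=None)

      err = np.max(np.abs(basis @ A - j0(x)))
      print(f"{len(B)} complex-Gaussian terms, max abs error on [0, 8]: {err:.3e}")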

  9. Lidar Cloud Detection with Fully Convolutional Networks

    Science.gov (United States)

    Cromwell, E.; Flynn, D.

    2017-12-01

    The vertical distribution of clouds from active remote sensing instrumentation is a widely used data product from global atmospheric measuring sites. The presence of clouds can be expressed as a binary cloud mask and is a primary input for climate modeling efforts and cloud formation studies. Current cloud detection algorithms producing these masks do not accurately identify the cloud boundaries and tend to oversample or over-represent the cloud. This translates as uncertainty for assessing the radiative impact of clouds and tracking changes in cloud climatologies. The Atmospheric Radiation Measurement (ARM) program has over 20 years of micro-pulse lidar (MPL) and High Spectral Resolution Lidar (HSRL) instrument data and companion automated cloud mask product at the mid-latitude Southern Great Plains (SGP) and the polar North Slope of Alaska (NSA) atmospheric observatory. Using this data, we train a fully convolutional network (FCN) with semi-supervised learning to segment lidar imagery into geometric time-height cloud locations for the SGP site and MPL instrument. We then use transfer learning to train a FCN for (1) the MPL instrument at the NSA site and (2) for the HSRL. In our semi-supervised approach, we pre-train the classification layers of the FCN with weakly labeled lidar data. Then, we facilitate end-to-end unsupervised pre-training and transition to fully supervised learning with ground truth labeled data. Our goal is to improve the cloud mask accuracy and precision for the MPL instrument to 95% and 80%, respectively, compared to the current cloud mask algorithms of 89% and 50%. For the transfer learning based FCN for the HSRL instrument, our goal is to achieve a cloud mask accuracy of 90% and a precision of 80%.
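
    A minimal fully convolutional sketch of the per-pixel segmentation idea is given below, written in PyTorch as an assumption; the layer count, channel widths and input shapes are placeholders and not the architecture used by the authors.

      import torch
      import torch.nn as nn

      class TinyFCN(nn.Module):
          """Minimal fully convolutional net: time-height backscatter image -> per-pixel cloud probability."""
          def __init__(self, in_ch=1, width=16):
              super().__init__()
              self.net = nn.Sequential(
                  nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(width, 1, 1),               # 1x1 conv: per-pixel logits
              )

          def forward(self, x):
              return torch.sigmoid(self.net(x))

      # One training step on random stand-in data (time x height lidar image + binary cloud mask).
      model = TinyFCN()
      x = torch.randn(4, 1, 128, 256)          # batch of 4 time-height images
      y = (torch.rand(4, 1, 128, 256) > 0.7).float()
      loss = nn.functional.binary_cross_entropy(model(x), y)
      loss.backward()
      print("loss:", float(loss))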

  10. Vision-based mobile robot navigation through deep convolutional neural networks and end-to-end learning

    Science.gov (United States)

    Zhang, Yachu; Zhao, Yuejin; Liu, Ming; Dong, Liquan; Kong, Lingqin; Liu, Lingling

    2017-09-01

    In contrast to humans, who use only visual information for navigation, many mobile robots use laser scanners and ultrasonic sensors along with vision cameras to navigate. This work proposes a vision-based robot control algorithm based on deep convolutional neural networks. We create a large 15-layer convolutional neural network learning system and achieve advanced recognition performance. Our system is trained end to end to map raw input images to directions in supervised mode. The images of the data sets are collected in a wide variety of weather and lighting conditions. In addition, the data sets are augmented by adding Gaussian noise and salt-and-pepper noise to avoid overfitting. The algorithm is verified by two experiments: line tracking and obstacle avoidance. The line tracking experiment is carried out to track a desired path composed of straight and curved lines. The goal of the obstacle avoidance experiment is to avoid obstacles indoors. Finally, we obtain a 3.29% error rate on the training set and a 5.1% error rate on the test set in the line tracking experiment, and a 1.8% error rate on the training set and less than a 5% error rate on the test set in the obstacle avoidance experiment. During the actual test, the robot can follow the runway centerline outdoors and avoid the obstacle in the room accurately. The results confirm the effectiveness of the algorithm and our improvements in the network structure and training parameters.

  11. Detecting atrial fibrillation by deep convolutional neural networks.

    Science.gov (United States)

    Xia, Yong; Wulan, Naren; Wang, Kuanquan; Zhang, Henggui

    2018-02-01

    Atrial fibrillation (AF) is the most common cardiac arrhythmia. The incidence of AF increases with age, causing high risks of stroke and increased morbidity and mortality. Efficient and accurate diagnosis of AF based on the ECG is valuable in clinical settings and remains challenging. In this paper, we proposed a novel method with high reliability and accuracy for AF detection via deep learning. The short-term Fourier transform (STFT) and stationary wavelet transform (SWT) were used to analyze ECG segments to obtain two-dimensional (2-D) matrix input suitable for deep convolutional neural networks. Then, two different deep convolutional neural network models corresponding to STFT output and SWT output were developed. Our new method did not require detection of P or R peaks, nor feature designs for classification, in contrast to existing algorithms. Finally, the performances of the two models were evaluated and compared with those of existing algorithms. Our proposed method demonstrated favorable performances on ECG segments as short as 5 s. The deep convolutional neural network using input generated by STFT, presented a sensitivity of 98.34%, specificity of 98.24% and accuracy of 98.29%. For the deep convolutional neural network using input generated by SWT, a sensitivity of 98.79%, specificity of 97.87% and accuracy of 98.63% was achieved. The proposed method using deep convolutional neural networks shows high sensitivity, specificity and accuracy, and, therefore, is a valuable tool for AF detection. Copyright © 2017 Elsevier Ltd. All rights reserved.
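
    The STFT preprocessing step that turns a short ECG segment into a 2-D matrix suitable for a convolutional network can be sketched as follows; the sampling rate, window parameters and the synthetic stand-in signal are assumptions, not the settings of the paper.

      import numpy as np
      from scipy.signal import stft

      fs = 300                                   # assumed sampling rate (Hz)
      t = np.arange(0, 5.0, 1 / fs)              # a 5 s ECG segment
      ecg = np.sin(2 * np.pi * 1.2 * t) + 0.05 * np.random.randn(t.size)   # stand-in signal

      # Short-term Fourier transform -> 2-D magnitude matrix (frequency x time).
      f, tt, Z = stft(ecg, fs=fs, nperseg=64, noverlap=48)
      spectrogram = np.abs(Z)
      print(spectrogram.shape)    # one "image" per ECG segment, fed to the 2-D CNN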

  12. Video Super-Resolution via Bidirectional Recurrent Convolutional Networks.

    Science.gov (United States)

    Huang, Yan; Wang, Wei; Wang, Liang

    2018-04-01

    Super resolving a low-resolution video, namely video super-resolution (SR), is usually handled by either single-image SR or multi-frame SR. Single-image SR deals with each video frame independently, and ignores the intrinsic temporal dependency of video frames which actually plays a very important role in video SR. Multi-frame SR generally extracts motion information, e.g., optical flow, to model the temporal dependency, but often shows high computational cost. Considering that recurrent neural networks (RNNs) can model long-term temporal dependency of video sequences well, we propose a fully convolutional RNN named bidirectional recurrent convolutional network for efficient multi-frame SR. Different from vanilla RNNs, 1) the commonly-used full feedforward and recurrent connections are replaced with weight-sharing convolutional connections, so they can greatly reduce the large number of network parameters and well model the temporal dependency at a finer level, i.e., patch-based rather than frame-based, and 2) connections from input layers at previous timesteps to the current hidden layer are added by 3D feedforward convolutions, which aim to capture discriminative spatio-temporal patterns for short-term fast-varying motions in local adjacent frames. Due to the cheap convolutional operations, our model has a low computational complexity and runs orders of magnitude faster than other multi-frame SR methods. With the powerful temporal dependency modeling, our model can super resolve videos with complex motions and achieve good performance.

  13. New gaussian points for the solution of first order ordinary ...

    African Journals Online (AJOL)

    Numerical experiments carried out using the new Gaussian points revealed their efficiency on stiff differential equations. The results also reveal that methods using the new Gaussian points are more accurate than those using the standard Gaussian points on non-stiff initial value problems. Keywords: Gaussian points ...

  14. Calculations of Sobol indices for the Gaussian process metamodel

    Energy Technology Data Exchange (ETDEWEB)

    Marrel, Amandine [CEA, DEN, DTN/SMTM/LMTE, F-13108 Saint Paul lez Durance (France)], E-mail: amandine.marrel@cea.fr; Iooss, Bertrand [CEA, DEN, DER/SESI/LCFR, F-13108 Saint Paul lez Durance (France); Laurent, Beatrice [Institut de Mathematiques, Universite de Toulouse (UMR 5219) (France); Roustant, Olivier [Ecole des Mines de Saint-Etienne (France)

    2009-03-15

    Global sensitivity analysis of complex numerical models can be performed by calculating variance-based importance measures of the input variables, such as the Sobol indices. However, these techniques, requiring a large number of model evaluations, are often unacceptable for time-expensive computer codes. A well-known and widely used approach consists of replacing the computer code by a metamodel, predicting the model responses with a negligible computation time and rendering straightforward the estimation of Sobol indices. In this paper, we discuss the Gaussian process model, which gives analytical expressions of Sobol indices. Two approaches are studied to compute the Sobol indices: the first based on the predictor of the Gaussian process model and the second based on the global stochastic process model. Comparisons between the two estimates, made on analytical examples, show the superiority of the second approach in terms of convergence and robustness. Moreover, the second approach allows one to integrate the modeling error of the Gaussian process model by directly giving some confidence intervals on the Sobol indices. These techniques are finally applied to a real case of hydrogeological modeling.
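
    A sketch of the first approach (Sobol indices estimated from the Gaussian process predictor) is given below, using scikit-learn and a pick-freeze Monte Carlo estimator on the Ishigami test function rather than the analytical expressions derived in the paper; the design size and kernel choice are arbitrary.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF

      rng = np.random.default_rng(2)
      ishigami = lambda X: (np.sin(X[:, 0]) + 7 * np.sin(X[:, 1])**2
                            + 0.1 * X[:, 2]**4 * np.sin(X[:, 0]))

      # Fit the GP metamodel on a small design (surrogate for the expensive code).
      X_train = rng.uniform(-np.pi, np.pi, (150, 3))
      gp = GaussianProcessRegressor(kernel=RBF(length_scale=[1.0, 1.0, 1.0]),
                                    normalize_y=True).fit(X_train, ishigami(X_train))

      # First-order Sobol indices of the *predictor* via a pick-freeze estimator.
      n = 20000
      A = rng.uniform(-np.pi, np.pi, (n, 3))
      B = rng.uniform(-np.pi, np.pi, (n, 3))
      fA, fB = gp.predict(A), gp.predict(B)
      var = fA.var()
      for i in range(3):
          ABi = A.copy(); ABi[:, i] = B[:, i]
          S_i = np.mean(fB * (gp.predict(ABi) - fA)) / var
          print(f"S_{i+1} ~ {S_i:.2f}")    # analytical Ishigami values: ~0.31, 0.44, 0.00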

  15. Global sensitivity analysis using a Gaussian Radial Basis Function metamodel

    International Nuclear Information System (INIS)

    Wu, Zeping; Wang, Donghui; Okolo N, Patrick; Hu, Fan; Zhang, Weihua

    2016-01-01

    Sensitivity analysis plays an important role in exploring the actual impact of adjustable parameters on response variables. Amongst the wide range of documented studies on sensitivity measures and analysis, Sobol' indices have received a greater portion of attention due to the fact that they can provide accurate information for most models. In this paper, a novel analytical expression to compute the Sobol' indices is derived by introducing a method which uses the Gaussian Radial Basis Function to build metamodels of computationally expensive computer codes. Performance of the proposed method is validated against various analytical functions and also a structural simulation scenario. Results demonstrate that the proposed method is an efficient approach, requiring a computational cost of one to two orders of magnitude less when compared to the traditional Quasi Monte Carlo-based evaluation of Sobol' indices. - Highlights: • RBF based sensitivity analysis method is proposed. • Sobol' decomposition of Gaussian RBF metamodel is obtained. • Sobol' indices of Gaussian RBF metamodel are derived based on the decomposition. • The efficiency of proposed method is validated by some numerical examples.

  16. Calculations of Sobol indices for the Gaussian process metamodel

    International Nuclear Information System (INIS)

    Marrel, Amandine; Iooss, Bertrand; Laurent, Beatrice; Roustant, Olivier

    2009-01-01

    Global sensitivity analysis of complex numerical models can be performed by calculating variance-based importance measures of the input variables, such as the Sobol indices. However, these techniques, requiring a large number of model evaluations, are often unacceptable for time-expensive computer codes. A well-known and widely used approach consists of replacing the computer code by a metamodel, predicting the model responses with a negligible computation time and rendering straightforward the estimation of Sobol indices. In this paper, we discuss the Gaussian process model, which gives analytical expressions of Sobol indices. Two approaches are studied to compute the Sobol indices: the first based on the predictor of the Gaussian process model and the second based on the global stochastic process model. Comparisons between the two estimates, made on analytical examples, show the superiority of the second approach in terms of convergence and robustness. Moreover, the second approach allows one to integrate the modeling error of the Gaussian process model by directly giving some confidence intervals on the Sobol indices. These techniques are finally applied to a real case of hydrogeological modeling.

  17. Invariant moments based convolutional neural networks for image analysis

    Directory of Open Access Journals (Sweden)

    Vijayalakshmi G.V. Mahesh

    2017-01-01

    Full Text Available The paper proposes a method using a convolutional neural network to effectively evaluate the discrimination between face and non-face patterns, gender classification using facial images and facial expression recognition. The novelty of the method lies in the utilization of initial trainable convolution kernel coefficients derived from Zernike moments by varying the moment order. The performance of the proposed method was compared with a convolutional neural network architecture that used random kernels as initial training parameters. The multilevel configuration of Zernike moments was significant in extracting the shape information suitable for hierarchical feature learning to carry out image analysis and classification. Furthermore, the results showed an outstanding performance of Zernike moment-based kernels in terms of computation time and classification accuracy.

  18. Single image super-resolution based on convolutional neural networks

    Science.gov (United States)

    Zou, Lamei; Luo, Ming; Yang, Weidong; Li, Peng; Jin, Liujia

    2018-03-01

    We present a deep learning method for single image super-resolution (SISR). The proposed approach learns an end-to-end mapping between low-resolution (LR) images and high-resolution (HR) images. The mapping is represented as a deep convolutional neural network which takes the LR image as input and outputs the HR image. Our network uses 5 convolution layers, whose kernel sizes include 5×5, 3×3 and 1×1. In our proposed network, we use residual learning and combine different sizes of convolution kernels at the same layer. The experimental results show that our proposed method performs better than existing methods in terms of reconstruction quality indices and human visual effects on benchmark images.

  19. Vibration analysis of FG cylindrical shells with power-law index using discrete singular convolution technique

    Science.gov (United States)

    Mercan, Kadir; Demir, Çiǧdem; Civalek, Ömer

    2016-01-01

    In the present manuscript, the free vibration response of circular cylindrical shells of functionally graded material (FGM) is investigated. The method of discrete singular convolution (DSC) is used for the numerical solution of the related governing equation of motion of the FGM cylindrical shell. The constitutive relations are based on Love's first approximation shell theory. The material properties are graded in the thickness direction according to a volume-fraction power-law index. Frequency values are calculated for different types of boundary conditions, material and geometric parameters. In general, close agreement between the obtained results and those of other researchers has been found.

  20. Semiparametric inference on the fractal index of Gaussian and conditionally Gaussian time series data

    DEFF Research Database (Denmark)

    Bennedsen, Mikkel

    Using theory on (conditionally) Gaussian processes with stationary increments developed in Barndorff-Nielsen et al. (2009, 2011), this paper presents a general semiparametric approach to conducting inference on the fractal index, α, of a time series. Our setup encompasses a large class of Gaussian...

  1. Spacings and pair correlations for finite Bernoulli convolutions

    International Nuclear Information System (INIS)

    Benjamini, Itai; Solomyak, Boris

    2009-01-01

    We consider finite Bernoulli convolutions with a parameter 1/2 < λ < 1. These sequences are uniformly distributed with respect to the infinite Bernoulli convolution measure ν_λ, as N → ∞. Numerical evidence suggests that for a generic λ, the distribution of spacings between appropriately rescaled points is Poissonian. We obtain some partial results in this direction; for instance, we show that, on average, the pair correlations do not exhibit attraction or repulsion in the limit. On the other hand, for certain algebraic λ the behaviour is totally different.

  2. Convolutional cylinder-type block-circulant cycle codes

    Directory of Open Access Journals (Sweden)

    Mohammad Gholami

    2013-06-01

    Full Text Available In this paper, we consider a class of column-weight-two quasi-cyclic low-density parity-check codes in which the girth can be made arbitrarily large, as an arbitrary multiple of 8. We then give a convolutional form of these codes, such that their generator matrix can be obtained by elementary row and column operations on the parity-check matrix. Finally, we show that the free distance of the convolutional codes is equal to the minimum distance of their block counterparts.

  3. Weed Growth Stage Estimator Using Deep Convolutional Neural Networks

    DEFF Research Database (Denmark)

    Teimouri, Nima; Dyrmann, Mads; Nielsen, Per Rydahl

    2018-01-01

    conditions with regards to soil types, resolution and light settings. Then, 9649 of these images were used for training the computer, which automatically divided the weeds into nine growth classes. The performance of this proposed convolutional neural network approach was evaluated on a further set of 2516...... in estimating the number of leaves and 96% accuracy when accepting a deviation of two leaves. These results show that this new method of using deep convolutional neural networks has a relatively high ability to estimate early growth stages across a wide variety of weed species....

  4. Deep Convolutional Neural Networks: Structure, Feature Extraction and Training

    Directory of Open Access Journals (Sweden)

    Namatēvs Ivars

    2017-12-01

    Full Text Available Deep convolutional neural networks (CNNs) are aimed at processing data that have a known, network-like topology. They are widely used to recognise objects in images and diagnose patterns in time series data as well as in sensor data classification. The aim of the paper is to present theoretical and practical aspects of deep CNNs in terms of the convolution operation, typical layers and basic methods to be used for training and learning. Some practical applications are included for signal and image classification. Finally, the present paper describes the proposed block structure of a CNN for classifying crucial features from 3D sensor data.

  5. Very deep recurrent convolutional neural network for object recognition

    Science.gov (United States)

    Brahimi, Sourour; Ben Aoun, Najib; Ben Amar, Chokri

    2017-03-01

    In recent years, computer vision has become a very active field. This field includes methods for processing, analyzing, and understanding images. The most challenging problems in computer vision are image classification and object recognition. This paper presents a new approach for the object recognition task. This approach exploits the success of the Very Deep Convolutional Neural Network for object recognition. In fact, it improves the convolutional layers by adding recurrent connections. This proposed approach was evaluated on two object recognition benchmarks: Pascal VOC 2007 and CIFAR-10. The experimental results prove the efficiency of our method in comparison with state-of-the-art methods.

  6. Spectral interpolation - Zero fill or convolution. [image processing

    Science.gov (United States)

    Forman, M. L.

    1977-01-01

    Zero fill, or augmentation by zeros, is a method used in conjunction with fast Fourier transforms to obtain spectral spacing at intervals closer than obtainable from the original input data set. In the present paper, an interpolation technique (interpolation by repetitive convolution) is proposed which yields values accurate enough for plotting purposes and which lie within the limits of calibration accuracies. The technique is shown to operate faster than zero fill, since fewer operations are required. The major advantages of interpolation by repetitive convolution are that efficient use of memory is possible (thus avoiding the difficulties encountered in decimation-in-time FFTs) and that it is easy to implement.
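
    A minimal numerical check of the zero-fill approach on a band-limited test signal is sketched below; the convolution-based interpolation discussed in the paper is not reproduced here, and the signal content and upsampling factor are arbitrary.

      import numpy as np

      # Band-limited test signal sampled at N points.
      N, upsample = 64, 4
      n = np.arange(N)
      x = np.cos(2 * np.pi * 3 * n / N) + 0.5 * np.sin(2 * np.pi * 7 * n / N)

      # Zero fill: insert zeros in the middle of the spectrum, then inverse FFT.
      X = np.fft.fft(x)
      Xz = np.zeros(N * upsample, dtype=complex)
      Xz[:N // 2] = X[:N // 2]
      Xz[-(N // 2):] = X[-(N // 2):]
      x_fine = np.fft.ifft(Xz).real * upsample        # rescale for the longer transform

      # Compare against the signal evaluated directly on the fine grid.
      nf = np.arange(N * upsample) / upsample
      x_true = np.cos(2 * np.pi * 3 * nf / N) + 0.5 * np.sin(2 * np.pi * 7 * nf / N)
      print("max interpolation error:", np.max(np.abs(x_fine - x_true)))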

  7. Graphical calculus for Gaussian pure states

    International Nuclear Information System (INIS)

    Menicucci, Nicolas C.; Flammia, Steven T.; Loock, Peter van

    2011-01-01

    We provide a unified graphical calculus for all Gaussian pure states, including graph transformation rules for all local and semilocal Gaussian unitary operations, as well as local quadrature measurements. We then use this graphical calculus to analyze continuous-variable (CV) cluster states, the essential resource for one-way quantum computing with CV systems. Current graphical approaches to CV cluster states are only valid in the unphysical limit of infinite squeezing, and the associated graph transformation rules only apply when the initial and final states are of this form. Our formalism applies to all Gaussian pure states and subsumes these rules in a natural way. In addition, the term 'CV graph state' currently has several inequivalent definitions in use. Using this formalism we provide a single unifying definition that encompasses all of them. We provide many examples of how the formalism may be used in the context of CV cluster states: defining the 'closest' CV cluster state to a given Gaussian pure state and quantifying the error in the approximation due to finite squeezing; analyzing the optimality of certain methods of generating CV cluster states; drawing connections between this graphical formalism and bosonic Hamiltonians with Gaussian ground states, including those useful for CV one-way quantum computing; and deriving a graphical measure of bipartite entanglement for certain classes of CV cluster states. We mention other possible applications of this formalism and conclude with a brief note on fault tolerance in CV one-way quantum computing.

  8. Variational Gaussian approximation for Poisson data

    Science.gov (United States)

    Arridge, Simon R.; Ito, Kazufumi; Jin, Bangti; Zhang, Chen

    2018-02-01

    The Poisson model is frequently employed to describe count data, but in a Bayesian context it leads to an analytically intractable posterior probability distribution. In this work, we analyze a variational Gaussian approximation to the posterior distribution arising from the Poisson model with a Gaussian prior. This is achieved by seeking an optimal Gaussian distribution minimizing the Kullback-Leibler divergence from the posterior distribution to the approximation, or equivalently maximizing the lower bound for the model evidence. We derive an explicit expression for the lower bound, and show the existence and uniqueness of the optimal Gaussian approximation. The lower bound functional can be viewed as a variant of classical Tikhonov regularization that penalizes also the covariance. Then we develop an efficient alternating direction maximization algorithm for solving the optimization problem, and analyze its convergence. We discuss strategies for reducing the computational complexity via low rank structure of the forward operator and the sparsity of the covariance. Further, as an application of the lower bound, we discuss hierarchical Bayesian modeling for selecting the hyperparameter in the prior distribution, and propose a monotonically convergent algorithm for determining the hyperparameter. We present extensive numerical experiments to illustrate the Gaussian approximation and the algorithms.
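
    The idea can be reduced to a one-dimensional toy problem: a single Poisson count with a log-link and a standard normal prior, for which the expectation appearing in the lower bound has the closed form exp(m + s²/2). The sketch below maximizes this ELBO numerically; the large-scale alternating scheme and the hyperparameter selection of the paper are not reproduced, and the observed count is arbitrary.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.special import gammaln

      y = 6                     # one observed count
      # Model: y ~ Poisson(exp(x)), prior x ~ N(0, 1).  Approximate the posterior
      # by q(x) = N(m, s^2) maximizing the evidence lower bound (ELBO).
      def neg_elbo(params):
          m, log_s = params
          s2 = np.exp(2 * log_s)
          e_loglik = y * m - np.exp(m + 0.5 * s2) - gammaln(y + 1)   # E_q[log p(y|x)]
          kl = 0.5 * (s2 + m**2 - 1.0 - np.log(s2))                  # KL(q || prior)
          return -(e_loglik - kl)

      opt = minimize(neg_elbo, x0=np.array([0.0, 0.0]))
      m_opt, s_opt = opt.x[0], np.exp(opt.x[1])
      print(f"variational Gaussian posterior: mean {m_opt:.3f}, std {s_opt:.3f}")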

  9. Mode entanglement of Gaussian fermionic states

    Science.gov (United States)

    Spee, C.; Schwaiger, K.; Giedke, G.; Kraus, B.

    2018-04-01

    We investigate the entanglement of n -mode n -partite Gaussian fermionic states (GFS). First, we identify a reasonable definition of separability for GFS and derive a standard form for mixed states, to which any state can be mapped via Gaussian local unitaries (GLU). As the standard form is unique, two GFS are equivalent under GLU if and only if their standard forms coincide. Then, we investigate the important class of local operations assisted by classical communication (LOCC). These are central in entanglement theory as they allow one to partially order the entanglement contained in states. We show, however, that there are no nontrivial Gaussian LOCC (GLOCC) among pure n -partite (fully entangled) states. That is, any such GLOCC transformation can also be accomplished via GLU. To obtain further insight into the entanglement properties of such GFS, we investigate the richer class of Gaussian stochastic local operations assisted by classical communication (SLOCC). We characterize Gaussian SLOCC classes of pure n -mode n -partite states and derive them explicitly for few-mode states. Furthermore, we consider certain fermionic LOCC and show how to identify the maximally entangled set of pure n -mode n -partite GFS, i.e., the minimal set of states having the property that any other state can be obtained from one state inside this set via fermionic LOCC. We generalize these findings also to the pure m -mode n -partite (for m >n ) case.

  10. Automatic Fabric Defect Detection with a Multi-Scale Convolutional Denoising Autoencoder Network Model.

    Science.gov (United States)

    Mei, Shuang; Wang, Yudan; Wen, Guojun

    2018-04-02

    Fabric defect detection is a necessary and essential step of quality control in the textile manufacturing industry. Traditional fabric inspections are usually performed by manual visual methods, which are low in efficiency and poor in precision for long-term industrial applications. In this paper, we propose an unsupervised learning-based automated approach to detect and localize fabric defects without any manual intervention. This approach is used to reconstruct image patches with a convolutional denoising autoencoder network at multiple Gaussian pyramid levels and to synthesize detection results from the corresponding resolution channels. The reconstruction residual of each image patch is used as the indicator for direct pixel-wise prediction. By segmenting and synthesizing the reconstruction residual map at each resolution level, the final inspection result can be generated. This newly developed method has several prominent advantages for fabric defect detection. First, it can be trained with only a small amount of defect-free samples. This is especially important for situations in which collecting large amounts of defective samples is difficult and impracticable. Second, owing to the multi-modal integration strategy, it is relatively more robust and accurate compared to general inspection methods (the results at each resolution level can be viewed as a modality). Third, according to our results, it can address multiple types of textile fabrics, from simple to more complex. Experimental results demonstrate that the proposed model is robust and yields good overall performance with high precision and acceptable recall rates.
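
    The multi-scale part of the pipeline can be sketched as follows; a plain Gaussian smoother stands in for the convolutional denoising autoencoder, and the image size and pyramid depth are arbitrary.

      import numpy as np
      from scipy.ndimage import gaussian_filter, zoom

      def gaussian_pyramid(img, levels=3, sigma=1.0):
          pyr = [img]
          for _ in range(levels - 1):
              pyr.append(gaussian_filter(pyr[-1], sigma)[::2, ::2])
          return pyr

      def defect_map(img, levels=3):
          """Residual-based map: large residual = likely defect (toy stand-in reconstructor)."""
          acc = np.zeros_like(img)
          for level in gaussian_pyramid(img, levels):
              recon = gaussian_filter(level, 2.0)          # stand-in for the autoencoder
              residual = np.abs(level - recon)
              acc += zoom(residual, np.array(img.shape) / np.array(level.shape), order=1)
          return acc / levels

      fabric = np.random.rand(128, 128) * 0.1              # defect-free texture stand-in
      fabric[60:68, 60:68] += 0.8                          # synthetic defect
      print(np.unravel_index(defect_map(fabric).argmax(), fabric.shape))   # near the defect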

  11. Automatic Fabric Defect Detection with a Multi-Scale Convolutional Denoising Autoencoder Network Model

    Directory of Open Access Journals (Sweden)

    Shuang Mei

    2018-04-01

    Full Text Available Fabric defect detection is a necessary and essential step of quality control in the textile manufacturing industry. Traditional fabric inspections are usually performed by manual visual methods, which are low in efficiency and poor in precision for long-term industrial applications. In this paper, we propose an unsupervised learning-based automated approach to detect and localize fabric defects without any manual intervention. This approach is used to reconstruct image patches with a convolutional denoising autoencoder network at multiple Gaussian pyramid levels and to synthesize detection results from the corresponding resolution channels. The reconstruction residual of each image patch is used as the indicator for direct pixel-wise prediction. By segmenting and synthesizing the reconstruction residual map at each resolution level, the final inspection result can be generated. This newly developed method has several prominent advantages for fabric defect detection. First, it can be trained with only a small amount of defect-free samples. This is especially important for situations in which collecting large amounts of defective samples is difficult and impracticable. Second, owing to the multi-modal integration strategy, it is relatively more robust and accurate compared to general inspection methods (the results at each resolution level can be viewed as a modality). Third, according to our results, it can address multiple types of textile fabrics, from simple to more complex. Experimental results demonstrate that the proposed model is robust and yields good overall performance with high precision and acceptable recall rates.

  12. Efficient forward propagation of time-sequences in convolutional neural networks using Deep Shifting

    NARCIS (Netherlands)

    K.L. Groenland (Koen); S.M. Bohte (Sander)

    2016-01-01

    textabstractWhen a Convolutional Neural Network is used for on-the-fly evaluation of continuously updating time-sequences, many redundant convolution operations are performed. We propose the method of Deep Shifting, which remembers previously calculated results of convolution operations in order

  13. Using Convolutional Neural Network Filters to Measure Left-Right Mirror Symmetry in Images

    Directory of Open Access Journals (Sweden)

    Anselm Brachmann

    2016-12-01

    Full Text Available We propose a method for measuring symmetry in images by using filter responses from Convolutional Neural Networks (CNNs). The aim of the method is to model human perception of left/right symmetry as closely as possible. Using the Convolutional Neural Network (CNN) approach has two main advantages: First, CNN filter responses closely match the responses of neurons in the human visual system; they take information on color, edges and texture into account simultaneously. Second, we can measure higher-order symmetry, which relies not only on color, edges and texture, but also on the shapes and objects that are depicted in images. We validated our algorithm on a dataset of 300 music album covers, which were rated according to their symmetry by 20 human observers, and compared results with those from a previously proposed method. With our method, human perception of symmetry can be predicted with high accuracy. Moreover, we demonstrate that the inclusion of features from higher CNN layers, which encode more abstract image content, increases the performance further. In conclusion, we introduce a model of left/right symmetry that closely models human perception of symmetry in CD album covers.

  14. Deep convolutional networks for pancreas segmentation in CT imaging

    Science.gov (United States)

    Roth, Holger R.; Farag, Amal; Lu, Le; Turkbey, Evrim B.; Summers, Ronald M.

    2015-03-01

    Automatic organ segmentation is an important prerequisite for many computer-aided diagnosis systems. The high anatomical variability of organs in the abdomen, such as the pancreas, prevents many segmentation methods from achieving high accuracies when compared to state-of-the-art segmentation of organs like the liver, heart or kidneys. Recently, the availability of large annotated training sets and the accessibility of affordable parallel computing resources via GPUs have made it feasible for "deep learning" methods such as convolutional networks (ConvNets) to succeed in image classification tasks. These methods have the advantage that the classification features used are trained directly from the imaging data. We present a fully-automated bottom-up method for pancreas segmentation in computed tomography (CT) images of the abdomen. The method is based on hierarchical coarse-to-fine classification of local image regions (superpixels). Superpixels are extracted from the abdominal region using Simple Linear Iterative Clustering (SLIC). An initial probability response map is generated, using patch-level confidences and a two-level cascade of random forest classifiers, from which superpixel regions with probabilities larger than 0.5 are retained. These retained superpixels serve as a highly sensitive initial input of the pancreas and its surroundings to a ConvNet that samples a bounding box around each superpixel at different scales (and random non-rigid deformations at training time) in order to assign a more distinct probability of each superpixel region being pancreas or not. We evaluate our method on CT images of 82 patients (60 for training, 2 for validation, and 20 for testing). Using ConvNets we achieve maximum Dice scores of an average 68% +/- 10% (range, 43-80%) in testing. This shows promise for accurate pancreas segmentation, using a deep learning approach and compares favorably to state-of-the-art methods.

  15. Non-Gaussianity in island cosmology

    International Nuclear Information System (INIS)

    Piao Yunsong

    2009-01-01

    In this paper we fully calculate the non-Gaussianity of primordial curvature perturbation of the island universe by using the second order perturbation equation. We find that for the spectral index n_s ≅ 0.96, which is favored by current observations, the non-Gaussianity level f_NL seen in an island will generally lie between 30 and 60, which may be tested by the coming observations. In the landscape, the island universe is one of anthropically acceptable cosmological histories. Thus the results obtained in some sense mean the coming observations, especially the measurement of non-Gaussianity, will be significant to clarify how our position in the landscape is populated.

  16. Entanglement negativity bounds for fermionic Gaussian states

    Science.gov (United States)

    Eisert, Jens; Eisler, Viktor; Zimborás, Zoltán

    2018-04-01

    The entanglement negativity is a versatile measure of entanglement that has numerous applications in quantum information and in condensed matter theory. Not only can it be computed efficiently in the Hilbert space dimension, but for noninteracting bosonic systems one can even compute the negativity efficiently in the number of modes. However, such an efficient computation does not carry over to the fermionic realm, the ultimate reason for this being that the partial transpose of a fermionic Gaussian state is no longer Gaussian. To provide a remedy for this state of affairs, in this work, we introduce efficiently computable and rigorous upper and lower bounds to the negativity, making use of techniques of semidefinite programming, building upon the Lagrangian formulation of fermionic linear optics, and exploiting suitable products of Gaussian operators. We discuss examples in quantum many-body theory and hint at applications in the study of topological properties at finite temperature.

  17. Convolutional Sparse Coding for Static and Dynamic Images Analysis

    Directory of Open Access Journals (Sweden)

    B. A. Knyazev

    2014-01-01

    Full Text Available The objective of this work is to improve the performance of static and dynamic object recognition. For this purpose a new image representation model and a transformation algorithm are proposed. It is examined and illustrated that limitations of previous methods make it difficult to achieve this objective. Static images, specifically handwritten digits of the widely used MNIST dataset, are the primary focus of this work. Nevertheless, preliminary qualitative results of image sequence analysis based on the suggested model are presented. A general analytical form of the Gabor function, often employed to generate filters, is described and discussed. In this research, this description is required for computing parameters of responses returned by our algorithm. The recursive convolution operator is introduced, which allows extracting free-shape features of visual objects. The developed parametric representation model is compared with sparse coding based on energy function minimization. In the experimental part of this work, errors of estimating the parameters of responses are determined. Also, parameter statistics and their correlation coefficients for more than 10⁶ responses extracted from the MNIST dataset are calculated. It is demonstrated that these data correspond well with previous research studies on Gabor filters as well as with works on visual cortex primary cells of mammals, in which similar responses were observed. A comparative test of the developed model with three other approaches is conducted; speed and accuracy scores of handwritten digit classification are presented. A support vector machine with a linear or radial basis function is used for classification of images and their representations, while principal component analysis is used in some cases to prepare data beforehand. High accuracy is not attained due to the specific difficulties of combining our model with a support vector machine (a 3.99% error rate). However, another method is
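
    For reference, a common parameterization of the real 2-D Gabor function and a filter-response map computed by convolution are sketched below; the parameter values are arbitrary, and this is not the specific analytical form or the recursive operator introduced by the author.

      import numpy as np
      from scipy.signal import fftconvolve

      def gabor_kernel(size=21, sigma=3.0, theta=0.0, wavelength=6.0, psi=0.0):
          """2-D Gabor: Gaussian envelope modulated by a sinusoidal carrier."""
          half = size // 2
          y, x = np.mgrid[-half:half + 1, -half:half + 1]
          xr = x * np.cos(theta) + y * np.sin(theta)
          yr = -x * np.sin(theta) + y * np.cos(theta)
          envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
          carrier = np.cos(2 * np.pi * xr / wavelength + psi)
          return envelope * carrier

      image = np.random.rand(64, 64)
      response = fftconvolve(image, gabor_kernel(theta=np.pi / 4), mode="same")
      print(response.shape)      # (64, 64) filter-response map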

  18. Invariant measures on multimode quantum Gaussian states

    Science.gov (United States)

    Lupo, C.; Mancini, S.; De Pasquale, A.; Facchi, P.; Florio, G.; Pascazio, S.

    2012-12-01

    We derive the invariant measure on the manifold of multimode quantum Gaussian states, induced by the Haar measure on the group of Gaussian unitary transformations. To this end, by introducing a bipartition of the system in two disjoint subsystems, we use a parameterization highlighting the role of nonlocal degrees of freedom—the symplectic eigenvalues—which characterize quantum entanglement across the given bipartition. A finite measure is then obtained by imposing a physically motivated energy constraint. By averaging over the local degrees of freedom we finally derive the invariant distribution of the symplectic eigenvalues in some cases of particular interest for applications in quantum optics and quantum information.

  19. Invariant measures on multimode quantum Gaussian states

    International Nuclear Information System (INIS)

    Lupo, C.; Mancini, S.; De Pasquale, A.; Facchi, P.; Florio, G.; Pascazio, S.

    2012-01-01

    We derive the invariant measure on the manifold of multimode quantum Gaussian states, induced by the Haar measure on the group of Gaussian unitary transformations. To this end, by introducing a bipartition of the system in two disjoint subsystems, we use a parameterization highlighting the role of nonlocal degrees of freedom—the symplectic eigenvalues—which characterize quantum entanglement across the given bipartition. A finite measure is then obtained by imposing a physically motivated energy constraint. By averaging over the local degrees of freedom we finally derive the invariant distribution of the symplectic eigenvalues in some cases of particular interest for applications in quantum optics and quantum information.

  20. Invariant measures on multimode quantum Gaussian states

    Energy Technology Data Exchange (ETDEWEB)

    Lupo, C. [School of Science and Technology, Universita di Camerino, I-62032 Camerino (Italy); Mancini, S. [School of Science and Technology, Universita di Camerino, I-62032 Camerino (Italy); Istituto Nazionale di Fisica Nucleare, Sezione di Perugia, I-06123 Perugia (Italy); De Pasquale, A. [NEST, Scuola Normale Superiore and Istituto Nanoscienze-CNR, I-56126 Pisa (Italy); Facchi, P. [Dipartimento di Matematica and MECENAS, Universita di Bari, I-70125 Bari (Italy); Istituto Nazionale di Fisica Nucleare, Sezione di Bari, I-70126 Bari (Italy); Florio, G. [Istituto Nazionale di Fisica Nucleare, Sezione di Bari, I-70126 Bari (Italy); Museo Storico della Fisica e Centro Studi e Ricerche Enrico Fermi, Piazza del Viminale 1, I-00184 Roma (Italy); Dipartimento di Fisica and MECENAS, Universita di Bari, I-70126 Bari (Italy); Pascazio, S. [Istituto Nazionale di Fisica Nucleare, Sezione di Bari, I-70126 Bari (Italy); Dipartimento di Fisica and MECENAS, Universita di Bari, I-70126 Bari (Italy)

    2012-12-15

    We derive the invariant measure on the manifold of multimode quantum Gaussian states, induced by the Haar measure on the group of Gaussian unitary transformations. To this end, by introducing a bipartition of the system in two disjoint subsystems, we use a parameterization highlighting the role of nonlocal degrees of freedom-the symplectic eigenvalues-which characterize quantum entanglement across the given bipartition. A finite measure is then obtained by imposing a physically motivated energy constraint. By averaging over the local degrees of freedom we finally derive the invariant distribution of the symplectic eigenvalues in some cases of particular interest for applications in quantum optics and quantum information.

  1. Construction of Capacity Achieving Lattice Gaussian Codes

    KAUST Repository

    Alghamdi, Wael

    2016-04-01

    We propose a new approach to proving results regarding channel coding schemes based on construction-A lattices for the Additive White Gaussian Noise (AWGN) channel that yields new characterizations of the code construction parameters, i.e., the primes and dimensions of the codes, as functions of the block-length. The approach we take introduces an averaging argument that explicitly involves the considered parameters. This averaging argument is applied to a generalized Loeliger ensemble [1] to provide a more practical proof of the existence of AWGN-good lattices, and to characterize suitable parameters for the lattice Gaussian coding scheme proposed by Ling and Belfiore [3].

  2. Gaussian processes and constructive scalar field theory

    International Nuclear Information System (INIS)

    Benfatto, G.; Nicolo, F.

    1981-01-01

    Recent years have seen very deep progress in constructive Euclidean field theory, with many implications for the theory of random fields. The authors discuss an approach to super-renormalizable scalar field theories which puts in evidence, in particular, the connections with the theory of Gaussian processes associated with elliptic operators. The paper consists of two parts. Part I treats some problems in the theory of Gaussian processes which arise in the approach to the φ₃⁴ theory. Part II is devoted to the discussion of the ultraviolet stability in the φ₃⁴ theory. (Auth.)

  3. Integration of non-Gaussian fields

    DEFF Research Database (Denmark)

    Ditlevsen, Ove Dalager; Mohr, Gunnar; Hoffmeyer, Pernille

    1996-01-01

    The limitations of the validity of the central limit theorem argument as applied to definite integrals of non-Gaussian random fields are empirically explored by way of examples. The purpose is to investigate in specific cases whether the asymptotic convergence to the Gaussian distribution is fast ...
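
    A minimal Monte Carlo illustration of the question being explored: integrate a lognormal (non-Gaussian) field over intervals of increasing length and watch the skewness of the integral decay as the interval grows relative to the correlation length. The moving-average field construction and all parameter values are arbitrary choices, not those of the paper.

      import numpy as np
      from scipy.stats import skew

      rng = np.random.default_rng(3)
      dx, corr_len = 0.1, 1.0
      kernel = np.exp(-np.arange(-5 * corr_len, 5 * corr_len, dx)**2 / (2 * corr_len**2))
      kernel /= np.sqrt(np.sum(kernel**2))          # unit-variance moving average

      def integral_samples(length, n_samples=1000):
          n = int(length / dx)
          out = np.empty(n_samples)
          for k in range(n_samples):
              white = rng.standard_normal(n + kernel.size)
              g = np.convolve(white, kernel, mode="valid")[:n]   # correlated Gaussian field
              out[k] = np.sum(np.exp(g)) * dx                    # integral of a lognormal field
          return out

      for L in (2.0, 20.0, 200.0):
          print(f"L = {L:6.1f}  skewness of integral ~ {skew(integral_samples(L)):.2f}")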

  4. Quantum information theory with Gaussian systems

    Energy Technology Data Exchange (ETDEWEB)

    Krueger, O.

    2006-04-06

    This thesis applies ideas and concepts from quantum information theory to continuous-variable systems such as the quantum harmonic oscillator. The focus is on three topics: the cloning of coherent states, Gaussian quantum cellular automata and Gaussian private channels. Cloning was investigated both for finite-dimensional and for continuous-variable systems. We construct a private quantum channel for the sequential encryption of coherent states with a classical key, where the key elements have finite precision. For the case of independent one-mode input states, we explicitly estimate this precision, i.e. the number of key bits needed per input state, in terms of these parameters. (orig.)

  5. Quantum information theory with Gaussian systems

    International Nuclear Information System (INIS)

    Krueger, O.

    2006-01-01

    This thesis applies ideas and concepts from quantum information theory to continuous-variable systems such as the quantum harmonic oscillator. The focus is on three topics: the cloning of coherent states, Gaussian quantum cellular automata and Gaussian private channels. Cloning was investigated both for finite-dimensional and for continuous-variable systems. We construct a private quantum channel for the sequential encryption of coherent states with a classical key, where the key elements have finite precision. For the case of independent one-mode input states, we explicitly estimate this precision, i.e. the number of key bits needed per input state, in terms of these parameters. (orig.)

  6. Model selection for Gaussian kernel PCA denoising

    DEFF Research Database (Denmark)

    Jørgensen, Kasper Winther; Hansen, Lars Kai

    2012-01-01

    We propose kernel Parallel Analysis (kPA) for automatic kernel scale and model order selection in Gaussian kernel PCA. Parallel Analysis [1] is based on a permutation test for covariance and has previously been applied for model order selection in linear PCA; here we augment the procedure to also tune the Gaussian kernel scale of radial basis function based kernel PCA. We evaluate kPA for denoising of simulated data and the US Postal data set of handwritten digits. We find that kPA outperforms other heuristics to choose the model order and kernel scale in terms of signal-to-noise ratio (SNR...

  7. Wavelength interrogation of fiber Bragg grating sensors based on crossed optical Gaussian filters.

    Science.gov (United States)

    Cheng, Rui; Xia, Li; Zhou, Jiaao; Liu, Deming

    2015-04-15

    Conventional intensity-modulated measurements must be operated in the linear range of the filter or interferometric response to ensure linear detection. Here, we present a wavelength interrogation system for fiber Bragg grating sensors in which the linear transition is achieved with crossed Gaussian transmissions. This unique filtering characteristic makes the responses of the two branch detections follow Gaussian functions with the same parameters except for a delay. The subtraction of these two delayed Gaussian responses (in dB) ultimately leads to a linear behavior, which is exploited for the sensor wavelength determination. Besides its flexibility and inherent power insensitivity, the proposal also shows potential for a much wider operational range. Interrogation of a strain-tuned grating was accomplished, with a wide sensitivity tuning range from 2.56 to 8.7 dB/nm achieved.
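
    Since both branch transmissions above are Gaussian with the same width, their difference in dB is exactly linear in wavelength (the quadratic terms cancel). A minimal numerical check of this property is sketched below; the centre wavelengths, width, and wavelength span are illustrative assumptions, not values from the paper.

      import numpy as np

      # Assumed filter parameters: two Gaussian transmissions with a common
      # width, offset by a small wavelength delay (all values hypothetical).
      lam = np.linspace(1548.0, 1552.0, 2001)    # wavelength axis [nm]
      lam1, lam2, sigma = 1549.6, 1550.4, 0.8    # centres [nm] and common width [nm]

      T1 = np.exp(-(lam - lam1) ** 2 / (2 * sigma ** 2))
      T2 = np.exp(-(lam - lam2) ** 2 / (2 * sigma ** 2))

      # Subtracting the two branch responses in dB cancels the quadratic terms,
      # leaving a response that is exactly linear in wavelength.
      diff_dB = 10 * np.log10(T1) - 10 * np.log10(T2)
      assert np.allclose(np.diff(diff_dB, 2), 0.0, atol=1e-9)  # second difference ~ 0

      slope = (diff_dB[-1] - diff_dB[0]) / (lam[-1] - lam[0])  # interrogation sensitivity
      print(f"sensitivity: {slope:.2f} dB/nm")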

  8. Quantum steering of multimode Gaussian states by Gaussian measurements: monogamy relations and the Peres conjecture

    International Nuclear Information System (INIS)

    Ji, Se-Wan; Nha, Hyunchul; Kim, M S

    2015-01-01

    How a quantum correlated state can be reliably distributed through a noisy channel for quantum information processing is a topic of fundamental and practical importance. The recently and rigorously defined concept of quantum steering is relevant to studying this question under certain circumstances, and here we address the quantum steerability of Gaussian states to this end. In particular, we attempt to reformulate the criterion for Gaussian steering in terms of local and global purities and show that it is necessary and sufficient for the case of steering a 1-mode system by an N-mode system. This subsequently enables us to reinforce a strong monogamy relation under which only one party can steer a local 1-mode system. Moreover, we show that only a negative partial-transpose state can manifest quantum steerability under Gaussian measurements, in relation to the Peres conjecture. We also discuss our formulation for the case of distributing a two-mode squeezed state via one-way quantum channels introducing dissipation and amplification effects, respectively. Finally, we extend our approach to include non-Gaussian measurements, more precisely all orders of higher-order squeezing measurements, and find that this broad set of non-Gaussian measurements is not useful for demonstrating steering of Gaussian states beyond Gaussian measurements. (paper)

  9. Discrete singular convolution for the generalized variable-coefficient ...

    African Journals Online (AJOL)

    Numerical solutions of the generalized variable-coefficient Korteweg-de Vries equation are obtained using a discrete singular convolution and a fourth order singly diagonally implicit Runge-Kutta method for space and time discretisation, respectively. The theoretical convergence of the proposed method is rigorously ...

  10. Down image recognition based on deep convolutional neural network

    Directory of Open Access Journals (Sweden)

    Wenzhu Yang

    2018-06-01

    Full Text Available Owing to the scale and the various shapes of down in the image, it is difficult for traditional image recognition methods to correctly recognize the type of down image and achieve the required recognition accuracy, even for a Traditional Convolutional Neural Network (TCNN). To deal with the above problems, a Deep Convolutional Neural Network (DCNN) for down image classification is constructed, and a new weight initialization method is proposed. Firstly, the salient regions of a down image are cut from the image using a visual saliency model. Then, these salient regions are used to train a sparse autoencoder and obtain a collection of convolutional filters that accord with the statistical characteristics of the dataset. At last, a DCNN with the Inception module and its variants is constructed. To improve the recognition accuracy, the depth of the network is increased. The experimental results indicate that the constructed DCNN increases the recognition accuracy by 2.7% compared to the TCNN when recognizing down in images. The convergence rate of the proposed DCNN with the new weight initialization method is improved by 25.5% compared to the TCNN. Keywords: Deep convolutional neural network, Weight initialization, Sparse autoencoder, Visual saliency model, Image recognition

  11. Face recognition: a convolutional neural-network approach.

    Science.gov (United States)

    Lawrence, S; Giles, C L; Tsoi, A C; Back, A D

    1997-01-01

    We present a hybrid neural-network for human face recognition which compares favourably with other methods. The system combines local image sampling, a self-organizing map (SOM) neural network, and a convolutional neural network. The SOM provides a quantization of the image samples into a topological space where inputs that are nearby in the original space are also nearby in the output space, thereby providing dimensionality reduction and invariance to minor changes in the image sample, and the convolutional neural network provides partial invariance to translation, rotation, scale, and deformation. The convolutional network extracts successively larger features in a hierarchical set of layers. We present results using the Karhunen-Loeve transform in place of the SOM, and a multilayer perceptron (MLP) in place of the convolutional network for comparison. We use a database of 400 images of 40 individuals which contains quite a high degree of variability in expression, pose, and facial details. We analyze the computational complexity and discuss how new classes could be added to the trained recognizer.

  12. Training Convolutional Neural Networks for Translational Invariance on SAR ATR

    DEFF Research Database (Denmark)

    Malmgren-Hansen, David; Engholm, Rasmus; Østergaard Pedersen, Morten

    2016-01-01

    In this paper we present a comparison of the robustness of Convolutional Neural Networks (CNN) to other classifiers in the presence of uncertainty of the objects localization in SAR image. We present a framework for simulating simple SAR images, translating the object of interest systematically...

  13. An Interactive Graphics Program for Assistance in Learning Convolution.

    Science.gov (United States)

    Frederick, Dean K.; Waag, Gary L.

    1980-01-01

    A program has been written for the interactive computer graphics facility at Rensselaer Polytechnic Institute that is designed to assist the user in learning the mathematical technique of convolving two functions. Because convolution can be represented graphically by a sequence of steps involving folding, shifting, multiplying, and integration, it…

  14. Diffraction and Dirchlet problem for parameter-elliptic convolution ...

    African Journals Online (AJOL)

    In this paper we evaluate the difference between the inverse operators of a Dirichlet problem and of a diffraction problem for parameter-elliptic convolution operators with constant symbols. We prove that the inverse operator of a Dirichlet problem can be obtained as a limit case of such a diffraction problem. Quaestiones ...

  15. Review of the convolution algorithm for evaluating service integrated systems

    DEFF Research Database (Denmark)

    Iversen, Villy Bæk

    1997-01-01

    In this paper we give a review of the applicability of the convolution algorithm. By this we are able to evaluate communication networks end-to-end with e.g. BPP multi-rate traffic models insensitive to the holding time distribution. Rearrangement, minimum allocation, and maximum allocation...

  16. A convolutional neural network to filter artifacts in spectroscopic MRI.

    Science.gov (United States)

    Gurbani, Saumya S; Schreibmann, Eduard; Maudsley, Andrew A; Cordova, James Scott; Soher, Brian J; Poptani, Harish; Verma, Gaurav; Barker, Peter B; Shim, Hyunsuk; Cooper, Lee A D

    2018-03-09

    Proton MRSI is a noninvasive modality capable of generating volumetric maps of in vivo tissue metabolism without the need for ionizing radiation or injected contrast agent. Magnetic resonance spectroscopic imaging has been shown to be a viable imaging modality for studying several neuropathologies. However, a key hurdle in the routine clinical adoption of MRSI is the presence of spectral artifacts that can arise from a number of sources, possibly leading to false information. A deep learning model was developed that was capable of identifying and filtering out poor quality spectra. The core of the model used a tiled convolutional neural network that analyzed frequency-domain spectra to detect artifacts. When compared with a panel of MRS experts, our convolutional neural network achieved high sensitivity and specificity with an area under the curve of 0.95. A visualization scheme was implemented to better understand how the convolutional neural network made its judgement on single-voxel or multivoxel MRSI, and the convolutional neural network was embedded into a pipeline capable of producing whole-brain spectroscopic MRI volumes in real time. The fully automated method for assessment of spectral quality provides a valuable tool to support clinical MRSI or spectroscopic MRI studies for use in fields such as adaptive radiation therapy planning. © 2018 International Society for Magnetic Resonance in Medicine.

  17. Deep convolutional neural networks for detection of rail surface defects

    NARCIS (Netherlands)

    Faghih Roohi, S.; Hajizadeh, S.; Nunez Vicencio, Alfredo; Babuska, R.; De Schutter, B.H.K.; Estevez, Pablo A.; Angelov, Plamen P.; Del Moral Hernandez, Emilio

    2016-01-01

    In this paper, we propose a deep convolutional neural network solution to the analysis of image data for the detection of rail surface defects. The images are obtained from many hours of automated video recordings. This huge amount of data makes it impossible to manually inspect the images and

  18. Symbol Stream Combining in a Convolutionally Coded System

    Science.gov (United States)

    Mceliece, R. J.; Pollara, F.; Swanson, L.

    1985-01-01

    Symbol stream combining has been proposed as a method for arraying signals received at different antennas. If convolutional coding and Viterbi decoding are used, it is shown that a Viterbi decoder based on the proposed weighted sum of symbol streams yields maximum likelihood decisions.

  19. Two-level convolution formula for nuclear structure function

    Science.gov (United States)

    Ma, Boqiang

    1990-05-01

    A two-level convolution formula for the nuclear structure function is derived by considering the nucleus as a composite system of baryons and mesons, which are in turn composite systems of quarks and gluons. The results show that the European Muon Collaboration effect cannot be explained by nuclear effects such as nucleon Fermi motion and nuclear binding contributions.

  20. Two-level convolution formula for nuclear structure function

    International Nuclear Information System (INIS)

    Ma Boqiang

    1990-01-01

    A two-level convolution formula for the nuclear structure function is derived by considering the nucleus as a composite system of baryons and mesons, which are in turn composite systems of quarks and gluons. The results show that the European Muon Collaboration effect cannot be explained by nuclear effects such as nucleon Fermi motion and nuclear binding contributions

  1. Plant species classification using deep convolutional neural network

    DEFF Research Database (Denmark)

    Dyrmann, Mads; Karstoft, Henrik; Midtiby, Henrik Skov

    2016-01-01

    Information on which weed species are present within agricultural fields is important for site specific weed management. This paper presents a method that is capable of recognising plant species in colour images by using a convolutional neural network. The network is built from scratch trained an...

  2. Heat source reconstruction from noisy temperature fields using an optimised derivative Gaussian filter

    Science.gov (United States)

    Delpueyo, D.; Balandraud, X.; Grédiac, M.

    2013-09-01

    The aim of this paper is to present a post-processing technique based on a derivative Gaussian filter to reconstruct heat source fields from temperature fields measured by infrared thermography. Heat sources can be deduced from temperature variations thanks to the heat diffusion equation. Filtering and differentiating are key issues that are closely related here because the temperature fields being processed are unavoidably noisy. We focus only on the diffusion term because it is the most difficult term to estimate in the procedure, the reason being that it involves spatial second derivatives (a Laplacian for isotropic materials). This quantity can be reasonably estimated using a convolution of the temperature variation fields with second derivatives of a Gaussian function. The study is first based on synthetic temperature variation fields corrupted by added noise. The filter is optimised to reconstruct the heat source fields as accurately as possible. The influence of both the dimension and the level of a localised heat source is discussed. The results are also compared with another type of processing based on an averaging filter. The second part of this study presents an application to experimental temperature fields measured with an infrared camera on a thin plate made of aluminium alloy. Heat sources are generated with an electric heating patch glued on the specimen surface. Heat source fields reconstructed from the measured temperature fields are compared with the imposed heat sources. The results illustrate the relevance of the derivative Gaussian filter for reliably extracting heat sources from noisy temperature fields in the experimental thermomechanics of materials.
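
    In practice, the convolution with second derivatives of a Gaussian described above can be carried out directly with standard filtering routines. The sketch below estimates the Laplacian of a synthetic noisy field with scipy.ndimage.gaussian_filter; the field, noise level, and filter width are illustrative assumptions and do not reproduce the paper's optimised filter.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      # Synthetic noisy temperature-variation field standing in for an
      # infrared-thermography measurement (arbitrary units).
      ny, nx = 128, 128
      y, x = np.mgrid[0:ny, 0:nx]
      theta = np.exp(-((x - 64) ** 2 + (y - 64) ** 2) / (2 * 10.0 ** 2))  # smooth hot spot
      theta += 0.01 * np.random.default_rng(0).standard_normal((ny, nx))  # added noise

      # Convolving with second derivatives of a Gaussian smooths and differentiates
      # at once: order=2 along an axis selects d2G/dx2 (or d2G/dy2) along that axis.
      sigma = 3.0  # filter width in pixels -- an assumed value, optimised in the paper
      d2_dx2 = gaussian_filter(theta, sigma=sigma, order=(0, 2))
      d2_dy2 = gaussian_filter(theta, sigma=sigma, order=(2, 0))

      laplacian = d2_dx2 + d2_dy2   # diffusion term for an isotropic material
      print(laplacian.shape, float(laplacian.min()), float(laplacian.max()))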

  3. Application of the Convolution Formalism to the Ocean Tide Potential: Results from the Gravity Recovery and Climate Experiment (GRACE)

    Science.gov (United States)

    Desai, S. D.; Yuan, D. -N.

    2006-01-01

    A computationally efficient approach to reducing omission errors in ocean tide potential models is derived and evaluated using data from the Gravity Recovery and Climate Experiment (GRACE) mission. Ocean tide height models are usually explicitly available at a few frequencies, and a smooth unit response is assumed to infer the response across the tidal spectrum. The convolution formalism of Munk and Cartwright (1966) models this response function with a Fourier series. This allows the total ocean tide height, and therefore the total ocean tide potential, to be modeled as a weighted sum of past, present, and future values of the tide-generating potential. Previous applications of the convolution formalism have usually been limited to tide height models, but we extend it to ocean tide potential models. We use luni-solar ephemerides to derive the required tide-generating potential so that the complete spectrum of the ocean tide potential is efficiently represented. In contrast, the traditionally adopted harmonic model of the ocean tide potential requires the explicit sum of the contributions from individual tidal frequencies. It is therefore subject to omission errors from neglected frequencies and is computationally more intensive. Intersatellite range rate data from the GRACE mission are used to compare convolution and harmonic models of the ocean tide potential. The monthly range rate residual variance is smaller by 4-5%, and the daily residual variance is smaller by as much as 15% when using the convolution model than when using a harmonic model that is defined by twice the number of parameters.
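
    The core of the convolution formalism is that the tide response is modelled as a weighted sum of lagged values of the tide-generating potential. The toy sketch below illustrates only that bookkeeping; the potential series, lags, and weights are placeholders, not the ephemeris-derived quantities or fitted response weights used for GRACE.

      import numpy as np

      # Placeholder hourly series of the tide-generating potential at a site;
      # in practice this would be computed from luni-solar ephemerides.
      t = np.arange(0, 24 * 30)                                   # 30 days, hourly
      potential = np.cos(2 * np.pi * t / 12.42) + 0.3 * np.cos(2 * np.pi * t / 12.0)

      # Response weights applied to past, present and future values of the
      # potential (illustrative numbers only).
      lags = np.array([12, 0, -12])        # +12 h = past value, -12 h = future value
      weights = np.array([0.1, 0.7, 0.2])

      # Total tide as a weighted sum of lagged potential values.
      tide = np.zeros_like(potential)
      for w, lag in zip(weights, lags):
          tide += w * np.roll(potential, lag)   # periodic end handling, for brevity

      print(tide[:5])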

  4. Developing convolutional neural networks for measuring climate change opinions from social media data

    Science.gov (United States)

    Mao, H.; Bhaduri, B. L.

    2016-12-01

    Understanding public opinion on climate change is important for policy making. Public opinion, however, is typically measured with national surveys, which are often expensive and thus updated at a low frequency. Twitter has become a major platform for people to express their opinions on social and political issues. Our work attempts to understand whether Twitter data can provide complementary insights about climate change perceptions. Since the nature of social media is real-time, this data source can especially help us understand how public opinion changes over time in response to climate events and hazards, which is very difficult to capture with manual surveys. We use the Twitter Streaming API to collect tweets that contain the keywords "climate change" or "#climatechange". Traditional machine-learning based opinion mining algorithms require a significant amount of labeled data, and data labeling is notoriously time consuming. To address this problem, we use hashtags (a significant feature used to mark the topics of tweets) to annotate tweets automatically. For example, the hashtags #climatedenial and #climatescam are negative opinion labels, while #actonclimate and #climateaction are positive. Following this method, we can obtain a large amount of training data without human labor. This labeled dataset is used to train a deep convolutional neural network that classifies tweets into positive (i.e. believe in climate change) and negative (i.e. do not believe). Based on the positive/negative tweets obtained, we further analyze risk perceptions and opinions towards policy support. In addition, we analyze Twitter user profiles to understand the demographics of proponents and opponents of climate change. Deep learning techniques, especially convolutional deep neural networks, have achieved much success in computer vision. In this work, we propose a convolutional neural network architecture for understanding opinions within text. This method is compared with
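
    The hashtag-based annotation step described above can be sketched in a few lines. The snippet below is a hedged illustration only: the hashtag lists follow the examples given in the abstract, while the tweets themselves are placeholder strings rather than data from the Twitter Streaming API.

      # Weak labelling of tweets by hashtag, as a substitute for manual annotation.
      NEGATIVE_TAGS = {"#climatedenial", "#climatescam"}        # do not believe
      POSITIVE_TAGS = {"#actonclimate", "#climateaction"}       # believe

      def weak_label(tweet_text):
          """Return 1 (positive), 0 (negative) or None (unlabelled) for a tweet."""
          tags = {tok.lower() for tok in tweet_text.split() if tok.startswith("#")}
          if tags & POSITIVE_TAGS and not tags & NEGATIVE_TAGS:
              return 1
          if tags & NEGATIVE_TAGS and not tags & POSITIVE_TAGS:
              return 0
          return None  # ambiguous or no opinion hashtag -> excluded from training

      # Placeholder tweets; a real pipeline would stream these from the Twitter API.
      tweets = [
          "We need to #ActOnClimate now",
          "climate change is a hoax #climatescam",
          "interesting report on climate change",
      ]
      labelled = [(t, weak_label(t)) for t in tweets]
      print(labelled)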

  5. Convolutional neural networks for event-related potential detection: impact of the architecture.

    Science.gov (United States)

    Cecotti, H

    2017-07-01

    The detection of brain responses at the single-trial level in the electroencephalogram (EEG) such as event-related potentials (ERPs) is a difficult problem that requires different processing steps to extract relevant discriminant features. While most of the signal and classification techniques for the detection of brain responses are based on linear algebra, different pattern recognition techniques such as convolutional neural network (CNN), as a type of deep learning technique, have shown some interests as they are able to process the signal after limited pre-processing. In this study, we propose to investigate the performance of CNNs in relation of their architecture and in relation to how they are evaluated: a single system for each subject, or a system for all the subjects. More particularly, we want to address the change of performance that can be observed between specifying a neural network to a subject, or by considering a neural network for a group of subjects, taking advantage of a larger number of trials from different subjects. The results support the conclusion that a convolutional neural network trained on different subjects can lead to an AUC above 0.9 by using an appropriate architecture using spatial filtering and shift invariant layers.

  6. How Gaussian can our Universe be?

    Science.gov (United States)

    Cabass, G.; Pajer, E.; Schmidt, F.

    2017-01-01

    Gravity is a non-linear theory, and hence, barring cancellations, the initial super-horizon perturbations produced by inflation must contain some minimum amount of mode coupling, or primordial non-Gaussianity. In single-field slow-roll models, where this lower bound is saturated, non-Gaussianity is controlled by two observables: the tensor-to-scalar ratio, which is uncertain by more than fifty orders of magnitude; and the scalar spectral index, or tilt, which is relatively well measured. It is well known that to leading and next-to-leading order in derivatives, the contributions proportional to the tilt disappear from any local observable, and suspicion has been raised that this might happen to all orders, allowing for an arbitrarily low amount of primordial non-Gaussianity. Employing Conformal Fermi Coordinates, we show explicitly that this is not the case. Instead, a contribution of order the tilt appears in local observables. In summary, the floor of physical primordial non-Gaussianity in our Universe has a squeezed-limit scaling of k_ℓ²/k_s², similar to equilateral and orthogonal shapes, and a dimensionless amplitude of order 0.1 × (n_s − 1).

  7. Gaussian vector fields on triangulated surfaces

    DEFF Research Database (Denmark)

    Ipsen, John H

    2016-01-01

    proven to be very useful to resolve the complex interplay between in-plane ordering of membranes and membrane conformations. In the present work we have developed a procedure for realistic representations of Gaussian models with in-plane vector degrees of freedom on a triangulated surface. The method...

  8. The Wehrl entropy has Gaussian optimizers

    DEFF Research Database (Denmark)

    De Palma, Giacomo

    2018-01-01

    We determine the minimum Wehrl entropy among the quantum states with a given von Neumann entropy and prove that it is achieved by thermal Gaussian states. This result determines the relation between the von Neumann and the Wehrl entropies. The key idea is proving that the quantum-classical channel...

  9. How Gaussian can our Universe be?

    Energy Technology Data Exchange (ETDEWEB)

    Cabass, G. [Physics Department and INFN, Università di Roma 'La Sapienza', P.le Aldo Moro 2, 00185, Rome (Italy); Pajer, E. [Institute for Theoretical Physics and Center for Extreme Matter and Emergent Phenomena, Utrecht University, Princetonplein 5, 3584 CC Utrecht (Netherlands); Schmidt, F., E-mail: giovanni.cabass@roma1.infn.it, E-mail: e.pajer@uu.nl, E-mail: fabians@mpa-garching.mpg.de [Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Str. 1, 85741 Garching (Germany)

    2017-01-01

    Gravity is a non-linear theory, and hence, barring cancellations, the initial super-horizon perturbations produced by inflation must contain some minimum amount of mode coupling, or primordial non-Gaussianity. In single-field slow-roll models, where this lower bound is saturated, non-Gaussianity is controlled by two observables: the tensor-to-scalar ratio, which is uncertain by more than fifty orders of magnitude; and the scalar spectral index, or tilt, which is relatively well measured. It is well known that to leading and next-to-leading order in derivatives, the contributions proportional to the tilt disappear from any local observable, and suspicion has been raised that this might happen to all orders, allowing for an arbitrarily low amount of primordial non-Gaussianity. Employing Conformal Fermi Coordinates, we show explicitly that this is not the case. Instead, a contribution of order the tilt appears in local observables. In summary, the floor of physical primordial non-Gaussianity in our Universe has a squeezed-limit scaling of k_ℓ²/k_s², similar to equilateral and orthogonal shapes, and a dimensionless amplitude of order 0.1 × (n_s − 1).

  10. Gaussian shaping filter for nuclear spectrometry

    International Nuclear Information System (INIS)

    Menezes, A.S.C. de.

    1980-01-01

    A theoretical study of a Gaussian shaping filter, using a Padé approximation, for use in gamma spectroscopy is presented. This approximation has proved superior to the classical cascade of RC integrators in terms of signal-to-noise ratio and pulse symmetry. An experimental filter was designed, simulated on a computer, constructed, and tested in the laboratory. (author) [pt

  11. Asymptotic expansions for the Gaussian unitary ensemble

    DEFF Research Database (Denmark)

    Haagerup, Uffe; Thorbjørnsen, Steen

    2012-01-01

    Let g : ℝ → ℂ be a C∞-function with all derivatives bounded and let tr_n denote the normalized trace on the n × n matrices. In Ref. 3 Ercolani and McLaughlin established asymptotic expansions of the mean value E{tr_n(g(X_n))} for a rather general class of random matrices X_n, including the Gaussian U...

  12. Chimera states in Gaussian coupled map lattices

    Science.gov (United States)

    Li, Xiao-Wen; Bi, Ran; Sun, Yue-Xiang; Zhang, Shuo; Song, Qian-Qian

    2018-04-01

    We study chimera states in one-dimensional and two-dimensional Gaussian coupled map lattices through simulations and experiments. Similar to the case of global coupling oscillators, individual lattices can be regarded as being controlled by a common mean field. A space-dependent order parameter is derived from a self-consistency condition in order to represent the collective state.

  13. Gaussian curvature on hyperelliptic Riemann surfaces

    Indian Academy of Sciences (India)

    Abel Castorena, Centro de Ciencias Matemáticas (Universidad Nacional Autónoma de México, Campus Morelia), Apdo. Postal 61-3 Xangari, C.P. 58089 Morelia. Indian Acad. Sci. (Math. Sci.) Vol. 124, No. 2, May 2014, pp. 155–167. © Indian Academy of Sciences.

  14. Additivity properties of a Gaussian channel

    International Nuclear Information System (INIS)

    Giovannetti, Vittorio; Lloyd, Seth

    2004-01-01

    The Amosov-Holevo-Werner conjecture implies the additivity of the minimum Renyi entropies at the output of a channel. The conjecture is proven true for all Renyi entropies of integer order greater than two in a class of Gaussian bosonic channel where the input signal is randomly displaced or where it is coupled linearly to an external environment

  15. Modeling text with generalizable Gaussian mixtures

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Sigurdsson, Sigurdur; Kolenda, Thomas

    2000-01-01

    We apply and discuss generalizable Gaussian mixture (GGM) models for text mining. The model automatically adapts model complexity for a given text representation. We show that the generalizability of these models depends on the dimensionality of the representation and the sample size. We discuss...

  16. Improving the gaussian effective potential: quantum mechanics

    International Nuclear Information System (INIS)

    Eboli, O.J.P.; Thomaz, M.T.; Lemos, N.A.

    1990-08-01

    In order to gain intuition for variational problems in field theory, we analyze variationally the quantum-mechanical anharmonic oscillator with potential V(x) = (k/2)x² + (λ/4)x⁴. Special attention is paid to improvements to the Gaussian effective potential. (author)

  17. Open problems in Gaussian fluid queueing theory

    NARCIS (Netherlands)

    Dȩbicki, K.; Mandjes, M.

    2011-01-01

    We present three challenging open problems that originate from the analysis of the asymptotic behavior of Gaussian fluid queueing models. In particular, we address the problem of characterizing the correlation structure of the stationary buffer content process, the speed of convergence to

  18. Oracle Wiener filtering of a Gaussian signal

    NARCIS (Netherlands)

    Babenko, A.; Belitser, E.

    2011-01-01

    We study the problem of filtering a Gaussian process whose trajectories, in some sense, have an unknown smoothness β₀ from the white noise of small intensity ε. If we knew the parameter β₀, we would use the Wiener filter which has the meaning of oracle. Our goal is now to mimic the oracle, i.e.,

  19. Oracle Wiener filtering of a Gaussian signal

    NARCIS (Netherlands)

    Babenko, A.; Belitser, E.N.

    2011-01-01

    We study the problem of filtering a Gaussian process whose trajectories, in some sense, have an unknown smoothness β₀ from the white noise of small intensity ε. If we knew the parameter β₀, we would use the Wiener filter which has the meaning of oracle. Our goal is now to mimic the oracle, i.e.,

  20. An Implementation of Error Minimization Data Transmission in OFDM using Modified Convolutional Code

    Directory of Open Access Journals (Sweden)

    Hendy Briantoro

    2016-04-01

    Full Text Available This paper presents error minimization in an OFDM system. Conventional systems usually use channel coding such as a BCH code or a convolutional code, but the performance of these codes is not good when implemented in an OFDM system. The error bit rate of the OFDM system without channel coding is 5.77%; using a convolutional code with code rate 1/2 reduces the error bits only to 3.85%. We therefore propose an OFDM system with a modified convolutional code. In this implementation, we used Software Defined Radio (SDR), namely the Universal Software Radio Peripheral (USRP) NI 2920, as the transmitter and receiver. The OFDM system using the modified convolutional code is able to recover all received characters, decreasing the error bit rate to 0%. The performance gain of the modified convolutional code is about 1 dB at a BER of 10⁻⁴ relative to the BCH code and the convolutional code. Thus, the modified convolutional code performs better than the BCH code or the convolutional code. Keywords: OFDM, BCH Code, Convolutional Code, Modified Convolutional Code, SDR, USRP
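
    For reference, a plain rate-1/2 convolutional encoder can be written in a few lines. The sketch below uses the common constraint-length-3 generators (7, 5) in octal purely for illustration; the paper's modified convolutional code is not specified in the abstract and is not reproduced here.

      import numpy as np

      def conv_encode_r12(bits, g1=0b111, g2=0b101):
          """Rate-1/2 convolutional encoder, constraint length 3.

          g1, g2 are the generator polynomials (the common (7, 5) octal pair,
          chosen for illustration only -- the paper's modified code differs).
          """
          state = 0
          out = []
          for b in bits:
              state = ((state << 1) | int(b)) & 0b111      # 3-bit shift register
              out.append(bin(state & g1).count("1") % 2)   # parity w.r.t. generator 1
              out.append(bin(state & g2).count("1") % 2)   # parity w.r.t. generator 2
          return np.array(out, dtype=np.uint8)

      msg = np.random.default_rng(1).integers(0, 2, 16)
      coded = conv_encode_r12(msg)
      print(len(msg), "info bits ->", len(coded), "coded bits (rate 1/2)")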

  1. Estimators for local non-Gaussianities

    International Nuclear Information System (INIS)

    Creminelli, P.; Senatore, L.; Zaldarriaga, M.

    2006-05-01

    We study the likelihood function of the data given f_NL for the so-called local type of non-Gaussianity. In this case the curvature perturbation is a non-linear function, local in real space, of a Gaussian random field. We compute the Cramer-Rao bound for f_NL and show that for small values of f_NL the 3-point function estimator saturates the bound and is equivalent to calculating the full likelihood of the data. However, for sufficiently large f_NL, the naive 3-point function estimator has a much larger variance than previously thought. In the limit in which the departure from Gaussianity is detected with high confidence, error bars on f_NL only decrease as 1/ln N_pix rather than N_pix^(-1/2) as the size of the data set increases. We identify the physical origin of this behavior and explain why it only affects the local type of non-Gaussianity, where the contribution of the first multipoles is always relevant. We find a simple improvement to the 3-point function estimator that makes the square root of its variance decrease as N_pix^(-1/2) even for large f_NL, asymptotically approaching the Cramer-Rao bound. We show that using the modified estimator is practically equivalent to computing the full likelihood of f_NL given the data. Thus other statistics of the data, such as the 4-point function and Minkowski functionals, contain no additional information on f_NL. In particular, we explicitly show that the recent claims about the relevance of the 4-point function are not correct. By direct inspection of the likelihood, we show that the data do not contain enough information for any statistic to be able to constrain higher order terms in the relation between the Gaussian field and the curvature perturbation, unless these are orders of magnitude larger than the size suggested by the current limits on f_NL. (author)

  2. Cosmological information in Gaussianized weak lensing signals

    Science.gov (United States)

    Joachimi, B.; Taylor, A. N.; Kiessling, A.

    2011-11-01

    Gaussianizing the one-point distribution of the weak gravitational lensing convergence has recently been shown to increase the signal-to-noise ratio contained in two-point statistics. We investigate the information on cosmology that can be extracted from the transformed convergence fields. Employing Box-Cox transformations to determine optimal transformations to Gaussianity, we develop analytical models for the transformed power spectrum, including effects of noise and smoothing. We find that optimized Box-Cox transformations perform substantially better than an offset logarithmic transformation in Gaussianizing the convergence, but both yield very similar results for the signal-to-noise ratio. None of the transformations is capable of eliminating correlations of the power spectra between different angular frequencies, which we demonstrate to have a significant impact on the errors in cosmology. Analytic models of the Gaussianized power spectrum yield good fits to the simulations and produce unbiased parameter estimates in the majority of cases, where the exceptions can be traced back to the limitations in modelling the higher order correlations of the original convergence. In the ideal case, without galaxy shape noise, we find an increase in the cumulative signal-to-noise ratio by a factor of 2.6 for angular frequencies up to ℓ= 1500, and a decrease in the area of the confidence region in the Ωm-σ8 plane, measured in terms of q-values, by a factor of 4.4 for the best performing transformation. When adding a realistic level of shape noise, all transformations perform poorly with little decorrelation of angular frequencies, a maximum increase in signal-to-noise ratio of 34 per cent, and even slightly degraded errors on cosmological parameters. We argue that to find Gaussianizing transformations of practical use, it will be necessary to go beyond transformations of the one-point distribution of the convergence, extend the analysis deeper into the non
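
    A Box-Cox transformation of the kind used above can be applied with standard tools once the field has been shifted to positive values. The sketch below Gaussianizes a synthetic, skewed, convergence-like sample with scipy.stats.boxcox; the field, the offset, and the sample size are assumptions, and the optimisation over transformations performed in the paper is not reproduced.

      import numpy as np
      from scipy.stats import boxcox, skew

      # Synthetic "convergence-like" field: skewed, roughly log-normal fluctuations.
      rng = np.random.default_rng(2)
      kappa = np.exp(rng.standard_normal(100_000) * 0.5) - 1.0   # mean ~ 0, skewed

      # Box-Cox requires strictly positive input, so shift by an offset first
      # (choosing the offset is itself part of the optimisation in practice).
      offset = 1e-3 - kappa.min()
      kappa_bc, lam = boxcox(kappa + offset)   # lam: maximum-likelihood Box-Cox parameter

      print(f"skewness before: {skew(kappa):.2f}, after: {skew(kappa_bc):.2f}, lambda = {lam:.2f}")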

  3. Elasto-plastic frame under horizontal and vertical Gaussian excitation

    DEFF Research Database (Denmark)

    Ditlevsen, Ove Dalager; Tarp-Johansen, Niels Jacob; Randrup-Thomsen, S.

    1999-01-01

    Taking geometric non-linearity into account, an oscillator in the form of a portal frame with a rigid traverse and with ideal-elastic ideal-plastic clamped-in columns behaves under horizontal excitation as an ideal-elastic hardening/softening-plastic oscillator, given that the columns carry a tension/compression axial force. Assuming that the horizontal excitation of the traverse is Gaussian white noise, statistics related to the plastic displacement response are determined by use of simulation based on the Slepian model process method combined with envelope excursion properties. Besides giving physical insight, the method gives good approximations to results obtained by slow direct simulation of the total response. Moreover, the influence of a randomly varying axial column force is investigated by direct response simulation. This case corresponds to parametric excitation as generated by the vertical acceleration...

  4. Learning non-Gaussian Time Series using the Box-Cox Gaussian Process

    OpenAIRE

    Rios, Gonzalo; Tobar, Felipe

    2018-01-01

    Gaussian processes (GPs) are Bayesian nonparametric generative models that provide interpretability of hyperparameters, admit closed-form expressions for training and inference, and are able to accurately represent uncertainty. To model general non-Gaussian data with complex correlation structure, GPs can be paired with an expressive covariance kernel and then fed into a nonlinear transformation (or warping). However, overparametrising the kernel and the warping is known to, respectively, hin...

  5. Limitations of a convolution method for modeling geometric uncertainties in radiation therapy: the radiobiological dose-per-fraction effect

    International Nuclear Information System (INIS)

    Song, William; Battista, Jerry; Van Dyk, Jake

    2004-01-01

    The convolution method can be used to model the effect of random geometric uncertainties into planned dose distributions used in radiation treatment planning. This is effectively done by linearly adding infinitesimally small doses, each with a particular geometric offset, over an assumed infinite number of fractions. However, this process inherently ignores the radiobiological dose-per-fraction effect since only the summed physical dose distribution is generated. The resultant potential error on predicted radiobiological outcome [quantified in this work with tumor control probability (TCP), equivalent uniform dose (EUD), normal tissue complication probability (NTCP), and generalized equivalent uniform dose (gEUD)] has yet to be thoroughly quantified. In this work, the results of a Monte Carlo simulation of geometric displacements are compared to those of the convolution method for random geometric uncertainties of 0, 1, 2, 3, 4, and 5 mm (standard deviation). The α/β CTV ratios of 0.8, 1.5, 3, 5, and 10 Gy are used to represent the range of radiation responses for different tumors, whereas a single α/β OAR ratio of 3 Gy is used to represent all the organs at risk (OAR). The analysis is performed on a four-field prostate treatment plan of 18 MV x rays. The fraction numbers are varied from 1-50, with isoeffective adjustments of the corresponding dose-per-fractions to maintain a constant tumor control, using the linear-quadratic cell survival model. The average differences in TCP and EUD of the target, and in NTCP and gEUD of the OAR calculated from the convolution and Monte Carlo methods reduced asymptotically as the total fraction number increased, with the differences reaching negligible levels beyond the treatment fraction number of ≥20. The convolution method generally overestimates the radiobiological indices, as compared to the Monte Carlo method, for the target volume, and underestimates those for the OAR. These effects are interconnected and attributed

  6. Validity of the assumption of Gaussian turbulence; Gyldighed af antagelsen om Gaussisk turbulence

    Energy Technology Data Exchange (ETDEWEB)

    Nielsen, M.; Hansen, K.S.; Juul Pedersen, B.

    2000-07-01

    Wind turbines are designed to withstand the impact of turbulent winds, whose fluctuations are usually assumed to follow a Gaussian probability distribution. Based on a large number of measurements from many sites, this seems a reasonable assumption in flat homogeneous terrain, whereas it may fail in complex terrain. At such sites the wind speed often has a skew distribution with more frequent lulls than gusts. In order to simulate aerodynamic loads, a numerical turbulence simulation method was developed and implemented. This method may simulate multiple time series of variable, not necessarily Gaussian, distribution without distortion of the spectral distribution or spatial coherence. The simulated time series were used as input to the dynamic-response simulation program Vestas Turbine Simulator (VTS). In this way we simulated the dynamic response of systems exposed to turbulence of either Gaussian or extreme, yet realistic, non-Gaussian probability distribution. Certain loads on turbines with active pitch regulation were enhanced by up to 15% compared to pure Gaussian turbulence. It should, however, be said that the undesired effect depends on the dynamic system, and it might be mitigated by optimisation of the wind turbine regulation system according to local turbulence characteristics. (au)

  7. MCEM algorithm for the log-Gaussian Cox process

    OpenAIRE

    Delmas, Celine; Dubois-Peyrard, Nathalie; Sabbadin, Regis

    2014-01-01

    Log-Gaussian Cox processes are an important class of models for aggregated point patterns. They have been largely used in spatial epidemiology (Diggle et al., 2005), in agronomy (Bourgeois et al., 2012), in forestry (Moller et al.), in ecology (sightings of wild animals) or in environmental sciences (radioactivity counts). A log-Gaussian Cox process is a Poisson process with a stochastic intensity depending on a Gaussian random field. We consider the case where this Gaussian random field is ...

  8. Large deviations for Gaussian processes in Hoelder norm

    International Nuclear Information System (INIS)

    Fatalov, V R

    2003-01-01

    Some results are proved on the exact asymptotic representation of large deviation probabilities for Gaussian processes in the Hölder norm. The following classes of processes are considered: the Wiener process, the Brownian bridge, fractional Brownian motion, and stationary Gaussian processes with power-law covariance function. The investigation uses the method of double sums for Gaussian fields

  9. Phase space structure of generalized Gaussian cat states

    International Nuclear Information System (INIS)

    Nicacio, Fernando; Maia, Raphael N.P.; Toscano, Fabricio; Vallejos, Raul O.

    2010-01-01

    We analyze generalized Gaussian cat states obtained by superposing arbitrary Gaussian states. The structure of the interference term of the Wigner function is always hyperbolic, surviving the action of a thermal reservoir. We also consider certain superpositions of mixed Gaussian states. An application to semiclassical dynamics is discussed.

  10. Linking network usage patterns to traffic Gaussianity fit

    NARCIS (Netherlands)

    de Oliveira Schmidt, R.; Sadre, R.; Melnikov, Nikolay; Schönwälder, Jürgen; Pras, Aiko

    Gaussian traffic models are widely used in the domain of network traffic modeling. The central assumption is that traffic aggregates are Gaussian distributed. Due to its importance, the Gaussian character of network traffic has been extensively assessed by researchers in the past years. In 2001,

  11. Yarn-dyed fabric defect classification based on convolutional neural network

    Science.gov (United States)

    Jing, Junfeng; Dong, Amei; Li, Pengfei; Zhang, Kaibing

    2017-09-01

    Considering that manual inspection of the yarn-dyed fabric can be time consuming and inefficient, we propose a yarn-dyed fabric defect classification method by using a convolutional neural network (CNN) based on a modified AlexNet. CNN shows powerful ability in performing feature extraction and fusion by simulating the learning mechanism of human brain. The local response normalization layers in AlexNet are replaced by the batch normalization layers, which can enhance both the computational efficiency and classification accuracy. In the training process of the network, the characteristics of the defect are extracted step by step and the essential features of the image can be obtained from the fusion of the edge details with several convolution operations. Then the max-pooling layers, the dropout layers, and the fully connected layers are employed in the classification model to reduce the computation cost and extract more precise features of the defective fabric. Finally, the results of the defect classification are predicted by the softmax function. The experimental results show promising performance with an acceptable average classification rate and strong robustness on yarn-dyed fabric defect classification.
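
    The architectural change described above, replacing local response normalization with batch normalization in an AlexNet-style block, can be sketched as follows in PyTorch. Channel counts, kernel sizes, and the number of defect classes are placeholders, not the paper's modified AlexNet.

      import torch
      import torch.nn as nn

      # One AlexNet-style convolutional block.  The original AlexNet applies
      # nn.LocalResponseNorm after the activation; the modified network replaces
      # it with nn.BatchNorm2d.  Channel counts here are placeholders.
      def conv_block(in_ch, out_ch, use_batchnorm=True):
          norm = nn.BatchNorm2d(out_ch) if use_batchnorm else nn.LocalResponseNorm(size=5)
          return nn.Sequential(
              nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
              nn.ReLU(inplace=True),
              norm,
              nn.MaxPool2d(kernel_size=2),
          )

      model = nn.Sequential(
          conv_block(3, 64),
          conv_block(64, 128),
          nn.AdaptiveAvgPool2d(1),
          nn.Flatten(),
          nn.Linear(128, 5),        # e.g. five defect classes -- an assumption
      )

      x = torch.randn(8, 3, 64, 64)   # a batch of fabric image patches
      print(model(x).shape)           # -> torch.Size([8, 5])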

  12. On Alternate Relaying with Improper Gaussian Signaling

    KAUST Repository

    Gaafar, Mohamed

    2016-06-06

    In this letter, we investigate the potential benefits of adopting improper Gaussian signaling (IGS) in a two-hop alternate relaying (AR) system. Given the known benefits of using IGS in interference-limited networks, we propose to use IGS to relieve the inter-relay interference (IRI) impact on the AR system assuming no channel state information is available at the source. In this regard, we assume that the two relays use IGS and the source uses proper Gaussian signaling (PGS). Then, we optimize the degree of impropriety of the relays signal, measured by the circularity coefficient, to maximize the total achievable rate. Simulation results show that using IGS yields a significant performance improvement over PGS, especially when the first hop is a bottleneck due to weak source-relay channel gains and/or strong IRI.

  13. On Alternate Relaying with Improper Gaussian Signaling

    KAUST Repository

    Gaafar, Mohamed; Amin, Osama; Ikhlef, Aissa; Chaaban, Anas; Alouini, Mohamed-Slim

    2016-01-01

    In this letter, we investigate the potential benefits of adopting improper Gaussian signaling (IGS) in a two-hop alternate relaying (AR) system. Given the known benefits of using IGS in interference-limited networks, we propose to use IGS to relieve the inter-relay interference (IRI) impact on the AR system assuming no channel state information is available at the source. In this regard, we assume that the two relays use IGS and the source uses proper Gaussian signaling (PGS). Then, we optimize the degree of impropriety of the relays signal, measured by the circularity coefficient, to maximize the total achievable rate. Simulation results show that using IGS yields a significant performance improvement over PGS, especially when the first hop is a bottleneck due to weak source-relay channel gains and/or strong IRI.

  14. Direct Importance Estimation with Gaussian Mixture Models

    Science.gov (United States)

    Yamada, Makoto; Sugiyama, Masashi

    The ratio of two probability densities is called the importance and its estimation has gathered a great deal of attention these days since the importance can be used for various data processing purposes. In this paper, we propose a new importance estimation method using Gaussian mixture models (GMMs). Our method is an extension of the Kullback-Leibler importance estimation procedure (KLIEP), an importance estimation method using linear or kernel models. An advantage of GMMs is that covariance matrices can also be learned through an expectation-maximization procedure, so the proposed method — which we call the Gaussian mixture KLIEP (GM-KLIEP) — is expected to work well when the true importance function has high correlation. Through experiments, we show the validity of the proposed approach.
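
    To make the notion of an importance weight concrete, the sketch below computes the ratio of two densities, each fitted with a Gaussian mixture model from scikit-learn. Note that this is a naive density-ratio baseline, not GM-KLIEP itself, which models the ratio directly; the samples and component counts are arbitrary.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      # Naive baseline: fit a GMM to each sample separately and take the ratio
      # of the two estimated densities as the importance weight.
      rng = np.random.default_rng(3)
      x_train = rng.normal(loc=0.0, scale=1.0, size=(2000, 1))   # denominator sample
      x_test = rng.normal(loc=0.5, scale=0.8, size=(2000, 1))    # numerator sample

      gmm_tr = GaussianMixture(n_components=2, random_state=0).fit(x_train)
      gmm_te = GaussianMixture(n_components=2, random_state=0).fit(x_test)

      x = np.linspace(-3, 3, 7).reshape(-1, 1)
      importance = np.exp(gmm_te.score_samples(x) - gmm_tr.score_samples(x))
      print(np.round(importance, 2))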

  15. Fractional Diffusion in Gaussian Noisy Environment

    Directory of Open Access Journals (Sweden)

    Guannan Hu

    2015-03-01

    Full Text Available We study fractional diffusion in a Gaussian noisy environment as described by fractional-order stochastic heat equations of the following form: \(D_t^{(\alpha)} u(t,x) = \textit{B}u + u\cdot \dot W^H\), where \(D_t^{(\alpha)}\) is the Caputo fractional derivative of order \(\alpha\in (0,1)\) with respect to the time variable \(t\), \(\textit{B}\) is a second order elliptic operator with respect to the space variable \(x\in\mathbb{R}^d\), and \(\dot W^H\) is a time homogeneous fractional Gaussian noise of Hurst parameter \(H=(H_1, \cdots, H_d)\). We obtain conditions satisfied by \(\alpha\) and \(H\) so that the square integrable solution \(u\) exists uniquely.

  16. Extended Linear Models with Gaussian Priors

    DEFF Research Database (Denmark)

    Quinonero, Joaquin

    2002-01-01

    In extended linear models the input space is projected onto a feature space by means of an arbitrary non-linear transformation. A linear model is then applied to the feature space to construct the model output. The dimension of the feature space can be very large, or even infinite, giving the model...... a very big flexibility. Support Vector Machines (SVM's) and Gaussian processes are two examples of such models. In this technical report I present a model in which the dimension of the feature space remains finite, and where a Bayesian approach is used to train the model with Gaussian priors...... on the parameters. The Relevance Vector Machine, introduced by Tipping, is a particular case of such a model. I give the detailed derivations of the expectation-maximisation (EM) algorithm used in the training. These derivations are not found in the literature, and might be helpful for newcomers....

  17. Interweave Cognitive Radio with Improper Gaussian Signaling

    KAUST Repository

    Hedhly, Wafa

    2018-01-15

    Improper Gaussian signaling (IGS) has proven its ability to improve the performance of underlay and overlay cognitive radio paradigms. In this paper, the interweave cognitive radio paradigm is studied when the cognitive user employs IGS. The instantaneous achievable rate performance of both the primary and secondary users is analyzed for specific secondary user sensing and detection capabilities. Next, the IGS scheme is optimized to maximize the achievable rate of the secondary user while satisfying a target minimum rate requirement for the primary user. A proper Gaussian signaling (PGS) scheme design is also derived to be used as a benchmark for the IGS scheme design. Finally, different numerical results are introduced to show the gain reaped from adopting IGS over PGS under different system parameters. The main advantage of employing IGS is observed at low sensing and detection capabilities of the SU, a weaker PU direct link, and higher SU interference on the PU side.

  18. Image reconstruction under non-Gaussian noise

    DEFF Research Database (Denmark)

    Sciacchitano, Federica

    During acquisition and transmission, images are often blurred and corrupted by noise. One of the fundamental tasks of image processing is to reconstruct the clean image from a degraded version. The process of recovering the original image from the data is an example of an inverse problem. Due to the ill-posedness of the problem, the simple inversion of the degradation model does not give any good reconstructions. Therefore, to deal with the ill-posedness it is necessary to use some prior information on the solution or the model and the Bayesian approach. Additive Gaussian noise has been... This PhD thesis intends to solve some of the many open questions for image restoration under non-Gaussian noise. The two main kinds of noise studied in this PhD project are impulse noise and Cauchy noise. Impulse noise is due to, for instance, malfunctioning pixel elements in the camera sensors, errors...

  19. Combining morphometric features and convolutional networks fusion for glaucoma diagnosis

    Science.gov (United States)

    Perdomo, Oscar; Arevalo, John; González, Fabio A.

    2017-11-01

    Glaucoma is an eye condition that leads to loss of vision and blindness. An ophthalmoscopy exam evaluates the shape, color and proportion between the optic disc and the physiologic cup, but the lack of agreement among experts is still the main diagnosis problem. The application of deep convolutional neural networks combined with automatic extraction of features such as the cup-to-disc distance in the four quadrants, the perimeter, area, eccentricity, and the major and minor radii of the optic disc and cup, in addition to all the ratios among the previous parameters, may help with a better automatic grading of glaucoma. This paper presents a strategy to merge morphological features and deep convolutional neural networks as a novel methodology to support glaucoma diagnosis in eye fundus images.

  20. Improving deep convolutional neural networks with mixed maxout units.

    Directory of Open Access Journals (Sweden)

    Hui-Zhen Zhao

    Full Text Available Motivated by insights from the maxout-units-based deep Convolutional Neural Network (CNN that "non-maximal features are unable to deliver" and "feature mapping subspace pooling is insufficient," we present a novel mixed variant of the recently introduced maxout unit called a mixout unit. Specifically, we do so by calculating the exponential probabilities of feature mappings gained by applying different convolutional transformations over the same input and then calculating the expected values according to their exponential probabilities. Moreover, we introduce the Bernoulli distribution to balance the maximum values with the expected values of the feature mappings subspace. Finally, we design a simple model to verify the pooling ability of mixout units and a Mixout-units-based Network-in-Network (NiN model to analyze the feature learning ability of the mixout models. We argue that our proposed units improve the pooling ability and that mixout models can achieve better feature learning and classification performance.
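
    The mixout operation described above can be sketched element-wise: a softmax over the k parallel feature maps gives the "exponential probabilities", their weighted sum gives the expected value, and a Bernoulli draw balances it against the maximum. The snippet below follows this description from the abstract; the exact formulation in the paper may differ, and all sizes are placeholders.

      import numpy as np

      def mixout(feature_maps, p_max=0.5, rng=None):
          """Combine k parallel feature maps (shape (k, ...)) into one map."""
          rng = np.random.default_rng(0) if rng is None else rng
          z = np.asarray(feature_maps, dtype=float)
          probs = np.exp(z - z.max(axis=0, keepdims=True))
          probs /= probs.sum(axis=0, keepdims=True)      # softmax over the k maps
          expected = (probs * z).sum(axis=0)             # expectation under those probabilities
          maximum = z.max(axis=0)                        # plain maxout value
          mask = rng.random(expected.shape) < p_max      # Bernoulli balance
          return np.where(mask, maximum, expected)

      # Three parallel 4x4 feature maps produced by different convolutions of one input.
      maps = np.random.default_rng(1).standard_normal((3, 4, 4))
      print(mixout(maps).shape)   # -> (4, 4)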

  1. Convolutional over Recurrent Encoder for Neural Machine Translation

    Directory of Open Access Journals (Sweden)

    Dakwale Praveen

    2017-06-01

    Full Text Available Neural machine translation is a recently proposed approach which has shown results competitive with traditional MT approaches. Standard neural MT is an end-to-end neural network where the source sentence is encoded by a recurrent neural network (RNN) called the encoder and the target words are predicted using another RNN known as the decoder. Recently, various models have been proposed which replace the RNN encoder with a convolutional neural network (CNN). In this paper, we propose to augment the standard RNN encoder in NMT with additional convolutional layers in order to capture wider context in the encoder output. Experiments on English to German translation demonstrate that our approach can achieve significant improvements over a standard RNN-based baseline.

  2. Infimal Convolution Regularisation Functionals of BV and Lp Spaces

    KAUST Repository

    Burger, Martin

    2016-02-03

    We study a general class of infimal convolution type regularisation functionals suitable for applications in image processing. These functionals incorporate a combination of the total variation seminorm and Lp norms. A unified well-posedness analysis is presented and a detailed study of the one-dimensional model is performed, by computing exact solutions for the corresponding denoising problem and the case p=2. Furthermore, the dependency of the regularisation properties of this infimal convolution approach to the choice of p is studied. It turns out that in the case p=2 this regulariser is equivalent to the Huber-type variant of total variation regularisation. We provide numerical examples for image decomposition as well as for image denoising. We show that our model is capable of eliminating the staircasing effect, a well-known disadvantage of total variation regularisation. Moreover as p increases we obtain almost piecewise affine reconstructions, leading also to a better preservation of hat-like structures.

  3. 3D Medical Image Interpolation Based on Parametric Cubic Convolution

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    In the process of display, manipulation and analysis, biomedical image data usually need to be converted to data of isotropic discretization through interpolation, and cubic convolution interpolation is widely used due to its good tradeoff between computational cost and accuracy. In this paper, we present a complete framework for 3D medical image interpolation based on cubic convolution, and six methods, each with a different sharpness control parameter, are formulated in detail. Furthermore, we give an objective comparison of these methods using data sets with different slice spacings. Each slice in these data sets is estimated by each interpolation method and compared with the original slice using three measures: mean-squared difference, number of sites of disagreement, and largest difference. Based on the experimental results, we conclude with a recommendation for 3D medical image interpolation under different situations.
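
    The cubic convolution family referred to above is built from a piecewise-cubic kernel with a sharpness control parameter. The sketch below implements the standard Keys form of that kernel and a one-dimensional interpolation step; the parameter values and the toy profile are assumptions, and the paper's six specific parameter choices are not reproduced.

      import numpy as np

      def cubic_kernel(x, a=-0.5):
          """Cubic convolution kernel with sharpness control parameter a.

          a = -0.5 is the classical Keys choice; varying a gives the family of
          kernels compared in the paper (their six specific values are not used here).
          """
          x = np.abs(x)
          out = np.zeros_like(x)
          m1 = x <= 1
          m2 = (x > 1) & (x < 2)
          out[m1] = (a + 2) * x[m1] ** 3 - (a + 3) * x[m1] ** 2 + 1
          out[m2] = a * x[m2] ** 3 - 5 * a * x[m2] ** 2 + 8 * a * x[m2] - 4 * a
          return out

      def interp1d_cubic(samples, t, a=-0.5):
          """Interpolate uniformly spaced samples at fractional position t."""
          i = int(np.floor(t))
          support = np.arange(i - 1, i + 3)                       # 4-point neighbourhood
          idx = np.clip(support, 0, len(samples) - 1)             # clamp at the borders
          return float(np.sum(samples[idx] * cubic_kernel(t - support, a)))

      slice_profile = np.array([0.0, 1.0, 4.0, 9.0, 16.0, 25.0])  # toy intensity profile
      print(interp1d_cubic(slice_profile, 2.5))                   # value between slices 2 and 3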

  4. Spatiotemporal Recurrent Convolutional Networks for Traffic Prediction in Transportation Networks.

    Science.gov (United States)

    Yu, Haiyang; Wu, Zhihai; Wang, Shuqin; Wang, Yunpeng; Ma, Xiaolei

    2017-06-26

    Predicting large-scale transportation network traffic has become an important and challenging topic in recent decades. Inspired by the domain knowledge of motion prediction, in which the future motion of an object can be predicted based on previous scenes, we propose a network grid representation method that can retain the fine-scale structure of a transportation network. Network-wide traffic speeds are converted into a series of static images and input into a novel deep architecture, namely, spatiotemporal recurrent convolutional networks (SRCNs), for traffic forecasting. The proposed SRCNs inherit the advantages of deep convolutional neural networks (DCNNs) and long short-term memory (LSTM) neural networks. The spatial dependencies of network-wide traffic can be captured by DCNNs, and the temporal dynamics can be learned by LSTMs. An experiment on a Beijing transportation network with 278 links demonstrates that SRCNs outperform other deep learning-based algorithms in both short-term and long-term traffic prediction.
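
    The SRCN idea, convolutional layers extracting spatial features from each traffic-speed "image" and an LSTM capturing their temporal dynamics, can be outlined as follows in PyTorch. Layer sizes, grid resolution, and sequence length are placeholders rather than the configuration used for the 278-link Beijing network.

      import torch
      import torch.nn as nn

      class SRCNSketch(nn.Module):
          """Schematic spatiotemporal model: per-frame CNN features fed to an LSTM."""
          def __init__(self, n_links=278, hidden=64):
              super().__init__()
              self.cnn = nn.Sequential(
                  nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                  nn.AdaptiveAvgPool2d(4), nn.Flatten(),          # -> 32*4*4 features
              )
              self.lstm = nn.LSTM(input_size=32 * 4 * 4, hidden_size=hidden, batch_first=True)
              self.head = nn.Linear(hidden, n_links)              # next-step speed per link

          def forward(self, x):                                   # x: (batch, time, 1, H, W)
              b, t = x.shape[:2]
              feats = self.cnn(x.flatten(0, 1)).view(b, t, -1)    # CNN applied frame by frame
              out, _ = self.lstm(feats)
              return self.head(out[:, -1])                        # predict from last time step

      model = SRCNSketch()
      speed_images = torch.randn(2, 10, 1, 32, 32)   # 10 past speed "images" per sample
      print(model(speed_images).shape)               # -> torch.Size([2, 278])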

  5. Deep learning for steganalysis via convolutional neural networks

    Science.gov (United States)

    Qian, Yinlong; Dong, Jing; Wang, Wei; Tan, Tieniu

    2015-03-01

    Current work on steganalysis for digital images is focused on the construction of complex handcrafted features. This paper proposes a new paradigm for steganalysis that learns features automatically via deep learning models. We propose a customized Convolutional Neural Network for steganalysis that can capture the complex dependencies useful for steganalysis. Compared with existing schemes, this model automatically learns feature representations with several convolutional layers. The feature extraction and classification steps are unified under a single architecture, which means the guidance of classification can be used during the feature extraction step. We demonstrate the effectiveness of the proposed model on three state-of-the-art spatial-domain steganographic algorithms - HUGO, WOW, and S-UNIWARD. Compared to the Spatial Rich Model (SRM), our model achieves comparable performance on BOSSbase and on the realistic and large ImageNet database.

  6. ID card number detection algorithm based on convolutional neural network

    Science.gov (United States)

    Zhu, Jian; Ma, Hanjie; Feng, Jie; Dai, Leiyan

    2018-04-01

    In this paper, a new detection algorithm based on a Convolutional Neural Network is presented to realize fast and convenient ID information extraction in multiple scenarios. The algorithm runs on a mobile device equipped with the Android operating system to locate and extract the ID number. It exploits the characteristic color distribution of the ID card to select the appropriate channel component; applies threshold segmentation, noise removal and morphological processing to binarize the image; uses image rotation and projection for horizontal correction when the image is tilted; and finally extracts single characters by the projection method and recognizes them with a Convolutional Neural Network. Tests show that processing a single ID number image, from extraction to recognition, takes about 80 ms with an accuracy of about 99%, so the method can be applied in real production and living environments.
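
    One concrete step of this pipeline, splitting the binarised ID-number strip into single characters by vertical projection, can be sketched in a few lines of Python; the toy strip and the minimum character width below are assumptions for illustration.

      # Minimal sketch of projection-based character segmentation on a binary strip.
      import numpy as np

      def segment_by_projection(binary_strip, min_width=2):
          # binary_strip: 2D array with 1 = ink; returns (start, end) column ranges.
          ink = binary_strip.sum(axis=0) > 0       # vertical projection
          segments, start = [], None
          for i, on in enumerate(ink):
              if on and start is None:
                  start = i
              elif not on and start is not None:
                  if i - start >= min_width:
                      segments.append((start, i))
                  start = None
          if start is not None:
              segments.append((start, len(ink)))
          return segments

      strip = np.zeros((10, 20), dtype=int)        # toy strip with two "characters"
      strip[:, 2:6] = 1
      strip[:, 10:15] = 1
      print(segment_by_projection(strip))          # [(2, 6), (10, 15)]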

  7. Trajectory Generation Method with Convolution Operation on Velocity Profile

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Geon [Hanyang Univ., Seoul (Korea, Republic of); Kim, Doik [Korea Institute of Science and Technology, Daejeon (Korea, Republic of)

    2014-03-15

    The use of robots is no longer limited to industrial robots and is now expanding into the fields of service and medical robotics. In this light, a trajectory generation method that can respond instantaneously to the external environment is strongly required. Toward this end, this study proposes a method that enables a robot to change its trajectory in real time using a convolution operation. The proposed method generates a trajectory in real time and satisfies the physical limits of the robot system, such as the velocity and acceleration limits. Moreover, a new way to improve the previous method, which generates inefficient trajectories in some cases owing to the trapezoidal shape of its trajectories, is proposed by introducing a triangular shape. The validity and effectiveness of the proposed method are shown through a numerical simulation and a comparison with the previous convolution method.
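
    The core convolution idea can be illustrated in a few lines of numpy: convolving a rectangular velocity profile with a unit-area box kernel preserves the travelled distance while bounding the slope (acceleration), yielding a trapezoidal profile. The limits and durations below are assumptions, and the triangular-shape refinement proposed in the paper is not shown.

      # Sketch: box-kernel convolution of a velocity profile under velocity/acceleration limits.
      import numpy as np

      dt, v_max, a_max, distance = 0.001, 1.0, 2.0, 0.5       # s, m/s, m/s^2, m

      t0 = distance / v_max                                   # duration of rectangular profile
      v0 = np.full(int(round(t0 / dt)), v_max)                # initial profile, area = distance

      t1 = v_max / a_max                                      # kernel width set by the accel limit
      kernel = np.full(int(round(t1 / dt)), 1.0)
      kernel /= kernel.sum()                                  # unit area keeps the distance

      v1 = np.convolve(v0, kernel)                            # trapezoidal velocity profile
      print(np.trapz(v0, dx=dt), np.trapz(v1, dx=dt))         # both ~ 0.5 m
      print(v1.max() <= v_max + 1e-9,                         # velocity limit respected
            np.abs(np.diff(v1)).max() / dt <= a_max + 1e-6)   # acceleration limit respected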

  8. Airplane detection in remote sensing images using convolutional neural networks

    Science.gov (United States)

    Ouyang, Chao; Chen, Zhong; Zhang, Feng; Zhang, Yifei

    2018-03-01

    Airplane detection in remote sensing images remains a challenging problem and continues to attract great interest from researchers. In this paper we propose an effective method to detect airplanes in remote sensing images using convolutional neural networks. With the rise of deep neural networks in target detection, deep learning methods show clear advantages over traditional methods, and we explain why this is the case. To improve detection performance, we combine a region proposal algorithm with convolutional neural networks, and in the training phase we divide the background into multiple classes rather than one, which reduces false alarms. Our experimental results show that the proposed method is effective and robust in detecting airplanes.

  9. Rock images classification by using deep convolution neural network

    Science.gov (United States)

    Cheng, Guojian; Guo, Wenhui

    2017-08-01

    Granularity analysis is one of the most essential issues in rock identification under the microscope. To improve the efficiency and accuracy of traditional manual work, a convolutional neural network based method is proposed for granularity analysis of thin-section images; it selects and extracts features from image samples and builds a classifier to recognize the granularity of input samples. 4800 samples from the Ordos basin are used for experiments in the HSV, YCbCr and RGB colour spaces. On the test dataset, the accuracy in the RGB colour space is 98.5%, and the results in the HSV and YCbCr colour spaces are also credible. The results show that the convolutional neural network can classify the rock images with high reliability.

  10. Analysis of multidimensional difference-of-Gaussians filters in terms of directly observable parameters.

    Science.gov (United States)

    Cope, Davis; Blakeslee, Barbara; McCourt, Mark E

    2013-05-01

    The difference-of-Gaussians (DOG) filter is a widely used model for the receptive field of neurons in the retina and lateral geniculate nucleus (LGN) and is a potential model in general for responses modulated by an excitatory center with an inhibitory surrounding region. A DOG filter is defined by three standard parameters: the center and surround sigmas (which define the variance of the radially symmetric Gaussians) and the balance (which defines the linear combination of the two Gaussians). These parameters are not directly observable and are typically determined by nonlinear parameter estimation methods applied to the frequency response function. DOG filters show both low-pass (optimal response at zero frequency) and bandpass (optimal response at a nonzero frequency) behavior. This paper reformulates the DOG filter in terms of a directly observable parameter, the zero-crossing radius, and two new (but not directly observable) parameters. In the two-dimensional parameter space, the exact region corresponding to bandpass behavior is determined. A detailed description of the frequency response characteristics of the DOG filter is obtained. It is also found that the directly observable optimal frequency and optimal gain (the ratio of the response at optimal frequency to the response at zero frequency) provide an alternate coordinate system for the bandpass region. Altogether, the DOG filter and its three standard implicit parameters can be determined by three directly observable values. The two-dimensional bandpass region is a potential tool for the analysis of populations of DOG filters (for example, populations of neurons in the retina or LGN), because the clustering of points in this parameter space may indicate an underlying organizational principle. This paper concentrates on circular Gaussians, but the results generalize to multidimensional radially symmetric Gaussians and are given as an appendix.
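
    A small numerical sketch of these quantities is given below: from the three standard parameters (center and surround sigmas and balance) it evaluates the frequency response, the optimal frequency and gain, and the spatial zero-crossing radius. The normalisation convention and parameter values are assumptions for illustration.

      # Numerical sketch of DOG filter characteristics from its three standard parameters.
      import numpy as np

      sigma_c, sigma_s, b = 1.0, 3.0, 0.8     # center sigma, surround sigma, balance

      def dog_freq_response(k):
          # 2D Fourier transform of unit-volume Gaussians combined with balance b.
          return np.exp(-2 * np.pi**2 * sigma_c**2 * k**2) \
               - b * np.exp(-2 * np.pi**2 * sigma_s**2 * k**2)

      k = np.linspace(0.0, 1.0, 10000)
      resp = dog_freq_response(k)
      k_opt = k[np.argmax(resp)]                       # optimal frequency (0 => low-pass)
      gain_opt = resp.max() / dog_freq_response(0.0)   # optimal gain

      def dog_space(r):
          return np.exp(-r**2 / (2 * sigma_c**2)) / (2 * np.pi * sigma_c**2) \
               - b * np.exp(-r**2 / (2 * sigma_s**2)) / (2 * np.pi * sigma_s**2)

      r = np.linspace(0.0, 10.0, 10000)
      r_zero = r[np.argmax(np.sign(dog_space(r)) < 0)]  # first sign change: zero-crossing radius
      print(k_opt, gain_opt, r_zero)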

  11. Non-Markovianity of Gaussian Channels.

    Science.gov (United States)

    Torre, G; Roga, W; Illuminati, F

    2015-08-14

    We introduce a necessary and sufficient criterion for the non-Markovianity of Gaussian quantum dynamical maps based on the violation of divisibility. The criterion is derived by defining a general vectorial representation of the covariance matrix which is then exploited to determine the condition for the complete positivity of partial maps associated with arbitrary time intervals. Such construction does not rely on the Choi-Jamiolkowski representation and does not require optimization over states.

  12. Log Gaussian Cox processes on the sphere

    DEFF Research Database (Denmark)

    Pacheco, Francisco Andrés Cuevas; Møller, Jesper

    We define and study the existence of log Gaussian Cox processes (LGCPs) for the description of inhomogeneous and aggregated/clustered point patterns on the d-dimensional sphere, with d = 2 of primary interest. Useful theoretical properties of LGCPs are studied and applied for the description of sky positions of galaxies, in comparison with previous analysis using a Thomas process. We focus on simple estimation procedures and model checking based on functional summary statistics and the global envelope test.

  13. Recognition of Images Degraded by Gaussian Blur

    Czech Academy of Sciences Publication Activity Database

    Flusser, Jan; Farokhi, Sajad; Höschl, Cyril; Suk, Tomáš; Zitová, Barbara; Pedone, M.

    2016-01-01

    Roč. 25, č. 2 (2016), s. 790-806 ISSN 1057-7149 R&D Projects: GA ČR(CZ) GA15-16928S Institutional support: RVO:67985556 Keywords : Blurred image * object recognition * blur invariant comparison * Gaussian blur * projection operators * image moments * moment invariants Subject RIV: JD - Computer Applications, Robotics Impact factor: 4.828, year: 2016 http://library.utia.cas.cz/separaty/2016/ZOI/flusser-0454335.pdf

  14. Adaptive multiple importance sampling for Gaussian processes

    Czech Academy of Sciences Publication Activity Database

    Xiong, X.; Šmídl, Václav; Filippone, M.

    2017-01-01

    Roč. 87, č. 8 (2017), s. 1644-1665 ISSN 0094-9655 R&D Projects: GA MŠk(CZ) 7F14287 Institutional support: RVO:67985556 Keywords : Gaussian Process * Bayesian estimation * Adaptive importance sampling Subject RIV: BB - Applied Statistics, Operational Research OBOR OECD: Statistics and probability Impact factor: 0.757, year: 2016 http://library.utia.cas.cz/separaty/2017/AS/smidl-0469804.pdf

  15. User-generated content curation with deep convolutional neural networks

    OpenAIRE

    Tous Liesa, Rubén; Wust, Otto; Gómez, Mauro; Poveda, Jonatan; Elena, Marc; Torres Viñals, Jordi; Makni, Mouna; Ayguadé Parra, Eduard

    2016-01-01

    In this paper, we report a work consisting in using deep convolutional neural networks (CNNs) for curating and filtering photos posted by social media users (Instagram and Twitter). The final goal is to facilitate searching and discovering user-generated content (UGC) with potential value for digital marketing tasks. The images are captured in real time and automatically annotated with multiple CNNs. Some of the CNNs perform generic object recognition tasks while others perform what we call v...

  16. A quantum algorithm for Viterbi decoding of classical convolutional codes

    OpenAIRE

    Grice, Jon R.; Meyer, David A.

    2014-01-01

    We present a quantum Viterbi algorithm (QVA) with better-than-classical performance under certain conditions, for instance large constraint length $Q$ and short decode frames $N$. In this paper the proposed algorithm is applied to decoding classical convolutional codes. Other applications of the classical Viterbi algorithm where $Q$ is large (e.g. speech processing) could experience significant speedup with the QVA. The QVA exploits the fact that the decoding trellis is similar to the butter...

  17. Abnormality Detection in Mammography using Deep Convolutional Neural Networks

    OpenAIRE

    Xi, Pengcheng; Shu, Chang; Goubran, Rafik

    2018-01-01

    Breast cancer is the most common cancer in women worldwide. The most common screening technology is mammography. To reduce the cost and workload of radiologists, we propose a computer aided detection approach for classifying and localizing calcifications and masses in mammogram images. To improve on conventional approaches, we apply deep convolutional neural networks (CNN) for automatic feature learning and classifier building. In computer-aided mammography, deep CNN classifiers cannot be tra...

  18. Quantifying Translation-Invariance in Convolutional Neural Networks

    OpenAIRE

    Kauderer-Abrams, Eric

    2017-01-01

    A fundamental problem in object recognition is the development of image representations that are invariant to common transformations such as translation, rotation, and small deformations. There are multiple hypotheses regarding the source of translation invariance in CNNs. One idea is that translation invariance is due to the increasing receptive field size of neurons in successive convolution layers. Another possibility is that invariance is due to the pooling operation. We develop a simple ...

  19. Fast convolutional sparse coding using matrix inversion lemma

    Czech Academy of Sciences Publication Activity Database

    Šorel, Michal; Šroubek, Filip

    2016-01-01

    Roč. 55, č. 1 (2016), s. 44-51 ISSN 1051-2004 R&D Projects: GA ČR GA13-29225S Institutional support: RVO:67985556 Keywords : Convolutional sparse coding * Feature learning * Deconvolution networks * Shift-invariant sparse coding Subject RIV: JD - Computer Applications, Robotics Impact factor: 2.337, year: 2016 http://library.utia.cas.cz/separaty/2016/ZOI/sorel-0459332.pdf

  20. Learning Convolutional Text Representations for Visual Question Answering

    OpenAIRE

    Wang, Zhengyang; Ji, Shuiwang

    2017-01-01

    Visual question answering is a recently proposed artificial intelligence task that requires a deep understanding of both images and texts. In deep learning, images are typically modeled through convolutional neural networks, and texts are typically modeled through recurrent neural networks. While the requirement for modeling images is similar to traditional computer vision tasks, such as object recognition and image classification, visual question answering raises a different need for textual...

  1. Shallow and deep convolutional networks for saliency prediction

    OpenAIRE

    Pan, Junting; Sayrol Clols, Elisa; Giró Nieto, Xavier; McGuinness, Kevin; O'Connor, Noel

    2016-01-01

    The prediction of salient areas in images has been traditionally addressed with hand-crafted features based on neuroscience principles. This paper, however, addresses the problem with a completely data-driven approach by training a convolutional neural network (convnet). The learning process is formulated as a minimization of a loss function that measures the Euclidean distance of the predicted saliency map with the provided ground truth. The recent publication of large datasets of saliency p...

  2. Production and reception of meaningful sound in Foville's 'encompassing convolution'.

    Science.gov (United States)

    Schiller, F

    1999-04-01

    In the history of neurology, Achille Louis Foville (1799-1879) is a name deserving to be remembered. In the course of time, his circonvolution d'enceinte of 1844 (surrounding the Sylvian fissure) became the 'convolution encompassing' every aspect of aphasiology, including amusia, i.e., the localization in a coherent semicircle of cerebral cortex serving the production and perception of language, song and instrumental music in health and disease.

  3. Relay Backpropagation for Effective Learning of Deep Convolutional Neural Networks

    OpenAIRE

    Shen, Li; Lin, Zhouchen; Huang, Qingming

    2015-01-01

    Learning deeper convolutional neural networks has become a tendency in recent years. However, much empirical evidence suggests that performance improvement cannot be gained by simply stacking more layers. In this paper, we consider the issue from an information-theoretical perspective and propose a novel method, Relay Backpropagation, that encourages the propagation of effective information through the network during the training stage. By virtue of the method, we achieved the first place in ILSVRC 2015...

  4. Maximum likelihood convolutional decoding (MCD) performance due to system losses

    Science.gov (United States)

    Webster, L.

    1976-01-01

    A model for predicting the computational performance of a maximum likelihood convolutional decoder (MCD) operating in a noisy carrier reference environment is described. This model is used to develop a subroutine that will be utilized by the Telemetry Analysis Program to compute the MCD bit error rate. When this computational model is averaged over noisy reference phase errors using a high-rate interpolation scheme, the results are found to agree quite favorably with experimental measurements.

  5. General Dirichlet Series, Arithmetic Convolution Equations and Laplace Transforms

    Czech Academy of Sciences Publication Activity Database

    Glöckner, H.; Lucht, L.G.; Porubský, Štefan

    2009-01-01

    Roč. 193, č. 2 (2009), s. 109-129 ISSN 0039-3223 R&D Projects: GA ČR GA201/07/0191 Institutional research plan: CEZ:AV0Z10300504 Keywords : arithmetic function * Dirichlet convolution * polynomial equation * analytic equation * topological algebra * holomorphic functional calculus * implicit function theorem * Laplace transform * semigroup * complex measure Subject RIV: BA - General Mathematics Impact factor: 0.645, year: 2009 http://arxiv.org/abs/0712.3172

  6. Solving singular convolution equations using the inverse fast Fourier transform

    Czech Academy of Sciences Publication Activity Database

    Krajník, E.; Montesinos, V.; Zizler, P.; Zizler, Václav

    2012-01-01

    Roč. 57, č. 5 (2012), s. 543-550 ISSN 0862-7940 R&D Projects: GA AV ČR IAA100190901 Institutional research plan: CEZ:AV0Z10190503 Keywords : singular convolution equations * fast Fourier transform * tempered distribution Subject RIV: BA - General Mathematics Impact factor: 0.222, year: 2012 http://www.springerlink.com/content/m8437t3563214048/

  7. CICAAR - Convolutive ICA with an Auto-Regressive Inverse Model

    DEFF Research Database (Denmark)

    Dyrholm, Mads; Hansen, Lars Kai

    2004-01-01

    We invoke an auto-regressive IIR inverse model for convolutive ICA and derive expressions for the likelihood and its gradient. We argue that optimization will give a stable inverse. When there are more sensors than sources the mixing model parameters are estimated in a second step by least squares estimation. We demonstrate the method on synthetic data and finally separate speech and music in a real room recording.

  8. Neutron inverse kinetics via Gaussian Processes

    International Nuclear Information System (INIS)

    Picca, Paolo; Furfaro, Roberto

    2012-01-01

    Highlights: ► A novel technique for the interpretation of experiments in ADS is presented. ► The technique is based on Bayesian regression, implemented via Gaussian Processes. ► GPs overcome the limits of classical methods, based on PK approximation. ► Results compare GP and ANN performance, underlining similarities and differences. - Abstract: The paper introduces the application of Gaussian Processes (GPs) to determine the subcriticality level in accelerator-driven systems (ADSs) through the interpretation of pulsed experiment data. ADSs have peculiar kinetic properties due to their special core design. For this reason, classical inversion techniques based on point kinetics (PK) generally fail to generate an accurate estimate of reactor subcriticality. Similarly to Artificial Neural Networks (ANNs), Gaussian Processes can be successfully trained to learn the underlying inverse neutron kinetic model and, as such, they are not limited to the model choice. Importantly, GPs are strongly rooted in Bayes' theorem, which makes them a powerful tool for statistical inference. Here, GPs have been designed and trained on a set of kinetics models (e.g. point kinetics and multi-point kinetics) for homogeneous and heterogeneous settings. The results presented in the paper show that GPs are very efficient and accurate in predicting the reactivity for ADS-like systems. The variance computed via GPs may provide an indication on how to generate additional data as a function of the desired accuracy.
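
    A hedged scikit-learn sketch of the idea follows: a Gaussian Process is trained on simulated (detector trace, reactivity) pairs and then queried with a new trace, with the predictive standard deviation serving as the accuracy indicator. The toy exponential "pulse response" below is a stand-in for the point/multi-point kinetics models used in the paper.

      # Hedged GP regression sketch: learn the inverse map from pulse traces to reactivity.
      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, WhiteKernel

      rng = np.random.default_rng(0)

      def toy_pulse_response(rho, t):
          # Stand-in for a pulsed-experiment detector trace at subcriticality rho (< 0).
          return np.exp(rho * t) + 0.01 * rng.standard_normal(t.size)

      t = np.linspace(0, 5, 40)
      rho_train = np.linspace(-3.0, -0.2, 25)
      X = np.array([toy_pulse_response(r, t) for r in rho_train])   # traces as features
      y = rho_train                                                 # target: reactivity

      gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(1e-3),
                                    normalize_y=True).fit(X, y)

      x_new = toy_pulse_response(-1.5, t).reshape(1, -1)
      mean, std = gp.predict(x_new, return_std=True)   # estimate and its uncertainty
      print(mean[0], std[0])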

  9. Resonant non-Gaussianity with equilateral properties

    International Nuclear Information System (INIS)

    Gwyn, Rhiannon; Rummel, Markus

    2012-11-01

    We discuss the effect of superimposing multiple sources of resonant non-Gaussianity, which arise for instance in models of axion inflation. The resulting sum of oscillating shape contributions can be used to ''Fourier synthesize'' different non-oscillating shapes in the bispectrum. As an example we reproduce an approximately equilateral shape from the superposition of O(10) oscillatory contributions with resonant shape. This implies a possible degeneracy between the equilateral-type non-Gaussianity typical of models with non-canonical kinetic terms, such as DBI inflation, and an equilateral-type shape arising from a superposition of resonant-type contributions in theories with canonical kinetic terms. The absence of oscillations in the 2-point function, together with the structure of the resonant N-point functions, implies that detection of equilateral non-Gaussianity at a level greater than the PLANCK sensitivity of f_NL ∝ O(5) will rule out a resonant origin. We comment on the questions arising from possible embeddings of this idea in a string theory setting.

  10. Unitarily localizable entanglement of Gaussian states

    International Nuclear Information System (INIS)

    Serafini, Alessio; Adesso, Gerardo; Illuminati, Fabrizio

    2005-01-01

    We consider generic (m × n)-mode bipartitions of continuous-variable systems, and study the associated bisymmetric multimode Gaussian states. They are defined as (m+n)-mode Gaussian states invariant under local mode permutations on the m-mode and n-mode subsystems. We prove that such states are equivalent, under local unitary transformations, to the tensor product of a two-mode state and of m+n-2 uncorrelated single-mode states. The entanglement between the m-mode and the n-mode blocks can then be completely concentrated on a single pair of modes by means of local unitary operations alone. This result allows us to prove that the PPT (positivity of the partial transpose) condition is necessary and sufficient for the separability of (m+n)-mode bisymmetric Gaussian states. We determine exactly their negativity and identify a subset of bisymmetric states whose multimode entanglement of formation can be computed analytically. We consider explicit examples of pure and mixed bisymmetric states and study their entanglement scaling with the number of modes.

  11. Gaussian Hypothesis Testing and Quantum Illumination.

    Science.gov (United States)

    Wilde, Mark M; Tomamichel, Marco; Lloyd, Seth; Berta, Mario

    2017-09-22

    Quantum hypothesis testing is one of the most basic tasks in quantum information theory and has fundamental links with quantum communication and estimation theory. In this paper, we establish a formula that characterizes the decay rate of the minimal type-II error probability in a quantum hypothesis test of two Gaussian states given a fixed constraint on the type-I error probability. This formula is a direct function of the mean vectors and covariance matrices of the quantum Gaussian states in question. We give an application to quantum illumination, which is the task of determining whether there is a low-reflectivity object embedded in a target region with a bright thermal-noise bath. For the asymmetric-error setting, we find that a quantum illumination transmitter can achieve an error probability exponent stronger than a coherent-state transmitter of the same mean photon number, and furthermore, that it requires far fewer trials to do so. This occurs when the background thermal noise is either low or bright, which means that a quantum advantage is even easier to witness than in the symmetric-error setting because it occurs for a larger range of parameters. Going forward from here, we expect our formula to have applications in settings well beyond those considered in this paper, especially to quantum communication tasks involving quantum Gaussian channels.

  12. Resonant non-Gaussianity with equilateral properties

    Energy Technology Data Exchange (ETDEWEB)

    Gwyn, Rhiannon [Max-Planck-Institut fuer Gravitationsphysik (Albert-Einstein-Institut), Potsdam (Germany); Rummel, Markus [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Westphal, Alexander [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)

    2012-11-15

    We discuss the effect of superimposing multiple sources of resonant non-Gaussianity, which arise for instance in models of axion inflation. The resulting sum of oscillating shape contributions can be used to ''Fourier synthesize'' different non-oscillating shapes in the bispectrum. As an example we reproduce an approximately equilateral shape from the superposition of O(10) oscillatory contributions with resonant shape. This implies a possible degeneracy between the equilateral-type non-Gaussianity typical of models with non-canonical kinetic terms, such as DBI inflation, and an equilateral-type shape arising from a superposition of resonant-type contributions in theories with canonical kinetic terms. The absence of oscillations in the 2-point function, together with the structure of the resonant N-point functions, implies that detection of equilateral non-Gaussianity at a level greater than the PLANCK sensitivity of f_NL ∝ O(5) will rule out a resonant origin. We comment on the questions arising from possible embeddings of this idea in a string theory setting.

  13. Gaussian Process-Mixture Conditional Heteroscedasticity.

    Science.gov (United States)

    Platanios, Emmanouil A; Chatzis, Sotirios P

    2014-05-01

    Generalized autoregressive conditional heteroscedasticity (GARCH) models have long been considered as one of the most successful families of approaches for volatility modeling in financial return series. In this paper, we propose an alternative approach based on methodologies widely used in the field of statistical machine learning. Specifically, we propose a novel nonparametric Bayesian mixture of Gaussian process regression models, each component of which models the noise variance process that contaminates the observed data as a separate latent Gaussian process driven by the observed data. This way, we essentially obtain a Gaussian process-mixture conditional heteroscedasticity (GPMCH) model for volatility modeling in financial return series. We impose a nonparametric prior with power-law nature over the distribution of the model mixture components, namely the Pitman-Yor process prior, to allow for better capturing modeled data distributions with heavy tails and skewness. Finally, we provide a copula-based approach for obtaining a predictive posterior for the covariances over the asset returns modeled by means of a postulated GPMCH model. We evaluate the efficacy of our approach in a number of benchmark scenarios, and compare its performance to state-of-the-art methodologies.

  14. Non-Gaussian conductivity fluctuations in semiconductors

    International Nuclear Information System (INIS)

    Melkonyan, S.V.

    2010-01-01

    A theoretical study is presented on the statistical properties of conductivity fluctuations caused by concentration and mobility fluctuations of the current carriers. It is established that mobility fluctuations result from random deviations in the thermal equilibrium distribution of the carriers. It is shown that mobility fluctuations have generation-recombination and shot components which do not satisfy the requirements of the central limit theorem, in contrast to the carrier concentration fluctuation and the intraband component of the mobility fluctuation. It is shown that, in general, the mobility fluctuation consists of a thermal (or intraband) Gaussian component and non-thermal (or generation-recombination, shot, etc.) non-Gaussian components. The analysis of theoretical results and experimental data from the literature shows that the statistical properties of the mobility fluctuation and of 1/f-noise fully coincide. The deviation from Gaussian statistics of the mobility or 1/f fluctuations goes hand in hand with the magnitude of the non-thermal noise (generation-recombination, shot, burst, pulse noises, etc.).

  15. Perturbative Gaussianizing transforms for cosmological fields

    Science.gov (United States)

    Hall, Alex; Mead, Alexander

    2018-01-01

    Constraints on cosmological parameters from large-scale structure have traditionally been obtained from two-point statistics. However, non-linear structure formation renders these statistics insufficient in capturing the full information content available, necessitating the measurement of higher order moments to recover information which would otherwise be lost. We construct quantities based on non-linear and non-local transformations of weakly non-Gaussian fields that Gaussianize the full multivariate distribution at a given order in perturbation theory. Our approach does not require a model of the fields themselves and takes as input only the first few polyspectra, which could be modelled or measured from simulations or data, making our method particularly suited to observables lacking a robust perturbative description such as the weak-lensing shear. We apply our method to simulated density fields, finding a significantly reduced bispectrum and an enhanced correlation with the initial field. We demonstrate that our method reconstructs a large proportion of the linear baryon acoustic oscillations, improving the information content over the raw field by 35 per cent. We apply the transform to toy 21 cm intensity maps, showing that our method still performs well in the presence of complications such as redshift-space distortions, beam smoothing, pixel noise and foreground subtraction. We discuss how this method might provide a route to constructing a perturbative model of the fully non-Gaussian multivariate likelihood function.

  16. AFM tip-sample convolution effects for cylinder protrusions

    Science.gov (United States)

    Shen, Jian; Zhang, Dan; Zhang, Fei-Hu; Gan, Yang

    2017-11-01

    A thorough understanding of AFM tip geometry dependent artifacts and the tip-sample convolution effect is essential for reliable AFM topographic characterization and dimensional metrology. Using rigid sapphire cylinder protrusions (diameter: 2.25 μm, height: 575 nm) as the model system, a systematic and quantitative study of the imaging artifacts of four types of tips - two different pyramidal tips, one tetrahedral tip and one super sharp whisker tip - is carried out by comparing tip geometry dependent variations in the AFM topography of the cylinders and constructing rigid tip-cylinder convolution models. We found that the imaging artifacts and the tip-sample convolution effect are critically related to the actual inclination of the working cantilever, the tip geometry, and the obstructive contacts between the working tip's planes/edges and the cylinder. Artifact-free images can only be obtained provided that all planes and edges of the working tip are steeper than the cylinder sidewalls. The findings reported here will contribute to reliable AFM characterization of surface features of micron or hundreds-of-nanometers height that are frequently met in the semiconductor, biology and materials fields.

  17. Edgeworth Expansion Based Model for the Convolutional Noise pdf

    Directory of Open Access Journals (Sweden)

    Yonatan Rivlin

    2014-01-01

    Recently, the Edgeworth expansion up to order 4 was used to represent the convolutional noise probability density function (pdf) in the conditional expectation calculations, where the source pdf was modeled with the maximum entropy density approximation technique. However, the applied Lagrange multipliers were not the appropriate ones for the chosen model for the convolutional noise pdf. In this paper we use the Edgeworth expansion up to order 4 and up to order 6 to model the convolutional noise pdf. We derive the appropriate Lagrange multipliers, thus obtaining new closed-form approximated expressions for the conditional expectation and mean square error (MSE) as a byproduct. Simulation results indicate hardly any equalization improvement with the Edgeworth expansion up to order 4 when using the optimal Lagrange multipliers over a nonoptimal set. In addition, there is no justification for using the Edgeworth expansion up to order 6 over the Edgeworth expansion up to order 4 for the 16QAM and easy channel case. However, the Edgeworth expansion up to order 6 leads to improved equalization performance compared to the Edgeworth expansion up to order 4 for the 16QAM and hard channel case as well as for the case where 64QAM is sent via an easy channel.

  18. Traffic sign recognition based on deep convolutional neural network

    Science.gov (United States)

    Yin, Shi-hao; Deng, Ji-cai; Zhang, Da-wei; Du, Jing-yuan

    2017-11-01

    Traffic sign recognition (TSR) is an important component of automated driving systems. It is a rather challenging task to design a high-performance classifier for the TSR system. In this paper, we propose a new method for the TSR system based on a deep convolutional neural network. In order to enhance the expressiveness of the network, a novel structure (dubbed block-layer below), which combines network-in-network and residual connections, is designed. Our network has 10 layers with parameters (a block-layer counted as a single layer): the first seven are alternating convolutional layers and block-layers, and the remaining three are fully connected layers. We train our TSR network on the German traffic sign recognition benchmark (GTSRB) dataset. To reduce overfitting, we perform data augmentation on the training images and employ a regularization method named "dropout". The activation function employed in our network is the scaled exponential linear unit (SELU), which can induce self-normalizing properties. To speed up the training, we use an efficient GPU to accelerate the convolutional operation. On the test dataset of GTSRB, we achieve an accuracy rate of 99.67%, exceeding the state-of-the-art results.
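
    The "block-layer" mentioned above combines a network-in-network style convolution with a residual connection; a hedged PyTorch sketch of one such block, with assumed channel counts and wiring, is given below.

      # Hedged sketch of a block-layer: 3x3 conv + 1x1 (network-in-network) conv,
      # a residual connection, and SELU activations. The exact wiring is an assumption.
      import torch
      import torch.nn as nn

      class BlockLayer(nn.Module):
          def __init__(self, channels):
              super().__init__()
              self.conv3 = nn.Conv2d(channels, channels, 3, padding=1)  # spatial conv
              self.conv1 = nn.Conv2d(channels, channels, 1)             # network-in-network 1x1
              self.act = nn.SELU()                                      # self-normalizing units

          def forward(self, x):
              out = self.act(self.conv3(x))
              out = self.conv1(out)
              return self.act(out + x)                                  # residual connection

      x = torch.randn(8, 32, 48, 48)        # e.g. feature maps of 48x48 traffic-sign crops
      print(BlockLayer(32)(x).shape)        # torch.Size([8, 32, 48, 48])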

  19. Face recognition via Gabor and convolutional neural network

    Science.gov (United States)

    Lu, Tongwei; Wu, Menglu; Lu, Tao

    2018-04-01

    In recent years, the powerful feature learning and classification ability of convolutional neural networks has attracted wide attention. Compared with deep learning, traditional machine learning algorithms offer an interpretability that deep learning lacks. In this paper, we therefore propose to use features extracted by a traditional algorithm as the input of a convolutional neural network. To reduce the complexity of the network, Gabor wavelet kernels are used to extract features at different positions, frequencies and directions of the target image; they are sensitive to image edges and provide good direction and scale selectivity. The features extracted in eight directions on a single scale are used as the input of the proposed network. The network has the advantages of weight sharing and local connectivity, and the texture features of the input reduce the influence of facial expression, gesture and illumination. At the same time, we introduce a layer that combines the results of pooling and convolution to extract deeper features. The network is trained with the open-source Caffe framework, which is convenient for feature extraction. Experimental results show that the proposed network structure effectively overcomes variations in illumination and is more robust, accurate and rapid than the traditional algorithm.
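
    The Gabor front end can be sketched as follows: kernels at eight orientations on a single scale are convolved with the image, and the resulting feature maps become the input channels of the CNN. Kernel size, wavelength and bandwidth below are illustrative assumptions.

      # Sketch of an 8-orientation, single-scale Gabor filter bank as CNN input.
      import numpy as np
      from scipy.signal import fftconvolve

      def gabor_kernel(theta, size=15, sigma=3.0, wavelength=6.0):
          half = size // 2
          y, x = np.mgrid[-half:half + 1, -half:half + 1]
          xr = x * np.cos(theta) + y * np.sin(theta)       # rotated coordinate (orientation)
          envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
          carrier = np.cos(2 * np.pi * xr / wavelength)
          return envelope * carrier

      def gabor_features(image, n_orientations=8):
          return np.stack([fftconvolve(image, gabor_kernel(np.pi * k / n_orientations), mode='same')
                           for k in range(n_orientations)])

      face = np.random.rand(64, 64)          # placeholder for a grayscale face image
      features = gabor_features(face)        # shape (8, 64, 64): the CNN input channels
      print(features.shape)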

  20. Nuclear norm regularized convolutional Max Pos@Top machine

    KAUST Repository

    Li, Qinfeng

    2016-11-18

    In this paper, we propose a novel classification model for multiple-instance data that aims to maximize the number of positive instances ranked before the top-ranked negative instances, a recently emerged performance measure known as Pos@Top. Our classification model has a convolutional structure composed of four layers: a convolutional layer, an activation layer, a max-pooling layer and a fully connected layer. We propose an algorithm to learn the convolutional filters and the fully connected weights that maximize the Pos@Top measure over the training set. In addition, we minimize the rank of the filter matrix, via nuclear norm minimization, to explore the low-dimensional space of the instances in conjunction with the classification results, and we develop an iterative algorithm to solve the corresponding problem. We test our method on several benchmark datasets, and the experimental results show its superiority over other state-of-the-art Pos@Top maximization methods.

  1. MINIMUM ENTROPY DECONVOLUTION OF ONE- AND MULTI-DIMENSIONAL NON-GAUSSIAN LINEAR RANDOM PROCESSES

    Institute of Scientific and Technical Information of China (English)

    程乾生

    1990-01-01

    Minimum entropy deconvolution is considered one of the methods for decomposing non-Gaussian linear processes. The concept of the peakedness of a system response sequence is presented and its properties are studied. With the aid of the peakedness, the convergence theory of minimum entropy deconvolution is established. The problem of minimum entropy deconvolution of multi-dimensional non-Gaussian linear random processes is investigated for the first time and the corresponding theory is given. In addition, the relation between minimum entropy deconvolution and the parameter method is discussed.

  2. Searching for non-Gaussianity in the WMAP data

    International Nuclear Information System (INIS)

    Bernui, A.; Reboucas, M. J.

    2009-01-01

    Some analyses of recent cosmic microwave background (CMB) data have provided hints that there are deviations from Gaussianity in the WMAP CMB temperature fluctuations. Given the far-reaching consequences of such non-Gaussianity for our understanding of the physics of the early universe, it is important to employ alternative indicators in order to determine whether the reported non-Gaussianity is of cosmological origin, and/or extract further information that may be helpful for identifying its causes. We propose two new non-Gaussianity indicators, based on skewness and kurtosis of large-angle patches of CMB maps, which provide a measure of departure from Gaussianity on large angular scales. A distinctive feature of these indicators is that they provide sky maps of non-Gaussianity of the CMB temperature data, thus allowing a possible additional window into their origins. Using these indicators, we find no significant deviation from Gaussianity in the three- and five-year WMAP Internal Linear Combination (ILC) maps with the KQ75 mask, while the unmasked ILC map exhibits deviation from Gaussianity, thereby quantifying the WMAP team's recommendation to employ the new KQ75 mask for tests of Gaussianity. We also use our indicators to test for Gaussianity the single-frequency, foreground-unremoved WMAP three- and five-year maps, and show that the K and Ka maps exhibit a clear indication of deviation from Gaussianity even with the KQ75 mask. We show that our findings are robust with respect to the details of the method.
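
    In simplified form, the proposed indicators amount to computing the skewness and kurtosis of the temperature values inside each large-angle patch and mapping them over the sky. The Python sketch below uses a flat toy map and square patches in place of the spherical caps on a HEALPix CMB map used in the paper.

      # Simplified sketch of patchwise skewness/kurtosis non-Gaussianity indicators.
      import numpy as np
      from scipy.stats import skew, kurtosis

      rng = np.random.default_rng(1)
      cmb_like = rng.standard_normal((90, 180))          # toy temperature fluctuation map

      def patch_indicators(tmap, patch=30):
          S = np.zeros((tmap.shape[0] // patch, tmap.shape[1] // patch))
          K = np.zeros_like(S)
          for i in range(S.shape[0]):
              for j in range(S.shape[1]):
                  vals = tmap[i*patch:(i+1)*patch, j*patch:(j+1)*patch].ravel()
                  S[i, j] = skew(vals)                   # departure from Gaussian symmetry
                  K[i, j] = kurtosis(vals)               # excess kurtosis (0 for a Gaussian)
          return S, K

      S_map, K_map = patch_indicators(cmb_like)
      print(S_map.shape, np.abs(S_map).max(), np.abs(K_map).max())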

  3. Agile convolutional neural network for pulmonary nodule classification using CT images.

    Science.gov (United States)

    Zhao, Xinzhuo; Liu, Liyao; Qi, Shouliang; Teng, Yueyang; Li, Jianhua; Qian, Wei

    2018-04-01

    To distinguish benign from malignant pulmonary nodules using CT images is critical for their precise diagnosis and treatment. A new Agile convolutional neural network (CNN) framework is proposed to conquer the challenges of a small-scale medical image database and the small size of the nodules, and it improves the performance of pulmonary nodule classification using CT images. A hybrid CNN of LeNet and AlexNet is constructed through combining the layer settings of LeNet and the parameter settings of AlexNet. A dataset with 743 CT image nodule samples is built up based on the 1018 CT scans of LIDC to train and evaluate the Agile CNN model. Through adjusting the parameters of the kernel size, learning rate, and other factors, the effect of these parameters on the performance of the CNN model is investigated, and an optimized setting of the CNN is obtained finally. After finely optimizing the settings of the CNN, the estimation accuracy and the area under the curve can reach 0.822 and 0.877, respectively. The accuracy of the CNN is significantly dependent on the kernel size, learning rate, training batch size, dropout, and weight initializations. The best performance is achieved when the kernel size is set to [Formula: see text], the learning rate is 0.005, the batch size is 32, and dropout and Gaussian initialization are used. This competitive performance demonstrates that our proposed CNN framework and the optimization strategy of the CNN parameters are suitable for pulmonary nodule classification characterized by small medical datasets and small targets. The classification model might help diagnose and treat pulmonary nodules effectively.

  4. Gaussian capacity of the quantum bosonic memory channel with additive correlated Gaussian noise

    International Nuclear Information System (INIS)

    Schaefer, Joachim; Karpov, Evgueni; Cerf, Nicolas J.

    2011-01-01

    We present an algorithm for calculation of the Gaussian classical capacity of a quantum bosonic memory channel with additive Gaussian noise. The algorithm, restricted to Gaussian input states, is applicable to all channels with noise correlations obeying certain conditions and works in the full input energy domain, beyond previous treatments of this problem. As an illustration, we study the optimal input states and capacity of a quantum memory channel with Gauss-Markov noise [J. Schaefer, Phys. Rev. A 80, 062313 (2009)]. We evaluate the enhancement of the transmission rate when using these optimal entangled input states by comparison with a product coherent-state encoding and find out that such a simple coherent-state encoding achieves not less than 90% of the capacity.

  5. Convolution equations on lattices: periodic solutions with values in a prime characteristic field

    OpenAIRE

    Zaidenberg, Mikhail

    2006-01-01

    These notes are inspired by the theory of cellular automata. A linear cellular automaton on a lattice of finite rank or on a toric grid is a discrete dynamical system generated by a convolution operator with kernel concentrated in the nearest neighborhood of the origin. In the present paper we deal with general convolution operators. We propose an approach via harmonic analysis which works over a field of positive characteristic. It occurs that a standard spectral problem for a convolution op...

  6. High-Order Local Pooling and Encoding Gaussians Over a Dictionary of Gaussians.

    Science.gov (United States)

    Li, Peihua; Zeng, Hui; Wang, Qilong; Shiu, Simon C K; Zhang, Lei

    2017-07-01

    Local pooling (LP) in configuration (feature) space proposed by Boureau et al. explicitly restricts similar features to be aggregated, which can preserve as much discriminative information as possible. At the time it appeared, this method combined with sparse coding achieved competitive classification results with only a small dictionary. However, its performance lags far behind the state-of-the-art results as only the zero-order information is exploited. Inspired by the success of high-order statistical information in existing advanced feature coding or pooling methods, we make an attempt to address the limitation of LP. To this end, we present a novel method called high-order LP (HO-LP) to leverage the information higher than the zero-order one. Our idea is intuitively simple: we compute the first- and second-order statistics per configuration bin and model them as a Gaussian. Accordingly, we employ a collection of Gaussians as visual words to represent the universal probability distribution of features from all classes. Our problem is naturally formulated as encoding Gaussians over a dictionary of Gaussians as visual words. This problem, however, is challenging since the space of Gaussians is not a Euclidean space but forms a Riemannian manifold. We address this challenge by mapping Gaussians into the Euclidean space, which enables us to perform coding with common Euclidean operations rather than complex and often expensive Riemannian operations. Our HO-LP preserves the advantages of the original LP: pooling only similar features and using a small dictionary. Meanwhile, it achieves very promising performance on standard benchmarks, with either conventional, hand-engineered features or deep learning-based features.

  7. Area of isodensity contours in Gaussian and non-Gaussian fields

    International Nuclear Information System (INIS)

    Ryden, B.S.

    1988-01-01

    The area of isodensity contours in a smoothed density field can be measured by the contour-crossing statistic N1, the number of times per unit length that a line drawn through the density field pierces an isodensity contour. The contour-crossing statistic distinguishes between Gaussian and non-Gaussian fields and provides a measure of the effective slope of the power spectrum. The statistic is easy to apply and can be used on pencil beams and slices as well as on a three-dimensional field. 10 references
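
    The statistic is straightforward to evaluate on gridded data: smooth a random field, draw lines through it, and count level crossings per unit length. The sketch below does this along the rows of a toy 2D Gaussian field; the field size, smoothing scale and contour level are assumptions.

      # Sketch of the contour-crossing statistic N1 on a smoothed 2D Gaussian field.
      import numpy as np
      from scipy.ndimage import gaussian_filter

      rng = np.random.default_rng(2)
      field = gaussian_filter(rng.standard_normal((256, 256)), sigma=4)   # smoothed field

      def contour_crossings_per_length(field, level=0.0):
          above = field > level
          crossings = np.abs(np.diff(above.astype(int), axis=1)).sum()    # sign changes along rows
          total_length = field.shape[0] * (field.shape[1] - 1)            # in pixel units
          return crossings / total_length

      print(contour_crossings_per_length(field, level=0.0))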

  8. Stochastic differential calculus for Gaussian and non-Gaussian noises: A critical review

    Science.gov (United States)

    Falsone, G.

    2018-03-01

    In this paper a review is given of the literature devoted to the study of stochastic differential equations (SDEs) subjected to Gaussian and non-Gaussian white noises and to fractional Brownian noises. In these cases, particular attention must be paid in treating the SDEs because the classical rules of differential calculus, such as the Newton-Leibniz rule, cannot be applied or are applicable only with many difficulties. Here all the principal approaches for solving the SDEs are reported for each kind of noise, highlighting the negative and positive properties of each one and making comparisons where possible.

  9. A comparison on the propagation characteristics of focused Gaussian beam and fundamental Gaussian beam in vacuum

    International Nuclear Information System (INIS)

    Liu Shixiong; Guo Hong; Liu Mingwei; Wu Guohua

    2004-01-01

    Propagation characteristics of focused Gaussian beam (FoGB) and fundamental Gaussian beam (FuGB) propagating in vacuum are investigated. Based on the Fourier transform and the angular spectral analysis, the transverse component and the second-order approximate longitudinal component of the electric field are obtained in the paraxial approximation. The electric field components, the phase velocity and the group velocity of FoGB are compared with those of FuGB. The spot size of FoGB is also discussed

  10. Calculating emittance for Gaussian and Non-Gaussian distributions by the method of correlations for slits

    International Nuclear Information System (INIS)

    Tan, Cheng-Yang; Fermilab

    2006-01-01

    One common way for measuring the emittance of an electron beam is with the slits method. The usual approach for analyzing the data is to calculate an emittance that is a subset of the parent emittance. This paper shows an alternative way by using the method of correlations which ties the parameters derived from the beamlets to the actual parameters of the parent emittance. For parent distributions that are Gaussian, this method yields exact results. For non-Gaussian beam distributions, this method yields an effective emittance that can serve as a yardstick for emittance comparisons

  11. Performance Analysis of DPSK Signals with Selection Combining and Convolutional Coding in Fading Channel

    National Research Council Canada - National Science Library

    Ong, Choon

    1998-01-01

    The performance analysis of a differential phase shift keyed (DPSK) communications system, operating in a Rayleigh fading environment, employing convolutional coding and diversity processing is presented...

  12. Model selection for convolutive ICA with an application to spatiotemporal analysis of EEG

    DEFF Research Database (Denmark)

    Dyrholm, Mads; Makeig, S.; Hansen, Lars Kai

    2007-01-01

    We present a new algorithm for maximum likelihood convolutive independent component analysis (ICA) in which components are unmixed using stable autoregressive filters determined implicitly by estimating a convolutive model of the mixing process. By introducing a convolutive mixing model for the components, we show how the order of the filters in the model can be correctly detected using Bayesian model selection. We demonstrate a framework for deconvolving a subspace of independent components in electroencephalography (EEG). Initial results suggest that in some cases, convolutive mixing may...

  13. Non-Gaussianity from Broken Symmetries

    CERN Document Server

    Kolb, Edward W; Vallinotto, A; Kolb, Edward W.; Riotto, Antonio; Vallinotto, Alberto

    2006-01-01

    Recently we studied inflation models in which the inflaton potential is characterized by an underlying approximate global symmetry. In the first work we pointed out that in such a model curvature perturbations are generated after the end of the slow-roll phase of inflation. In this work we develop further the observational implications of the model and compute the degree of non-Gaussianity predicted in the scenario. We find that the corresponding nonlinearity parameter, $f_{NL}$, can be as large as 10^2.

  14. First Passage Time Intervals of Gaussian Processes

    Science.gov (United States)

    Perez, Hector; Kawabata, Tsutomu; Mimaki, Tadashi

    1987-08-01

    The first passage time problem of a stationary Gaussian process is theoretically and experimentally studied. Renewal functions are derived for a time-dependent boundary and numerically calculated for a Gaussian process having a seventh-order Butterworth spectrum. The results show a multipeak property not only for the constant boundary but also for a linearly increasing boundary. The first passage time distribution densities were experimentally determined for a constant boundary. The renewal functions were shown to be a fairly good approximation to the distribution density over a limited range.

  15. CMB constraints on running non-Gaussianity

    OpenAIRE

    Oppizzi, Filippo; Liguori, Michele; Renzi, Alessandro; Arroja, Frederico; Bartolo, Nicola

    2017-01-01

    We develop a complete set of tools for CMB forecasting, simulation and estimation of primordial running bispectra, arising from a variety of curvaton and single-field (DBI) models of inflation. We validate our pipeline using mock CMB running non-Gaussianity realizations and test it on real data by obtaining experimental constraints on the $f_{\rm NL}$ running spectral index, $n_{\rm NG}$, using WMAP 9-year data. Our final bounds (68% C.L.) read $-0.3 < n_{\rm NG}$...

  16. Turbo Equalization Using Partial Gaussian Approximation

    DEFF Research Database (Denmark)

    Zhang, Chuanzong; Wang, Zhongyong; Manchón, Carles Navarro

    2016-01-01

    This letter deals with turbo equalization for coded data transmission over intersymbol interference (ISI) channels. We propose a message-passing algorithm that uses the expectation propagation rule to convert messages passed from the demodulator and decoder to the equalizer and computes messages returned by the equalizer by using a partial Gaussian approximation (PGA). We exploit the specific structure of the ISI channel model to compute the latter messages from the beliefs obtained using a Kalman smoother/equalizer. Doing so leads to a significant complexity reduction compared to the initial PGA.

  17. Optical trapping with Super-Gaussian beams

    CSIR Research Space (South Africa)

    Mc

    2013-04-01

    Optical trapping with Super-Gaussian beams. Melanie McLaren, Thulile Khanyile, Patience Mthunzi and Andrew Forbes, National Laser Centre, Council for Scientific and Industrial Research. Optics in the Life Sciences Congress Technical Digest, paper JT2A.34, © 2013 The Optical Society (OSA).

  18. Bregman Cost for Non-Gaussian Noise

    DEFF Research Database (Denmark)

    Burger, Martin; Dong, Yiqiu; Sciacchitano, Federica

    From a theoretical point of view it has been argued that the MAP estimate is only in an asymptotic sense a Bayes estimator for the uniform cost function, while the CM estimate is a Bayes estimator for the mean squared cost function. Recently, it has been proven that the MAP estimate is a proper Bayes estimator for the Bregman cost if the image is corrupted by Gaussian noise. In this work we extend this result to other noise models with log-concave likelihood density, by introducing two related Bregman cost functions for which the CM and the MAP estimates are proper Bayes estimators. Moreover, we also...

  19. Bioprinting of 3D Convoluted Renal Proximal Tubules on Perfusable Chips

    Science.gov (United States)

    Homan, Kimberly A.; Kolesky, David B.; Skylar-Scott, Mark A.; Herrmann, Jessica; Obuobi, Humphrey; Moisan, Annie; Lewis, Jennifer A.

    2016-10-01

    Three-dimensional models of kidney tissue that recapitulate human responses are needed for drug screening, disease modeling, and, ultimately, kidney organ engineering. Here, we report a bioprinting method for creating 3D human renal proximal tubules in vitro that are fully embedded within an extracellular matrix and housed in perfusable tissue chips, allowing them to be maintained for greater than two months. Their convoluted tubular architecture is circumscribed by proximal tubule epithelial cells and actively perfused through the open lumen. These engineered 3D proximal tubules on chip exhibit significantly enhanced epithelial morphology and functional properties relative to the same cells grown on 2D controls with or without perfusion. Upon introducing the nephrotoxin, Cyclosporine A, the epithelial barrier is disrupted in a dose-dependent manner. Our bioprinting method provides a new route for programmably fabricating advanced human kidney tissue models on demand.

  20. Out-of-equilibrium dynamics in a Gaussian trap model

    International Nuclear Information System (INIS)

    Diezemann, Gregor

    2007-01-01

    The violations of the fluctuation-dissipation theorem are analysed for a trap model with a Gaussian density of states. In this model, the system reaches thermal equilibrium at long times after a quench to any finite temperature, and therefore all ageing effects are of a transient nature. For not too long times after the quench it is found that the so-called fluctuation-dissipation ratio tends to a non-trivial limit, thus indicating the possibility of defining a timescale-dependent effective temperature. However, different definitions of the effective temperature yield distinct results. In particular, plots of the integrated response versus the correlation function strongly depend on the way they are constructed. Also, the definition of effective temperatures in the frequency domain is not unique for the model considered. This may have implications for the interpretation of results from computer simulations and experimental determinations of effective temperatures.

  1. Reducing weight precision of convolutional neural networks towards large-scale on-chip image recognition

    Science.gov (United States)

    Ji, Zhengping; Ovsiannikov, Ilia; Wang, Yibing; Shi, Lilong; Zhang, Qiang

    2015-05-01

    In this paper, we develop a server-client quantization scheme to reduce the bit resolution of a deep learning architecture, i.e., Convolutional Neural Networks, for image recognition tasks. Low bit resolution is an important factor in bringing deep learning neural networks into hardware implementation, as it directly determines cost and power consumption. We aim to reduce the bit resolution of the network without sacrificing its performance. To this end, we design a new quantization algorithm called supervised iterative quantization to reduce the bit resolution of learned network weights. In the training stage, the supervised iterative quantization is conducted via two steps on the server: applying k-means-based adaptive quantization to the learned network weights and retraining the network based on the quantized weights. These two steps are alternated until the convergence criterion is met. In the testing stage, the network configuration and low-bit weights are loaded onto the client hardware device to recognize incoming input in real time, where optimized but expensive quantization becomes infeasible. Considering this, we adopt a uniform quantization for the inputs and internal network responses (called feature maps) to maintain low on-chip expenses. The Convolutional Neural Network with reduced weight and input/response precision is demonstrated in recognizing two types of images: hand-written digit images and real-life images of office scenarios. Both results show that the new network is able to achieve the performance of the neural network with full bit resolution, even though the bit resolutions of both the weights and the inputs are significantly reduced, e.g., from 64 bits to 4-5 bits.
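
    The server-side quantization step can be sketched with k-means on the weights of a single layer: each weight is replaced by its cluster centroid, so only a few bits per weight plus a small codebook need to be stored. The alternating retraining step is omitted, and the layer size and bit width below are assumptions.

      # Sketch of k-means based adaptive quantization of one weight matrix.
      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(3)
      weights = rng.standard_normal((128, 64))            # stand-in for one learned weight matrix

      def kmeans_quantize(w, n_bits=4):
          k = 2 ** n_bits
          km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(w.reshape(-1, 1))
          codebook = km.cluster_centers_.ravel()           # k representative values
          indices = km.labels_.reshape(w.shape)            # n_bits per weight
          return codebook[indices], codebook, indices

      w_q, codebook, idx = kmeans_quantize(weights, n_bits=4)
      print(np.abs(weights - w_q).mean(), codebook.size)   # quantization error, 16 levels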

  2. Stochastic dynamic analysis of marine risers considering Gaussian system uncertainties

    Science.gov (United States)

    Ni, Pinghe; Li, Jun; Hao, Hong; Xia, Yong

    2018-03-01

    This paper performs the stochastic dynamic response analysis of marine risers with material uncertainties, i.e. in the mass density and elastic modulus, by using the Stochastic Finite Element Method (SFEM) and a model reduction technique. These uncertainties are assumed to have Gaussian distributions. The random mass density and elastic modulus are represented by using the Karhunen-Loève (KL) expansion. The Polynomial Chaos (PC) expansion is adopted to represent the vibration response because the covariance of the output is unknown. Model reduction based on the Iterated Improved Reduced System (IIRS) technique is applied to eliminate the PC coefficients of the slave degrees of freedom and thus reduce the dimension of the stochastic system. Monte Carlo Simulation (MCS) is conducted to obtain the reference response statistics. Two numerical examples are studied in this paper. The response statistics from the proposed approach are compared with those from MCS. It is noted that the computational time is significantly reduced while the accuracy is maintained. The results demonstrate the efficiency of the proposed approach for the stochastic dynamic response analysis of marine risers.
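
    A minimal sketch of the Karhunen-Loève representation of a Gaussian material property, here a spatially varying elastic modulus along a one-dimensional riser coordinate; the exponential covariance kernel, correlation length, coefficient of variation, and truncation order are illustrative assumptions, and the Polynomial Chaos propagation and IIRS reduction of the paper are not reproduced.

```python
# Sketch: discrete Karhunen-Loève expansion of a 1D Gaussian random field.
import numpy as np

def kl_expansion(x, mean, std, corr_len, n_terms, n_samples, seed=0):
    # Exponential covariance kernel evaluated on the grid (simple discrete KL).
    C = std**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
    eigval, eigvec = np.linalg.eigh(C)
    idx = np.argsort(eigval)[::-1][:n_terms]          # keep the largest modes
    lam, phi = eigval[idx], eigvec[:, idx]
    xi = np.random.default_rng(seed).standard_normal((n_samples, n_terms))
    # Each realization: mean + sum_k sqrt(lambda_k) * xi_k * phi_k(x)
    return mean + xi @ (np.sqrt(lam)[:, None] * phi.T)

if __name__ == "__main__":
    x = np.linspace(0.0, 100.0, 200)                  # riser length coordinate (m)
    E = kl_expansion(x, mean=210e9, std=0.1 * 210e9, corr_len=20.0,
                     n_terms=10, n_samples=1000)
    print(E.shape, float(E.mean()), float(E.std()))
```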

  3. Gaussian-log-Gaussian wavelet trees, frequentist and Bayesian inference, and statistical signal processing applications

    DEFF Research Database (Denmark)

    Møller, Jesper; Jacobsen, Robert Dahl

    We introduce a promising alternative to the usual hidden Markov tree model for Gaussian wavelet coefficients, where their variances are specified by the hidden states and take values in a finite set. In our new model, the hidden states have a similar dependence structure but they are jointly Gaus...

  4. Frequentist and Bayesian inference for Gaussian-log-Gaussian wavelet trees and statistical signal processing applications

    DEFF Research Database (Denmark)

    Jacobsen, Christian Robert Dahl; Møller, Jesper

    2017-01-01

    We introduce new estimation methods for a subclass of the Gaussian scale mixture models for wavelet trees by Wainwright, Simoncelli and Willsky that rely on modern results for composite likelihoods and approximate Bayesian inference. Our methodology is illustrated for denoising and edge detection...

  5. Approximation problems with the divergence criterion for Gaussian variables and Gaussian processes

    NARCIS (Netherlands)

    A.A. Stoorvogel; J.H. van Schuppen (Jan)

    1996-01-01

    System identification for stationary Gaussian processes includes an approximation problem. Currently the subspace algorithm for this problem enjoys much attention. This algorithm is based on a transformation of a finite time series to canonical variable form followed by a truncation.

  6. Comparison of Gaussian and non-Gaussian Atmospheric Profile Retrievals from Satellite Microwave Data

    Science.gov (United States)

    Kliewer, A.; Forsythe, J. M.; Fletcher, S. J.; Jones, A. S.

    2017-12-01

    The Cooperative Institute for Research in the Atmosphere at Colorado State University has recently developed two different versions of a mixed-distribution (lognormal combined with a Gaussian) microwave temperature and mixing ratio retrieval system, in addition to the original Gaussian-based approach. These retrieval systems are based upon 1DVAR theory but have been adapted to use different descriptive statistics of the lognormal distribution to minimize the background errors. The input radiance data are from the AMSU-A and MHS instruments on the NOAA series of spacecraft. To help illustrate how the three retrievals are affected by the change in distribution, we are in the process of creating a new website to show the output from the different retrievals. Here we present initial results from different dynamical situations to show how the tool could be used by forecasters as well as by educators. However, because the new retrieved values come from a non-Gaussian-based 1DVAR, they display non-Gaussian behaviors that must pass a quality control measure consistent with this distribution; these new measures are presented here along with initial results for checking the retrievals.

  7. Finding strong lenses in CFHTLS using convolutional neural networks

    Science.gov (United States)

    Jacobs, C.; Glazebrook, K.; Collett, T.; More, A.; McCarthy, C.

    2017-10-01

    We train and apply convolutional neural networks, a machine learning technique developed to learn from and classify image data, to Canada-France-Hawaii Telescope Legacy Survey (CFHTLS) imaging for the identification of potential strong lensing systems. An ensemble of four convolutional neural networks was trained on images of simulated galaxy-galaxy lenses. The training sets consisted of a total of 62 406 simulated lenses and 64 673 non-lens negative examples generated with two different methodologies. An ensemble of trained networks was applied to all of the 171 deg2 of the CFHTLS wide field image data, identifying 18 861 candidates including 63 known and 139 other potential lens candidates. A second search of 1.4 million early-type galaxies selected from the survey catalogue as potential deflectors, identified 2465 candidates including 117 previously known lens candidates, 29 confirmed lenses/high-quality lens candidates, 266 novel probable or potential lenses and 2097 candidates we classify as false positives. For the catalogue-based search we estimate a completeness of 21-28 per cent with respect to detectable lenses and a purity of 15 per cent, with a false-positive rate of 1 in 671 images tested. We predict a human astronomer reviewing candidates produced by the system would identify 20 probable lenses and 100 possible lenses per hour in a sample selected by the robot. Convolutional neural networks are therefore a promising tool for use in the search for lenses in current and forthcoming surveys such as the Dark Energy Survey and the Large Synoptic Survey Telescope.

  8. Functional Dual Adaptive Control with Recursive Gaussian Process Model

    International Nuclear Information System (INIS)

    Prüher, Jakub; Král, Ladislav

    2015-01-01

    The paper deals with the dual adaptive control problem, where the functional uncertainties in the system description are modelled by a non-parametric Gaussian process regression model. Current approaches to adaptive control based on Gaussian process models are severely limited in their practical applicability, because the model is re-adjusted using all the currently available data, which keeps growing with every time step. We propose the use of a recursive Gaussian process regression algorithm for a significant reduction in computational requirements, thus bringing Gaussian process-based adaptive controllers closer to practical applicability. In this work, we design a bi-criterial dual controller based on a recursive Gaussian process model for discrete-time stochastic dynamic systems given in an affine-in-control form. Using Monte Carlo simulations, we show that the proposed controller achieves comparable performance with the full Gaussian process-based controller in terms of control quality while keeping the computational demands bounded. (paper)
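
    For context on the model class being made recursive above, a minimal sketch of standard Gaussian process regression with a squared-exponential kernel; the kernel hyperparameters, noise level, and test function are illustrative assumptions, and the recursive update and dual-control design of the paper are not reproduced.

```python
# Sketch: plain (non-recursive) Gaussian process regression posterior.
import numpy as np

def sqexp(a, b, length=0.5, signal=1.0):
    d = a[:, None] - b[None, :]
    return signal**2 * np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=0.05):
    K = sqexp(x_train, x_train) + noise**2 * np.eye(len(x_train))
    Ks = sqexp(x_train, x_test)
    Kss = sqexp(x_test, x_test)
    L = np.linalg.cholesky(K)                         # stable solve via Cholesky
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    cov = Kss - v.T @ v
    return mean, cov

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.uniform(-3, 3, 25)
    y = np.sin(x) + 0.05 * rng.standard_normal(25)
    xt = np.linspace(-3, 3, 100)
    mu, cov = gp_posterior(x, y, xt)
    print("posterior mean range:", float(mu.min()), float(mu.max()))
```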

  9. A Gaussian Approximation Potential for Silicon

    Science.gov (United States)

    Bernstein, Noam; Bartók, Albert; Kermode, James; Csányi, Gábor

    We present an interatomic potential for silicon using the Gaussian Approximation Potential (GAP) approach, which uses the Gaussian process regression method to approximate the reference potential energy surface as a sum of atomic energies. Each atomic energy is approximated as a function of the local environment around the atom, which is described with the smooth overlap of atomic environments (SOAP) descriptor. The potential is fit to a database of energies, forces, and stresses calculated using density functional theory (DFT) on a wide range of configurations from zero and finite temperature simulations. These include crystalline phases, liquid, amorphous, and low coordination structures, and diamond-structure point defects, dislocations, surfaces, and cracks. We compare the results of the potential to DFT calculations, as well as to previously published models including Stillinger-Weber, Tersoff, modified embedded atom method (MEAM), and ReaxFF. We show that it is very accurate as compared to the DFT reference results for a wide range of properties, including low energy bulk phases, liquid structure, as well as point, line, and plane defects in the diamond structure.

  10. Statistics of peaks of Gaussian random fields

    International Nuclear Information System (INIS)

    Bardeen, J.M.; Bond, J.R.; Kaiser, N.; Szalay, A.S.; Stanford Univ., CA; California Univ., Berkeley; Cambridge Univ., England; Fermi National Accelerator Lab., Batavia, IL)

    1986-01-01

    A set of new mathematical results on the theory of Gaussian random fields is presented, and the application of such calculations in cosmology to treat questions of structure formation from small-amplitude initial density fluctuations is addressed. The point process equation is discussed, giving the general formula for the average number density of peaks. The problem of the proper conditional probability constraints appropriate to maxima are examined using a one-dimensional illustration. The average density of maxima of a general three-dimensional Gaussian field is calculated as a function of heights of the maxima, and the average density of upcrossing points on density contour surfaces is computed. The number density of peaks subject to the constraint that the large-scale density field be fixed is determined and used to discuss the segregation of high peaks from the underlying mass distribution. The machinery to calculate n-point peak-peak correlation functions is determined, as are the shapes of the profiles about maxima. 67 references
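
    A minimal numerical counterpart to the analytic peak statistics discussed above: generate a smoothed two-dimensional Gaussian random field and count local maxima above several height thresholds; the grid size, smoothing scale, and 3x3 peak criterion are illustrative assumptions.

```python
# Sketch: counting peaks of a smoothed 2D Gaussian random field above thresholds.
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

rng = np.random.default_rng(0)
field = gaussian_filter(rng.standard_normal((512, 512)), sigma=4.0)
field /= field.std()                      # normalize to unit variance

# A pixel counts as a peak if it equals the maximum over its 3x3 neighbourhood.
peaks = (field == maximum_filter(field, size=3))

for nu in (0.0, 1.0, 2.0, 3.0):
    n = int(np.count_nonzero(peaks & (field > nu)))
    print(f"peaks above {nu:.0f} sigma: {n}")
```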

  11. Overlay Spectrum Sharing using Improper Gaussian Signaling

    KAUST Repository

    Amin, Osama

    2016-11-30

    Improper Gaussian signaling (IGS) has recently been shown to provide performance improvements in interference-limited networks as opposed to the conventional proper Gaussian signaling (PGS) scheme. In this paper, we implement the IGS scheme in an overlay cognitive radio system, where the secondary transmitter broadcasts a mixture of two different signals. The first signal is selected from the PGS scheme to match the primary message transmission. On the other hand, the second signal is chosen from the IGS scheme in order to reduce the interference effect on the primary receiver. We then optimally design the overlay cognitive radio to maximize the secondary link achievable rate while satisfying the primary network quality of service requirements. In particular, we consider full and partial channel knowledge scenarios and derive the feasibility conditions for operating the overlay cognitive radio system. Moreover, we derive the superiority conditions of the IGS schemes over the PGS schemes, supported with closed-form expressions for the corresponding power distribution and circularity coefficient parameters. Simulation results are provided to support our theoretical derivations.
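
    A minimal sketch of what an improper Gaussian signal with a prescribed circularity coefficient looks like numerically, the design parameter tuned above; the sample size and target coefficient are illustrative assumptions, and the cognitive-radio rate optimization itself is not reproduced.

```python
# Sketch: zero-mean, unit-power complex Gaussian symbols with a chosen
# circularity coefficient (E[x^2] = 0 recovers the proper/PGS case).
import numpy as np

def improper_gaussian(n, circularity, seed=0):
    rng = np.random.default_rng(seed)
    # Independent real/imaginary parts with unequal variances make E[x^2] != 0.
    var_r = (1.0 + circularity) / 2.0
    var_i = (1.0 - circularity) / 2.0
    return (np.sqrt(var_r) * rng.standard_normal(n)
            + 1j * np.sqrt(var_i) * rng.standard_normal(n))

x = improper_gaussian(100_000, circularity=0.6)
print("power          E[|x|^2] ~", float(np.mean(np.abs(x) ** 2)))
print("pseudo-variance E[x^2]  ~", complex(np.mean(x ** 2)))
```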

  12. Versatile Gaussian probes for squeezing estimation

    Science.gov (United States)

    Rigovacca, Luca; Farace, Alessandro; Souza, Leonardo A. M.; De Pasquale, Antonella; Giovannetti, Vittorio; Adesso, Gerardo

    2017-05-01

    We consider an instance of "black-box" quantum metrology in the Gaussian framework, where we aim to estimate the amount of squeezing applied on an input probe, without previous knowledge on the phase of the applied squeezing. By taking the quantum Fisher information (QFI) as the figure of merit, we evaluate its average and variance with respect to this phase in order to identify probe states that yield good precision for many different squeezing directions. We first consider the case of single-mode Gaussian probes with the same energy, and find that pure squeezed states maximize the average quantum Fisher information (AvQFI) at the cost of a performance that oscillates strongly as the squeezing direction is changed. Although the variance can be brought to zero by correlating the probing system with a reference mode, the maximum AvQFI cannot be increased in the same way. A different scenario opens if one takes into account the effects of photon losses: coherent states represent the optimal single-mode choice when losses exceed a certain threshold and, moreover, correlated probes can now yield larger AvQFI values than all single-mode states, on top of having zero variance.

  13. Finite Range Decomposition of Gaussian Processes

    CERN Document Server

    Brydges, C D; Mitter, P K

    2003-01-01

    Let $D$ be the finite difference Laplacian associated to the lattice $\mathbb{Z}^{d}$. For dimension $d\ge 3$, $a\ge 0$ and $L$ a sufficiently large positive dyadic integer, we prove that the integral kernel of the resolvent $G^{a}:=(a-D)^{-1}$ can be decomposed as an infinite sum of positive semi-definite functions $V_{n}$ of finite range, $V_{n}(x-y) = 0$ for $|x-y|\ge O(L)^{n}$. Equivalently, the Gaussian process on the lattice with covariance $G^{a}$ admits a decomposition into independent Gaussian processes with finite range covariances. For $a=0$, $V_{n}$ has a limiting scaling form $L^{-n(d-2)}\Gamma_{c,\ast}\bigl(\frac{x-y}{L^{n}}\bigr)$ as $n\rightarrow\infty$. As a corollary, such decompositions also exist for fractional powers $(-D)^{-\alpha/2}$, $0

  14. Fast Convolutional Sparse Coding in the Dual Domain

    KAUST Repository

    Affara, Lama Ahmed

    2017-09-27

    Convolutional sparse coding (CSC) is an important building block of many computer vision applications ranging from image and video compression to deep learning. We present two contributions to the state of the art in CSC. First, we significantly speed up the computation by proposing a new optimization framework that tackles the problem in the dual domain. Second, we extend the original formulation to higher dimensions in order to process a wider range of inputs, such as color inputs, or HOG features. Our results show a significant speedup compared to the current state of the art in CSC.

  15. Phase transitions in glassy systems via convolutional neural networks

    Science.gov (United States)

    Fang, Chao

    Machine learning is a powerful approach commonplace in industry to tackle large data sets. Most recently, it has found its way into condensed matter physics, allowing for the first time the study of, e.g., topological phase transitions and strongly-correlated electron systems. The study of spin glasses is plagued by finite-size effects due to the long thermalization times needed. Here we use convolutional neural networks in an attempt to detect a phase transition in three-dimensional Ising spin glasses. Our results are compared to traditional approaches.

  16. Visualizing Vector Fields Using Line Integral Convolution and Dye Advection

    Science.gov (United States)

    Shen, Han-Wei; Johnson, Christopher R.; Ma, Kwan-Liu

    1996-01-01

    We present local and global techniques to visualize three-dimensional vector field data. Using the Line Integral Convolution (LIC) method to image the global vector field, our new algorithm allows the user to introduce colored 'dye' into the vector field to highlight local flow features. A fast algorithm is proposed that quickly recomputes the dyed LIC images. In addition, we introduce volume rendering methods that can map the LIC texture on any contour surface and/or translucent region defined by additional scalar quantities, and can follow the advection of colored dye throughout the volume.
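
    A minimal two-dimensional line integral convolution sketch: a white-noise texture is averaged along streamlines of the vector field with a box kernel; the circular test field, step size, and kernel length are illustrative assumptions, and the dye advection and volume rendering of the paper are not reproduced.

```python
# Sketch: basic 2D line integral convolution (LIC) with a box kernel.
import numpy as np

def lic(vx, vy, noise, kernel_len=20, step=0.5):
    h, w = noise.shape
    out = np.zeros_like(noise)
    mag = np.hypot(vx, vy) + 1e-12
    ux, uy = vx / mag, vy / mag                       # unit flow directions
    for i in range(h):
        for j in range(w):
            acc, cnt = noise[i, j], 1
            for sign in (+1.0, -1.0):                 # trace both directions
                x, y = float(j), float(i)
                for _ in range(kernel_len):
                    ii, jj = int(round(y)), int(round(x))
                    if not (0 <= ii < h and 0 <= jj < w):
                        break
                    x += sign * step * ux[ii, jj]     # advance along the field
                    y += sign * step * uy[ii, jj]
                    ii, jj = int(round(y)), int(round(x))
                    if not (0 <= ii < h and 0 <= jj < w):
                        break
                    acc += noise[ii, jj]              # accumulate texture samples
                    cnt += 1
            out[i, j] = acc / cnt                     # box-filter average
    return out

if __name__ == "__main__":
    n = 128
    yy, xx = np.mgrid[0:n, 0:n].astype(float)
    vx, vy = -(yy - n / 2), (xx - n / 2)              # circular test flow
    img = lic(vx, vy, np.random.default_rng(0).random((n, n)))
    print(img.shape, float(img.min()), float(img.max()))
```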

  17. Fast Convolutional Sparse Coding in the Dual Domain

    KAUST Repository

    Affara, Lama Ahmed; Ghanem, Bernard; Wonka, Peter

    2017-01-01

    Convolutional sparse coding (CSC) is an important building block of many computer vision applications ranging from image and video compression to deep learning. We present two contributions to the state of the art in CSC. First, we significantly speed up the computation by proposing a new optimization framework that tackles the problem in the dual domain. Second, we extend the original formulation to higher dimensions in order to process a wider range of inputs, such as color inputs, or HOG features. Our results show a significant speedup compared to the current state of the art in CSC.

  18. Salient regions detection using convolutional neural networks and color volume

    Science.gov (United States)

    Liu, Guang-Hai; Hou, Yingkun

    2018-03-01

    Convolutional neural networks are an important technique in machine learning, pattern recognition and image processing. In order to reduce the computational burden and extend the classical LeNet-5 model to the field of saliency detection, we propose a simple and novel computing model based on the LeNet-5 network. In the proposed model, hue, saturation and intensity are utilized to extract depth cues, and then we integrate the depth cues and color volume into saliency detection following the basic structure of the feature integration theory. Experimental results show that the proposed computing model outperforms some existing state-of-the-art methods on the MSRA1000 and ECSSD datasets.

  19. Traffic sign classification with dataset augmentation and convolutional neural network

    Science.gov (United States)

    Tang, Qing; Kurnianggoro, Laksono; Jo, Kang-Hyun

    2018-04-01

    This paper presents a method for traffic sign classification using a convolutional neural network (CNN). In this method, we first transfer a color image into grayscale, and then normalize it to the range (-1, 1) as the preprocessing step. To increase the robustness of the classification model, we apply a dataset augmentation algorithm and create new images to train the model. To avoid overfitting, we utilize a dropout module before the last fully connected layer. To assess the performance of the proposed method, the German traffic sign recognition benchmark (GTSRB) dataset is utilized. Experimental results show that the method is effective in classifying traffic signs.
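
    A minimal sketch of the preprocessing and augmentation stages described above: grayscale conversion, scaling to the range (-1, 1), and rotation-based dataset augmentation; the rotation angles, image size, and luminance weights are illustrative assumptions, and the CNN with the dropout module is not reproduced.

```python
# Sketch: grayscale conversion, (-1, 1) normalization, and rotation augmentation.
import numpy as np
from scipy.ndimage import rotate

def preprocess(rgb):
    """RGB uint8 image (H, W, 3) -> grayscale float image scaled to (-1, 1)."""
    gray = rgb.astype(np.float32) @ np.array([0.299, 0.587, 0.114], np.float32)
    return gray / 127.5 - 1.0

def augment(gray, angles=(-10, -5, 5, 10)):
    """Create additional training images by small rotations."""
    return [rotate(gray, a, reshape=False, mode="nearest") for a in angles]

if __name__ == "__main__":
    img = np.random.default_rng(0).integers(0, 256, (32, 32, 3), dtype=np.uint8)
    g = preprocess(img)
    batch = [g] + augment(g)
    print(len(batch), float(g.min()), float(g.max()))
```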

  20. Tandem mass spectrometry data quality assessment by self-convolution

    Directory of Open Access Journals (Sweden)

    Tham Wai

    2007-09-01

    Full Text Available Background: Many algorithms have been developed for deciphering tandem mass spectrometry (MS) data sets. They can be essentially clustered into two classes. The first performs searches on a theoretical mass spectrum database, while the second bases itself on de novo sequencing from raw mass spectrometry data. It was noted that the quality of mass spectra affects significantly the protein identification processes in both instances. This prompted the authors to explore ways to measure the quality of MS data sets before subjecting them to the protein identification algorithms, thus allowing for more meaningful searches and increased confidence level of proteins identified. Results: The proposed method measures the quality of MS data sets based on the symmetric property of b- and y-ion peaks present in an MS spectrum. Self-convolution on MS data and its time-reversal copy was employed. Due to the symmetric nature of b-ion and y-ion peaks, the self-convolution result of a good spectrum would produce a highest mid-point intensity peak. To reduce processing time, self-convolution was achieved using the Fast Fourier Transform and its inverse transform, followed by the removal of the "DC" (Direct Current) component and the normalisation of the data set. The quality score was defined as the ratio of the intensity at the mid point to the remaining peaks of the convolution result. The method was validated using both theoretical mass spectra, with various permutations, and several real MS data sets. The results were encouraging, revealing a high percentage of positive prediction rates for spectra with good quality scores. Conclusion: We have demonstrated in this work a method for determining the quality of tandem MS data sets. By pre-determining the quality of tandem MS data before subjecting them to protein identification algorithms, spurious protein predictions due to poor tandem MS data are avoided, giving scientists greater confidence in the

  1. The Use of Finite Fields and Rings to Compute Convolutions

    Science.gov (United States)

    1975-06-06

    ... showed in Ref. 1 that the convolution of two finite sequences of integers $(a_k)$ and $(b_k)$ for $k = 1, 2, \ldots, d$ can be obtained as the inverse transform of ... Since the $\tau$'s are all distinct, $T^{-1}$ exists and (7) can be solved as $a = T^{-1}A$, the inverse transform. ... the inverse transform $c_n = d^{-1}\sum_{k=0}^{d-1} C_k\,\alpha^{-nk}$. If an $\alpha$ can be found so that multiplications by powers of $\alpha$ are simple in hardware, the
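
    A minimal sketch of the underlying idea, computing a cyclic convolution of integer sequences with a number-theoretic transform over a prime field; the modulus p = 257 and the length-8 root of unity are illustrative choices, not values taken from the report.

```python
# Sketch: cyclic convolution via a number-theoretic transform over GF(p).
def ntt(seq, alpha, p):
    d = len(seq)
    return [sum(seq[n] * pow(alpha, n * k, p) for n in range(d)) % p
            for k in range(d)]

def cyclic_convolution(a, b, alpha, p):
    d = len(a)
    A, B = ntt(a, alpha, p), ntt(b, alpha, p)
    C = [(x * y) % p for x, y in zip(A, B)]
    inv_alpha, inv_d = pow(alpha, -1, p), pow(d, -1, p)
    # Inverse transform: c_n = d^{-1} * sum_k C_k * alpha^{-nk} (mod p)
    return [(inv_d * sum(C[k] * pow(inv_alpha, n * k, p) for k in range(d))) % p
            for n in range(d)]

if __name__ == "__main__":
    p = 257                                      # prime modulus (illustrative)
    alpha = pow(3, 256 // 8, p)                  # element of order d = 8 mod 257
    a = [1, 2, 3, 4, 0, 0, 0, 0]
    b = [5, 6, 7, 0, 0, 0, 0, 0]
    print(cyclic_convolution(a, b, alpha, p))    # matches direct convolution mod p
```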

  2. Tandem mass spectrometry data quality assessment by self-convolution.

    Science.gov (United States)

    Choo, Keng Wah; Tham, Wai Mun

    2007-09-20

    Many algorithms have been developed for deciphering tandem mass spectrometry (MS) data sets. They can be essentially clustered into two classes. The first performs searches on a theoretical mass spectrum database, while the second bases itself on de novo sequencing from raw mass spectrometry data. It was noted that the quality of mass spectra affects significantly the protein identification processes in both instances. This prompted the authors to explore ways to measure the quality of MS data sets before subjecting them to the protein identification algorithms, thus allowing for more meaningful searches and increased confidence level of proteins identified. The proposed method measures the quality of MS data sets based on the symmetric property of b- and y-ion peaks present in an MS spectrum. Self-convolution on MS data and its time-reversal copy was employed. Due to the symmetric nature of b-ion and y-ion peaks, the self-convolution result of a good spectrum would produce a highest mid-point intensity peak. To reduce processing time, self-convolution was achieved using the Fast Fourier Transform and its inverse transform, followed by the removal of the "DC" (Direct Current) component and the normalisation of the data set. The quality score was defined as the ratio of the intensity at the mid point to the remaining peaks of the convolution result. The method was validated using both theoretical mass spectra, with various permutations, and several real MS data sets. The results were encouraging, revealing a high percentage of positive prediction rates for spectra with good quality scores. We have demonstrated in this work a method for determining the quality of tandem MS data sets. By pre-determining the quality of tandem MS data before subjecting them to protein identification algorithms, spurious protein predictions due to poor tandem MS data are avoided, giving scientists greater confidence in the predicted results. We conclude that the algorithm performs well
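
    A minimal sketch of the quality-score idea: bin a spectrum, remove the DC component, normalize, self-convolve via the FFT, and score the mid-point of the convolution against the remaining peaks; the synthetic peak list, binning, and the exact handling of the time-reversed copy are illustrative assumptions rather than the authors' implementation.

```python
# Sketch: FFT-based self-convolution quality score for a binned MS spectrum.
import numpy as np

def quality_score(intensity):
    x = intensity.astype(float)
    x = x - x.mean()                              # remove the "DC" component
    x = x / (np.linalg.norm(x) + 1e-12)           # normalise the data set
    n = len(x)
    # Linear self-convolution via zero-padded FFT; symmetric b-/y-ion pairs
    # all contribute to the mid point of the result.
    conv = np.fft.irfft(np.fft.rfft(x, 2 * n) ** 2, 2 * n)[: 2 * n - 1]
    mid = n - 1
    rest = np.delete(conv, mid)
    return conv[mid] / (np.abs(rest).max() + 1e-12)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    spectrum = rng.random(1024) * 0.05            # noise floor
    for m in (100, 250, 400):                     # symmetric b-/y-like peak pairs
        spectrum[m] += 1.0
        spectrum[1023 - m] += 1.0
    print("symmetric spectrum score:", round(quality_score(spectrum), 3))
    print("random spectrum score:  ", round(quality_score(rng.random(1024)), 3))
```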

  3. Classifying medical relations in clinical text via convolutional neural networks.

    Science.gov (United States)

    He, Bin; Guan, Yi; Dai, Rui

    2018-05-16

    Deep learning research on relation classification has achieved solid performance in the general domain. This study proposes a convolutional neural network (CNN) architecture with a multi-pooling operation for medical relation classification on clinical records and explores a loss function with a category-level constraint matrix. Experiments using the 2010 i2b2/VA relation corpus demonstrate that these models, which do not depend on any external features, outperform previous single-model methods, and that our best model is competitive with the existing ensemble-based method. Copyright © 2018. Published by Elsevier B.V.

  4. Weed Growth Stage Estimator Using Deep Convolutional Neural Networks

    DEFF Research Database (Denmark)

    Teimouri, Nima; Dyrmann, Mads; Nielsen, Per Rydahl

    2018-01-01

    This study outlines a new method of automatically estimating weed species and growth stages (from cotyledon until eight leaves are visible) of in situ images covering 18 weed species or families. Images of weeds growing within a variety of crops were gathered across variable environmental conditi...... in estimating the number of leaves and 96% accuracy when accepting a deviation of two leaves. These results show that this new method of using deep convolutional neural networks has a relatively high ability to estimate early growth stages across a wide variety of weed species....

  5. Diffusion weighted imaging in patients with rectal cancer: Comparison between Gaussian and non-Gaussian models.

    Directory of Open Access Journals (Sweden)

    Georgios C Manikis

    Full Text Available The purpose of this study was to compare the performance of four diffusion models, including mono- and bi-exponential Gaussian and non-Gaussian models, in diffusion weighted imaging of rectal cancer. Nineteen patients with rectal adenocarcinoma underwent MRI examination of the rectum before chemoradiation therapy, including a 7 b-value diffusion sequence (0, 25, 50, 100, 500, 1000 and 2000 s/mm2) at a 1.5T scanner. Four different diffusion models, including mono- and bi-exponential Gaussian (MG and BG) and non-Gaussian (MNG and BNG), were applied on whole tumor volumes of interest. Two different statistical criteria were recruited to assess their fitting performance, including the adjusted R2 and Root Mean Square Error (RMSE). To decide which model better characterizes rectal cancer, model selection relied on the Akaike Information Criterion (AIC) and F-ratio. All candidate models achieved a good fitting performance, with the two most complex models, the BG and the BNG, exhibiting the best fitting performance. However, both criteria for model selection indicated that the MG model performed better than any other model. In particular, using AIC weights and the F-ratio, the pixel-based analysis demonstrated that tumor areas were better described by the simplest MG model in an average area of 53% and 33%, respectively. Non-Gaussian behavior was illustrated in an average area of 37% according to the F-ratio, and 7% using AIC weights. However, the distributions of the pixels best fitted by each of the four models suggest that MG failed to perform better than any other model in all patients and the overall tumor area. No single diffusion model evaluated herein could accurately describe rectal tumours. These findings can probably be explained on the basis of increased tumour heterogeneity, where areas with high vascularity could be fitted better with bi-exponential models, and areas with necrosis would mostly follow mono-exponential behavior.
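
    A minimal sketch of fitting the mono-exponential Gaussian model and a kurtosis-type non-Gaussian model to a multi-b-value signal and comparing them with AIC; the b-values follow the record, while the simulated signal, starting values, and the omission of the bi-exponential (IVIM-type) models are illustrative simplifications.

```python
# Sketch: fit Gaussian (ADC) and kurtosis-type diffusion models, compare by AIC.
import numpy as np
from scipy.optimize import curve_fit

b = np.array([0, 25, 50, 100, 500, 1000, 2000], float)   # b-values (s/mm^2)

def mono_gaussian(b, s0, adc):
    return s0 * np.exp(-b * adc)

def mono_kurtosis(b, s0, d, k):
    return s0 * np.exp(-b * d + (b * d) ** 2 * k / 6.0)

def aic(y, yhat, n_params):
    n = len(y)
    rss = np.sum((y - yhat) ** 2)
    return n * np.log(rss / n) + 2 * n_params

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Simulated signal with mild non-Gaussian (kurtosis) behaviour plus noise.
    y = mono_kurtosis(b, 1.0, 1.0e-3, 0.8) + 0.005 * rng.standard_normal(b.size)

    p_g, _ = curve_fit(mono_gaussian, b, y, p0=[1.0, 1e-3])
    p_k, _ = curve_fit(mono_kurtosis, b, y, p0=[1.0, 1e-3, 0.5])

    print("AIC Gaussian     :", round(aic(y, mono_gaussian(b, *p_g), 2), 2))
    print("AIC non-Gaussian :", round(aic(y, mono_kurtosis(b, *p_k), 3), 2))
```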

  6. Relative entropy as a measure of entanglement for Gaussian states

    Institute of Scientific and Technical Information of China (English)

    Lu Huai-Xin; Zhao Bo

    2006-01-01

    In this paper, we derive an explicit analytic expression for the relative entropy between two general Gaussian states. Restricting to the set of Gaussian states, and with the help of the relative entropy formula and the Peres-Simon separability criterion, one can conveniently obtain the relative entropy entanglement for Gaussian states. As an example, the relative entropy entanglement for a two-mode squeezed thermal state has been obtained.

  7. Comparative proteomic analysis of kidney distal convoluted tubule and cortical collecting duct cells following long-term hormonal stimulation

    DEFF Research Database (Denmark)

    Wu, Qi; Moller, Hanne; Rosenbaek, Lena Lindtoft

    2017-01-01

    The distal convoluted tubule (DCT) and the cortical collecting ducts (CCD) are portions of renal tubule that are partly responsible for maintaining the systemic concentrations of potassium, sodium, calcium and magnesium. Despite being structurally similar, DCT and CCD cells have different transpo...... FDR threshold in one cell type plus the unique proteins in this cell type. These 1025 mpkDCT specific proteins and 1211 mpkCCD specific proteins under the three conditions were subjected to further bioinformatics analyses including Panther and DAVID gene ontology analyses, E3 ligase...

  8. Prediction and retrodiction with continuously monitored Gaussian states

    DEFF Research Database (Denmark)

    Zhang, Jinglei; Mølmer, Klaus

    2017-01-01

    Gaussian states of quantum oscillators are fully characterized by the mean values and the covariance matrix of their quadrature observables. We consider the dynamics of a system of oscillators subject to interactions, damping, and continuous probing which maintain their Gaussian state property......(t)$ to Gaussian states implies that the matrix $E(t)$ is also fully characterized by a vector of mean values and a covariance matrix. We derive the dynamical equations for these quantities and we illustrate their use in the retrodiction of measurements on Gaussian systems....

  9. Geometry of perturbed Gaussian states and quantum estimation

    International Nuclear Information System (INIS)

    Genoni, Marco G; Giorda, Paolo; Paris, Matteo G A

    2011-01-01

    We address the non-Gaussianity (nG) of states obtained by weakly perturbing a Gaussian state and investigate the relationships with quantum estimation. For classical perturbations, i.e. perturbations to eigenvalues, we found that the nG of the perturbed state may be written as the quantum Fisher information (QFI) distance minus a term depending on the infinitesimal energy change, i.e. it provides a lower bound to statistical distinguishability. Upon moving on isoenergetic surfaces in a neighbourhood of a Gaussian state, nG thus coincides with a proper distance in the Hilbert space and exactly quantifies the statistical distinguishability of the perturbations. On the other hand, for perturbations leaving the covariance matrix unperturbed, we show that nG provides an upper bound to the QFI. Our results show that the geometry of non-Gaussian states in the neighbourhood of a Gaussian state is definitely not trivial and cannot be subsumed by a differential structure. Nevertheless, the analysis of perturbations to a Gaussian state reveals that nG may be a resource for quantum estimation. The nG of specific families of perturbed Gaussian states is analysed in some detail with the aim of finding the maximally non-Gaussian state obtainable from a given Gaussian one. (fast track communication)

  10. Gaussian polynomials and content ideal in trivial extensions

    International Nuclear Information System (INIS)

    Bakkari, C.; Mahdou, N.

    2006-12-01

    The goal of this paper is to exhibit a class of Gaussian non-coherent rings R (with zero-divisors) such that wdim(R) = ∞ and fPdim(R) is always at most one, and also to exhibit a new class of rings (with zero-divisors) which are neither locally Noetherian nor locally a domain, in which Gaussian polynomials have a locally principal content. For this purpose, we study the possible transfer of the 'Gaussian' property and the property 'the content ideal of a Gaussian polynomial is locally principal' to various trivial extension contexts. This article includes a brief discussion of the scope and limits of our result. (author)

  11. Detecting the presence of a magnetic field under Gaussian and non-Gaussian noise by adaptive measurement

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Yuan-Mei; Li, Jun-Gang, E-mail: jungl@bit.edu.cn; Zou, Jian

    2017-06-15

    Highlights: • An adaptive measurement strategy is used to detect the presence of a magnetic field. • Gaussian Ornstein–Uhlenbeck noise and non-Gaussian noise are considered. • Weaker magnetic fields may be more easily detected than some stronger ones. Abstract: By using the adaptive measurement method we study how to detect whether a weak magnetic field is actually present or not under Gaussian noise and non-Gaussian noise. We find that the adaptive measurement method can effectively improve the detection accuracy. For the case of Gaussian noise, we find that the stronger the magnetic field strength, the easier it is for us to detect the magnetic field. Counterintuitively, for non-Gaussian noise, some weaker magnetic fields are more likely to be detected than some stronger ones. Finally, we give a reasonable physical interpretation.

  12. A MacWilliams Identity for Convolutional Codes: The General Case

    OpenAIRE

    Gluesing-Luerssen, Heide; Schneider, Gert

    2008-01-01

    A MacWilliams Identity for convolutional codes will be established. It makes use of the weight adjacency matrices of the code and its dual, based on state space realizations (the controller canonical form) of the codes in question. The MacWilliams Identity applies to various notions of duality appearing in the literature on convolutional coding theory.

  13. Isointense infant brain MRI segmentation with a dilated convolutional neural network

    NARCIS (Netherlands)

    Moeskops, P.; Pluim, J.P.W.

    2017-01-01

    Quantitative analysis of brain MRI at the age of 6 months is difficult because of the limited contrast between white matter and gray matter. In this study, we use a dilated triplanar convolutional neural network in combination with a non-dilated 3D convolutional neural network for the segmentation

  14. An upper bound on the number of errors corrected by a convolutional code

    DEFF Research Database (Denmark)

    Justesen, Jørn

    2000-01-01

    The number of errors that a convolutional code can correct in a segment of the encoded sequence is upper bounded by the number of distinct syndrome sequences of the relevant length.

  15. Using convolutional decoding to improve time delay and phase estimation in digital communications

    Science.gov (United States)

    Ormesher, Richard C [Albuquerque, NM; Mason, John J [Albuquerque, NM

    2010-01-26

    The time delay and/or phase of a communication signal received by a digital communication receiver can be estimated based on a convolutional decoding operation that the communication receiver performs on the received communication signal. If the original transmitted communication signal has been spread according to a spreading operation, a corresponding despreading operation can be integrated into the convolutional decoding operation.

  16. Classifying images using restricted Boltzmann machines and convolutional neural networks

    Science.gov (United States)

    Zhao, Zhijun; Xu, Tongde; Dai, Chenyu

    2017-07-01

    To improve the feature recognition ability of deep model transfer learning, we propose a hybrid deep transfer learning method for image classification based on restricted Boltzmann machines (RBM) and convolutional neural networks (CNNs). It integrates the learning abilities of the two models and conducts subject classification by extracting structural higher-order statistical features of images. When the method transfers the trained convolutional neural networks to the target datasets, the fully-connected layers can be replaced by restricted Boltzmann machine layers; then the restricted Boltzmann machine layers and the Softmax classifier are retrained, and a BP neural network can be used to fine-tune the hybrid model. The restricted Boltzmann machine layers not only fully integrate the whole feature maps, but also learn the statistical features of the target datasets in the sense of maximum log-likelihood, thus removing the effects caused by the content differences between datasets. The experimental results show that the proposed method improves the accuracy of image classification, outperforming other methods on the Pascal VOC2007 and Caltech101 datasets.

  17. Cloud Detection by Fusing Multi-Scale Convolutional Features

    Science.gov (United States)

    Li, Zhiwei; Shen, Huanfeng; Wei, Yancong; Cheng, Qing; Yuan, Qiangqiang

    2018-04-01

    Cloud detection is an important pre-processing step for the accurate application of optical satellite imagery. Recent studies indicate that deep learning achieves the best performance in image segmentation tasks. Aiming at boosting the accuracy of cloud detection for multispectral imagery, especially imagery that contains only visible and near-infrared bands, in this paper we propose a deep-learning-based cloud detection method termed MSCN (multi-scale cloud net), which segments cloud by fusing multi-scale convolutional features. MSCN was trained on a global cloud cover validation collection and was tested on more than ten types of optical images with different resolutions. Experimental results show that MSCN has obvious advantages over the traditional multi-feature combined cloud detection method in accuracy, especially in snow and other areas covered by bright non-cloud objects. Besides, MSCN produced more detailed cloud masks than the compared deep cloud detection convolutional network. The effectiveness of MSCN makes it promising for practical application to multiple kinds of optical imagery.

  18. Real-Time Video Convolutional Face Finder on Embedded Platforms

    Directory of Open Access Journals (Sweden)

    Mamalet Franck

    2007-01-01

    Full Text Available A high-level optimization methodology is applied for implementing the well-known convolutional face finder (CFF) algorithm for real-time applications on mobile phones, such as teleconferencing, advanced user interfaces, image indexing, and security access control. CFF is based on a feature extraction and classification technique which consists of a pipeline of convolutions and subsampling operations. The design of embedded systems requires a good trade-off between performance and code size due to the limited amount of available resources. The followed methodology copes with the main drawbacks of the original implementation of CFF such as floating-point computation and memory allocation, in order to allow parallelism exploitation and perform algorithm optimizations. Experimental results show that our embedded face detection system can accurately locate faces with less computational load and memory cost. It runs on a 275 MHz Starcore DSP at 35 QCIF images/s with state-of-the-art detection rates and very low false alarm rates.

  19. Enhancing neutron beam production with a convoluted moderator

    Energy Technology Data Exchange (ETDEWEB)

    Iverson, E.B., E-mail: iversoneb@ornl.gov [Spallation Neutron Source, Oak Ridge National Laboratory, Oak Ridge, TN 37831 (United States); Baxter, D.V. [Center for the Exploration of Energy and Matter, Indiana University, Bloomington, IN 47408 (United States); Muhrer, G. [Lujan Neutron Scattering Center, Los Alamos National Laboratory, P.O. Box 1663, Los Alamos, NM 87545 (United States); Ansell, S.; Dalgliesh, R. [ISIS Facility, Rutherford Appleton Laboratory, Chilton (United Kingdom); Gallmeier, F.X. [Spallation Neutron Source, Oak Ridge National Laboratory, Oak Ridge, TN 37831 (United States); Kaiser, H. [Center for the Exploration of Energy and Matter, Indiana University, Bloomington, IN 47408 (United States); Lu, W. [Spallation Neutron Source, Oak Ridge National Laboratory, Oak Ridge, TN 37831 (United States)

    2014-10-21

    We describe a new concept for a neutron moderating assembly resulting in the more efficient production of slow neutron beams. The Convoluted Moderator, a heterogeneous stack of interleaved moderating material and nearly transparent single-crystal spacers, is a directionally enhanced neutron beam source, improving beam emission over an angular range comparable to the range accepted by neutron beam lines and guides. We have demonstrated gains of 50% in slow neutron intensity for a given fast neutron production rate while simultaneously reducing the wavelength-dependent emission time dispersion by 25%, both coming from a geometric effect in which the neutron beam lines view a large surface area of moderating material in a relatively small volume. Additionally, we have confirmed a Bragg-enhancement effect arising from coherent scattering within the single-crystal spacers. We have not observed hypothesized refractive effects leading to additional gains at long wavelength. In addition to confirmation of the validity of the Convoluted Moderator concept, our measurements provide a series of benchmark experiments suitable for developing simulation and analysis techniques for practical optimization and eventual implementation at slow neutron source facilities.

  20. Photon beam convolution using polyenergetic energy deposition kernels

    International Nuclear Information System (INIS)

    Hoban, P.W.; Murray, D.C.; Round, W.H.

    1994-01-01

    In photon beam convolution calculations where polyenergetic energy deposition kernels (EDKs) are used, the primary photon energy spectrum should be correctly accounted for in Monte Carlo generation of EDKs. This requires the probability of interaction, determined by the linear attenuation coefficient, μ, to be taken into account when primary photon interactions are forced to occur at the EDK origin. The use of primary and scattered EDKs generated with a fixed photon spectrum can give rise to an error in the dose calculation due to neglecting the effects of beam hardening with depth. The proportion of primary photon energy that is transferred to secondary electrons increases with depth of interaction, due to the increase in the ratio μ_ab/μ as the beam hardens. Convolution depth-dose curves calculated using polyenergetic EDKs generated for the primary photon spectra which exist at depths of 0, 20 and 40 cm in water, show a fall-off which is too steep when compared with EGS4 Monte Carlo results. A beam hardening correction factor applied to primary and scattered 0 cm EDKs, based on the ratio of kerma to terma at each depth, gives primary, scattered and total dose in good agreement with Monte Carlo results. (Author)
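
    A minimal one-dimensional sketch of the convolution idea and of a kerma-to-terma style depth correction in the spirit of the record: a two-component "polyenergetic" beam gives a depth-dependent ratio, and the terma profile is convolved with a forward-peaked kernel; all coefficients, the kernel shape, and the correction form are illustrative assumptions, not clinical data or the authors' exact factor.

```python
# Sketch: 1D depth-dose by convolving terma with a forward-peaked kernel,
# then scaling by a depth-dependent kerma/terma ratio (illustrative only).
import numpy as np

depth = np.arange(0.0, 40.0, 0.1)                    # depth in water (cm)

# Two illustrative energy components; the "softer" one attenuates faster,
# so the kerma/terma ratio drifts with depth as the beam hardens.
mu      = np.array([0.08, 0.04])                     # attenuation coefficients (1/cm)
mu_ab   = np.array([0.030, 0.025])                   # absorption-type coefficients (1/cm)
weights = np.array([0.5, 0.5])                       # spectral weights at the surface

fluence = weights[None, :] * np.exp(-np.outer(depth, mu))   # (n_depth, 2)
terma = (fluence * mu[None, :]).sum(axis=1)          # energy released per unit depth
kerma = (fluence * mu_ab[None, :]).sum(axis=1)       # energy transferred to electrons

kz = np.arange(0.0, 3.0, 0.1)                        # kernel depth axis (cm)
edk = np.exp(-kz / 0.8)                              # forward-peaked kernel
edk /= edk.sum()                                     # unit integral

dose = np.convolve(terma, edk)[: depth.size]         # superposition along depth
dose_corrected = dose * (kerma / terma)              # depth-dependent correction

i = int(np.searchsorted(depth, 10.0))
print("kerma/terma at surface vs 10 cm:",
      round(kerma[0] / terma[0], 4), round(kerma[i] / terma[i], 4))
print("dose at 10 cm (uncorrected, corrected):",
      round(float(dose[i]), 4), round(float(dose_corrected[i]), 4))
```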

  1. Multi-Branch Fully Convolutional Network for Face Detection

    KAUST Repository

    Bai, Yancheng

    2017-07-20

    Face detection is a fundamental problem in computer vision. It is still a challenging task in unconstrained conditions due to significant variations in scale, pose, expressions, and occlusion. In this paper, we propose a multi-branch fully convolutional network (MB-FCN) for face detection, which considers both efficiency and effectiveness in the design process. Our MB-FCN detector can deal with faces at all scale ranges with only a single pass through the backbone network. As such, our MB-FCN model saves computation and thus is more efficient, compared to previous methods that make multiple passes. For each branch, the specific skip connections of the convolutional feature maps at different layers are exploited to represent faces in specific scale ranges. Specifically, small faces can be represented with both shallow fine-grained and deep powerful coarse features. With this representation, superior improvement in performance is registered for the task of detecting small faces. We test our MB-FCN detector on two public face detection benchmarks, including FDDB and WIDER FACE. Extensive experiments show that our detector outperforms state-of-the-art methods on all these datasets in general and by a substantial margin on the most challenging among them (e.g. WIDER FACE Hard subset). Also, MB-FCN runs at 15 FPS on a GPU for images of size 640 x 480 with no assumption on the minimum detectable face size.

  2. Spatiotemporal Recurrent Convolutional Networks for Traffic Prediction in Transportation Networks

    Directory of Open Access Journals (Sweden)

    Haiyang Yu

    2017-06-01

    Full Text Available Predicting large-scale transportation network traffic has become an important and challenging topic in recent decades. Inspired by the domain knowledge of motion prediction, in which the future motion of an object can be predicted based on previous scenes, we propose a network grid representation method that can retain the fine-scale structure of a transportation network. Network-wide traffic speeds are converted into a series of static images and input into a novel deep architecture, namely, spatiotemporal recurrent convolutional networks (SRCNs), for traffic forecasting. The proposed SRCNs inherit the advantages of deep convolutional neural networks (DCNNs) and long short-term memory (LSTM) neural networks. The spatial dependencies of network-wide traffic can be captured by DCNNs, and the temporal dynamics can be learned by LSTMs. An experiment on a Beijing transportation network with 278 links demonstrates that SRCNs outperform other deep learning-based algorithms in both short-term and long-term traffic prediction.

  3. Convolutional Neural Network for Histopathological Analysis of Osteosarcoma.

    Science.gov (United States)

    Mishra, Rashika; Daescu, Ovidiu; Leavey, Patrick; Rakheja, Dinesh; Sengupta, Anita

    2018-03-01

    Pathologists often deal with high complexity and sometimes disagreement over osteosarcoma tumor classification due to cellular heterogeneity in the dataset. Segmentation and classification of histology tissue in H&E stained tumor image datasets is a challenging task because of intra-class variations, inter-class similarity, crowded context, and noisy data. In recent years, deep learning approaches have led to encouraging results in breast cancer and prostate cancer analysis. In this article, we propose a convolutional neural network (CNN) as a tool to improve the efficiency and accuracy of osteosarcoma tumor classification into tumor classes (viable tumor, necrosis) versus nontumor. The proposed CNN architecture contains eight learned layers: three sets of two stacked convolutional layers interspersed with max pooling layers for feature extraction, and two fully connected layers with data augmentation strategies to boost performance. The use of a neural network results in a higher accuracy of 92% on average for the classification. We compare the proposed architecture with three existing and proven CNN architectures for image classification: AlexNet, LeNet, and VGGNet. We also provide a pipeline to calculate percentage necrosis in a given whole slide image. We conclude that the use of neural networks can assure both high accuracy and efficiency in osteosarcoma classification.

  4. Multi-Input Convolutional Neural Network for Flower Grading

    Directory of Open Access Journals (Sweden)

    Yu Sun

    2017-01-01

    Full Text Available Flower grading is a significant task because it is extremely convenient for managing flowers in greenhouses and markets. With the development of computer vision, flower grading has become an interdisciplinary focus in both botany and computer vision. A new dataset named BjfuGloxinia contains three quality grades; each grade consists of 107 samples and 321 images. A multi-input convolutional neural network is designed for large-scale flower grading. The multi-input CNN achieves a satisfactory accuracy of 89.6% on BjfuGloxinia after data augmentation. Compared with a single-input CNN, the accuracy of the multi-input CNN is increased by 5% on average, demonstrating that a multi-input convolutional neural network is a promising model for flower grading. Although data augmentation contributes to the model, the accuracy is still limited by a lack of sample diversity. The majority of misclassifications derive from the medium grade. Image-processing-based bud detection is useful for reducing misclassification, increasing the accuracy of flower grading to approximately 93.9%.

  5. Convolutional neural network architectures for predicting DNA–protein binding

    Science.gov (United States)

    Zeng, Haoyang; Edwards, Matthew D.; Liu, Ge; Gifford, David K.

    2016-01-01

    Motivation: Convolutional neural networks (CNN) have outperformed conventional methods in modeling the sequence specificity of DNA–protein binding. Yet inappropriate CNN architectures can yield poorer performance than simpler models. Thus an in-depth understanding of how to match CNN architecture to a given task is needed to fully harness the power of CNNs for computational biology applications. Results: We present a systematic exploration of CNN architectures for predicting DNA sequence binding using a large compendium of transcription factor datasets. We identify the best-performing architectures by varying CNN width, depth and pooling designs. We find that adding convolutional kernels to a network is important for motif-based tasks. We show the benefits of CNNs in learning rich higher-order sequence features, such as secondary motifs and local sequence context, by comparing network performance on multiple modeling tasks ranging in difficulty. We also demonstrate how careful construction of sequence benchmark datasets, using approaches that control potentially confounding effects like positional or motif strength bias, is critical in making fair comparisons between competing methods. We explore how to establish the sufficiency of training data for these learning tasks, and we have created a flexible cloud-based framework that permits the rapid exploration of alternative neural network architectures for problems in computational biology. Availability and Implementation: All the models analyzed are available at http://cnn.csail.mit.edu. Contact: gifford@mit.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27307608
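
    A minimal sketch of the core ingredient these architectures share: one-hot encoding a DNA sequence and scanning it with a convolutional kernel acting as a motif detector, followed by max pooling; the hand-set "TATA" kernel and example sequence are illustrative assumptions, not trained weights from the paper.

```python
# Sketch: one-hot DNA encoding plus a single motif-scanning convolution.
import numpy as np

BASES = "ACGT"

def one_hot(seq):
    """(len(seq), 4) one-hot encoding of a DNA string."""
    x = np.zeros((len(seq), 4))
    for i, base in enumerate(seq):
        x[i, BASES.index(base)] = 1.0
    return x

def conv1d_scores(x, kernel):
    """Valid 1D convolution of a (L, 4) one-hot matrix with a (w, 4) kernel."""
    L, w = x.shape[0], kernel.shape[0]
    return np.array([(x[i:i + w] * kernel).sum() for i in range(L - w + 1)])

if __name__ == "__main__":
    # Illustrative fixed kernel that rewards matches to the motif "TATA".
    kernel = one_hot("TATA") * 2.0 - 0.5
    seq = "GGCATATAGCGTTAGC"
    scores = conv1d_scores(one_hot(seq), kernel)
    print("max-pooled motif score:", float(scores.max()),
          "at position", int(scores.argmax()))
```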

  6. Transforming Musical Signals through a Genre Classifying Convolutional Neural Network

    Science.gov (United States)

    Geng, S.; Ren, G.; Ogihara, M.

    2017-05-01

    Convolutional neural networks (CNNs) have been successfully applied on both discriminative and generative modeling for music-related tasks. For a particular task, the trained CNN contains information representing the decision making or the abstracting process. One can hope to manipulate existing music based on this 'informed' network and create music with new features corresponding to the knowledge obtained by the network. In this paper, we propose a method to utilize the stored information from a CNN trained on musical genre classification task. The network was composed of three convolutional layers, and was trained to classify five-second song clips into five different genres. After training, randomly selected clips were modified by maximizing the sum of outputs from the network layers. In addition to the potential of such CNNs to produce interesting audio transformation, more information about the network and the original music could be obtained from the analysis of the generated features since these features indicate how the network 'understands' the music.

  7. Siamese convolutional networks for tracking the spine motion

    Science.gov (United States)

    Liu, Yuan; Sui, Xiubao; Sun, Yicheng; Liu, Chengwei; Hu, Yong

    2017-09-01

    Deep learning models have demonstrated great success in various computer vision tasks such as image classification and object tracking. However, tracking the lumbar spine by digitalized video fluoroscopic imaging (DVFI), which can quantitatively analyze the motion mode of the spine to diagnose lumbar instability, has not yet been well developed due to the lack of a steady and robust tracking method. In this paper, we propose a novel visual tracking algorithm for lumbar vertebra motion based on a Siamese convolutional neural network (CNN) model. We train a fully-convolutional neural network offline to learn generic image features. The network is trained to learn a similarity function that compares the labeled target in the first frame with the candidate patches in the current frame. The similarity function returns a high score if the two images depict the same object. Once learned, the similarity function is used to track a previously unseen object without any online adaptation. In the current frame, our tracker operates by evaluating the candidate rotated patches sampled around the previous frame's target position and presents a rotated bounding box to locate the predicted target precisely. Results indicate that the proposed tracking method can detect the lumbar vertebra steadily and robustly. Especially for images with low contrast and cluttered background, the presented tracker can still achieve good tracking performance. Further, the proposed algorithm operates at high speed for real-time tracking.

  8. Real-Time Video Convolutional Face Finder on Embedded Platforms

    Directory of Open Access Journals (Sweden)

    Franck Mamalet

    2007-03-01

    Full Text Available A high-level optimization methodology is applied for implementing the well-known convolutional face finder (CFF) algorithm for real-time applications on mobile phones, such as teleconferencing, advanced user interfaces, image indexing, and security access control. CFF is based on a feature extraction and classification technique which consists of a pipeline of convolutions and subsampling operations. The design of embedded systems requires a good trade-off between performance and code size due to the limited amount of available resources. The followed methodology copes with the main drawbacks of the original implementation of CFF such as floating-point computation and memory allocation, in order to allow parallelism exploitation and perform algorithm optimizations. Experimental results show that our embedded face detection system can accurately locate faces with less computational load and memory cost. It runs on a 275 MHz Starcore DSP at 35 QCIF images/s with state-of-the-art detection rates and very low false alarm rates.

  9. Digital image correlation based on a fast convolution strategy

    Science.gov (United States)

    Yuan, Yuan; Zhan, Qin; Xiong, Chunyang; Huang, Jianyong

    2017-10-01

    In recent years, the efficiency of digital image correlation (DIC) methods has attracted increasing attention because of its increasing importance for many engineering applications. Based on the classical affine optical flow (AOF) algorithm and the well-established inverse compositional Gauss-Newton algorithm, which is essentially a natural extension of the AOF algorithm under a nonlinear iterative framework, this paper develops a set of fast convolution-based DIC algorithms for high-efficiency subpixel image registration. Using a well-developed fast convolution technique, the set of algorithms establishes a series of global data tables (GDTs) over the digital images, which allows the reduction of the computational complexity of DIC significantly. Using the pre-calculated GDTs, the subpixel registration calculations can be implemented efficiently in a look-up-table fashion. Both numerical simulation and experimental verification indicate that the set of algorithms significantly enhances the computational efficiency of DIC, especially in the case of a dense data sampling for the digital images. Because the GDTs need to be computed only once, the algorithms are also suitable for efficiently coping with image sequences that record the time-varying dynamics of specimen deformations.
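
    A minimal sketch of the kind of convolution-theorem speedup that fast DIC methods exploit: FFT-based cross-correlation to recover an integer-pixel displacement between a reference and a deformed image; the synthetic shift and zero-mean normalization are illustrative assumptions, and the paper's global data tables and subpixel inverse compositional Gauss-Newton refinement are not reproduced.

```python
# Sketch: integer-pixel displacement via FFT-based cross-correlation.
import numpy as np

def integer_shift(ref, deformed):
    """Estimate the (dy, dx) integer shift mapping `ref` onto `deformed`."""
    a = ref - ref.mean()
    b = deformed - deformed.mean()
    # Cross-correlation via the convolution theorem.
    corr = np.fft.irfft2(np.fft.rfft2(b) * np.conj(np.fft.rfft2(a)), s=a.shape)
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > a.shape[0] // 2:                      # wrap large shifts to negative
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((256, 256))
    shifted = np.roll(img, shift=(7, -4), axis=(0, 1))      # known displacement
    print("estimated shift:", integer_shift(img, shifted))  # expect (7, -4)
```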

  10. Convolutional neural network features based change detection in satellite images

    Science.gov (United States)

    Mohammed El Amin, Arabi; Liu, Qingjie; Wang, Yunhong

    2016-07-01

    With the popular use of high resolution remote sensing (HRRS) satellite images, a huge research effort has been placed on the change detection (CD) problem. An effective feature selection method can significantly boost the final result. While it has proven difficult to hand-design features that effectively capture high- and mid-level representations, recent developments in machine learning (deep learning) avoid this problem by learning hierarchical representations in an unsupervised manner directly from data without human intervention. In this letter, we propose approaching the change detection problem from a feature learning perspective. A novel deep Convolutional Neural Network (CNN) features based HR satellite image change detection method is proposed. The main guideline is to produce a change detection map directly from two images using a pretrained CNN. This method can avoid the limited performance of hand-crafted features. Firstly, CNN features are extracted through different convolutional layers. Then, a concatenation step is evaluated after a normalization step, resulting in a unique higher-dimensional feature map. Finally, a change map is computed using the pixel-wise Euclidean distance. Our method has been validated on real bitemporal HRRS satellite images using both qualitative and quantitative analyses. The results obtained confirm the interest of the proposed method.
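
    A minimal sketch of the decision stage described above, using random arrays as stand-ins for convolutional feature maps extracted from two co-registered acquisitions: per-pixel L2 normalization, concatenation across layers, and a pixel-wise Euclidean distance as the change map; the feature shapes and the simulated changed region are illustrative assumptions.

```python
# Sketch: change map from normalized, concatenated feature maps.
import numpy as np

def change_map(feats_t1, feats_t2):
    """feats_*: list of (H, W, C_i) feature maps from different conv layers."""
    def normalize_and_stack(feats):
        normed = [f / (np.linalg.norm(f, axis=-1, keepdims=True) + 1e-12)
                  for f in feats]
        return np.concatenate(normed, axis=-1)        # (H, W, sum_i C_i)
    f1 = normalize_and_stack(feats_t1)
    f2 = normalize_and_stack(feats_t2)
    return np.linalg.norm(f1 - f2, axis=-1)           # pixel-wise Euclidean distance

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    h, w = 64, 64
    t1 = [rng.random((h, w, c)) for c in (32, 64)]    # stand-in CNN features
    t2 = [f.copy() for f in t1]
    t2[0][20:40, 20:40] += 1.0                        # simulate a changed region
    cmap = change_map(t1, t2)
    print("mean distance inside vs outside changed block:",
          round(float(cmap[20:40, 20:40].mean()), 3),
          round(float(cmap[:10, :10].mean()), 3))
```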

  11. Classification of stroke disease using convolutional neural network

    Science.gov (United States)

    Marbun, J. T.; Seniman; Andayani, U.

    2018-03-01

    Stroke is a condition that occurs when the blood supply to the brain stops because of a blockage or a ruptured blood vessel. Symptoms of stroke include loss of consciousness, disrupted vision and paralysis. The standard examination used to obtain an image of the affected part of the brain is a Computerized Tomography (CT) scan. The image produced by CT is checked manually by a doctor, under proper lighting, to determine the type of stroke, which motivates a method to classify stroke from CT images automatically. The method proposed in this research is a Convolutional Neural Network. A CT image of the brain is used as the input for image processing. The stages before classification are image processing (grayscaling, scaling, and Contrast Limited Adaptive Histogram Equalization); the image is then classified with the Convolutional Neural Network. The results show that the proposed method can be used as a tool to classify stroke disease, distinguishing the type of stroke from CT images.

  12. Image Classification Based on Convolutional Denoising Sparse Autoencoder

    Directory of Open Access Journals (Sweden)

    Shuangshuang Chen

    2017-01-01

    Full Text Available Image classification aims to group images into corresponding semantic categories. Due to the difficulties of interclass similarity and intraclass variability, it is a challenging issue in computer vision. In this paper, an unsupervised feature learning approach called convolutional denoising sparse autoencoder (CDSAE) is proposed based on the theory of the visual attention mechanism and deep learning methods. Firstly, a saliency detection method is utilized to obtain training samples for unsupervised feature learning. Next, these samples are sent to the denoising sparse autoencoder (DSAE), followed by a convolutional layer and a local contrast normalization layer. Generally, prior knowledge about a specific task is helpful for solving it. Therefore, a new pooling strategy, spatial pyramid pooling (SPP) fused with a center-bias prior, is introduced into our approach. Experimental results on two common image datasets (STL-10 and CIFAR-10) demonstrate that our approach is effective in image classification. They also demonstrate that none of the three components (local contrast normalization, SPP fused with the center-bias prior, and l2 vector normalization) can be excluded from our proposed approach; they jointly improve image representation and classification performance.
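
    As a loose illustration of the pooling idea mentioned above, the following Python sketch (an assumption, not the paper's implementation; spp and the Gaussian center_bias map are hypothetical) pools a feature map over several pyramid levels after weighting it by a center-bias prior:

        # Minimal sketch: spatial pyramid pooling over a feature map, with an
        # optional center-bias weighting applied before pooling.
        import torch
        import torch.nn.functional as F

        def spp(feature_map, levels=(1, 2, 4), center_bias=None):
            """feature_map: (N, C, H, W); returns (N, C * sum(l*l for l in levels))."""
            if center_bias is not None:           # e.g. a Gaussian map favouring the centre
                feature_map = feature_map * center_bias
            pooled = [F.adaptive_max_pool2d(feature_map, l).flatten(1) for l in levels]
            return torch.cat(pooled, dim=1)

        x = torch.randn(8, 64, 24, 24)
        h, w = x.shape[-2:]
        yy, xx = torch.meshgrid(torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
        bias = torch.exp(-(xx**2 + yy**2) / 0.5)  # simple center-bias prior
        features = spp(x, center_bias=bias)       # shape: (8, 64 * (1 + 4 + 16))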

  13. Weed Growth Stage Estimator Using Deep Convolutional Neural Networks.

    Science.gov (United States)

    Teimouri, Nima; Dyrmann, Mads; Nielsen, Per Rydahl; Mathiassen, Solvejg Kopp; Somerville, Gayle J; Jørgensen, Rasmus Nyholm

    2018-05-16

    This study outlines a new method of automatically estimating weed species and growth stages (from cotyledon until eight leaves are visible) from in situ images covering 18 weed species or families. Images of weeds growing within a variety of crops were gathered across variable environmental conditions with regard to soil type, resolution and light settings. Then, 9649 of these images were used for training the computer, which automatically divided the weeds into nine growth classes. The performance of this proposed convolutional neural network approach was evaluated on a further set of 2516 images, which also varied in terms of crop, soil type, image resolution and light conditions. The overall performance of this approach achieved a maximum accuracy of 78% for identifying Polygonum spp. and a minimum accuracy of 46% for blackgrass. In addition, it achieved an average 70% accuracy rate in estimating the number of leaves and 96% accuracy when accepting a deviation of two leaves. These results show that this new method of using deep convolutional neural networks has a relatively high ability to estimate early growth stages across a wide variety of weed species.

  14. Convolutional neural networks for vibrational spectroscopic data analysis.

    Science.gov (United States)

    Acquarelli, Jacopo; van Laarhoven, Twan; Gerretzen, Jan; Tran, Thanh N; Buydens, Lutgarde M C; Marchiori, Elena

    2017-02-15

    In this work we show that convolutional neural networks (CNNs) can be efficiently used to classify vibrational spectroscopic data and identify important spectral regions. CNNs are the current state-of-the-art in image classification and speech recognition and can learn interpretable representations of the data. These characteristics make CNNs a good candidate for reducing the need for preprocessing and for highlighting important spectral regions, both of which are crucial steps in the analysis of vibrational spectroscopic data. Chemometric analysis of vibrational spectroscopic data often relies on preprocessing methods involving baseline correction, scatter correction and noise removal, which are applied to the spectra prior to model building. Preprocessing is a critical step because, even in simple problems, using 'reasonable' preprocessing methods may decrease the performance of the final model. We develop a new CNN-based method and provide accompanying publicly available software. It is based on a simple CNN architecture with a single convolutional layer (a so-called shallow CNN). Our method outperforms standard classification algorithms used in chemometrics (e.g. PLS) in terms of accuracy when applied to non-preprocessed test data (86% average accuracy compared to the 62% achieved by PLS), and it achieves better performance even on preprocessed test data (96% average accuracy compared to the 89% achieved by PLS). For interpretability purposes, our method includes a procedure for finding important spectral regions, thereby facilitating qualitative interpretation of results. Copyright © 2016 Elsevier B.V. All rights reserved.
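
    The shallow architecture described above (a single convolutional layer followed by a classifier) can be sketched roughly as follows; this is an assumption for illustration, not the released software, and ShallowSpectraCNN and its sizes are hypothetical:

        # Minimal sketch: a shallow CNN with a single 1-D convolutional layer
        # for classifying vibrational spectra.
        import torch
        import torch.nn as nn

        class ShallowSpectraCNN(nn.Module):
            def __init__(self, n_wavenumbers, n_classes, n_filters=16, kernel_size=21):
                super().__init__()
                # 'same' padding keeps the spectral length unchanged
                self.conv = nn.Conv1d(1, n_filters, kernel_size, padding=kernel_size // 2)
                self.fc = nn.Linear(n_filters * n_wavenumbers, n_classes)

            def forward(self, x):                 # x: (batch, n_wavenumbers)
                h = torch.relu(self.conv(x.unsqueeze(1)))
                return self.fc(h.flatten(1))      # class logits

        model = ShallowSpectraCNN(n_wavenumbers=1024, n_classes=4)
        logits = model(torch.randn(32, 1024))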

  15. sEMG-Based Gesture Recognition with Convolution Neural Networks

    Directory of Open Access Journals (Sweden)

    Zhen Ding

    2018-06-01

    Full Text Available The traditional classification methods for limb motion recognition based on sEMG have been deeply researched and have shown promising results. However, information loss during feature extraction reduces the recognition accuracy. To obtain higher accuracy, the deep learning method was introduced. In this paper, we propose a parallel multiple-scale convolution architecture. Compared with state-of-the-art methods, the proposed architecture fully considers the characteristics of the sEMG signal. Kernel filters larger than those commonly used in other CNN-based hand recognition methods are adopted. Meanwhile, a characteristic of the sEMG signal, namely muscle independence, is considered when designing the architecture. All the classification methods were evaluated on the NinaPro database. The results show that the proposed architecture has the highest recognition accuracy. Furthermore, the results indicate that a parallel multiple-scale convolution architecture with larger kernel filters that considers muscle independence can significantly increase the classification accuracy.

  16. Development of a morphological convolution operator for bearing fault detection

    Science.gov (United States)

    Li, Yifan; Liang, Xihui; Liu, Weiwei; Wang, Yan

    2018-05-01

    This paper presents a novel signal processing scheme, namely morphological convolution operator (MCO) lifted morphological undecimated wavelet (MUDW), for rolling element bearing fault detection. In this scheme, a MCO is first designed to fully utilize the advantage of the closing & opening gradient operator and the closing-opening & opening-closing gradient operator for feature extraction as well as the merit of excellent denoising characteristics of the convolution operator. The MCO is then introduced into MUDW for the purpose of improving the fault detection ability of the reported MUDWs. Experimental vibration signals collected from a train wheelset test rig and the bearing data center of Case Western Reserve University are employed to evaluate the effectiveness of the proposed MCO lifted MUDW on fault detection of rolling element bearings. The results show that the proposed approach has a superior performance in extracting fault features of defective rolling element bearings. In addition, comparisons are performed between two reported MUDWs and the proposed MCO lifted MUDW. The MCO lifted MUDW outperforms both of them in detection of outer race faults and inner race faults of rolling element bearings.

  17. Fluence-convolution broad-beam (FCBB) dose calculation

    Energy Technology Data Exchange (ETDEWEB)

    Lu Weiguo; Chen Mingli, E-mail: wlu@tomotherapy.co [TomoTherapy Inc., 1240 Deming Way, Madison, WI 53717 (United States)

    2010-12-07

    IMRT optimization requires a fast yet relatively accurate algorithm to calculate the iteration dose with small memory demand. In this paper, we present a dose calculation algorithm that approaches these goals. By decomposing the infinitesimal pencil beam (IPB) kernel into the central axis (CAX) component and lateral spread function (LSF) and taking the beam's eye view (BEV), we established a non-voxel and non-beamlet-based dose calculation formula. Both LSF and CAX are determined by a commissioning procedure using the collapsed-cone convolution/superposition (CCCS) method as the standard dose engine. The proposed dose calculation involves a 2D convolution of a fluence map with LSF followed by ray tracing based on the CAX lookup table with radiological distance and divergence correction, resulting in complexity of O(N^3) both spatially and temporally. This simple algorithm is orders of magnitude faster than the CCCS method. Without pre-calculation of beamlets, its implementation is also orders of magnitude smaller than the conventional voxel-based beamlet-superposition (VBS) approach. We compared the presented algorithm with the CCCS method using simulated and clinical cases. The agreement was generally within 3% for a homogeneous phantom and 5% for heterogeneous and clinical cases. Combined with the 'adaptive full dose correction', the algorithm is well suited to calculating the iteration dose during IMRT optimization.
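
    The core FCBB step, a 2D convolution of the BEV fluence map with the LSF, might look roughly like the following Python sketch; the Gaussian LSF and the field geometry are purely illustrative assumptions, and the commissioned LSF/CAX tables and ray tracing are not reproduced here:

        # Minimal sketch: convolving a BEV fluence map with a lateral spread
        # function (LSF), modelled here as an isotropic Gaussian for illustration.
        import numpy as np
        from scipy.signal import fftconvolve

        def gaussian_lsf(size, sigma):
            ax = np.arange(size) - size // 2
            xx, yy = np.meshgrid(ax, ax)
            k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
            return k / k.sum()

        fluence = np.zeros((101, 101))
        fluence[40:60, 40:60] = 1.0                    # open field on a toy 1 mm grid
        broad_beam = fftconvolve(fluence, gaussian_lsf(31, sigma=3.0), mode="same")
        # 'broad_beam' would then be ray-traced along the CAX lookup table with
        # radiological-distance and divergence corrections (not shown here).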

  18. Deep multi-scale convolutional neural network for hyperspectral image classification

    Science.gov (United States)

    Zhang, Feng-zhe; Yang, Xia

    2018-04-01

    In this paper, we propose a multi-scale convolutional neural network for the hyperspectral image classification task. Firstly, compared with conventional convolution, we utilize multi-scale convolutions, which possess larger receptive fields, to extract the spectral features of the hyperspectral image. We design a deep neural network with a multi-scale convolution layer which contains 3 different convolution kernel sizes. Secondly, to avoid overfitting of the deep neural network, dropout is utilized, which randomly deactivates neurons and slightly improves the classification accuracy. In addition, techniques from deep learning such as the ReLU activation are also utilized in this paper. We conduct experiments on the University of Pavia and Salinas datasets, and obtain better classification accuracy compared with other methods.
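
    A multi-scale convolution layer with three kernel sizes, as described above, can be sketched as follows; this is an illustrative assumption (the class name, channel counts and kernel sizes are hypothetical), not the authors' network:

        # Minimal sketch: a multi-scale convolution layer with three kernel sizes
        # whose outputs are concatenated along the channel dimension.
        import torch
        import torch.nn as nn

        class MultiScaleConv(nn.Module):
            def __init__(self, in_channels, out_channels_per_branch=16):
                super().__init__()
                self.branches = nn.ModuleList([
                    nn.Conv2d(in_channels, out_channels_per_branch, k, padding=k // 2)
                    for k in (1, 3, 5)                  # three receptive-field sizes
                ])

            def forward(self, x):
                return torch.relu(torch.cat([b(x) for b in self.branches], dim=1))

        # e.g. a 7x7 spatial patch with 103 spectral bands (Pavia University-like input)
        patch = torch.randn(8, 103, 7, 7)
        features = MultiScaleConv(103)(patch)           # (8, 48, 7, 7)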

  19. Object Detection Based on Fast/Faster RCNN Employing Fully Convolutional Architectures

    Directory of Open Access Journals (Sweden)

    Yun Ren

    2018-01-01

    Full Text Available Modern object detectors always include two major parts, a feature extractor and a feature classifier, just as traditional object detectors do. Deeper and wider convolutional architectures are adopted as the feature extractor at present. However, many notable object detection systems such as Fast/Faster RCNN only consider simple fully connected layers as the feature classifier. In this paper, we argue that it is beneficial for detection performance to carefully design deep convolutional networks (ConvNets) of various depths for feature classification, especially using fully convolutional architectures. In addition, this paper also demonstrates how to employ fully convolutional architectures in Fast/Faster RCNN. Experimental results show that a classifier based on convolutional layers is more effective for object detection than one based on fully connected layers, and that better detection performance can be achieved by employing deeper ConvNets as the feature classifier.

  20. White blood cells identification system based on convolutional deep neural learning networks.

    Science.gov (United States)

    Shahin, A I; Guo, Yanhui; Amin, K M; Sharawi, Amr A

    2017-11-16

    White blood cell (WBC) differential counting yields valuable information about human health and disease. Currently developed automated cell morphology equipment performs differential counts based on blood smear image analysis. Previous identification systems for WBCs consist of successive dependent stages: pre-processing, segmentation, feature extraction, feature selection, and classification. There is a real need to employ deep learning methodologies so that the performance of previous WBC identification systems can be increased. Classifying small, limited datasets through deep learning systems is a major challenge and should be investigated. In this paper, we propose a novel identification system for WBCs based on deep convolutional neural networks. Two methodologies based on transfer learning are followed: transfer learning based on deep activation features, and fine-tuning of existing deep networks. Deep activation features are extracted from several pre-trained networks and employed in a traditional identification system. Moreover, a novel end-to-end convolutional deep architecture called "WBCsNet" is proposed and built from scratch. Finally, a limited balanced WBC dataset classification is performed through WBCsNet as a pre-trained network. In our experiments, three different public WBC datasets (2551 images) containing 5 healthy WBC types were used. The overall system accuracy achieved by the proposed WBCsNet is 96.1%, which is higher than that of the different transfer learning approaches or even the previous traditional identification system. We also present feature visualizations for the WBCsNet activations, which reflect a higher response than the pre-trained activations. In summary, a novel WBC identification system based on deep learning is proposed, and the high-performance WBCsNet can be employed as a pre-trained network. Copyright © 2017. Published by Elsevier B.V.

  1. Gaussian likelihood inference on data from trans-Gaussian random fields with Matérn covariance function

    KAUST Repository

    Yan, Yuan

    2017-07-13

    Gaussian likelihood inference has been studied and used extensively in both statistical theory and applications due to its simplicity. However, in practice, the assumption of Gaussianity is rarely met in the analysis of spatial data. In this paper, we study the effect of non-Gaussianity on Gaussian likelihood inference for the parameters of the Matérn covariance model. By using Monte Carlo simulations, we generate spatial data from a Tukey g-and-h random field, a flexible trans-Gaussian random field, with the Matérn covariance function, where g controls skewness and h controls tail heaviness. We use maximum likelihood based on the multivariate Gaussian distribution to estimate the parameters of the Matérn covariance function. We illustrate the effects of non-Gaussianity of the data on the estimated covariance function by means of functional boxplots. Thanks to our tailored simulation design, a comparison of the maximum likelihood estimator under both the increasing and fixed domain asymptotics for spatial data is performed. We find that the maximum likelihood estimator based on Gaussian likelihood is overall satisfactory and preferable to the non-distribution-based weighted least squares estimator for data from the Tukey g-and-h random field. We also present results for Gaussian kriging based on Matérn covariance estimates with data from the Tukey g-and-h random field and observe an overall satisfactory performance.

  2. Gaussian likelihood inference on data from trans-Gaussian random fields with Matérn covariance function

    KAUST Repository

    Yan, Yuan; Genton, Marc G.

    2017-01-01

    Gaussian likelihood inference has been studied and used extensively in both statistical theory and applications due to its simplicity. However, in practice, the assumption of Gaussianity is rarely met in the analysis of spatial data. In this paper, we study the effect of non-Gaussianity on Gaussian likelihood inference for the parameters of the Matérn covariance model. By using Monte Carlo simulations, we generate spatial data from a Tukey g-and-h random field, a flexible trans-Gaussian random field, with the Matérn covariance function, where g controls skewness and h controls tail heaviness. We use maximum likelihood based on the multivariate Gaussian distribution to estimate the parameters of the Matérn covariance function. We illustrate the effects of non-Gaussianity of the data on the estimated covariance function by means of functional boxplots. Thanks to our tailored simulation design, a comparison of the maximum likelihood estimator under both the increasing and fixed domain asymptotics for spatial data is performed. We find that the maximum likelihood estimator based on Gaussian likelihood is overall satisfactory and preferable to the non-distribution-based weighted least squares estimator for data from the Tukey g-and-h random field. We also present results for Gaussian kriging based on Matérn covariance estimates with data from the Tukey g-and-h random field and observe an overall satisfactory performance.

  3. IBS for non-gaussian distributions

    International Nuclear Information System (INIS)

    Fedotov, A.; Sidorin, A.O.; Smirnov, A.V.

    2010-01-01

    In many situations the distribution can significantly deviate from a Gaussian, which requires an accurate treatment of IBS. Our original interest in this problem was motivated by the need to have an accurate description of beam evolution due to IBS while the distribution is strongly affected by the external electron cooling force. A variety of models with various degrees of approximation were developed and implemented in BETACOOL in the past to address this topic. A more complete treatment based on the friction coefficient and full 3-D diffusion tensor was introduced in BETACOOL at the end of 2007 under the name 'local IBS model'. Such a model allowed us to calculate IBS for an arbitrary beam distribution. The numerical benchmarking of this local IBS algorithm and its comparison with other models was reported before. In this paper, after briefly describing the model and its limitations, we present its comparison with available experimental data.

  4. Optical vortex scanning inside the Gaussian beam

    International Nuclear Information System (INIS)

    Masajada, J; Leniec, M; Augustyniak, I

    2011-01-01

    We discussed a new scanning method for optical vortex-based scanning microscopy. The optical vortex is introduced into the incident Gaussian beam by a vortex lens. Then the beam with the optical vortex is focused by an objective and illuminates the sample. By changing the position of the vortex lens we can shift the optical vortex position at the sample plane. By adjusting system parameters we can get 30 times smaller shift at the sample plane compared to the vortex lens shift. Moreover, if the range of vortex shifts is smaller than 3% of the beam radius in the sample plane the amplitude and phase distribution around the phase dislocation remains practically unchanged. Thus we can scan the sample topography precisely with an optical vortex

  5. White Gaussian Noise - Models for Engineers

    Science.gov (United States)

    Jondral, Friedrich K.

    2018-04-01

    This paper assembles some information about white Gaussian noise (WGN) and its applications. It starts from a description of thermal noise, i.e., the irregular motion of free charge carriers in electronic devices. In a second step, mathematical models of WGN processes and their most important parameters, especially autocorrelation functions and power spectrum densities, are introduced. In order to proceed from mathematical models to simulations, we discuss the generation of normally distributed random numbers. The signal-to-noise ratio, the most important quality measure used in communications, control and measurement technology, is introduced precisely. As a practical application of WGN, the transmission of quadrature amplitude modulated (QAM) signals over additive WGN channels together with the optimum maximum likelihood (ML) detector is considered in a demonstrative and intuitive way.
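
    As a small illustration of the concepts above (an assumption, not taken from the paper), white Gaussian noise can be drawn from a normal random number generator and added to unit-power QAM symbols at a prescribed SNR:

        # Minimal sketch: additive white Gaussian noise at a given SNR in dB,
        # applied to a unit-power complex baseband QAM signal.
        import numpy as np

        rng = np.random.default_rng(0)

        def awgn(signal, snr_db):
            snr = 10 ** (snr_db / 10)
            signal_power = np.mean(np.abs(signal) ** 2)
            noise_power = signal_power / snr
            noise = np.sqrt(noise_power / 2) * (rng.standard_normal(signal.shape)
                                                + 1j * rng.standard_normal(signal.shape))
            return signal + noise

        # 4-QAM (QPSK) symbols over an AWGN channel at 10 dB SNR
        symbols = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=1000) / np.sqrt(2)
        received = awgn(symbols, snr_db=10)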

  6. Gaussian process regression for geometry optimization

    Science.gov (United States)

    Denzel, Alexander; Kästner, Johannes

    2018-03-01

    We implemented a geometry optimizer based on Gaussian process regression (GPR) to find minimum structures on potential energy surfaces. We tested both a two times differentiable form of the Matérn kernel and the squared exponential kernel. The Matérn kernel performs much better. We give a detailed description of the optimization procedures. These include overshooting the step resulting from GPR in order to obtain a higher degree of interpolation vs. extrapolation. In a benchmark against the Limited-memory Broyden-Fletcher-Goldfarb-Shanno optimizer of the DL-FIND library on 26 test systems, we found the new optimizer to generally reduce the number of required optimization steps.
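
    A bare-bones illustration of GPR interpolation with a twice-differentiable Matérn 5/2 kernel is sketched below; it is an assumption for illustration (toy 1-D data, no overshooting logic), not the DL-FIND implementation:

        # Minimal sketch: GPR posterior mean on a toy 1-D potential using a
        # Matern 5/2 kernel, k(r) = v * (1 + a + a^2/3) * exp(-a), a = sqrt(5) r / l.
        import numpy as np

        def matern52(x1, x2, length=1.0, variance=1.0):
            r = np.abs(x1[:, None] - x2[None, :])
            a = np.sqrt(5.0) * r / length
            return variance * (1.0 + a + a**2 / 3.0) * np.exp(-a)

        x_train = np.array([-2.0, -1.0, 0.0, 1.5, 2.5])
        y_train = x_train**2 + 0.1 * np.sin(5 * x_train)      # toy potential energies
        x_test = np.linspace(-3, 3, 200)

        K = matern52(x_train, x_train) + 1e-8 * np.eye(len(x_train))  # jitter for stability
        k_star = matern52(x_test, x_train)
        y_mean = k_star @ np.linalg.solve(K, y_train)                 # GPR posterior mean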

  7. Gaussian elimination is not optimal, revisited

    DEFF Research Database (Denmark)

    Macedo, Hugo Daniel

    2016-01-01

    We refactor the universal law for the tensor product to express matrix multiplication as the product MN of two matrices M and N, thus making it possible to use such a matrix product to encode and transform algorithms performing matrix multiplication using techniques from linear algebra. We explore such a possibility and show two stepwise refinements transforming the composition MN into the Naïve and Strassen's matrix multiplication algorithms. The inspection of the stepwise transformation of the composition of matrices MN into the Naïve matrix multiplication algorithm evidences that the steps ... The end results are equations involving matrix products; our exposition builds upon previous works on the category of matrices (and the related category of finite vector spaces), which we extend by showing why the direct sum (⊕,0) monoid is not closed and giving a biproduct encoding of Gaussian elimination ...

  8. Tunnelling through a Gaussian random barrier

    International Nuclear Information System (INIS)

    Bezak, Viktor

    2008-01-01

    A thorough analysis of the tunnelling of electrons through a laterally inhomogeneous rectangular barrier is presented. The barrier height is defined as a statistically homogeneous Gaussian random function. In order to simplify calculations, we assume that the electron energy is low enough in comparison with the mean value of the barrier height. The randomness of the barrier height is defined vertically by a constant variance and horizontally by a finite correlation length. We present detailed calculations of the angular probability density for the tunnelled electrons (i.e. for the scattering forwards). The tunnelling manifests a remarkably diffusive character if the wavelength of the electrons is comparable with the correlation length of the barrier

  9. Gaussian process regression for tool wear prediction

    Science.gov (United States)

    Kong, Dongdong; Chen, Yongjie; Li, Ning

    2018-05-01

    To realize and accelerate the pace of intelligent manufacturing, this paper presents a novel tool wear assessment technique based on the integrated radial basis function based kernel principal component analysis (KPCA_IRBF) and Gaussian process regression (GPR) for accurate real-time monitoring of the in-process tool wear parameter (flank wear width). KPCA_IRBF is a new nonlinear dimension-increment technique, proposed here for feature fusion for the first time. The tool wear predictive value and the corresponding confidence interval are both provided by utilizing the GPR model. Besides, GPR performs better than artificial neural networks (ANN) and support vector machines (SVM) in prediction accuracy since Gaussian noise can be modeled quantitatively in the GPR model. However, noise seriously affects the stability of the confidence interval. In this work, the proposed KPCA_IRBF technique helps to remove the noise and weaken its negative effects, so that the confidence interval is greatly compressed and smoothed, which is conducive to monitoring the tool wear accurately. Moreover, the selection of the kernel parameter in KPCA_IRBF can be carried out easily in a much larger selectable region in comparison with the conventional KPCA_RBF technique, which helps to improve the efficiency of model construction. Ten sets of cutting tests are conducted to validate the effectiveness of the presented tool wear assessment technique. The experimental results show that the in-process flank wear width of tool inserts can be monitored accurately by utilizing the presented tool wear assessment technique, which is robust under a variety of cutting conditions. This study lays the foundation for tool wear monitoring in real industrial settings.

  10. Simultaneous Gaussian and exponential inversion for improved analysis of shales by NMR relaxometry

    Science.gov (United States)

    Washburn, Kathryn E.; Anderssen, Endre; Vogt, Sarah J.; Seymour, Joseph D.; Birdwell, Justin E.; Kirkland, Catherine M.; Codd, Sarah L.

    2015-01-01

    Nuclear magnetic resonance (NMR) relaxometry is commonly used to provide lithology-independent porosity and pore-size estimates for petroleum resource evaluation based on fluid-phase signals. However in shales, substantial hydrogen content is associated with solid and fluid signals and both may be detected. Depending on the motional regime, the signal from the solids may be best described using either exponential or Gaussian decay functions. When the inverse Laplace transform, the standard method for analysis of NMR relaxometry results, is applied to data containing Gaussian decays, this can lead to physically unrealistic responses such as signal or porosity overcall and relaxation times that are too short to be determined using the applied instrument settings. We apply a new simultaneous Gaussian-Exponential (SGE) inversion method to simulated data and measured results obtained on a variety of oil shale samples. The SGE inversion produces more physically realistic results than the inverse Laplace transform and displays more consistent relaxation behavior at high magnetic field strengths. Residuals for the SGE inversion are consistently lower than for the inverse Laplace method and signal overcall at short T2 times is mitigated. Beyond geological samples, the method can also be applied in other fields where the sample relaxation consists of both Gaussian and exponential decays, for example in material, medical and food sciences.

  11. Simultaneous Gaussian and exponential inversion for improved analysis of shales by NMR relaxometry

    Science.gov (United States)

    Washburn, Kathryn E.; Anderssen, Endre; Vogt, Sarah J.; Seymour, Joseph D.; Birdwell, Justin E.; Kirkland, Catherine M.; Codd, Sarah L.

    2014-01-01

    Nuclear magnetic resonance (NMR) relaxometry is commonly used to provide lithology-independent porosity and pore-size estimates for petroleum resource evaluation based on fluid-phase signals. However in shales, substantial hydrogen content is associated with solid and fluid signals and both may be detected. Depending on the motional regime, the signal from the solids may be best described using either exponential or Gaussian decay functions. When the inverse Laplace transform, the standard method for analysis of NMR relaxometry results, is applied to data containing Gaussian decays, this can lead to physically unrealistic responses such as signal or porosity overcall and relaxation times that are too short to be determined using the applied instrument settings. We apply a new simultaneous Gaussian-Exponential (SGE) inversion method to simulated data and measured results obtained on a variety of oil shale samples. The SGE inversion produces more physically realistic results than the inverse Laplace transform and displays more consistent relaxation behavior at high magnetic field strengths. Residuals for the SGE inversion are consistently lower than for the inverse Laplace method and signal overcall at short T2 times is mitigated. Beyond geological samples, the method can also be applied in other fields where the sample relaxation consists of both Gaussian and exponential decays, for example in material, medical and food sciences.
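
    In the spirit of the SGE idea above, a decay that mixes one Gaussian and one exponential component can be fitted directly in the time domain; the following sketch is an illustrative assumption (synthetic data and scipy curve_fit rather than the authors' inversion):

        # Minimal sketch: fitting a relaxation decay as the sum of one Gaussian
        # and one exponential component.
        import numpy as np
        from scipy.optimize import curve_fit

        def sge_decay(t, a_g, t2_g, a_e, t2_e):
            return a_g * np.exp(-(t / t2_g) ** 2) + a_e * np.exp(-t / t2_e)

        t = np.linspace(0, 5e-3, 500)                 # acquisition times in seconds
        rng = np.random.default_rng(1)
        data = sge_decay(t, 0.6, 2e-4, 0.4, 1.5e-3) + 0.01 * rng.standard_normal(t.size)

        popt, _ = curve_fit(sge_decay, t, data, p0=[0.5, 1e-4, 0.5, 1e-3])
        # popt holds the recovered Gaussian/exponential amplitudes and T2 times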

  12. Mass detection in digital breast tomosynthesis: Deep convolutional neural network with transfer learning from mammography.

    Science.gov (United States)

    Samala, Ravi K; Chan, Heang-Ping; Hadjiiski, Lubomir; Helvie, Mark A; Wei, Jun; Cha, Kenny

    2016-12-01

    Develop a computer-aided detection (CAD) system for masses in digital breast tomosynthesis (DBT) volume using a deep convolutional neural network (DCNN) with transfer learning from mammograms. A data set containing 2282 digitized film and digital mammograms and 324 DBT volumes was collected with IRB approval. The mass of interest on the images was marked by an experienced breast radiologist as reference standard. The data set was partitioned into a training set (2282 mammograms with 2461 masses and 230 DBT views with 228 masses) and an independent test set (94 DBT views with 89 masses). For DCNN training, the region of interest (ROI) containing the mass (true positive) was extracted from each image. False positive (FP) ROIs were identified at prescreening by their previously developed CAD systems. After data augmentation, a total of 45 072 mammographic ROIs and 37 450 DBT ROIs were obtained. Data normalization and reduction of non-uniformity in the ROIs across heterogeneous data was achieved using a background correction method applied to each ROI. A DCNN with four convolutional layers and three fully connected (FC) layers was first trained on the mammography data. Jittering and dropout techniques were used to reduce overfitting. After training with the mammographic ROIs, all weights in the first three convolutional layers were frozen, and only the last convolution layer and the FC layers were randomly initialized again and trained using the DBT training ROIs. The authors compared the performances of two CAD systems for mass detection in DBT: one used the DCNN-based approach and the other used their previously developed feature-based approach for FP reduction. The prescreening stage was identical in both systems, passing the same set of mass candidates to the FP reduction stage. For the feature-based CAD system, a 3D clustering and active contour method was used for segmentation; morphological, gray level, and texture features were extracted and merged with a

  13. Higher-Order Hybrid Gaussian Kernel in Meshsize Boosting Algorithm

    African Journals Online (AJOL)

    In this paper, we shall use higher-order hybrid Gaussian kernel in a meshsize boosting algorithm in kernel density estimation. Bias reduction is guaranteed in this scheme like other existing schemes but uses the higher-order hybrid Gaussian kernel instead of the regular fixed kernels. A numerical verification of this scheme ...

  14. Convergence of posteriors for discretized log Gaussian Cox processes

    DEFF Research Database (Denmark)

    Waagepetersen, Rasmus Plenge

    2004-01-01

    In Markov chain Monte Carlo posterior computation for log Gaussian Cox processes (LGCPs) a discretization of the continuously indexed Gaussian field is required. It is demonstrated that approximate posterior expectations computed from discretized LGCPs converge to the exact posterior expectations when the cell sizes of the discretization tend to zero. The effect of discretization is studied in a data example.

  15. Comparing Fixed and Variable-Width Gaussian Networks

    Czech Academy of Sciences Publication Activity Database

    Kůrková, Věra; Kainen, P.C.

    2014-01-01

    Roč. 57, September (2014), s. 23-28 ISSN 0893-6080 R&D Projects: GA MŠk(CZ) LD13002 Institutional support: RVO:67985807 Keywords : Gaussian radial and kernel networks * Functionally equivalent networks * Universal approximators * Stabilizers defined by Gaussian kernels * Argminima of error functionals Subject RIV: IN - Informatics, Computer Science Impact factor: 2.708, year: 2014

  16. Two-photon optics of Bessel-Gaussian modes

    CSIR Research Space (South Africa)

    McLaren, M

    2013-09-01

    Full Text Available In this paper we consider geometrical two-photon optics of Bessel-Gaussian modes generated in spontaneous parametric down-conversion of a Gaussian pump beam. We provide a general theoretical expression for the orbital angular momentum (OAM) spectrum...

  17. Application Of Shared Gamma And Inverse-Gaussian Frailty Models ...

    African Journals Online (AJOL)

    Shared Gamma and Inverse-Gaussian Frailty models are used to analyze the survival times of patients who are clustered according to cancer/tumor types under Parametric Proportional Hazard framework. The result of the ... However, no evidence is strong enough for preference of either Gamma or Inverse Gaussian Frailty.

  18. Optimality of Gaussian attacks in continuous-variable quantum cryptography.

    Science.gov (United States)

    Navascués, Miguel; Grosshans, Frédéric; Acín, Antonio

    2006-11-10

    We analyze the asymptotic security of the family of Gaussian modulated quantum key distribution protocols for continuous-variables systems. We prove that the Gaussian unitary attack is optimal for all the considered bounds on the key rate when the first and second momenta of the canonical variables involved are known by the honest parties.

  19. Degeneracy of energy levels of pseudo-Gaussian oscillators

    International Nuclear Information System (INIS)

    Iacob, Theodor-Felix; Iacob, Felix; Lute, Marina

    2015-01-01

    We study the main features of the spectral properties of the isotropic radial pseudo-Gaussian oscillators. This study focuses on the degeneracy of the energy levels with respect to the orbital angular momentum quantum number. In a previous work [6] we showed that the pseudo-Gaussian oscillators belong to the class of quasi-exactly solvable models and an exact solution has been found.

  20. Convolution Algebra for Fluid Modes with Finite Energy

    Science.gov (United States)

    1992-04-01

    ... signals and systems analysis: the evaluation of the initial condition (or input) to a system given its final condition (or output) and its impulse response ... [Only front-matter fragments survive in this record, e.g. the section and figure titles "Images Corrupted with Gaussian Blur", "Deblurring with Hermite-Rodriguez Wavelets", and a caption for the letter "T" diffused for t = 12 and corrupted by additive noise at an SNR of 1.]

  1. Ultrawide Bandwidth Receiver Based on a Multivariate Generalized Gaussian Distribution

    KAUST Repository

    Ahmed, Qasim Zeeshan

    2015-04-01

    Multivariate generalized Gaussian density (MGGD) is used to approximate the multiple access interference (MAI) and additive white Gaussian noise in pulse-based ultrawide bandwidth (UWB) systems. The MGGD probability density function (pdf) is shown to be a better approximation of a UWB system as compared to the multivariate Gaussian, multivariate Laplacian and multivariate Gaussian-Laplacian mixture (GLM). The similarity between the simulated and the approximated pdf is measured with the help of the modified Kullback-Leibler distance (KLD). It is also shown that the MGGD has the smallest KLD compared with the Gaussian, Laplacian and GLM densities. A receiver based on the principles of minimum bit error rate is designed for the MGGD pdf. As the requirement is stringent, the adaptive implementation of the receiver is also carried out in this paper. The training sequence of the desired user is the only requirement when implementing the detector adaptively. © 2002-2012 IEEE.
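
    As a rough univariate illustration of the density family above (an assumption; the actual receiver uses the multivariate form), the generalized Gaussian pdf and a grid-based Kullback-Leibler distance between two of its members can be computed as follows:

        # Minimal sketch: the generalized Gaussian density, which contains the
        # Gaussian (beta = 2) and the Laplacian (beta = 1) as special cases, and a
        # grid-based KL distance between two fitted densities.
        import numpy as np
        from scipy.special import gamma

        def gen_gaussian_pdf(x, alpha, beta):
            return beta / (2 * alpha * gamma(1 / beta)) * np.exp(-(np.abs(x) / alpha) ** beta)

        x = np.linspace(-8, 8, 4001)
        dx = x[1] - x[0]
        p = gen_gaussian_pdf(x, alpha=1.0, beta=1.3)         # heavier-tailed interference model
        q = gen_gaussian_pdf(x, alpha=np.sqrt(2), beta=2.0)  # a pure Gaussian approximation

        kld = np.sum(p * np.log(p / q)) * dx                 # KL distance between the two fits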

  2. Gaussian cloning of coherent states with known phases

    International Nuclear Information System (INIS)

    Alexanian, Moorad

    2006-01-01

    The fidelity for cloning coherent states is improved over that provided by optimal Gaussian and non-Gaussian cloners for the subset of coherent states that are prepared with known phases. Gaussian quantum cloning duplicates all coherent states with an optimal fidelity of 2/3. Non-Gaussian cloners give optimal single-clone fidelity for a symmetric 1-to-2 cloner of 0.6826. Coherent states that have known phases can be cloned with a fidelity of 4/5. The latter is realized by a combination of two beam splitters and a four-wave mixer operated in the nonlinear regime, all of which are realized by interaction Hamiltonians that are quadratic in the photon operators. Therefore, the known Gaussian devices for cloning coherent states are extended when cloning coherent states with known phases by considering a nonbalanced beam splitter at the input side of the amplifier

  3. Training strategy for convolutional neural networks in pedestrian gender classification

    Science.gov (United States)

    Ng, Choon-Boon; Tay, Yong-Haur; Goi, Bok-Min

    2017-06-01

    In this work, we studied a strategy for training a convolutional neural network in pedestrian gender classification with a limited amount of labeled training data. Unsupervised learning by k-means clustering on pedestrian images was used to learn the filters to initialize the first layer of the network. As a form of pre-training, supervised learning for the related task of pedestrian classification was performed. Finally, the network was fine-tuned for gender classification. We found that this strategy improved the network's generalization ability in gender classification, achieving better test results than random weight initialization and being slightly more beneficial than merely initializing the first-layer filters by unsupervised learning. This shows that unsupervised learning followed by pre-training with pedestrian images is an effective strategy to learn useful features for pedestrian gender classification.
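
    The filter-initialization step described above can be sketched roughly as follows; this is an assumption for illustration (random stand-in images, hypothetical patch and cluster counts), not the authors' pipeline:

        # Minimal sketch: learning first-layer filters by k-means clustering of
        # image patches; the cluster centroids would then initialize a conv layer.
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.feature_extraction.image import extract_patches_2d

        rng = np.random.default_rng(0)
        images = rng.random((50, 128, 64))                 # stand-in for pedestrian crops

        patches = np.vstack([extract_patches_2d(img, (7, 7), max_patches=200, random_state=0)
                             for img in images]).reshape(-1, 49)
        patches = patches - patches.mean(axis=1, keepdims=True)   # simple per-patch centering

        kmeans = KMeans(n_clusters=32, n_init=10, random_state=0).fit(patches)
        filters = kmeans.cluster_centers_.reshape(32, 7, 7)  # initial 7x7 filters for layer 1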

  4. Accurate lithography simulation model based on convolutional neural networks

    Science.gov (United States)

    Watanabe, Yuki; Kimura, Taiki; Matsunawa, Tetsuaki; Nojima, Shigeki

    2017-07-01

    Lithography simulation is an essential technique for today's semiconductor manufacturing process. In order to calculate an entire chip in realistic time, compact resist model is commonly used. The model is established for faster calculation. To have accurate compact resist model, it is necessary to fix a complicated non-linear model function. However, it is difficult to decide an appropriate function manually because there are many options. This paper proposes a new compact resist model using CNN (Convolutional Neural Networks) which is one of deep learning techniques. CNN model makes it possible to determine an appropriate model function and achieve accurate simulation. Experimental results show CNN model can reduce CD prediction errors by 70% compared with the conventional model.

  5. An effective convolutional neural network model for Chinese sentiment analysis

    Science.gov (United States)

    Zhang, Yu; Chen, Mengdong; Liu, Lianzhong; Wang, Yadong

    2017-06-01

    Nowadays microblogging is becoming more and more popular. People are increasingly accustomed to expressing their opinions on Twitter, Facebook and Sina Weibo. Sentiment analysis of microblogs has received significant attention, both in academia and in industry. So far, Chinese microblog exploration still requires much further work. In recent years CNNs have also been used for NLP tasks, and have already achieved good results. However, these methods ignore the effective use of the large number of existing sentiment resources. For this purpose, we propose a Lexicon-based Sentiment Convolutional Neural Network (LSCNN) model focused on Weibo sentiment analysis, which combines two CNNs, trained individually on sentiment features and word embeddings, at the fully connected hidden layer. The experimental results show that our model outperforms a CNN model using only word embedding features on the microblog sentiment analysis task.

  6. High Order Tensor Formulation for Convolutional Sparse Coding

    KAUST Repository

    Bibi, Adel Aamer

    2017-12-25

    Convolutional sparse coding (CSC) has gained attention for its successful role as a reconstruction and a classification tool in the computer vision and machine learning community. Current CSC methods can only reconstruct single-feature 2D images independently. However, learning multidimensional dictionaries and sparse codes for the reconstruction of multi-dimensional data is very important, as it examines correlations among all the data jointly. This provides more capacity for the learned dictionaries to better reconstruct data. In this paper, we propose a generic and novel formulation for the CSC problem that can handle an arbitrary order tensor of data. Backed by experimental results, our proposed formulation can not only tackle applications that are not possible with standard CSC solvers, including colored video reconstruction (5D tensors), but it also performs favorably in reconstruction with far fewer parameters as compared to naive extensions of standard CSC to multiple features/channels.

  7. Classification of decays involving variable decay chains with convolutional architectures

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    We present a technique to perform classification of decays that exhibit decay chains involving a variable number of particles, which include a broad class of $B$ meson decays sensitive to new physics. The utility of such decays as a probe of the Standard Model is dependent upon accurate determination of the decay rate, which is challenged by the combinatorial background arising in high-multiplicity decay modes. In our model, each particle in the decay event is represented as a fixed-dimensional vector of feature attributes, forming an $n \times k$ representation of the event, where $n$ is the number of particles in the event and $k$ is the dimensionality of the feature vector. A convolutional architecture is used to capture dependencies between the embedded particle representations and perform the final classification. The proposed model outperforms standard machine learning approaches based on Monte Carlo studies across a range of variable final-state decays with the Belle II det...

  8. CONEDEP: COnvolutional Neural network based Earthquake DEtection and Phase Picking

    Science.gov (United States)

    Zhou, Y.; Huang, Y.; Yue, H.; Zhou, S.; An, S.; Yun, N.

    2017-12-01

    We developed an automatic local earthquake detection and phase picking algorithm based on a Fully Convolutional Neural network (FCN). The FCN algorithm detects and segments certain features (phases) in 3-component seismograms to realize efficient picking. We use the STA/LTA and template matching algorithms to construct the training set from seismograms recorded 1 month before and after the Wenchuan earthquake. Precise P and S phases are identified and labeled to construct the training set. Noise data are produced by combining background noise and artificial synthetic noise so that the noise set matches the signal set in scale. Training is performed on GPUs to achieve efficient convergence. Our algorithm has significantly improved performance in terms of detection rate and precision in comparison with the STA/LTA and template matching algorithms.
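
    A classical STA/LTA characteristic function of the kind used above to harvest training picks can be sketched as follows (an illustrative assumption with arbitrary window lengths and threshold, not the CONEDEP code):

        # Minimal sketch: short-term-average over long-term-average (STA/LTA) ratio
        # of signal energy, thresholded to flag candidate onsets.
        import numpy as np

        def sta_lta(trace, sta_len, lta_len):
            energy = trace.astype(float) ** 2
            sta = np.convolve(energy, np.ones(sta_len) / sta_len, mode="same")
            lta = np.convolve(energy, np.ones(lta_len) / lta_len, mode="same")
            return sta / (lta + 1e-12)

        rng = np.random.default_rng(0)
        trace = rng.standard_normal(6000)
        trace[3000:3200] += 5 * rng.standard_normal(200)   # a buried impulsive arrival
        ratio = sta_lta(trace, sta_len=50, lta_len=1000)
        triggers = np.flatnonzero(ratio > 4.0)             # candidate onset samples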

  9. Computational optical tomography using 3-D deep convolutional neural networks

    Science.gov (United States)

    Nguyen, Thanh; Bui, Vy; Nehmetallah, George

    2018-04-01

    Deep convolutional neural networks (DCNNs) offer a promising performance for many image processing areas, such as super-resolution, deconvolution, image classification, denoising, and segmentation, with outstanding results. Here, we develop for the first time, to our knowledge, a method to perform 3-D computational optical tomography using 3-D DCNN. A simulated 3-D phantom dataset was first constructed and converted to a dataset of phase objects imaged on a spatial light modulator. For each phase image in the dataset, the corresponding diffracted intensity image was experimentally recorded on a CCD. We then experimentally demonstrate the ability of the developed 3-D DCNN algorithm to solve the inverse problem by reconstructing the 3-D index of refraction distributions of test phantoms from the dataset from their corresponding diffraction patterns.

  10. Drug-Drug Interaction Extraction via Convolutional Neural Networks

    Directory of Open Access Journals (Sweden)

    Shengyu Liu

    2016-01-01

    Full Text Available Drug-drug interaction (DDI) extraction, as a typical relation extraction task in natural language processing (NLP), has always attracted great attention. Most state-of-the-art DDI extraction systems are based on support vector machines (SVM) with a large number of manually defined features. Recently, convolutional neural networks (CNN), a robust machine learning method which requires almost no manually defined features, have exhibited great potential for many NLP tasks. It is worth employing CNN for DDI extraction, which has never been investigated. We proposed a CNN-based method for DDI extraction. Experiments conducted on the 2013 DDIExtraction challenge corpus demonstrate that CNN is a good choice for DDI extraction. The CNN-based DDI extraction method achieves an F-score of 69.75%, which outperforms the existing best performing method by 2.75%.

  11. Truncation Depth Rule-of-Thumb for Convolutional Codes

    Science.gov (United States)

    Moision, Bruce

    2009-01-01

    In this innovation, it is shown that a commonly used rule of thumb (that the truncation depth of a convolutional code should be five times the memory length, m, of the code) is accurate only for rate 1/2 codes. In fact, the truncation depth should be 2.5 m/(1 - r), where r is the code rate. The accuracy of this new rule is demonstrated by tabulating the distance properties of a large set of known codes. This new rule was derived by bounding the losses due to truncation as a function of the code rate. With regard to particular codes, a good indicator of the required truncation depth is the path length at which all paths that diverge from a particular path have accumulated the minimum distance of the code. It is shown that the new rule of thumb provides an accurate prediction of this depth for codes of varying rates.
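
    The stated rule reduces to the familiar factor of five only at rate 1/2; a one-line check (illustrative only) makes this explicit:

        # Minimal sketch of the stated rule of thumb: truncation depth = 2.5 m / (1 - r).
        def truncation_depth(memory_length, code_rate):
            return 2.5 * memory_length / (1.0 - code_rate)

        # For a rate-1/2 code the rule reduces to the familiar "five times the memory":
        assert truncation_depth(6, 0.5) == 5 * 6
        print(truncation_depth(6, 0.75))   # a rate-3/4 code needs a depth of 60, not 30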

  12. Finding Neutrinos in LArTPCs using Convolutional Neural Networks

    Science.gov (United States)

    Wongjirad, Taritree

    2017-09-01

    Deep learning algorithms, which have emerged over the last decade, are opening up new ways to analyze data for many particle physics experiments. MicroBooNE, which is a neutrino experiment at Fermilab, has been exploring the use of such algorithms, in particular, convolutional neural networks (CNNs). CNNs are the state-of-the-art method for a large class of problems involving the analysis of images. This makes CNNs an attractive approach for MicroBooNE, whose detector, a liquid argon time projection chamber (LArTPC), produces high-resolution images of particle interactions. In this talk, I will discuss the ways CNNs can be applied to tasks like neutrino interaction detection and particle identification in MicroBooNE and LArTPCs.

  13. Radio frequency interference mitigation using deep convolutional neural networks

    Science.gov (United States)

    Akeret, J.; Chang, C.; Lucchi, A.; Refregier, A.

    2017-01-01

    We propose a novel approach for mitigating radio frequency interference (RFI) signals in radio data using the latest advances in deep learning. We employ a special type of Convolutional Neural Network, the U-Net, that enables the classification of clean signal and RFI signatures in 2D time-ordered data acquired from a radio telescope. We train and assess the performance of this network using the HIDE & SEEK radio data simulation and processing packages, as well as early Science Verification data acquired with the 7m single-dish telescope at the Bleien Observatory. We find that our U-Net implementation shows accuracy competitive with classical RFI mitigation algorithms such as SEEK's SUMTHRESHOLD implementation. We publish our U-Net software package on GitHub under the GPLv3 license.

  14. Forecasting Flare Activity Using Deep Convolutional Neural Networks

    Science.gov (United States)

    Hernandez, T.

    2017-12-01

    Current operational flare forecasting relies on human morphological analysis of active regions and the persistence of solar flare activity through time (i.e. that the Sun will continue to do what it is doing right now: flaring or remaining calm). In this talk we present the results of applying deep Convolutional Neural Networks (CNNs) to the problem of solar flare forecasting. CNNs operate by training a set of tunable spatial filters that, in combination with neural layer interconnectivity, allow CNNs to automatically identify significant spatial structures predictive for classification and regression problems. We will start by discussing the applicability and success rate of the approach, the advantages it has over non-automated forecasts, and how mining our trained neural network provides a fresh look into the mechanisms behind magnetic energy storage and release.

  15. Convolutional neural networks with balanced batches for facial expressions recognition

    Science.gov (United States)

    Battini Sönmez, Elena; Cangelosi, Angelo

    2017-03-01

    This paper considers the issue of fully automatic emotion classification on 2D faces. In spite of the great effort made in recent years, traditional machine learning approaches based on hand-crafted feature extraction followed by a classification stage have failed to develop a real-time automatic facial expression recognition system. The proposed architecture uses Convolutional Neural Networks (CNN), which are built as a collection of interconnected processing elements to simulate the brain of human beings. The basic idea of CNNs is to learn a hierarchical representation of the input data, which results in a better classification performance. In this work we present a block-based CNN algorithm, which uses noise as a data augmentation technique and builds batches with a balanced number of samples per class. The proposed architecture is a very simple yet powerful CNN, which can yield state-of-the-art accuracy on the very competitive benchmark of the Extended Cohn-Kanade database.
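
    The balanced-batch idea described above can be sketched as follows; this is an illustrative assumption (random stand-in data, hypothetical function name), not the paper's loader:

        # Minimal sketch: building mini-batches with an equal number of samples per
        # expression class, plus additive noise as a simple augmentation.
        import numpy as np

        def balanced_batch(images, labels, n_classes, per_class, noise_std=0.05, rng=None):
            if rng is None:
                rng = np.random.default_rng()
            idx = np.concatenate([rng.choice(np.flatnonzero(labels == c), per_class)
                                  for c in range(n_classes)])
            rng.shuffle(idx)
            batch = images[idx] + noise_std * rng.standard_normal(images[idx].shape)
            return batch, labels[idx]

        rng = np.random.default_rng(0)
        images = rng.random((1000, 48, 48))
        labels = rng.integers(0, 7, size=1000)             # seven expression classes
        x, y = balanced_batch(images, labels, n_classes=7, per_class=8, rng=rng)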

  16. Network Intrusion Detection through Stacking Dilated Convolutional Autoencoders

    Directory of Open Access Journals (Sweden)

    Yang Yu

    2017-01-01

    Full Text Available Network intrusion detection is one of the most important parts of cyber security, protecting computer systems against malicious attacks. With the emergence of numerous sophisticated and new attacks, however, network intrusion detection techniques are facing several significant challenges. The overall objective of this study is to learn useful feature representations automatically and efficiently from large amounts of unlabeled raw network traffic data by using deep learning approaches. We propose a novel network intrusion model by stacking dilated convolutional autoencoders and evaluate our method on two new intrusion detection datasets. Several experiments were carried out to check the effectiveness of our approach. The comparative experimental results demonstrate that the proposed model can achieve considerably high performance, meeting the demand for high accuracy and adaptability of network intrusion detection systems (NIDSs). Applying our model in large-scale, real-world network environments is quite promising.

  17. Fully Convolutional Network Based Shadow Extraction from GF-2 Imagery

    Science.gov (United States)

    Li, Z.; Cai, G.; Ren, H.

    2018-04-01

    There are many shadows in high spatial resolution satellite images, especially in urban areas. Although shadows on imagery severely affect the extraction of land-cover or land-use information, they provide auxiliary information for building extraction, which is hard to achieve with satisfactory accuracy through image classification alone. This paper focuses on building shadow extraction by designing a fully convolutional network and training it on samples collected from GF-2 satellite imagery over the urban region of Changchun city. By means of spatial filtering and calculation of adjacency relationships along the sunlight direction, small patches from vegetation or bridges are eliminated from the preliminarily extracted shadows. Finally, the building shadows are separated. The building shadow information extracted by the proposed method was compared with the results of traditional object-oriented supervised classification algorithms, showing that the deep learning approach can improve the accuracy to a large extent.

  18. Finger vein recognition based on convolutional neural network

    Directory of Open Access Journals (Sweden)

    Meng Gesi

    2017-01-01

    Full Text Available Biometric authentication technology has been widely used in this information age. As one of the most important authentication technologies, finger vein recognition attracts attention because of its high security, reliable accuracy and excellent performance. However, current finger vein recognition systems are difficult to apply widely because of their complicated image pre-processing and unrepresentative feature vectors. To solve this problem, a finger vein recognition method based on a convolutional neural network (CNN) is proposed in this paper. The image samples are directly input into the CNN model to extract feature vectors, so that authentication can be performed by comparing the Euclidean distance between these vectors. Finally, the deep learning framework Caffe is adopted to verify this method. The results show great improvements in both speed and accuracy compared with previous research, and the model is robust to illumination and rotation changes.

  19. Fully convolutional network with cluster for semantic segmentation

    Science.gov (United States)

    Ma, Xiao; Chen, Zhongbi; Zhang, Jianlin

    2018-04-01

    At present, image semantic segmentation is an active research topic for scientists in the fields of computer vision and artificial intelligence. In particular, the extensive research on deep neural networks in image recognition has greatly promoted the development of semantic segmentation. This paper puts forward a method that combines a fully convolutional network with the k-means clustering algorithm. The clustering algorithm, which uses the image's low-level features and initializes the cluster centers from a super-pixel segmentation, is proposed to correct points with low reliability, which are likely to be misclassified, using the points with high reliability in each clustering region. This method refines the segmentation of the target contour and improves the accuracy of the image segmentation.

  20. Real Time Eye Detector with Cascaded Convolutional Neural Networks

    Directory of Open Access Journals (Sweden)

    Bin Li

    2018-01-01

    Full Text Available An accurate and efficient eye detector is essential for many computer vision applications. In this paper, we present an efficient method to estimate the eye location from facial images. First, a group of candidate regions with regional extreme points is quickly proposed; then, a set of convolutional neural networks (CNNs) is adopted to determine the most likely eye region and classify the region as left or right eye; finally, the center of the eye is located with other CNNs. In the experiments using GI4E, BioID, and our datasets, our method attained a detection accuracy comparable to that of existing state-of-the-art methods; meanwhile, our method was faster and adaptable to variations of the images, including external light changes, facial occlusion, and changes in image modality.

  1. Deep learning with convolutional neural network in radiology.

    Science.gov (United States)

    Yasaka, Koichiro; Akai, Hiroyuki; Kunimatsu, Akira; Kiryu, Shigeru; Abe, Osamu

    2018-04-01

    Deep learning with a convolutional neural network (CNN) has recently been gaining attention for its high performance in image recognition. With this technique, images themselves can be used in the learning process, and feature extraction prior to learning is not required; important features are learned automatically. Thanks to developments in hardware and software, in addition to deep learning techniques themselves, the application of this technique to radiological images for predicting clinically useful information, such as the detection and evaluation of lesions, is beginning to be investigated. This article illustrates basic technical knowledge regarding deep learning with CNNs along the actual workflow (collecting data, implementing CNNs, and the training and testing phases). Pitfalls of this technique and how to manage them are also illustrated. We also describe some advanced topics of deep learning, the results of recent clinical studies, and future directions for the clinical application of deep learning techniques.
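
    A minimal sketch of that workflow (collecting data, implementing a CNN, training and testing) in PyTorch, with random tensors standing in for a curated radiological dataset; it illustrates the mechanics only, not a clinically meaningful model.

      import torch
      import torch.nn as nn
      from torch.utils.data import DataLoader, TensorDataset

      # "Collecting data": random tensors stand in for labeled image patches.
      images = torch.rand(200, 1, 64, 64)           # e.g. grayscale CT/MR patches
      labels = torch.randint(0, 2, (200,))          # e.g. lesion vs. no lesion
      train_ds = TensorDataset(images[:160], labels[:160])
      test_ds = TensorDataset(images[160:], labels[160:])

      # "Implementing the CNN": a very small convolutional classifier.
      model = nn.Sequential(
          nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
          nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
          nn.Flatten(), nn.Linear(16, 2),
      )
      opt = torch.optim.Adam(model.parameters(), lr=1e-3)
      loss_fn = nn.CrossEntropyLoss()

      # "Training phase"
      for epoch in range(3):
          for x, y in DataLoader(train_ds, batch_size=32, shuffle=True):
              opt.zero_grad()
              loss_fn(model(x), y).backward()
              opt.step()

      # "Testing phase"
      model.eval()
      with torch.no_grad():
          x, y = test_ds.tensors
          acc = (model(x).argmax(1) == y).float().mean()
      print(f"test accuracy: {acc:.2f}")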

  2. Static facial expression recognition with convolution neural networks

    Science.gov (United States)

    Zhang, Feng; Chen, Zhong; Ouyang, Chao; Zhang, Yifei

    2018-03-01

    Facial expression recognition is currently an active research topic in the fields of computer vision, pattern recognition and artificial intelligence. In this paper, we develop a convolutional neural network (CNN) for classifying human emotions from static facial expressions into one of seven facial emotion categories. We pre-train our CNN model on the combined FER2013 dataset (formed by its train, validation and test sets) and fine-tune it on the extended Cohn-Kanade database. In order to reduce overfitting, we use techniques including dropout and batch normalization in addition to data augmentation. According to the experimental results, our CNN model achieves excellent classification performance and robustness for facial expression recognition.
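
    A sketch of a small CNN of this kind, showing where batch normalization, dropout and data augmentation enter; the layer sizes and augmentation pipeline are assumptions, not the authors' exact configuration.

      import torch.nn as nn
      from torchvision import transforms

      augment = transforms.Compose([          # data augmentation for training
          transforms.RandomHorizontalFlip(),
          transforms.RandomRotation(10),
          transforms.ToTensor(),
      ])

      class ExpressionCNN(nn.Module):
          def __init__(self, n_classes=7):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv2d(1, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
                  nn.MaxPool2d(2),
                  nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
                  nn.MaxPool2d(2),
                  nn.Conv2d(64, 128, 3, padding=1), nn.BatchNorm2d(128), nn.ReLU(),
                  nn.AdaptiveAvgPool2d(1),
              )
              self.classifier = nn.Sequential(
                  nn.Flatten(),
                  nn.Dropout(0.5),            # reduces overfitting
                  nn.Linear(128, n_classes),
              )

          def forward(self, x):               # x: (N, 1, 48, 48) FER-style input
              return self.classifier(self.features(x))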

  3. DeepNAT: Deep convolutional neural network for segmenting neuroanatomy.

    Science.gov (United States)

    Wachinger, Christian; Reuter, Martin; Klein, Tassilo

    2018-04-15

    We introduce DeepNAT, a 3D deep convolutional neural network for the automatic segmentation of NeuroAnaTomy in T1-weighted magnetic resonance images. DeepNAT is an end-to-end learning-based approach to brain segmentation that jointly learns an abstract feature representation and a multi-class classification. We propose a 3D patch-based approach, where we predict not only the center voxel of the patch but also its neighbors, formulated as multi-task learning. To address the class imbalance problem, we arrange two networks hierarchically, where the first separates foreground from background and the second identifies 25 brain structures within the foreground. Since patches lack spatial context, we augment them with coordinates. To this end, we introduce a novel intrinsic parameterization of the brain volume, formed by eigenfunctions of the Laplace-Beltrami operator. As network architecture, we use three convolutional layers with pooling, batch normalization, and non-linearities, followed by fully connected layers with dropout. The final segmentation is inferred from the probabilistic output of the network with a 3D fully connected conditional random field, which ensures label agreement between nearby voxels. The roughly 2.7 million parameters in the network are learned with stochastic gradient descent. Our results show that DeepNAT compares favorably to state-of-the-art methods. Finally, this purely learning-based method may have high potential for adaptation to young, old, or diseased brains by fine-tuning the pre-trained network with a small training sample on the target application, where the availability of larger datasets with manual annotations may further boost overall segmentation accuracy in the future.

  4. FULLY CONVOLUTIONAL NETWORKS FOR GROUND CLASSIFICATION FROM LIDAR POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    A. Rizaldy

    2018-05-01

    Full Text Available Deep learning has been used massively for image classification in recent years. The use of deep learning for ground classification from LIDAR point clouds has also been studied recently. However, point clouds need to be converted into an image in order to use Convolutional Neural Networks (CNNs). In state-of-the-art techniques, this conversion is slow because each point is converted into a separate image. This approach leads to highly redundant computation during conversion and classification. The goal of this study is to design a more efficient data conversion and ground classification. This goal is achieved by first converting the whole point cloud into a single image. The classification is then performed by a Fully Convolutional Network (FCN), a modified version of the CNN designed for pixel-wise image classification. The proposed method is significantly faster than state-of-the-art techniques. On the ISPRS Filter Test dataset, it is 78 times faster for conversion and 16 times faster for classification. Our experimental analysis on the same dataset shows that the proposed method results in 5.22 % total error, 4.10 % type I error, and 15.07 % type II error. Compared to the previous CNN-based technique and the LAStools software, the proposed method reduces the total error and the type I error (while the type II error is slightly higher). The method was also tested on a very high point density LIDAR point cloud, resulting in 4.02 % total error, 2.15 % type I error and 6.14 % type II error.

  5. Fully Convolutional Networks for Ground Classification from LIDAR Point Clouds

    Science.gov (United States)

    Rizaldy, A.; Persello, C.; Gevaert, C. M.; Oude Elberink, S. J.

    2018-05-01

    Deep learning has been used massively for image classification in recent years. The use of deep learning for ground classification from LIDAR point clouds has also been studied recently. However, point clouds need to be converted into an image in order to use Convolutional Neural Networks (CNNs). In state-of-the-art techniques, this conversion is slow because each point is converted into a separate image. This approach leads to highly redundant computation during conversion and classification. The goal of this study is to design a more efficient data conversion and ground classification. This goal is achieved by first converting the whole point cloud into a single image. The classification is then performed by a Fully Convolutional Network (FCN), a modified version of the CNN designed for pixel-wise image classification. The proposed method is significantly faster than state-of-the-art techniques. On the ISPRS Filter Test dataset, it is 78 times faster for conversion and 16 times faster for classification. Our experimental analysis on the same dataset shows that the proposed method results in 5.22 % total error, 4.10 % type I error, and 15.07 % type II error. Compared to the previous CNN-based technique and the LAStools software, the proposed method reduces the total error and the type I error (while the type II error is slightly higher). The method was also tested on a very high point density LIDAR point cloud, resulting in 4.02 % total error, 2.15 % type I error and 6.14 % type II error.
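
    The main speed-up comes from rasterizing the whole point cloud into one image before classification. A minimal sketch of such a conversion step, with the cell size and the per-cell height statistics chosen only for illustration (not necessarily the features used in the paper):

      import numpy as np

      def rasterize_point_cloud(points, cell=1.0):
          """points: (N, 3) array of x, y, z. Returns an (H, W, 2) image whose
          bands are the minimum and maximum height per grid cell."""
          xy_min = points[:, :2].min(axis=0)
          cols, rows = np.floor((points[:, :2] - xy_min) / cell).astype(int).T
          h, w = rows.max() + 1, cols.max() + 1

          zmin = np.full((h, w), np.inf)
          zmax = np.full((h, w), -np.inf)
          np.minimum.at(zmin, (rows, cols), points[:, 2])
          np.maximum.at(zmax, (rows, cols), points[:, 2])

          empty = ~np.isfinite(zmin)
          zmin[empty], zmax[empty] = 0.0, 0.0   # empty cells get a neutral value
          return np.stack([zmin, zmax], axis=-1)  # feed this image to the FCN

      # Example: 10,000 random points over a 100 m x 100 m tile
      pts = np.random.rand(10000, 3) * [100, 100, 5]
      img = rasterize_point_cloud(pts, cell=1.0)
      print(img.shape)                            # ~(100, 100, 2)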

  6. Color encoding in biologically-inspired convolutional neural networks.

    Science.gov (United States)

    Rafegas, Ivet; Vanrell, Maria

    2018-05-11

    Convolutional neural networks have been proposed as suitable frameworks to model biological vision. Some of these artificial networks have shown representational properties that rival primate performance in object recognition. In this paper we explore how color is encoded in a trained artificial network. This is done by estimating a color selectivity index for each neuron, which describes the neuron's activity in response to color input stimuli. The index allows us to classify neurons as color selective or not, and as selective to a single color or to a pair of colors. We find that all five convolutional layers of the network have a large number of color selective neurons. Color opponency clearly emerges in the first layer, presenting four main axes (Black-White, Red-Cyan, Blue-Yellow and Magenta-Green), but this is reduced and rotated as we go deeper into the network. In layer 2 we find a denser hue sampling of color neurons, and opponency is reduced almost to one new main axis, the Bluish-Orangish, coinciding with the dataset bias. In layers 3, 4 and 5 color neurons are similar amongst themselves, presenting different types of neurons that detect specific colored objects (e.g., orangish faces), specific surrounds (e.g., blue sky) or specific colored or contrasted object-surround configurations (e.g., a blue blob in a green surround). Overall, our work concludes that color and shape representations are successively entangled through all the layers of the studied network, revealing certain parallels with the reported evidence in primate brains that can provide useful insight into intermediate hierarchical spatio-chromatic representations.
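
    A hypothetical way to compute a color selectivity index of this kind: probe a unit with uniform patches sampled around the hue circle and measure how peaked its response is. The definition below is illustrative only and is not necessarily the index used in the paper.

      import colorsys
      import numpy as np
      import torch
      import torch.nn as nn

      def color_selectivity_index(unit, n_hues=36, size=32):
          """unit: a callable mapping an image tensor (1, 3, H, W) to a scalar
          activation. Returns a value in [0, 1]; 0 = color-blind unit,
          1 = responds to a single hue only."""
          acts = []
          for k in range(n_hues):
              r, g, b = colorsys.hsv_to_rgb(k / n_hues, 1.0, 1.0)
              patch = torch.tensor([r, g, b]).view(1, 3, 1, 1).expand(1, 3, size, size)
              with torch.no_grad():
                  acts.append(float(unit(patch)))
          acts = np.maximum(np.array(acts), 0.0)   # rectified responses
          if acts.max() == 0:
              return 0.0
          return 1.0 - acts.mean() / acts.max()

      # Example: mean activation of the first channel of an untrained conv layer
      conv = nn.Conv2d(3, 8, 3)
      print(color_selectivity_index(lambda x: conv(x)[0, 0].mean()))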

  7. Fully convolutional neural networks improve abdominal organ segmentation

    Science.gov (United States)

    Bobo, Meg F.; Bao, Shunxing; Huo, Yuankai; Yao, Yuang; Virostko, Jack; Plassard, Andrew J.; Lyu, Ilwoo; Assad, Albert; Abramson, Richard G.; Hilmes, Melissa A.; Landman, Bennett A.

    2018-03-01

    Abdominal image segmentation is a challenging yet important clinical problem. Variations in body size, position, and relative organ positions greatly complicate the segmentation process. Historically, multi-atlas methods have achieved leading results across imaging modalities and anatomical targets. However, deep learning is rapidly overtaking classical approaches for image segmentation. Recently, Zhou et al. showed that fully convolutional networks produce excellent results in abdominal organ segmentation of computed tomography (CT) scans. Yet, deep learning approaches had not been applied to whole-abdomen magnetic resonance imaging (MRI) segmentation. Herein, we evaluate the applicability of an existing fully convolutional neural network (FCNN) designed for CT imaging to segment abdominal organs on T2-weighted (T2w) MRIs with two examples. In the primary example, we compare a classical multi-atlas approach with the FCNN on forty-five T2w MRIs acquired from splenomegaly patients with five organs labeled (liver, spleen, left kidney, right kidney, and stomach). Thirty-six images were used for training and nine for testing. The FCNN resulted in a Dice similarity coefficient (DSC) of 0.930 for spleens, 0.730 for left kidneys, 0.780 for right kidneys, 0.913 for livers, and 0.556 for stomachs. The performance measures for livers, spleens, right kidneys, and stomachs were significantly better than multi-atlas (p < 0.05, Wilcoxon rank-sum test). In a secondary example, we compare the multi-atlas approach with the FCNN on 138 distinct T2w MRIs with manually labeled pancreases (one label). On the pancreas dataset, the FCNN resulted in a median DSC of 0.691 for pancreases versus 0.287 for multi-atlas. The results are highly promising given the relatively limited training data and the absence of specific training of the FCNN model, and they illustrate the potential of deep learning approaches to transcend imaging modalities.
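
    For reference, the Dice similarity coefficient used to report these results, DSC = 2|A ∩ B| / (|A| + |B|), computed for binary masks:

      import numpy as np

      def dice(pred, truth, eps=1e-8):
          """pred, truth: boolean arrays of the same shape (one organ label)."""
          pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
          inter = np.logical_and(pred, truth).sum()
          return 2.0 * inter / (pred.sum() + truth.sum() + eps)

      # Toy example: two overlapping 2D masks
      a = np.zeros((10, 10), bool); a[2:7, 2:7] = True
      b = np.zeros((10, 10), bool); b[4:9, 4:9] = True
      print(round(dice(a, b), 3))   # 0.36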

  8. Limitations of a convolution method for modeling geometric uncertainties in radiation therapy. I. The effect of shift invariance

    International Nuclear Information System (INIS)

    Craig, Tim; Battista, Jerry; Van Dyk, Jake

    2003-01-01

    Convolution methods have been used to model the effect of geometric uncertainties on dose delivery in radiation therapy. Convolution assumes shift invariance of the dose distribution. Internal inhomogeneities and surface curvature lead to violations of this assumption. The magnitude of the error resulting from violation of shift invariance is not well documented. This issue is addressed by comparing dose distributions calculated using the Convolution method with dose distributions obtained by Direct Simulation. A comparison of conventional Static dose distributions was also made with Direct Simulation. This analysis was performed for phantom geometries and several clinical tumor sites. A modification to the Convolution method to correct for some of the inherent errors is proposed and tested using example phantoms and patients. We refer to this modified method as the Corrected Convolution. The average maximum dose error in the calculated volume (averaged over different beam arrangements in the various phantom examples) was 21% with the Static dose calculation, 9% with Convolution, and reduced to 5% with the Corrected Convolution. The average maximum dose error in the calculated volume (averaged over four clinical examples) was 9% for the Static method, 13% for Convolution, and 3% for Corrected Convolution. While Convolution can provide a superior estimate of the dose delivered when geometric uncertainties are present, the violation of shift invariance can result in substantial errors near the surface of the patient. The proposed Corrected Convolution modification reduces errors near the surface to 3% or less
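
    A minimal sketch of the basic Convolution method being analysed: the static dose distribution is blurred with a Gaussian kernel whose width matches the standard deviation of the geometric uncertainty, which presumes exactly the shift invariance the paper examines. The parameter values below are illustrative.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def expected_dose(static_dose, sigma_mm, voxel_mm):
          """static_dose: 3D array; sigma_mm: per-axis SD of setup error (mm);
          voxel_mm: per-axis voxel size (mm). Shift invariance is assumed,
          which breaks down near surfaces and inhomogeneities."""
          sigma_vox = np.asarray(sigma_mm, float) / np.asarray(voxel_mm, float)
          return gaussian_filter(static_dose, sigma=sigma_vox, mode="nearest")

      # Toy example: a 3D box field blurred by a 3 mm isotropic setup uncertainty
      dose = np.zeros((40, 40, 40))
      dose[10:30, 10:30, 10:30] = 1.0
      blurred = expected_dose(dose, sigma_mm=(3, 3, 3), voxel_mm=(2, 2, 2))
      print(blurred.max())   # field edges are blurred; the interior stays near 1.0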

  9. Non-Gaussian lineshapes and dynamics of time-resolved linear and nonlinear (correlation) spectra.

    Science.gov (United States)

    Dinpajooh, Mohammadhasan; Matyushov, Dmitry V

    2014-07-17

    Signatures of nonlinear and non-Gaussian dynamics in time-resolved linear and nonlinear (correlation) 2D spectra are analyzed in a model considering a linear plus quadratic dependence of the spectroscopic transition frequency on a Gaussian nuclear coordinate of the thermal bath (quadratic coupling). This new model is contrasted to the commonly assumed linear dependence of the transition frequency on the medium nuclear coordinates (linear coupling). The linear coupling model predicts equality between the Stokes shift and equilibrium correlation functions of the transition frequency and time-independent spectral width. Both predictions are often violated, and we are asking here the question of whether a nonlinear solvent response and/or non-Gaussian dynamics are required to explain these observations. We find that correlation functions of spectroscopic observables calculated in the quadratic coupling model depend on the chromophore's electronic state and the spectral width gains time dependence, all in violation of the predictions of the linear coupling models. Lineshape functions of 2D spectra are derived assuming Ornstein-Uhlenbeck dynamics of the bath nuclear modes. The model predicts asymmetry of 2D correlation plots and bending of the center line. The latter is often used to extract two-point correlation functions from 2D spectra. The dynamics of the transition frequency are non-Gaussian. However, the effect of non-Gaussian dynamics is limited to the third-order (skewness) time correlation function, without affecting the time correlation functions of higher order. The theory is tested against molecular dynamics simulations of a model polar-polarizable chromophore dissolved in a force field water.
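
    A small numerical illustration of the quadratic-coupling idea, using arbitrary demo parameters: a Gaussian Ornstein-Uhlenbeck bath coordinate mapped through a linear-plus-quadratic coupling yields a transition frequency with a clearly non-zero skewness.

      import numpy as np

      rng = np.random.default_rng(0)
      dt, tau, n = 0.01, 1.0, 200_000       # time step, correlation time, steps

      # Euler-Maruyama simulation of an OU process with unit stationary variance.
      q = np.empty(n)
      q[0] = 0.0
      for i in range(1, n):
          q[i] = q[i-1] - (q[i-1] / tau) * dt + np.sqrt(2 * dt / tau) * rng.standard_normal()

      c1, c2 = 1.0, 0.3                     # linear and quadratic couplings
      omega = c1 * q + c2 * q**2            # transition-frequency fluctuation

      def skew(x):
          x = x - x.mean()
          return (x**3).mean() / (x**2).mean()**1.5

      print(f"skewness of q:     {skew(q):+.3f}")      # ~0 (Gaussian coordinate)
      print(f"skewness of omega: {skew(omega):+.3f}")  # clearly nonzero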

  10. A frequency bin-wise nonlinear masking algorithm in convolutive mixtures for speech segregation.

    Science.gov (United States)

    Chi, Tai-Shih; Huang, Ching-Wen; Chou, Wen-Sheng

    2012-05-01

    A frequency bin-wise nonlinear masking algorithm is proposed in the spectrogram domain for speech segregation in convolutive mixtures. The contributive weight from each speech source to a time-frequency unit of the mixture spectrogram is estimated by a nonlinear function based on location cues. For each sound source, a non-binary mask is formed from the estimated weights and is multiplied to the mixture spectrogram to extract the sound. Head-related transfer functions (HRTFs) are used to simulate convolutive sound mixtures perceived by listeners. Simulation results show our proposed method outperforms convolutive independent component analysis and degenerate unmixing and estimation technique methods in almost all test conditions.
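
    A minimal sketch of the masking step, with placeholder weights standing in for the location-cue-based estimates: per-source weights are normalized into non-binary masks and multiplied with the mixture spectrogram.

      import numpy as np

      def apply_soft_masks(mixture_spec, weights):
          """mixture_spec: (F, T) complex STFT of the mixture.
          weights: (S, F, T) nonnegative contributions of S sources; they are
          normalized per bin so the masks sum to one."""
          masks = weights / (weights.sum(axis=0, keepdims=True) + 1e-12)
          return masks * mixture_spec          # (S, F, T) separated spectrograms

      # Toy example with two sources and random placeholder weights
      F, T = 257, 100
      mix = np.random.randn(F, T) + 1j * np.random.randn(F, T)
      w = np.random.rand(2, F, T)
      sources = apply_soft_masks(mix, w)
      print(sources.shape)                     # (2, 257, 100)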

  11. Development and application of deep convolutional neural network in target detection

    Science.gov (United States)

    Jiang, Xiaowei; Wang, Chunping; Fu, Qiang

    2018-04-01

    With the development of big data and algorithms, deep convolutional neural networks with more hidden layers have more powerful feature learning and feature expression abilities than traditional machine learning methods, enabling artificial intelligence to surpass human-level performance in many fields. This paper first reviews the development and application of deep convolutional neural networks in the field of object detection in recent years, then briefly summarizes and reflects on some existing problems in current research, and finally discusses prospects for the future development of deep convolutional neural networks.

  12. Primordial non-Gaussianity from LAMOST surveys

    International Nuclear Information System (INIS)

    Gong Yan; Wang Xin; Chen Xuelei; Zheng Zheng

    2010-01-01

    The primordial non-Gaussianity (PNG) in the matter density perturbation is a very powerful probe of the physics of the very early Universe. The local PNG can induce a distinct scale-dependent bias in the large-scale structure distribution of galaxies and quasars, which can be used to constrain it. We study the detection limits of PNG from the surveys of the LAMOST telescope. The cases of the main galaxy survey, the luminous red galaxy (LRG) survey, and the quasar survey with different magnitude limits are considered. We find that the attainable limits on the local non-Gaussianity parameter f_NL from the Main1 sample (i.e. the main galaxy survey, which is one magnitude deeper than the SDSS main galaxy survey), the LRG survey, and the quasar survey correspond to |f_NL| bounds between 50 and 103, depending on the sample and the magnitude limit of the survey. With Planck-like priors on cosmological parameters, the quasar survey could reach |f_NL| < 43 (2σ). We also discuss the possibility of further tightening the constraint by using the relative bias method proposed by Seljak.

  13. Bayesian nonparametric adaptive control using Gaussian processes.

    Science.gov (United States)

    Chowdhary, Girish; Kingravi, Hassan A; How, Jonathan P; Vela, Patricio A

    2015-03-01

    Most current model reference adaptive control (MRAC) methods rely on parametric adaptive elements, in which the number of parameters of the adaptive element are fixed a priori, often through expert judgment. An example of such an adaptive element is radial basis function networks (RBFNs), with RBF centers preallocated based on the expected operating domain. If the system operates outside of the expected operating domain, this adaptive element can become noneffective in capturing and canceling the uncertainty, thus rendering the adaptive controller only semiglobal in nature. This paper investigates a Gaussian process-based Bayesian MRAC architecture (GP-MRAC), which leverages the power and flexibility of GP Bayesian nonparametric models of uncertainty. The GP-MRAC does not require the centers to be preallocated, can inherently handle measurement noise, and enables MRAC to handle a broader set of uncertainties, including those that are defined as distributions over functions. We use stochastic stability arguments to show that GP-MRAC guarantees good closed-loop performance with no prior domain knowledge of the uncertainty. Online implementable GP inference methods are compared in numerical simulations against RBFN-MRAC with preallocated centers and are shown to provide better tracking and improved long-term learning.
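
    A minimal sketch of the Gaussian-process ingredient, assuming an RBF kernel and illustrative hyperparameters: the posterior mean and standard deviation of a scalar uncertainty learned from noisy observations. The surrounding MRAC machinery (reference model, adaptive law, online budgeted updates) is omitted.

      import numpy as np

      def rbf(a, b, ell=0.5, sf=1.0):
          d2 = (a[:, None] - b[None, :]) ** 2
          return sf**2 * np.exp(-0.5 * d2 / ell**2)

      def gp_posterior(x_train, y_train, x_query, noise=1e-2):
          K = rbf(x_train, x_train) + noise**2 * np.eye(len(x_train))
          Ks = rbf(x_query, x_train)
          Kss = rbf(x_query, x_query)
          alpha = np.linalg.solve(K, y_train)
          mean = Ks @ alpha
          cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
          return mean, np.sqrt(np.clip(np.diag(cov), 0, None))

      # Example: learn an unknown uncertainty Delta(x) = sin(3x) from noisy data
      rng = np.random.default_rng(1)
      x = rng.uniform(-1, 1, 30)
      y = np.sin(3 * x) + 0.05 * rng.standard_normal(30)
      xq = np.linspace(-1, 1, 5)
      mu, sd = gp_posterior(x, y, xq)
      print(np.round(mu, 2), np.round(sd, 3))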

  14. Boltzmann-Gaussian transition under specific noise effect

    International Nuclear Information System (INIS)

    Anh, Chu Thuy; Lan, Nguyen Tri; Viet, Nguyen Ai

    2014-01-01

    It is observed that a short-time data set of market returns presents an almost symmetric Boltzmann distribution, whereas a long-time data set tends to show a Gaussian distribution. To understand this universal phenomenon, many hypotheses spanning a wide range of interdisciplinary research have been proposed. In the current work, the effect of background fluctuations on the symmetric Boltzmann distribution is investigated. A numerical calculation is performed to show that Gaussian noise may cause the transition from the initial Boltzmann distribution to a Gaussian one. The obtained results reflect the non-dynamic nature of the transition under consideration.
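
    A small numerical illustration of this transition, using arbitrary demo values: adding Gaussian noise of increasing amplitude to Laplace-distributed (symmetric Boltzmann-like) returns drives the excess kurtosis from 3 toward 0, the Gaussian value.

      import numpy as np
      from scipy.stats import kurtosis

      rng = np.random.default_rng(0)
      returns = rng.laplace(scale=1.0, size=100_000)   # symmetric Boltzmann-like

      for noise_sd in (0.0, 1.0, 3.0):
          noisy = returns + noise_sd * rng.standard_normal(returns.size)
          # Excess kurtosis: 3 for a Laplace distribution, 0 for a Gaussian.
          print(f"noise sd = {noise_sd}: excess kurtosis = {kurtosis(noisy):.2f}")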

  15. Legendre Duality of Spherical and Gaussian Spin Glasses

    International Nuclear Information System (INIS)

    Genovese, Giuseppe; Tantari, Daniele

    2015-01-01

    The classical result of concentration of the Gaussian measure on the sphere in the limit of large dimension induces a natural duality between Gaussian and spherical models of spin glass. We analyse the Legendre variational structure linking the free energies of these two systems, in the spirit of the equivalence of ensembles of statistical mechanics. Our analysis, combined with the previous work (Barra et al., J. Phys. A: Math. Theor. 47, 155002, 2014), shows that such models are replica symmetric. Lastly, we briefly discuss an application of our result to the study of the Gaussian Hopfield model

  16. Controllable gaussian-qubit interface for extremal quantum state engineering.

    Science.gov (United States)

    Adesso, Gerardo; Campbell, Steve; Illuminati, Fabrizio; Paternostro, Mauro

    2010-06-18

    We study state engineering through bilinear interactions between two remote qubits and two-mode gaussian light fields. The attainable two-qubit states span the entire physically allowed region in the entanglement-versus-global-purity plane. Two-mode gaussian states with maximal entanglement at fixed global and marginal entropies produce maximally entangled two-qubit states in the corresponding entropic diagram. We show that a small set of parameters characterizing extremally entangled two-mode gaussian states is sufficient to control the engineering of extremally entangled two-qubit states, which can be realized in realistic matter-light scenarios.

  17. Legendre Duality of Spherical and Gaussian Spin Glasses

    Energy Technology Data Exchange (ETDEWEB)

    Genovese, Giuseppe, E-mail: giuseppe.genovese@math.uzh.ch [Universität Zürich, Institut für Mathematik (Switzerland); Tantari, Daniele, E-mail: daniele.tantari@sns.it [Scuola Normale Superiore di Pisa, Centro Ennio de Giorgi (Italy)

    2015-12-15

    The classical result of concentration of the Gaussian measure on the sphere in the limit of large dimension induces a natural duality between Gaussian and spherical models of spin glass. We analyse the Legendre variational structure linking the free energies of these two systems, in the spirit of the equivalence of ensembles of statistical mechanics. Our analysis, combined with the previous work (Barra et al., J. Phys. A: Math. Theor. 47, 155002, 2014), shows that such models are replica symmetric. Lastly, we briefly discuss an application of our result to the study of the Gaussian Hopfield model.

  18. Methods to characterize non-Gaussian noise in TAMA

    International Nuclear Information System (INIS)

    Ando, Masaki; Arai, K; Takahashi, R; Tatsumi, D; Beyersdorf, P; Kawamura, S; Miyoki, S; Mio, N; Moriwaki, S; Numata, K; Kanda, N; Aso, Y; Fujimoto, M-K; Tsubono, K; Kuroda, K

    2003-01-01

    We present a data characterization method for the main output signal of an interferometric gravitational-wave detector, targeting in particular the effective detection of burst gravitational waves from stellar core collapse. The time scale of non-Gaussian events is evaluated in this method, and events with a longer time scale than real signals are rejected as non-Gaussian noise. As a result of data analysis using 1000 h of real data from the interferometric gravitational-wave detector TAMA300, the false-alarm rate was improved by a factor of 10^3 with this non-Gaussian noise evaluation and rejection method

  19. Coincidence Imaging and interference with coherent Gaussian beams

    Institute of Scientific and Technical Information of China (English)

    CAI Yang-jian; ZHU Shi-yao

    2006-01-01

    We present a theoretical study of coincidence imaging and interference with coherent Gaussian beams. The equations for the coincidence image formation and interference fringes are derived, from which it is clear that the imaging is due to the corresponding focusing in the two paths. The quality and visibility of the images and fringes can be high simultaneously. The nature of coincidence imaging and interference with quantum entangled photon pairs differs from that with coherent Gaussian beams: the coincidence image with coherent Gaussian beams is due to intensity-intensity correspondence, a classical nature, while that with entangled photon pairs is due to amplitude correlation, a quantum nature.

  20. Quantum Teamwork for Unconditional Multiparty Communication with Gaussian States

    Science.gov (United States)

    Zhang, Jing; Adesso, Gerardo; Xie, Changde; Peng, Kunchi

    2009-08-01

    We demonstrate the capability of continuous variable Gaussian states to communicate multipartite quantum information. A quantum teamwork protocol is presented according to which an arbitrary possibly entangled multimode state can be faithfully teleported between two teams each comprising many cooperative users. We prove that N-mode Gaussian weighted graph states exist for arbitrary N that enable unconditional quantum teamwork implementations for any arrangement of the teams. These perfect continuous variable maximally multipartite entangled resources are typical among pure Gaussian states and are unaffected by the entanglement frustration occurring in multiqubit states.