WorldWideScience

Sample records for kaiser-bessel convolution kernel

  1. Kaiser-Bessel Basis for the Particle-Mesh Interpolation

    CERN Document Server

    Gao, Xingyu; Wang, Han

    2016-01-01

    In this work, we introduce the Kaiser-Bessel interpolation basis for the particle-mesh interpolation in the fast Ewald method. A reliable a priori error estimate is developed to measure the accuracy of the force computation, and is shown to be effective in optimizing the shape parameter of the Kaiser-Bessel basis in terms of accuracy. By comparing the optimized Kaiser-Bessel basis with the traditional B-spline basis, we demonstrate that the former is more accurate than the latter in part of the working parameter space, say, a relatively small real-space cutoff, a relatively small reciprocal-space mesh, and a relatively large basis truncation. In some cases, the Kaiser-Bessel basis is found to be more than one order of magnitude more accurate. Therefore, the Kaiser-Bessel basis is worth trying in simulations where the computational accuracy of the electrostatic interaction is critical.
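
    For concreteness, the window behind such a basis can be written down directly. The following is a minimal NumPy sketch of the Kaiser-Bessel function (the function name, support width W, and shape parameter beta are illustrative choices, not the paper's optimized values):

      import numpy as np
      from scipy.special import i0  # modified Bessel function of the first kind, order 0

      def kaiser_bessel(x, W=4.0, beta=6.0):
          """Kaiser-Bessel window: I0(beta*sqrt(1-(2x/W)^2))/I0(beta) on |x| <= W/2."""
          x = np.asarray(x, dtype=float)
          inside = np.abs(x) <= W / 2
          arg = beta * np.sqrt(np.clip(1 - (2 * x / W) ** 2, 0.0, 1.0))
          return np.where(inside, i0(arg) / i0(beta), 0.0)

    Tuning beta trades main-lobe width against side-lobe level, which is the kind of choice the paper's a priori error estimate is designed to optimize.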

  2. Kaiser-Bessel basis for particle-mesh interpolation

    Science.gov (United States)

    Gao, Xingyu; Fang, Jun; Wang, Han

    2017-06-01

    In this work, we introduce the Kaiser-Bessel interpolation basis for the particle-mesh interpolation in the fast Ewald method. A reliable a priori error estimate is developed to measure the accuracy of the force computation in correlated charge systems, and is shown to be effective in optimizing the shape parameter of the Kaiser-Bessel basis in terms of accuracy. By comparing the optimized Kaiser-Bessel basis with the traditional B-spline basis, we demonstrate that the former is more accurate than the latter in part of the working parameter space, say, a relatively small real-space cutoff, a relatively small reciprocal space mesh, and a relatively large truncation of basis. In some cases, the Kaiser-Bessel basis is found to be more than one order of magnitude more accurate.

  3. Convolution kernels for multi-wavelength imaging

    National Research Council Canada - National Science Library

    Boucaud, Alexandre; Bocchio, Marco; Abergel, Alain; Orieux, François; Dole, Hervé; Hadj-Youcef, Mohamed Amine

    2016-01-01

    .... Given the knowledge of the PSF in each band, a straightforward way of processing images is to homogenise them all to a target PSF using convolution kernels, so that they appear as if they had been...

  4. Convolution kernels for multi-wavelength imaging

    Science.gov (United States)

    Boucaud, A.; Bocchio, M.; Abergel, A.; Orieux, F.; Dole, H.; Hadj-Youcef, M. A.

    2016-12-01

    Astrophysical images issued from different instruments and/or spectral bands often need to be processed together, either for fitting or comparison purposes. However, each image is affected by an instrumental response, also known as the point-spread function (PSF), that depends on the characteristics of the instrument as well as the wavelength and the observing strategy. Given the knowledge of the PSF in each band, a straightforward way of processing images is to homogenise them all to a target PSF using convolution kernels, so that they appear as if they had been acquired by the same instrument. We propose an algorithm that generates such PSF-matching kernels, based on Wiener filtering with a tunable regularisation parameter. This method ensures that all anisotropic features in the PSFs are taken into account. We compare our method to existing procedures using measured Herschel/PACS and SPIRE PSFs and simulated JWST/MIRI PSFs. Significant gains up to two orders of magnitude are obtained with respect to the use of kernels computed assuming Gaussian or circularised PSFs. Software to compute these kernels is available at https://github.com/aboucaud/pypher
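
    A minimal sketch of this kind of PSF homogenisation, assuming a source PSF a and a broader target PSF b sampled on the same grid (the function name and regularisation weight mu are illustrative; see the pypher repository above for the actual implementation):

      import numpy as np

      def psf_matching_kernel(psf_source, psf_target, mu=1e-4):
          # Wiener-regularised deconvolution in Fourier space: find k with a * k ~ b.
          A = np.fft.fft2(np.fft.ifftshift(psf_source))
          B = np.fft.fft2(np.fft.ifftshift(psf_target))
          K = np.conj(A) * B / (np.abs(A) ** 2 + mu)   # mu damps ill-conditioned frequencies
          k = np.real(np.fft.fftshift(np.fft.ifft2(K)))
          return k / k.sum()                           # normalise to preserve flux

    Larger mu suppresses noise amplification at frequencies where the source PSF has little power, at the cost of a less exact match.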

  5. Image quality of mixed convolution kernel in thoracic computed tomography.

    Science.gov (United States)

    Neubauer, Jakob; Spira, Eva Maria; Strube, Juliane; Langer, Mathias; Voss, Christian; Kotter, Elmar

    2016-11-01

    The mixed convolution kernel alters its properties locally according to the depicted organ structure, especially for the lung. Therefore, we compared the image quality of the mixed convolution kernel to standard soft and hard kernel reconstructions for different organ structures in thoracic computed tomography (CT) images. Our Ethics Committee approved this prospective study. In total, 31 patients who underwent contrast-enhanced thoracic CT studies were included after informed consent. Axial reconstructions were performed with hard, soft, and mixed convolution kernels. Three independent and blinded observers rated the image quality according to the European Guidelines for Quality Criteria of Thoracic CT for 13 organ structures. The observers rated the depiction of the structures in all reconstructions on a 5-point Likert scale. Statistical analysis was performed with the Friedman test and post hoc analysis with the Wilcoxon rank-sum test. Compared to the soft convolution kernel, the mixed convolution kernel was rated with a higher image quality for lung parenchyma, segmental bronchi, and the border between the pleura and the thoracic wall (P < 0.05). Compared to the hard convolution kernel, the mixed convolution kernel was rated with a higher image quality for the aorta, anterior mediastinal structures, paratracheal soft tissue, hilar lymph nodes, esophagus, pleuromediastinal border, large and medium-sized pulmonary vessels, and abdomen (P < 0.05). Nevertheless, the mixed convolution kernel cannot fully substitute the standard CT reconstructions. Hard and soft convolution kernel reconstructions still seem to be mandatory for thoracic CT.

  6. Convolution kernels for multi-wavelength imaging

    CERN Document Server

    Boucaud, Alexandre; Abergel, Alain; Orieux, François; Dole, Hervé; Hadj-Youcef, Mohamed Amine

    2016-01-01

    Astrophysical images issued from different instruments and/or spectral bands often need to be processed together, either for fitting or comparison purposes. However, each image is affected by an instrumental response, also known as the PSF, that depends on the characteristics of the instrument as well as the wavelength and the observing strategy. Given the knowledge of the PSF in each band, a straightforward way of processing images is to homogenise them all to a target PSF using convolution kernels, so that they appear as if they had been acquired by the same instrument. We propose an algorithm that generates such PSF-matching kernels, based on Wiener filtering with a tunable regularisation parameter. This method ensures that all anisotropic features in the PSFs are taken into account. We compare our method to existing procedures using measured Herschel/PACS and SPIRE PSFs and simulated JWST/MIRI PSFs. Significant gains up to two orders of magnitude are obtained with respect to the use of kernels computed assumin...

  7. Convolution kernel design and efficient algorithm for sampling density correction.

    Science.gov (United States)

    Johnson, Kenneth O; Pipe, James G

    2009-02-01

    Sampling density compensation is an important step in non-Cartesian image reconstruction. One of the common techniques to determine weights that compensate for differences in sampling density involves a convolution. A new convolution kernel is designed for sampling density compensation, attempting to minimize the error in a fully reconstructed image. The resulting weights obtained using this new kernel are compared with various previous methods, showing a reduction in reconstruction error. A computationally efficient algorithm is also presented that facilitates the calculation of the convolution of finite kernels. Both the kernel and the algorithm are extended to 3D. Copyright 2009 Wiley-Liss, Inc.
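
    A hedged one-dimensional illustration of the convolution-based weighting idea (a simplified Pipe-Menon-style iteration on a nearest-grid-point model; the names and parameters are mine, not the paper's kernel design):

      import numpy as np

      def density_weights(positions, kernel, grid_size=256, n_iter=20):
          """positions: sample locations in grid units; kernel: 1-D convolution kernel."""
          w = np.ones(len(positions))
          idx = np.clip(np.round(positions).astype(int), 0, grid_size - 1)
          for _ in range(n_iter):
              grid = np.zeros(grid_size)
              np.add.at(grid, idx, w)                       # grid the current weights
              density = np.convolve(grid, kernel, mode="same")
              w /= np.maximum(density[idx], 1e-12)          # flatten the apparent density
          return w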

  8. Relative n-widths of periodic convolution classes with NCVD-kernel and B-kernel

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    In this paper, we consider the relative n-widths of two kinds of periodic convolution classes, K_p(K) and B_p(G), whose convolution kernels are an NCVD-kernel K and a B-kernel G. The asymptotic estimates of K_n(K_p(K), K_p(K))_q and K_n(B_p(G), B_p(G))_q are obtained for p = 1 and ∞, 1 ≤ q ≤ ∞.

  9. Selection of convolution kernel in non-uniform fast Fourier transform for Fourier domain optical coherence tomography.

    Science.gov (United States)

    Chan, Kenny K H; Tang, Shuo

    2011-12-19

    Gridding-based non-uniform fast Fourier transform (NUFFT) has recently been shown to be an efficient method of processing non-linearly sampled data from Fourier-domain optical coherence tomography (FD-OCT). This method requires selecting design parameters, such as kernel function type, oversampling ratio, and kernel width, to balance computational complexity and accuracy. The Kaiser-Bessel (KB) and Gaussian kernels have been used independently in the NUFFT algorithm for FD-OCT. This paper compares the reconstruction error and speed for the optimization of these design parameters and justifies particular kernel choices for FD-OCT applications. It is found that for on-the-fly computation of the kernel function, the simpler Gaussian function offers a better accuracy-speed tradeoff. The KB kernel, however, is a better choice in the pre-computed kernel mode of NUFFT, in which the processing speed is no longer dependent on the kernel function type. Finally, the algorithm is used to reconstruct in vivo images of a human finger at a camera-limited 50k A-lines/s.
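
    The trade-off the paper describes can be made concrete in a few lines: a Gaussian is cheap to evaluate per sample (on-the-fly mode), while a Kaiser-Bessel kernel can be tabulated once and then read back by interpolation (pre-computed mode), making the per-sample cost independent of the kernel type. A sketch with illustrative parameter values:

      import numpy as np
      from scipy.special import i0

      W, beta = 4.0, 6.0

      def gaussian_kernel(x, sigma=0.8):        # on-the-fly evaluation: one exp per sample
          return np.exp(-0.5 * (x / sigma) ** 2)

      # Pre-computed mode: tabulate the Kaiser-Bessel kernel once...
      xs = np.linspace(0.0, W / 2, 1024)
      kb_table = i0(beta * np.sqrt(1 - (2 * xs / W) ** 2)) / i0(beta)

      def kb_lookup(x):                         # ...then every lookup is a cheap interpolation
          return np.interp(np.abs(x), xs, kb_table, right=0.0)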

  10. GPU Acceleration of Image Convolution using Spatially-varying Kernel

    OpenAIRE

    Hartung, Steven; Shukla, Hemant; Miller, J. Patrick; Pennypacker, Carlton

    2012-01-01

    Image subtraction in astronomy is a tool for discovering transient objects such as asteroids, extra-solar planets, and supernovae. To match point spread functions (PSFs) between images of the same field taken at different times, a convolution technique is used. Particularly suitable for large-scale images is a computationally intensive spatially-varying kernel. The underlying algorithm is inherently massively parallel due to unique kernel generation at every pixel location. The spatially-varying k...

  11. Super-resolution reconstruction algorithm based on adaptive convolution kernel size selection

    Science.gov (United States)

    Gao, Hang; Chen, Qian; Sui, Xiubao; Zeng, Junjie; Zhao, Yao

    2016-09-01

    Restricted by detector technology and the optical diffraction limit, the spatial resolution of infrared imaging systems is difficult to improve significantly. Super-resolution (SR) reconstruction algorithms are an effective way to address this problem. Among them, the SR algorithm based on multichannel blind deconvolution (MBD) estimates the convolution kernel from the low-resolution observation images alone, using regularization constraints introduced by a priori assumptions, to restore the high-resolution image. The algorithm has been shown to be effective when the channels are coprime. In this paper, we use significant edges to estimate the convolution kernel and introduce an adaptive convolution kernel size selection mechanism to address the uncertainty of the convolution kernel size in MBD processing. To reduce the interference of noise, we amend the convolution kernel in an iterative process, and finally restore a clear image. Experimental results show that the algorithm meets the convergence requirement of the convolution kernel estimation.

  12. Photon beam convolution using polyenergetic energy deposition kernels

    Energy Technology Data Exchange (ETDEWEB)

    Hoban, P.W.; Murray, D.C.; Round, W.H. (Waikato Univ., Hamilton (New Zealand). Dept. of Physics)

    1994-04-01

    In photon beam convolution calculations where polyenergetic energy deposition kernels (EDKs) are used, the primary photon energy spectrum should be correctly accounted for in the Monte Carlo generation of EDKs. This requires the probability of interaction, determined by the linear attenuation coefficient μ, to be taken into account when primary photon interactions are forced to occur at the EDK origin. The use of primary and scattered EDKs generated with a fixed photon spectrum can give rise to an error in the dose calculation due to neglecting the effects of beam hardening with depth. The proportion of primary photon energy that is transferred to secondary electrons increases with depth of interaction, due to the increase in the ratio μ_ab/μ as the beam hardens. Convolution depth-dose curves calculated using polyenergetic EDKs generated for the primary photon spectra that exist at depths of 0, 20 and 40 cm in water show a fall-off that is too steep when compared with EGS4 Monte Carlo results. A beam hardening correction factor applied to primary and scattered 0 cm EDKs, based on the ratio of kerma to terma at each depth, gives primary, scattered and total dose in good agreement with Monte Carlo results. (Author).

  13. Implementation of large kernel 2-D convolution in limited FPGA resource

    Science.gov (United States)

    Zhong, Sheng; Li, Yang; Yan, Luxin; Zhang, Tianxu; Cao, Zhiguo

    2007-12-01

    2-D convolution is a simple mathematical operation which is fundamental to many common image processing operators. Using an FPGA to implement the convolver can greatly reduce the DSP's heavy burden in signal processing, but with limited resources an FPGA can only implement a convolver with a small 2-D kernel. In this paper, a FIFO-type line delayer is presented to serve as the data buffer for convolution, reducing data-fetching operations. A finite state machine is applied to control the reuse of the multiplier and adder arrays. With these two techniques, a resource-limited FPGA can be used to implement the larger-kernel convolvers commonly used in image processing systems.
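
    A software model of the line-delayer idea (hedged: this mirrors the dataflow, not the authors' HDL): only the most recent K image rows are kept in a FIFO, so each pixel is fetched from external memory exactly once while a K x K window slides along the buffered rows.

      import numpy as np
      from collections import deque

      def line_buffer_convolve(image, kernel):
          """Square, odd-sized kernel; edge padding; float output."""
          H, W = image.shape
          K = kernel.shape[0]
          pad = K // 2
          padded = np.pad(image, pad, mode="edge").astype(float)
          out = np.zeros((H, W))
          rows = deque(maxlen=K)                    # the FIFO line delayer
          for r in range(H + 2 * pad):
              rows.append(padded[r])                # one new row fetched per step
              if len(rows) == K:
                  buf = np.vstack(rows)             # K buffered lines
                  for c in range(W):
                      out[r - 2 * pad, c] = np.sum(buf[:, c:c + K] * kernel)
          return out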

  14. Anatomically informed convolution kernels for the projection of fMRI data on the cortical surface.

    Science.gov (United States)

    Operto, Grégory; Bulot, Rémy; Anton, Jean-Luc; Coulon, Olivier

    2006-01-01

    We present here a method that aims at producing representations of functional brain data on the cortical surface from functional MRI volumes. Such representations are required for subsequent cortical-based functional analysis. We propose a projection technique based on the definition, around each node of the grey/white matter interface mesh, of convolution kernels whose shape and distribution rely on the geometry of the local anatomy. For one anatomy, a set of convolution kernels is computed that can be used to project any functional data registered with this anatomy. The method is presented together with experiments on synthetic data and real statistical t-maps.

  15. ARKCoS: Artifact-Suppressed Accelerated Radial Kernel Convolution on the Sphere

    CERN Document Server

    Elsner, Franz

    2011-01-01

    We describe a hybrid Fourier/direct space convolution algorithm for compact radial (azimuthally symmetric) kernels on the sphere. For high resolution maps covering a large fraction of the sky, our implementation takes advantage of the inexpensive massive parallelism afforded by consumer graphics processing units (GPUs). Applications involve modeling of instrumental beam shapes in terms of compact kernels, computation of fine-scale wavelet transformations, and optimal filtering for the detection of point sources. Our algorithm works for any pixelization where pixels are grouped into isolatitude rings. Even for kernels that are not bandwidth limited, ringing features are completely absent on an ECP grid. We demonstrate that they can be highly suppressed on the popular HEALPix pixelization, for which we develop a freely available implementation of the algorithm. As an example application, we show that running on a high-end consumer graphics card our method speeds up beam convolution for simulations of a characte...

  16. Producing data-based sensitivity kernels from convolution and correlation in exploration geophysics.

    Science.gov (United States)

    Chmiel, M. J.; Roux, P.; Herrmann, P.; Rondeleux, B.

    2016-12-01

    Many studies have shown that seismic interferometry can be used to estimate surface wave arrivals by correlation of seismic signals recorded at a pair of locations. In the case of ambient noise sources, the convergence towards the surface wave Green's functions is obtained with the criterion of equipartitioned energy. However, seismic acquisition with active, controlled sources gives more possibilities when it comes to interferometry. The use of controlled sources makes it possible to recover the surface wave Green's function between two points using either correlation or convolution. We investigate the convolutional and correlational approaches using land active-seismic data from exploration geophysics. The data were recorded on 10,710 vertical receivers using 51,808 sources (seismic vibrator trucks). The source spacing is the same in both X and Y directions (30 m), which is known as "carpet shooting". The receivers are placed in parallel lines with a spacing of 150 m in the X direction and 30 m in the Y direction. Invoking spatial reciprocity between sources and receivers, correlation and convolution functions can thus be constructed between either pairs of receivers or pairs of sources. Benefiting from the dense acquisition, we extract sensitivity kernels from correlation and convolution measurements of the seismic data. These sensitivity kernels are subsequently used to produce phase-velocity dispersion curves between two points and to separate the higher mode from the fundamental mode for surface waves. Potential application to surface wave cancellation is also envisaged.
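
    The correlational construction is compact enough to sketch (synthetic array shapes; all names are illustrative): summing the cross-correlations of the traces recorded at two receivers over all sources approximates the inter-receiver Green's function.

      import numpy as np

      def interferometric_greens(traces_a, traces_b):
          """traces_a, traces_b: arrays of shape (n_sources, n_samples) at receivers A, B."""
          n = traces_a.shape[1]
          acc = np.zeros(2 * n - 1)
          for ta, tb in zip(traces_a, traces_b):
              acc += np.correlate(tb, ta, mode="full")   # stack over sources
          return acc                                     # lag axis runs from -(n-1) to (n-1)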

  17. On the relation between Kaiser-Bessel blob and tube of response based modelling of the system matrix in iterative PET image reconstruction.

    Science.gov (United States)

    Lougovski, Alexandr; Hofheinz, Frank; Maus, Jens; Schramm, Georg; van den Hoff, Jörg

    2015-05-21

    We investigate the question of how the blob approach is related to tube of response based modelling of the system matrix. In our model, the tube of response (TOR) is approximated as a cylinder with constant density (TOR-CD) and the cubic voxels are replaced by spheres. Here we investigate a modification of the TOR model that makes it effectively equivalent to the blob model, which models the intersection of lines of response (LORs) with radially variant basis functions ('blobs') replacing the cubic voxels. Implications of the achieved equivalence regarding the necessity of final resampling in blob-based reconstructions are considered. We extended TOR-CD to a variable density tube model (TOR-VD) that yields a weighting function (defining all system matrix elements) which is essentially identical to that of the blob model. The variable density of TOR-VD was modelled by a Gaussian and a Kaiser-Bessel function, respectively. The free parameters of both model functions were determined by fitting the corresponding weighting function to the weighting function of the blob model. TOR-CD and the best-fitting TOR-VD were compared to the blob model with a final resampling step (BLOB-RS) and without resampling (BLOB-NRS) in phantom studies. For three different contrast ratios and two different voxel sizes, resolution noise curves were generated. TOR-VD and BLOB-NRS lead to nearly identical images for all investigated contrast ratios and voxel sizes. Both models showed strong Gibbs artefacts at 4 mm voxel size, while at 2 mm voxel size there were no Gibbs artefacts visible. The spatial resolution was similar to the resolution with TOR-CD in all cases. The resampling step removed most of the Gibbs artefacts and reduced the noise level but also degraded the spatial resolution substantially. We conclude that the blob model can be considered just as a special case of a TOR-based reconstruction. The latter approach provides a more natural description of the detection process and

  18. Using hybrid GPU/CPU kernel splitting to accelerate spherical convolutions

    Science.gov (United States)

    Sutter, P. M.; Wandelt, B. D.; Elsner, F.

    2015-06-01

    We present a general method for accelerating by more than an order of magnitude the convolution of pixelated functions on the sphere with a radially-symmetric kernel. Our method splits the kernel into a compact real-space component and a compact spherical harmonic space component. These components can then be convolved in parallel using an inexpensive commodity GPU and a CPU. We provide models for the computational cost of both real-space and Fourier space convolutions and an estimate for the approximation error. Using these models we can determine the optimum split that minimizes the wall clock time for the convolution while satisfying the desired error bounds. We apply this technique to the problem of simulating a cosmic microwave background (CMB) anisotropy sky map at the resolution typical of the high resolution maps produced by the Planck mission. For the main Planck CMB science channels we achieve a speedup of over a factor of ten, assuming an acceptable fractional rms error of order 10^-5 in the power spectrum of the output map.
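
    A one-dimensional, periodic-signal analogy of the splitting (a sketch of the decomposition only; the paper's version operates on the sphere with spherical-harmonic transforms): the compact core of the kernel is applied directly, the smooth remainder via FFT, and the two results sum to the exact full convolution.

      import numpy as np

      def split_convolve(signal, kernel, r_split):
          """Circular convolution; `kernel` is centered at index len//2, same length as `signal`."""
          n = len(signal)
          c = n // 2                                    # kernel center (zero lag)
          near = np.zeros(n)
          near[c - r_split:c + r_split + 1] = kernel[c - r_split:c + r_split + 1]
          far = kernel - near                           # smooth long-range remainder
          # Direct part: a short sum of shifted copies (cheap for small r_split).
          direct = sum(near[c + j] * np.roll(signal, j)
                       for j in range(-r_split, r_split + 1))
          # Far part: ordinary FFT-based circular convolution.
          spectral = np.fft.irfft(np.fft.rfft(signal) *
                                  np.fft.rfft(np.fft.ifftshift(far)), n)
          return direct + spectral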

  19. Projection of fMRI data onto the cortical surface using anatomically-informed convolution kernels.

    Science.gov (United States)

    Operto, G; Bulot, R; Anton, J-L; Coulon, O

    2008-01-01

    As surface-based data analysis offers an attractive approach for intersubject matching and comparison, the projection of voxel-based 3D volumes onto the cortical surface is an essential problem. We present here a method that aims at producing representations of functional brain data on the cortical surface from functional MRI volumes. Such representations are for instance required for subsequent cortical-based functional analysis. We propose a projection technique based on the definition, around each node of the gray/white matter interface mesh, of convolution kernels whose shape and distribution rely on the geometry of the local anatomy. For one anatomy, a set of convolution kernels is computed that can be used to project any functional data registered with this anatomy. Resulting in anatomically informed projections of data onto the cortical surface, this kernel-based approach offers better sensitivity and specificity than other classical methods, as well as robustness to misregistration errors. The influence of mesh and volume spatial resolutions was also estimated for various projection techniques, using simulated functional maps.

  20. Nonstationary, Nonparametric, Nonseparable Bayesian Spatio-Temporal Modeling using Kernel Convolution of Order Based Dependent Dirichlet Process

    OpenAIRE

    Das, Moumita; Bhattacharya, Sourabh

    2014-01-01

    In this paper, using kernel convolution of order based dependent Dirichlet process (Griffin & Steel (2006)) we construct a nonstationary, nonseparable, nonparametric space-time process, which, as we show, satisfies desirable properties, and includes the stationary, separable, parametric processes as special cases. We also investigate the smoothness properties of our proposed model. Since our model entails an infinite random series, for Bayesian model fitting purpose we must either truncate th...

  1. ASIC-based architecture for the real-time computation of 2D convolution with large kernel size

    Science.gov (United States)

    Shao, Rui; Zhong, Sheng; Yan, Luxin

    2015-12-01

    Bidimensional convolution is a low-level processing algorithm of interest in many areas, but its high computational cost constrains the size of the kernels, especially in real-time embedded systems. This paper presents a hardware architecture for the ASIC-based implementation of 2-D convolution with medium-large kernels. To improve the efficiency of on-chip storage resources and reduce the required off-chip bandwidth, a data-reuse cache is constructed: multi-block SPRAM cross-caches the image, and an on-chip ping-pong operation takes full advantage of data reuse in the convolution calculation, leading to a new ASIC data-scheduling scheme and overall architecture. Experimental results show that the structure can perform real-time convolution with kernels up to 40 × 32 in size, improves the utilization of on-chip memory bandwidth and on-chip memory resources, maximizes data throughput, and reduces the need for off-chip memory bandwidth.

  2. SU-E-T-423: Fast Photon Convolution Calculation with a 3D-Ideal Kernel On the GPU

    Energy Technology Data Exchange (ETDEWEB)

    Moriya, S; Sato, M [Komazawa University, Setagaya, Tokyo (Japan); Tachibana, H [National Cancer Center Hospital East, Kashiwa, Chiba (Japan)

    2015-06-15

    Purpose: The calculation time is a trade-off for improving the accuracy of convolution dose calculation with fine calculation spacing of the KERMA kernel. We investigated accelerating the convolution calculation using an ideal kernel on graphics processing units (GPU). Methods: The calculation was performed on AMD Dual FirePro D700 graphics hardware, and our algorithm was implemented using Aparapi, which converts Java bytecode to OpenCL. The dose calculation process was separated into TERMA and KERMA steps, and the dose deposited at each coordinate (x, y, z) was determined in the process. In the dose calculation running on an Intel Xeon E5 central processing unit (CPU), the calculation loops were performed over all calculation points. In the GPU computation, all of the calculation processes for the points were sent to the GPU and multi-threaded computation was done. In this study, the dose calculation was performed in a water-equivalent homogeneous phantom with 150^3 voxels (2 mm calculation grid), and the calculation speed on the GPU relative to that on the CPU and the accuracy of the PDD were compared. Results: The calculation times for the GPU and the CPU were 3.3 sec and 4.4 hours, respectively; the GPU was 4800 times faster than the CPU. The PDD curve for the GPU perfectly matched that for the CPU. Conclusion: The convolution calculation with the ideal kernel on the GPU was clinically acceptable in time and may be more accurate in inhomogeneous regions. Intensity-modulated arc therapy needs dose calculations for different gantry angles at many control points. Thus, it would be more practical for the kernel to use a coarse-spacing technique if the calculation is faster while keeping accuracy similar to a current treatment planning system.

  3. Surface EMG decomposition based on K-means clustering and convolution kernel compensation.

    Science.gov (United States)

    Ning, Yong; Zhu, Xiangjun; Zhu, Shanan; Zhang, Yingchun

    2015-03-01

    A new approach has been developed by combining the K-means clustering (KMC) method and a modified convolution kernel compensation (CKC) method for multichannel surface electromyogram (EMG) decomposition. The KMC method was first utilized to cluster vectors of observations at different time instants and then estimate the initial innervation pulse train (IPT). The CKC method, modified with a novel multistep iterative process, was conducted to update the estimated IPT. The performance of the proposed K-means clustering-modified CKC (KmCKC) approach was evaluated by reconstructing IPTs from both simulated and experimental surface EMG signals. The KmCKC approach successfully reconstructed all 10 IPTs from the simulated surface EMG signals with true positive rates (TPR) of over 90% at a low signal-to-noise ratio (SNR) of -10 dB. More than 10 motor units were also successfully extracted from the 64-channel experimental surface EMG signals of the first dorsal interosseous (FDI) muscles when a contraction force was held at 8 N by using the KmCKC approach. A "two-source" test was further conducted with 64-channel surface EMG signals. The high percentage of common MUs and common pulses (over 92% at all force levels) between the IPTs reconstructed from the two independent groups of surface EMG signals demonstrates the reliability and capability of the proposed KmCKC approach in multichannel surface EMG decomposition. Results from both simulated and experimental data are consistent and confirm that the proposed KmCKC approach can successfully reconstruct IPTs with high accuracy at different levels of contraction.

  4. IMAGE DE-BLURRING USING WIENER DE-CONVOLUTION AND WAVELET FOR DIFFERENT BLURRING KERNEL

    OpenAIRE

    Singh, Shuchi; Awasthi, Vipul; Sahu, Nitin

    2016-01-01

    Image de-convolution is an active research area concerned with recovering a sharp image after blurring by a convolution. One of the problems in image de-convolution is how to preserve texture structures while removing blur in the presence of noise. Various methods have been used for this, such as gradient-based methods, sparsity-based methods, and nonlocal self-similarity methods. In this thesis, we have used the conventional non-blind method of Wiener de-convolution. Further, wavelet denoising has been used to...

  5. Convolution/deconvolution of generalized Gaussian kernels with applications to proton/photon physics and electron capture of charged particles

    CERN Document Server

    Ulmer, W

    2012-01-01

    Scatter processes of photons lead to blurring of images produced by CT (computed tomography) or CBCT (cone beam computed tomography) in the kV domain, or portal imaging in the MV domain (kV: kilovoltage, MV: megavoltage). Multiple scatter is described by at least one Gaussian kernel. In various situations, this approximation is crude, and we need two/three Gaussian kernels to account for the long-range tails (Landau tails) which appear in the Molière scatter of protons, energy straggling and electron capture of charged particles passing through matter, and Compton scatter of photons. The ideal image (source function) is subjected to Gaussian convolutions to yield a blurred image recorded by a detector array. The inverse problem is to obtain the ideal source image from the measured image. Deconvolution methods for linear combinations of two/three Gaussian kernels with different parameters s0, s1, s2 can be derived via an inhomogeneous Fredholm integral equation of the second kind (IFIE2) and Liouville-Neumann ser...

  6. Evaluation to Obtain the Image According to the Spatial Domain Filtering of Various Convolution Kernels in the Multi-Detector Row Computed Tomography

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Hoo Min [Dept. of Radiologic Technology, Dongnam Health College, Suwon (Korea, Republic of); Yoo, Beong Gyu [Dept. of Radiologic Technology, Wonkwang Health Science College, Iksan (Korea, Republic of); Kweon, Dae Cheol [Dept. of Radiology, Seoul National University, Seoul (Korea, Republic of)

    2008-03-15

    Our objective was to evaluate the image of spatial domain filtering as an alternative to additional image reconstruction using different kernels in MDCT. Source images derived from thin collimation were generated using a water phantom and the abdomen with B10 (very smooth), B20 (smooth), B30 (medium smooth), B40 (medium), B50 (medium sharp), B60 (sharp), B70 (very sharp), and B80 (ultra sharp) kernels. MTF and spatial resolution were measured with the various convolution kernels. Quantitative CT attenuation coefficient and noise measurements provided comparable HU (Hounsfield) units. CT attenuation coefficient (mean HU) values were 1.1-1.8 HU in the water and -998 to -1000 HU in air; noise was 5.4-44.8 HU in the water and 3.6-31.4 HU in air. In the abdominal fat, a CT attenuation coefficient of -2.2 to 0.8 HU and noise of 10.1-82.4 HU were measured; in the muscle, a CT attenuation coefficient of 53.3-54.3 HU and noise of 10.4-70.7 HU; and in the liver parenchyma, a CT attenuation coefficient of 60.4-62.2 HU and noise of 7.6-63.8 HU. Images reconstructed with a sharp convolution kernel (B80) led to an increase in noise, whereas the results for the CT attenuation coefficient were comparable. Modification of image sharpness and noise eliminates the need for reconstruction using different kernels in the future. Adjusting the various CT kernels, which should be chosen to take into account the examination being performed, may control CT images and increase diagnostic accuracy.

  7. Effects of contrast-enhancement, reconstruction slice thickness and convolution kernel on the diagnostic performance of radiomics signature in solitary pulmonary nodule.

    Science.gov (United States)

    He, Lan; Huang, Yanqi; Ma, Zelan; Liang, Cuishan; Liang, Changhong; Liu, Zaiyi

    2016-10-10

    The effects of contrast-enhancement, reconstruction slice thickness, and convolution kernel on the diagnostic performance of the radiomics signature in solitary pulmonary nodules (SPN) remain unclear. 240 patients with SPNs (malignant, n = 180; benign, n = 60) underwent non-contrast CT (NECT) and contrast-enhanced CT (CECT), which were reconstructed with different slice thicknesses and convolution kernels. 150 radiomics features were extracted separately from each set of CT images, and the diagnostic performance of each feature was assessed. After feature selection and radiomics signature construction, the diagnostic performance of the radiomics signature for discriminating benign and malignant SPN was also assessed with respect to discrimination and classification and compared with net reclassification improvement (NRI). Our results showed that the NECT-based radiomics signature demonstrated better discrimination and classification capability than CECT in both the primary (AUC: 0.862 vs. 0.829, p = 0.032; NRI = 0.578) and validation cohort (AUC: 0.750 vs. 0.735, p = 0.014; NRI = 0.023). The thin-slice (1.25 mm) CT-based radiomics signature had better diagnostic performance than thick-slice (5 mm) CT in both the primary (AUC: 0.862 vs. 0.785, p = 0.015; NRI = 0.867) and validation cohort (AUC: 0.750 vs. 0.725, p = 0.025; NRI = 0.467). The standard convolution kernel-based radiomics signature had better diagnostic performance than lung convolution kernel-based CT in both the primary (AUC: 0.785 vs. 0.770, p = 0.015; NRI = 0.156) and validation cohort (AUC: 0.725 vs. 0.686, p = 0.039; NRI = 0.467). Therefore, this study indicates that contrast-enhancement, reconstruction slice thickness, and convolution kernel can affect the diagnostic performance of the radiomics signature in SPN, of which non-contrast, thin-slice, and standard convolution kernel-based CT is more informative.

  8. Effects of contrast-enhancement, reconstruction slice thickness and convolution kernel on the diagnostic performance of radiomics signature in solitary pulmonary nodule

    Science.gov (United States)

    He, Lan; Huang, Yanqi; Ma, Zelan; Liang, Cuishan; Liang, Changhong; Liu, Zaiyi

    2016-01-01

    The effects of contrast-enhancement, reconstruction slice thickness, and convolution kernel on the diagnostic performance of the radiomics signature in solitary pulmonary nodules (SPN) remain unclear. 240 patients with SPNs (malignant, n = 180; benign, n = 60) underwent non-contrast CT (NECT) and contrast-enhanced CT (CECT), which were reconstructed with different slice thicknesses and convolution kernels. 150 radiomics features were extracted separately from each set of CT images, and the diagnostic performance of each feature was assessed. After feature selection and radiomics signature construction, the diagnostic performance of the radiomics signature for discriminating benign and malignant SPN was also assessed with respect to discrimination and classification and compared with net reclassification improvement (NRI). Our results showed that the NECT-based radiomics signature demonstrated better discrimination and classification capability than CECT in both the primary (AUC: 0.862 vs. 0.829, p = 0.032; NRI = 0.578) and validation cohort (AUC: 0.750 vs. 0.735, p = 0.014; NRI = 0.023). The thin-slice (1.25 mm) CT-based radiomics signature had better diagnostic performance than thick-slice (5 mm) CT in both the primary (AUC: 0.862 vs. 0.785, p = 0.015; NRI = 0.867) and validation cohort (AUC: 0.750 vs. 0.725, p = 0.025; NRI = 0.467). The standard convolution kernel-based radiomics signature had better diagnostic performance than lung convolution kernel-based CT in both the primary (AUC: 0.785 vs. 0.770, p = 0.015; NRI = 0.156) and validation cohort (AUC: 0.725 vs. 0.686, p = 0.039; NRI = 0.467). Therefore, this study indicates that contrast-enhancement, reconstruction slice thickness, and convolution kernel can affect the diagnostic performance of the radiomics signature in SPN, of which non-contrast, thin-slice, and standard convolution kernel-based CT is more informative. PMID:27721474

  9. Progressive FastICA Peel-Off and Convolution Kernel Compensation Demonstrate High Agreement for High Density Surface EMG Decomposition

    Science.gov (United States)

    Chen, Maoqi

    2016-01-01

    Decomposition of electromyograms (EMG) is a key approach to investigating motor unit plasticity. Various signal processing techniques have been developed for high density surface EMG decomposition, among which the convolution kernel compensation (CKC) has achieved high decomposition yield with extensive validation. Very recently, a progressive FastICA peel-off (PFP) framework has also been developed for high density surface EMG decomposition. In this study, the CKC and PFP methods were independently applied to decompose the same sets of high density surface EMG signals. Across 91 trials of 64-channel surface EMG signals recorded from the first dorsal interosseous (FDI) muscle of 9 neurologically intact subjects, there were a total of 1477 motor units identified from the two methods, including 969 common motor units. On average, 10.6 ± 4.3 common motor units were identified from each trial, which showed a very high matching rate of 97.85 ± 1.85% in their discharge instants. The high degree of agreement of common motor units from the CKC and the PFP processing provides supportive evidence of the decomposition accuracy for both methods. The different motor units obtained from each method also suggest that combination of the two methods may have the potential to further increase the decomposition yield. PMID:27642525

  10. Progressive FastICA Peel-Off and Convolution Kernel Compensation Demonstrate High Agreement for High Density Surface EMG Decomposition.

    Science.gov (United States)

    Chen, Maoqi; Holobar, Ales; Zhang, Xu; Zhou, Ping

    2016-01-01

    Decomposition of electromyograms (EMG) is a key approach to investigating motor unit plasticity. Various signal processing techniques have been developed for high density surface EMG decomposition, among which the convolution kernel compensation (CKC) has achieved high decomposition yield with extensive validation. Very recently, a progressive FastICA peel-off (PFP) framework has also been developed for high density surface EMG decomposition. In this study, the CKC and PFP methods were independently applied to decompose the same sets of high density surface EMG signals. Across 91 trials of 64-channel surface EMG signals recorded from the first dorsal interosseous (FDI) muscle of 9 neurologically intact subjects, there were a total of 1477 motor units identified from the two methods, including 969 common motor units. On average, 10.6 ± 4.3 common motor units were identified from each trial, which showed a very high matching rate of 97.85 ± 1.85% in their discharge instants. The high degree of agreement of common motor units from the CKC and the PFP processing provides supportive evidence of the decomposition accuracy for both methods. The different motor units obtained from each method also suggest that combination of the two methods may have the potential to further increase the decomposition yield.

  11. Progressive FastICA Peel-Off and Convolution Kernel Compensation Demonstrate High Agreement for High Density Surface EMG Decomposition

    Directory of Open Access Journals (Sweden)

    Maoqi Chen

    2016-01-01

    Decomposition of electromyograms (EMG) is a key approach to investigating motor unit plasticity. Various signal processing techniques have been developed for high density surface EMG decomposition, among which the convolution kernel compensation (CKC) has achieved high decomposition yield with extensive validation. Very recently, a progressive FastICA peel-off (PFP) framework has also been developed for high density surface EMG decomposition. In this study, the CKC and PFP methods were independently applied to decompose the same sets of high density surface EMG signals. Across 91 trials of 64-channel surface EMG signals recorded from the first dorsal interosseous (FDI) muscle of 9 neurologically intact subjects, there were a total of 1477 motor units identified from the two methods, including 969 common motor units. On average, 10.6±4.3 common motor units were identified from each trial, which showed a very high matching rate of 97.85±1.85% in their discharge instants. The high degree of agreement of common motor units from the CKC and the PFP processing provides supportive evidence of the decomposition accuracy for both methods. The different motor units obtained from each method also suggest that combination of the two methods may have the potential to further increase the decomposition yield.

  12. Numerical Evaluation of the "Dual-Kernel Counter-flow" Matric Convolution Integral that Arises in Discrete/Continuous (D/C) Control Theory

    Science.gov (United States)

    Nixon, Douglas D.

    2009-01-01

    Discrete/Continuous (D/C) control theory is a new generalized theory of discrete-time control that expands the concept of conventional (exact) discrete-time control to create a framework for the design and implementation of discrete-time control systems that include a continuous-time command function generator, so that actuator commands need not be constant between control decisions but can be more generally defined and implemented as functions that vary with time across the sample period. Because the plant/control system construct contains two linear subsystems arranged in tandem, a novel dual-kernel counter-flow convolution integral appears in the formulation. As part of the D/C system design and implementation process, numerical evaluation of that integral over the sample period is required. Three fundamentally different evaluation methods and associated algorithms are derived for the constant-coefficient case. Numerical results are matched against three available examples that have closed-form solutions.
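
    The "dual-kernel counter-flow" structure can be sketched (notation mine, not the report's) as an integral over the sample period T in which two state-transition kernels run through the period in opposite time directions; for constant-coefficient subsystems A_1, A_2 coupled through B_1 C_2:

      M(T) = \int_0^T e^{A_1 (T - \tau)} \, B_1 C_2 \, e^{A_2 \tau} \, d\tau .

    One classical numerical route for integrals of exactly this shape (an aside, not necessarily one of the report's three methods) is Van Loan's block-matrix-exponential identity,

      \exp\!\left( \begin{bmatrix} A_1 & B_1 C_2 \\ 0 & A_2 \end{bmatrix} T \right)
        = \begin{bmatrix} e^{A_1 T} & M(T) \\ 0 & e^{A_2 T} \end{bmatrix} ,

    so a single matrix exponential of the augmented matrix yields M(T) in its upper-right block.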

  13. Nonlocal Optical Spatial Soliton with a Non-parabolic Symmetry and Real-valued Convolution Response Kernel

    Institute of Scientific and Technical Information of China (English)

    LI Ke-Ping; YU Chao-Fan; GAO Zi-You; LIANG Guo-Dong; YU Xiao-Min

    2008-01-01

    Based on the picture of nonlinear and non-parabolic symmetry response, i.e., Δn_2(I) ≈ ρ(α_0 − α_1 x − α_2 x^2), we propose a model for the transversal beam intensity distribution of the nonlocal spatial soliton. In this model, a convolution response with non-parabolic symmetry, Δn_2(I) ≈ ρ(b_0 + b_1 f − b_2 f^2) with b_2/b_1 > 0, is assumed. Furthermore, instead of the wave function Ψ, the high-order nonlinear equation for the beam intensity distribution f has been derived, and the bell-shaped soliton solution with the envelope form has been obtained. The results demonstrate that, owing to the existence of the non-parabolic response terms, the nonlocal spatial soliton has a bistable state solution. If the frequency shift of the wave number β satisfies 0 0 has been demonstrated.

  14. T-PHOT version 2.0: improved algorithms for background subtraction, local convolution, kernel registration, and new options

    CERN Document Server

    Merlin, E; Castellano, M; Ferguson, H C; Wang, T; Derriere, S; Dunlop, J S; Elbaz, D; Fontana, A

    2016-01-01

    We present the new release v2.0 of T-PHOT, a publicly available software package developed to perform PSF-matched, prior-based, multiwavelength deconfusion photometry of extragalactic fields. New features included in the code are presented and discussed: background estimation, fitting using position-dependent kernels, flux prioring, diagnostic statistics on the residual image, exclusion of selected sources from the model and residual images, and individual registration of fitted objects. These new options improve the performance of the code, allowing for more accurate results and providing useful aids for diagnostics.

  15. T-PHOT version 2.0: Improved algorithms for background subtraction, local convolution, kernel registration, and new options

    Science.gov (United States)

    Merlin, E.; Bourne, N.; Castellano, M.; Ferguson, H. C.; Wang, T.; Derriere, S.; Dunlop, J. S.; Elbaz, D.; Fontana, A.

    2016-11-01

    Aims: We present the new release, version 2.0, of t-phot, a publicly available software package developed to perform PSF-matched, prior-based, multiwavelength deconfusion photometry of extragalactic fields. Methods: New features included in the code are presented and discussed: background estimation, fitting using position-dependent kernels, flux prioring, diagnostic statistics on the residual image, exclusion of selected sources from the model and residual images, and individual registration of fitted objects. Results: The new options improve the performance of the code, allowing for more accurate results and providing useful aids for diagnostics.

  16. Parallel Multi Channel Convolution using General Matrix Multiplication

    OpenAIRE

    VASUDEVAN, ARAVIND; Anderson, Andrew; Gregg, David

    2017-01-01

    Convolutional neural networks (CNNs) have emerged as one of the most successful machine learning technologies for image and video processing. The most computationally intensive parts of CNNs are the convolutional layers, which convolve multi-channel images with multiple kernels. A common approach to implementing convolutional layers is to expand the image into a column matrix (im2col) and perform Multiple Channel Multiple Kernel (MCMK) convolution using an existing parallel General Matrix Mul...
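
    The im2col idea fits in a few lines (a minimal NumPy sketch of MCMK as one matrix multiplication, using CNN-style cross-correlation and 'valid' boundaries; all names are illustrative):

      import numpy as np

      def im2col_conv(image, kernels):
          """image: (C, H, W); kernels: (M, C, k, k) -> output (M, H-k+1, W-k+1)."""
          C, H, W = image.shape
          M, _, k, _ = kernels.shape
          oh, ow = H - k + 1, W - k + 1
          cols = np.empty((C * k * k, oh * ow))
          for i in range(oh):                       # expand patches into columns
              for j in range(ow):
                  cols[:, i * ow + j] = image[:, i:i + k, j:j + k].ravel()
          return (kernels.reshape(M, -1) @ cols).reshape(M, oh, ow)

    The memory cost of the expanded column matrix is the well-known drawback of im2col that motivates alternative MCMK formulations.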

  17. The convolution transform

    CERN Document Server

    Hirschman, Isidore Isaac

    2005-01-01

    In studies of general operators of the same nature, general convolution transforms are immediately encountered as the objects of inversion. The relation between differential operators and integral transforms is the basic theme of this work, which is geared toward upper-level undergraduates and graduate students. It may be read easily by anyone with a working knowledge of real and complex variable theory. Topics include the finite and non-finite kernels, variation diminishing transforms, asymptotic behavior of kernels, real inversion theory, representation theory, the Weierstrass transform, and

  18. A guide to convolution arithmetic for deep learning

    OpenAIRE

    Dumoulin, Vincent; Visin, Francesco

    2016-01-01

    We introduce a guide to help deep learning practitioners understand and manipulate convolutional neural network architectures. The guide clarifies the relationship between various properties (input shape, kernel shape, zero padding, strides and output shape) of convolutional, pooling and transposed convolutional layers, as well as the relationship between convolutional and transposed convolutional layers. Relationships are derived for various cases, and are illustrated in order to make them i...

  19. Improved convolution kernel based DFM model for nano-scale circuits

    Institute of Scientific and Technical Information of China (English)

    Yang, Yiwei; Zhang, Hongbo; Li, Bin

    2015-01-01

    Limited by materials and process stability, nano-scale IC manufacturing is still based on 193 nm lithography, where the wavelength is much larger than the feature size of the layout; the resulting interference and diffraction greatly reduce the resolution and affect the quality of the chip. The layout therefore needs to be checked with a design-for-manufacturability (DfM) model before manufacturing. Traditional DfM models describe the process steps using physical models and deduce the convolution kernels by decomposing the matrix in the corresponding physical model. These physical models are not only complicated but also hard to use, and some physical models are simply missing, so it is difficult to describe a production line with thousands of parameters. This paper uses the convolution form as the framework of the DfM model: an optimization algorithm extracts the information of the layout-to-silicon-contour process and expresses it as convolution kernels. Every element of a kernel is obtained by optimization against known input/output data of the production line and constitutes one dimension describing the manufacturing process. The model overcomes the traditional models' need for confidential information such as process parameters and has a stronger capability to describe the manufacturing process; it can even incorporate layout-correction information, describing the whole flow from layout to silicon contour. Experimental results at the 65 nm node show that the model achieves 8 nm accuracy.

  20. Convolution theorems: partitioning the space of integral transforms

    Science.gov (United States)

    Lindsey, Alan R.; Suter, Bruce W.

    1999-03-01

    Investigating a number of different integral transforms uncovers distinct patterns in the type of translation convolution theorems afforded by each. It is shown that transforms based on separable kernels (such as the Fourier and Laplace transforms and their relatives) have a form of the convolution theorem providing a transform-domain product of the convolved functions. However, transforms based on kernels not separable in the function and transform variables mandate a convolution theorem of a different type; namely, in the transform domain the convolution becomes another convolution: one function with the transform of the other.
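
    A one-line check (standard textbook reasoning, not specific to this paper) shows where the product form comes from for exponential-type kernels:

      \mathcal{F}\{f * g\}(\omega)
        = \iint f(\tau)\, g(t - \tau)\, e^{-i\omega t} \, d\tau \, dt
        = \left( \int f(\tau)\, e^{-i\omega \tau} \, d\tau \right)
          \left( \int g(u)\, e^{-i\omega u} \, du \right)
        = \mathcal{F}f(\omega) \, \mathcal{F}g(\omega) ,

    using e^{-i\omega(\tau + u)} = e^{-i\omega\tau} e^{-i\omega u}; kernels K(\omega, t) without this additive-to-multiplicative factorization property produce the mixed "convolution with the transform of the other" form instead.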

  1. Invariant Scattering Convolution Networks

    CERN Document Server

    Bruna, Joan

    2012-01-01

    A wavelet scattering network computes a translation invariant image representation, which is stable to deformations and preserves high frequency information for classification. It cascades wavelet transform convolutions with non-linear modulus and averaging operators. The first network layer outputs SIFT-type descriptors whereas the next layers provide complementary invariant information which improves classification. The mathematical analysis of wavelet scattering networks explains important properties of deep convolution networks for classification. A scattering representation of stationary processes incorporates higher order moments and can thus discriminate textures having the same Fourier power spectrum. State of the art classification results are obtained for handwritten digits and texture discrimination, using a Gaussian kernel SVM and a generative PCA classifier.

  2. On the Diamond Bessel Heat Kernel

    Directory of Open Access Journals (Sweden)

    Wanchak Satsanit

    2011-01-01

    We study the n-dimensional heat equation with the Diamond Bessel operator. We find the solution by the method of convolution and the Fourier transform in distribution theory, and also obtain an interesting kernel related to the spectrum, called the Bessel heat kernel.

  3. General logarithmic image processing convolution.

    Science.gov (United States)

    Palomares, Jose M; González, Jesús; Ros, Eduardo; Prieto, Alberto

    2006-11-01

    The logarithmic image processing (LIP) model is a robust mathematical framework which, among other benefits, behaves invariantly to illumination changes. This paper presents, for the first time, two general formulations of the 2-D convolution of separable kernels under the LIP paradigm. Although both formulations are mathematically equivalent, one of them has been designed to avoid the operations which are computationally expensive on current computers. This fast LIP convolution method therefore yields significant speedups and is better suited to real-time processing. In order to support these statements, some experimental results are shown in Section V.
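
    For reference, the standard LIP operations on gray tones f, g with dynamic range M are the classical Jourlin-Pinoli definitions (background, not this paper's specific formulation):

      f \oplus g = f + g - \frac{f g}{M} , \qquad
      \lambda \otimes f = M - M \left( 1 - \frac{f}{M} \right)^{\lambda} ,

    and the isomorphism \varphi(f) = -M \ln(1 - f/M) carries \oplus and \otimes to ordinary addition and multiplication. That is one route to a fast implementation: a LIP convolution can be computed as \varphi^{-1} applied to a standard convolution of \varphi(image).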

  4. Quasi-Convolution Pyramidal Blurring

    OpenAIRE

    Kraus, Martin

    2008-01-01

    Efficient image blurring techniques based on the pyramid algorithm can be implemented on modern graphics hardware; thus, image blurring with arbitrary blur width is possible in real time even for large images. However, pyramidal blurring methods do not achieve the image quality provided by convolution filters; in particular, the shape of the corresponding filter kernel varies locally, which potentially results in objectionable rendering artifacts. In this work, a new analysis filter is design...

  5. Fundamentals of convolutional coding

    CERN Document Server

    Johannesson, Rolf

    2015-01-01

    Fundamentals of Convolutional Coding, Second Edition, regarded as a bible of convolutional coding, brings you a clear and comprehensive discussion of the basic principles of this field. It includes two new chapters on low-density parity-check (LDPC) convolutional codes and iterative coding; Viterbi, BCJR, BEAST, list, and sequential decoding of convolutional codes; distance properties of convolutional codes; and a downloadable solutions manual.
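
    As a pocket example of the subject matter (a toy illustration, not taken from the book): a rate-1/2 feedforward convolutional encoder with the classic (7, 5) octal generators and constraint length 3.

      def conv_encode(bits, g1=0b111, g2=0b101):
          """Rate-1/2 convolutional encoder; no tail bits appended."""
          state, out = 0, []
          for b in bits:
              state = ((state << 1) | b) & 0b111         # 3-bit shift register
              out += [bin(state & g1).count("1") & 1,    # parity of taps for g1
                      bin(state & g2).count("1") & 1]    # parity of taps for g2
          return out

      # conv_encode([1, 0, 1, 1]) emits two coded bits per input bit.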

  6. Astronomical Image Subtraction by Cross-Convolution

    Science.gov (United States)

    Yuan, Fang; Akerlof, Carl W.

    2008-04-01

    In recent years, there has been a proliferation of wide-field sky surveys searching for a variety of transient objects. Using relatively short focal lengths, the optics of these systems produce undersampled stellar images often marred by a variety of aberrations. As participants in such activities, we have developed a new algorithm for image subtraction that no longer requires high-quality reference images for comparison. The computational efficiency is comparable with that of similar procedures currently in use. The general technique is cross-convolution: two convolution kernels are generated so that a test image and a reference image are separately transformed to match each other as closely as possible. In analogy to the optimization technique for generating smoothing splines, the inclusion of an rms width penalty term constrains the diffusion of stellar images. In addition, by evaluating the convolution kernels on uniformly spaced subimages across the total area, these routines can accommodate point-spread functions that vary considerably across the focal plane.

  7. Adaptive wiener image restoration kernel

    Science.gov (United States)

    Yuan, Ding

    2007-06-05

    A method and device for the restoration of electro-optical image data using an adaptive Wiener filter begins with constructing the imaging system's optical transfer function and the Fourier transforms of the noise and the image. A spatial representation of the imaged object is restored by spatial convolution of the image with a Wiener restoration kernel.
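
    The classical restoration step the abstract outlines looks like this in NumPy (a generic Wiener-deconvolution sketch; the noise-to-signal ratio nsr is an illustrative constant, not the patent's adaptive estimate):

      import numpy as np

      def wiener_restore(image, otf, nsr=1e-2):
          """image: blurred 2-D array; otf: system OTF sampled in frequency space."""
          G = np.fft.fft2(image)
          W = np.conj(otf) / (np.abs(otf) ** 2 + nsr)   # Wiener restoration kernel
          return np.real(np.fft.ifft2(W * G))           # spatial-domain object estimate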

  8. Scattering correction based on regularization de-convolution for Cone-Beam CT

    CERN Document Server

    Xie, Shi-peng

    2016-01-01

    In cone-beam CT (CBCT) imaging systems, scattering has a significant impact on the reconstructed image and is a long-standing research topic in CBCT. In this paper, we propose a simple, novel and fast approach for mitigating scatter artifacts and increasing the image contrast in CBCT, belonging to the category of convolution-based methods, in which the projected data are de-convolved with a convolution kernel. A key step in this method is how to determine the convolution kernel. Compared with existing methods, the estimation of the convolution kernel is based on bi-l1-l2-norm regularization imposed on both the intermediate image, derived from the known scatter-contaminated projection images, and the convolution kernel. Our approach can reduce the scatter artifacts from 12.930 to 2.133.

  9. Interpolation by two-dimensional cubic convolution

    Science.gov (United States)

    Shi, Jiazheng; Reichenbach, Stephen E.

    2003-08-01

    This paper presents results of image interpolation with an improved method for two-dimensional cubic convolution. Convolution with a piecewise cubic is one of the most popular methods for image reconstruction, but the traditional approach uses a separable two-dimensional convolution kernel that is based on a one-dimensional derivation. The traditional, separable method is sub-optimal for the usual case of non-separable images. The improved method in this paper implements the most general non-separable, two-dimensional, piecewise-cubic interpolator with constraints for symmetry, continuity, and smoothness. The improved method of two-dimensional cubic convolution has three parameters that can be tuned to yield maximal fidelity for specific scene ensembles characterized by autocorrelation or power-spectrum. This paper illustrates examples for several scene models (a circular disk of parametric size, a square pulse with parametric rotation, and a Markov random field with parametric spatial detail) and actual images, presenting the optimal parameters and the resulting fidelity for each model. In these examples, improved two-dimensional cubic convolution is superior to several other popular small-kernel interpolation methods.
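
    For reference, the traditional one-dimensional kernel that the separable approach is built on is Keys' piecewise cubic (a = -0.5 is the common choice); the paper's contribution generalizes this to a non-separable 2-D form with three tunable parameters.

      import numpy as np

      def keys_cubic(x, a=-0.5):
          """Keys (1981) cubic convolution kernel, support |x| < 2."""
          x = np.abs(x)
          return np.where(x < 1, (a + 2) * x**3 - (a + 3) * x**2 + 1,
                 np.where(x < 2, a * (x**3 - 5 * x**2 + 8 * x - 4), 0.0))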

  10. Human Face Recognition Using Convolutional Neural Networks

    Directory of Open Access Journals (Sweden)

    Răzvan-Daniel Albu

    2009-10-01

    Full Text Available In this paper, I present a novel hybrid face recognition approach based on a convolutional neural architecture, designed to robustly detect highly variable face patterns. The convolutional network extracts successively larger features in a hierarchical set of layers. The weights of the trained network are used to create kernel windows for feature extraction in a three-stage algorithm. I present experimental results illustrating the efficiency of the proposed approach. I use a database of 796 images of 159 individuals from Reims University which contains quite a high degree of variability in expression, pose, and facial details.

  11. Estimates of Oseen kernels in weighted $L^{p}$ spaces

    OpenAIRE

    KRAČMAR, Stanislav; Novotný, Antonín; Pokorný, Milan

    2001-01-01

    We study convolutions with Oseen kernels (weakly singular and singular) in both two- and three-dimensional space. We give a detailed weighted $L^{p}$ theory for $ p\\in(1;\\infty$ ] for anisotropic weights.

  12. Difference image analysis: Automatic kernel design using information criteria

    CERN Document Server

    Bramich, D M; Alsubai, K A; Bachelet, E; Mislis, D; Parley, N

    2015-01-01

    We present a selection of methods for automatically constructing an optimal kernel model for difference image analysis which require very few external parameters to control the kernel design. Each method consists of two components; namely, a kernel design algorithm to generate a set of candidate kernel models, and a model selection criterion to select the simplest kernel model from the candidate models that provides a sufficiently good fit to the target image. We restricted our attention to the case of solving for a spatially-invariant convolution kernel composed of delta basis functions, and we considered 19 different kernel solution methods including six employing kernel regularisation. We tested these kernel solution methods by performing a comprehensive set of image simulations and investigating how their performance in terms of model error, fit quality, and photometric accuracy depends on the properties of the reference and target images. We find that the irregular kernel design algorithm employing unreg...
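
    The core computation in each candidate model is a linear least-squares solve for the kernel pixel values. A minimal sketch of the delta-basis fit under simplifying assumptions (spatially invariant kernel, wrap-around edge handling, and none of the paper's noise weighting, regularisation, or model selection):

        import numpy as np

        def fit_delta_kernel(ref, target, half=3):
            # Model: target ~= ref convolved with a (2*half+1)^2 kernel
            # whose pixels are free parameters (delta basis functions).
            shifted = []
            for dy in range(-half, half + 1):
                for dx in range(-half, half + 1):
                    rolled = np.roll(np.roll(ref, dy, axis=0), dx, axis=1)
                    shifted.append(rolled.ravel())
            A = np.stack(shifted, axis=1)            # design matrix
            k, *_ = np.linalg.lstsq(A, target.ravel(), rcond=None)
            return k.reshape(2*half + 1, 2*half + 1)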

  13. Uncertainty estimation by convolution using spatial statistics.

    Science.gov (United States)

    Sanchez-Brea, Luis Miguel; Bernabeu, Eusebio

    2006-10-01

    Kriging has proven to be a useful tool in image processing since it behaves, under regular sampling, as a convolution. Convolution kernels obtained with kriging allow noise filtering and include the effects of the random fluctuations of the experimental data and the resolution of the measuring devices. The uncertainty at each location of the image can also be determined using kriging. However, this procedure is slow since, currently, only matrix methods are available. In this work, we compare the way kriging performs the uncertainty estimation with the standard statistical technique for magnitudes without spatial dependence. As a result, we propose a much faster technique, based on the variogram, to determine the uncertainty using a convolutional procedure. We check the validity of this approach by applying it to one-dimensional images obtained in diffractometry and two-dimensional images obtained by shadow moiré.

  14. Filters, reproducing kernel, and adaptive meshfree method

    Science.gov (United States)

    You, Y.; Chen, J.-S.; Lu, H.

    Reproducing kernel, with its intrinsic feature of moving averaging, can be utilized as a low-pass filter with scale decomposition capability. The discrete convolution of two nth order reproducing kernels with arbitrary support size in each kernel results in a filtered reproducing kernel function that has the same reproducing order. This property is utilized to separate the numerical solution into an unfiltered lower order portion and a filtered higher order portion. As such, the corresponding high-pass filter of this reproducing kernel filter can be used to identify the locations of high gradient, and consequently serves as an operator for error indication in meshfree analysis. In conjunction with the naturally conforming property of the reproducing kernel approximation, a meshfree adaptivity method is also proposed.

  15. Two-dimensional cubic convolution.

    Science.gov (United States)

    Reichenbach, Stephen E; Geng, Frank

    2003-01-01

    The paper develops two-dimensional (2D), nonseparable, piecewise cubic convolution (PCC) for image interpolation. Traditionally, PCC has been implemented based on a one-dimensional (1D) derivation with a separable generalization to two dimensions. However, typical scenes and imaging systems are not separable, so the traditional approach is suboptimal. We develop a closed-form derivation for a two-parameter, 2D PCC kernel with support [-2,2] x [-2,2] that is constrained for continuity, smoothness, symmetry, and flat-field response. Our analyses, using several image models, including Markov random fields, demonstrate that the 2D PCC yields small improvements in interpolation fidelity over the traditional, separable approach. The constraints on the derivation can be relaxed to provide greater flexibility and performance.

  16. Approximating W projection as a separable kernel

    OpenAIRE

    Merry, Bruce

    2015-01-01

    W projection is a commonly-used approach to allow interferometric imaging to be accelerated by Fast Fourier Transforms (FFTs), but it can require a huge amount of storage for convolution kernels. The kernels are not separable, but we show that they can be closely approximated by separable kernels. The error scales with the fourth power of the field of view, and so is small enough to be ignored at mid to high frequencies. We also show that hybrid imaging algorithms combining W projection with ...

  17. Approximating W projection as a separable kernel

    Science.gov (United States)

    Merry, Bruce

    2016-02-01

    W projection is a commonly used approach to allow interferometric imaging to be accelerated by fast Fourier transforms, but it can require a huge amount of storage for convolution kernels. The kernels are not separable, but we show that they can be closely approximated by separable kernels. The error scales with the fourth power of the field of view, and so is small enough to be ignored at mid- to high frequencies. We also show that hybrid imaging algorithms combining W projection with either faceting, snapshotting, or W stacking allow the error to be made arbitrarily small, making the approximation suitable even for high-resolution wide-field instruments.
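
    The generic route to such an approximation is a truncated singular value decomposition of the sampled kernel, sketched below; the paper's analysis is specific to W-projection kernels and additionally bounds the resulting error:

        import numpy as np

        def separable_approx(kernel, rank=1):
            # Best rank-r approximation in the Frobenius norm; rank=1
            # yields the closest fully separable (outer-product) kernel.
            U, s, Vt = np.linalg.svd(kernel)
            rows = U[:, :rank] * np.sqrt(s[:rank])         # (m, rank)
            cols = (Vt[:rank, :].T * np.sqrt(s[:rank])).T  # (rank, n)
            approx = rows @ cols
            rel_err = np.linalg.norm(kernel - approx) / np.linalg.norm(kernel)
            return rows, cols, rel_err

    A rank-1 factorisation replaces one dense 2-D convolution per grid point with two 1-D convolutions, which is the source of the storage and compute savings.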

  18. Scattering correction based on regularization de-convolution for Cone-Beam CT

    OpenAIRE

    Xie, Shi-peng; Yan, Rui-ju

    2016-01-01

    In Cone-Beam CT (CBCT) imaging systems, the scattering phenomenon has a significant impact on the reconstructed image and is a long-standing research topic in CBCT. In this paper, we propose a simple, novel and fast approach for mitigating scatter artifacts and increasing the image contrast in CBCT, belonging to the category of convolution-based methods, in which the projected data are de-convolved with a convolution kernel. A key step in this method is determining the convolution kernel. Co...

  19. Compressing Convolutional Neural Networks

    OpenAIRE

    Chen, Wenlin; Wilson, James T.; Tyree, Stephen; Weinberger, Kilian Q.; Chen, Yixin

    2015-01-01

    Convolutional neural networks (CNN) are increasingly used in many areas of computer vision. They are particularly attractive because of their ability to "absorb" great quantities of labeled data through millions of parameters. However, as model sizes increase, so do the storage and memory requirements of the classifiers. We present a novel network architecture, Frequency-Sensitive Hashed Nets (FreshNets), which exploits inherent redundancy in both convolutional layers and fully-connected laye...

  20. Image interpolation by two-dimensional parametric cubic convolution.

    Science.gov (United States)

    Shi, Jiazheng; Reichenbach, Stephen E

    2006-07-01

    Cubic convolution is a popular method for image interpolation. Traditionally, the piecewise-cubic kernel has been derived in one dimension with one parameter and applied to two-dimensional (2-D) images in a separable fashion. However, images typically are statistically nonseparable, which motivates this investigation of nonseparable cubic convolution. This paper derives two new nonseparable, 2-D cubic-convolution kernels. The first kernel, with three parameters (designated 2D-3PCC), is the most general 2-D, piecewise-cubic interpolator defined on [-2, 2] x [-2, 2] with constraints for biaxial symmetry, diagonal (or 90 degrees rotational) symmetry, continuity, and smoothness. The second kernel, with five parameters (designated 2D-5PCC), relaxes the constraint of diagonal symmetry, based on the observation that many images have rotationally asymmetric statistical properties. This paper also develops a closed-form solution for determining the optimal parameter values for parametric cubic-convolution kernels with respect to ensembles of scenes characterized by autocorrelation (or power spectrum). This solution establishes a practical foundation for adaptive interpolation based on local autocorrelation estimates. Quantitative fidelity analyses and visual experiments indicate that these new methods can outperform several popular interpolation methods. An analysis of the error budgets for reconstruction error associated with blurring and aliasing illustrates that the methods improve interpolation fidelity for images with aliased components. For images with little or no aliasing, the methods yield results similar to other popular methods. Both 2D-3PCC and 2D-5PCC are low-order polynomials with small spatial support and so are easy to implement and efficient to apply.

  1. Trainable Convolution Filters and Their Application to Face Recognition.

    Science.gov (United States)

    Kumar, Ritwik; Banerjee, Arunava; Vemuri, Baba C; Pfister, Hanspeter

    2012-07-01

    In this paper, we present a novel image classification system that is built around a core of trainable filter ensembles that we call Volterra kernel classifiers. Our system treats images as a collection of possibly overlapping patches and is composed of three components: (1) A scheme for a single patch classification that seeks a smooth, possibly nonlinear, functional mapping of the patches into a range space, where patches of the same class are close to one another, while patches from different classes are far apart, in the L_2 sense. This mapping is accomplished using trainable convolution filters (or Volterra kernels) where the convolution kernel can be of any shape or order. (2) Given a corpus of Volterra classifiers with various kernel orders and shapes for each patch, a boosting scheme for automatically selecting the best weighted combination of the classifiers to achieve higher per-patch classification rate. (3) A scheme for aggregating the classification information obtained for each patch via voting for the parent image classification. We demonstrate the effectiveness of the proposed technique using face recognition as an application area and provide extensive experiments on the Yale, CMU PIE, Extended Yale B, Multi-PIE, and MERL Dome benchmark face data sets. We call the Volterra kernel classifiers applied to face recognition Volterrafaces. We show that our technique, which falls into the broad class of embedding-based face image discrimination methods, consistently outperforms various state-of-the-art methods in the same category.

  2. Convolution copula econometrics

    CERN Document Server

    Cherubini, Umberto; Mulinacci, Sabrina

    2016-01-01

    This book presents a novel approach to time series econometrics, which studies the behavior of nonlinear stochastic processes. This approach allows for an arbitrary dependence structure in the increments and provides a generalization with respect to the standard linear independent increments assumption of classical time series models. The book offers a solution to the problem of a general semiparametric approach, which is given by a concept called C-convolution (convolution of dependent variables), and the corresponding theory of convolution-based copulas. Intended for econometrics and statistics scholars with a special interest in time series analysis and copula functions (or other nonparametric approaches), the book is also useful for doctoral students with a basic knowledge of copula functions wanting to learn about the latest research developments in the field.

  3. Efficient convolutional sparse coding

    Energy Technology Data Exchange (ETDEWEB)

    Wohlberg, Brendt

    2017-06-20

    Computationally efficient algorithms may be applied for fast dictionary learning solving the convolutional sparse coding problem in the Fourier domain. More specifically, efficient convolutional sparse coding may be derived within an alternating direction method of multipliers (ADMM) framework that utilizes fast Fourier transforms (FFT) to solve the main linear system in the frequency domain. Such algorithms may enable a significant reduction in computational cost over conventional approaches by implementing a linear solver for the most critical and computationally expensive component of the conventional iterative algorithm. The theoretical computational cost of the algorithm may be reduced from O(M^3 N) to O(M N log N), where N is the dimensionality of the data and M is the number of elements in the dictionary. This significant improvement in efficiency may greatly increase the range of problems that can practically be addressed via convolutional sparse representations.
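
    The component that the FFT makes cheap is a regularised linear solve that the transform diagonalises. For a single dictionary filter the solve reduces to elementwise operations, as in the sketch below; the coupled multi-filter system of the full algorithm has similar frequency-domain structure but requires more care:

        import numpy as np

        def fourier_ridge_solve(d, s, rho=1.0):
            # Minimise (1/2)||d (*) x - s||^2 + (rho/2)||x||^2, where (*)
            # is circular convolution. In the Fourier domain the normal
            # equations (|D|^2 + rho) X = conj(D) S are diagonal.
            D = np.fft.fft(d, n=len(s))
            S = np.fft.fft(s)
            X = np.conj(D) * S / (np.abs(D)**2 + rho)
            return np.real(np.fft.ifft(X))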

  4. X-Y separable pyramid steerable scalable kernels

    OpenAIRE

    Shy, Douglas; Perona, Pietro

    1994-01-01

    A new method for generating X-Y separable, steerable, scalable approximations of filter kernels is proposed which is based on a generalization of the singular value decomposition (SVD) to three dimensions. This “pseudo-SVD” improves upon a previous scheme due to Perona (1992) in that it reduces convolution time and storage requirements. An adaptation of the pseudo-SVD is proposed to generate steerable and scalable kernels which are suitable for use with a Laplacian pyramid. The properties of ...

  5. Throughput Scaling Of Convolution For Error-Tolerant Multimedia Applications

    CERN Document Server

    Anam, Mohammad Ashraful

    2012-01-01

    Convolution and cross-correlation are the basis of filtering and pattern or template matching in multimedia signal processing. We propose two throughput scaling options for any one-dimensional convolution kernel in programmable processors by adjusting the imprecision (distortion) of computation. Our approach is based on scalar quantization, followed by two forms of tight packing in floating-point (one of which is proposed in this paper) that allow for concurrent calculation of multiple results. We illustrate how our approach can operate as an optional pre- and post-processing layer for off-the-shelf optimized convolution routines. This is useful for multimedia applications that are tolerant to processing imprecision and for cases where the input signals are inherently noisy (error tolerant multimedia applications). Indicative experimental results with a digital music matching system and an MPEG-7 audio descriptor system demonstrate that the proposed approach offers up to 175% increase in processing throughput...

  6. Spatially variant convolution with scaled B-splines.

    Science.gov (United States)

    Muñoz-Barrutia, Arrate; Artaechevarria, Xabier; Ortiz-de-Solorzano, Carlos

    2010-01-01

    We present an efficient algorithm to compute multidimensional spatially variant convolutions--or inner products--between N-dimensional signals and B-splines--or their derivatives--of any order and arbitrary sizes. The multidimensional B-splines are computed as tensor products of 1-D B-splines, and the input signal is expressed in a B-spline basis. The convolution is then computed by using an adequate combination of integration and scaled finite differences, so as to have, for moderate and large scale values, a computational complexity that does not depend on the scaling factor. To show in practice the benefit of using our spatially variant convolution approach, we present an adaptive noise filter that adjusts the kernel size to the local image characteristics and a high-sensitivity local ridge detector.

  7. Maximum-likelihood estimation of circle parameters via convolution.

    Science.gov (United States)

    Zelniker, Emanuel E; Clarkson, I Vaughan L

    2006-04-01

    The accurate fitting of a circle to noisy measurements of circumferential points is a much studied problem in the literature. In this paper, we present an interpretation of the maximum-likelihood estimator (MLE) and the Delogne-Kåsa estimator (DKE) for circle-center and radius estimation in terms of convolution on an image which is ideal in a certain sense. We use our convolution-based MLE approach to find good estimates for the parameters of a circle in digital images. In digital images, these estimates can then be used as preliminary estimates for various other numerical techniques that further refine them to achieve subpixel accuracy. We also investigate the relationship between the convolution of an ideal image with a "phase-coded kernel" (PCK) and the MLE. This is related to the "phase-coded annulus" introduced by Atherton and Kerbyson, who proposed it as one of a number of new convolution kernels for estimating circle center and radius. We show that the PCK is an approximate MLE (AMLE). We compare our AMLE method to the MLE and the DKE as well as the Cramér-Rao Lower Bound in ideal images and in both real and synthetic digital images.
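
    The following sketch is not the paper's estimator, but it illustrates the underlying convolution view of circle fitting: for a fixed radius, convolving an edge map with an annulus template and taking the argmax scores candidate centres (the phase-coded kernel refines this idea by encoding radius in the phase):

        import numpy as np
        from scipy.signal import fftconvolve

        def circle_centre_by_convolution(edges, radius, width=1.0):
            # Soft annulus template of the given radius.
            half = int(radius + 3*width)
            y, x = np.mgrid[-half:half + 1, -half:half + 1]
            template = np.exp(-0.5*((np.hypot(x, y) - radius)/width)**2)
            # Each edge pixel votes for centres lying one radius away.
            votes = fftconvolve(edges.astype(float), template, mode='same')
            return np.unravel_index(np.argmax(votes), votes.shape)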

  8. Convolution Operators on Groups

    CERN Document Server

    Derighetti, Antoine

    2011-01-01

    This volume is devoted to a systematic study of the Banach algebra of the convolution operators of a locally compact group. Inspired by classical Fourier analysis we consider operators on Lp spaces, arriving at a description of these operators and Lp versions of the theorems of Wiener and Kaplansky-Helson.

  9. Univalence for convolutions

    Directory of Open Access Journals (Sweden)

    Herb Silverman

    1996-01-01

    Full Text Available The radius of univalence is found for the convolution f∗g of functions f∈S (normalized univalent functions) and g∈C (close-to-convex functions). A lower bound for the radius of univalence is also determined when f and g range over all of S. Finally, a characterization of C provides an inclusion relationship.

  10. Modelling of nonlinear bridge aerodynamics and aeroelasticity: a convolution based approach

    Directory of Open Access Journals (Sweden)

    Wu T.

    2012-07-01

    Full Text Available Innovative bridge decks exhibit nonlinear behaviour in wind tunnel studies, which has placed increasing importance on nonlinear bridge aerodynamics/aeroelasticity considerations for long-span bridges. The convolution scheme concerning the first-order kernels for linear analysis is reviewed, followed by an introduction to higher-order kernels for nonlinear analysis. A numerical example of a long-span suspension bridge is presented that demonstrates the efficacy of the proposed scheme.

  11. Discretization of continuous convolution operators for accurate modeling of wave propagation in digital holography.

    Science.gov (United States)

    Chacko, Nikhil; Liebling, Michael; Blu, Thierry

    2013-10-01

    Discretization of continuous (analog) convolution operators by direct sampling of the convolution kernel and use of fast Fourier transforms is highly efficient. However, it assumes the input and output signals are band-limited, a condition rarely met in practice, where signals have finite support or abrupt edges and sampling is nonideal. Here, we propose to approximate signals in analog, shift-invariant function spaces, which do not need to be band-limited, resulting in discrete coefficients for which we derive discrete convolution kernels that accurately model the analog convolution operator while taking into account nonideal sampling devices (such as finite fill-factor cameras). This approach retains the efficiency of direct sampling but not its limiting assumption. We propose fast forward and inverse algorithms that handle finite-length, periodic, and mirror-symmetric signals with rational sampling rates. We provide explicit convolution kernels for computing coherent wave propagation in the context of digital holography. When compared to band-limited methods in simulations, our method leads to fewer reconstruction artifacts when signals have sharp edges or when using nonideal sampling devices.

  12. Convolutional Goppa codes defined on fibrations

    CERN Document Server

    Curto, J I Iglesias; Martín, F J Plaza; Sotelo, G Serrano

    2010-01-01

    We define a new class of Convolutional Codes in terms of fibrations of algebraic varieties, generalizing our previous constructions of Convolutional Goppa Codes. Using this general construction we can give several examples of Maximum Distance Separable (MDS) Convolutional Codes.

  13. Convolutional coding techniques for data protection

    Science.gov (United States)

    Massey, J. L.

    1975-01-01

    Results of research on the use of convolutional codes in data communications are presented. Convolutional coding fundamentals are discussed along with modulation and coding interaction. Concatenated coding systems and data compression with convolutional codes are described.

  14. Consensus Convolutional Sparse Coding

    KAUST Repository

    Choudhury, Biswarup

    2017-04-11

    Convolutional sparse coding (CSC) is a promising direction for unsupervised learning in computer vision. In contrast to recent supervised methods, CSC allows for convolutional image representations to be learned that are equally useful for high-level vision tasks and low-level image reconstruction and can be applied to a wide range of tasks without problem-specific retraining. Due to their extreme memory requirements, however, existing CSC solvers have so far been limited to low-dimensional problems and datasets using a handful of low-resolution example images at a time. In this paper, we propose a new approach to solving CSC as a consensus optimization problem, which lifts these limitations. By learning CSC features from large-scale image datasets for the first time, we achieve significant quality improvements in a number of imaging tasks. Moreover, the proposed method enables new applications in high dimensional feature learning that have been intractable using existing CSC methods. This is demonstrated for a variety of reconstruction problems across diverse problem domains, including 3D multispectral demosaicking and 4D light field view synthesis.

  15. Strongly-MDS convolutional codes

    NARCIS (Netherlands)

    Gluesing-Luerssen, H; Rosenthal, J; Smarandache, R

    2006-01-01

    Maximum-distance separable (MDS) convolutional codes have the property that their free distance is maximal among all codes of the same rate and the same degree. In this paper, a class of MDS convolutional codes is introduced whose column distances reach the generalized Singleton bound at the earlies

  16. Separating Underdetermined Convolutive Speech Mixtures

    DEFF Research Database (Denmark)

    Pedersen, Michael Syskind; Wang, DeLiang; Larsen, Jan

    2006-01-01

    A method for underdetermined blind source separation of convolutive mixtures is proposed. The proposed framework is applicable for separation of instantaneous as well as convolutive speech mixtures. It is possible to iteratively extract each speech signal from the mixture by combining blind source separation...

  17. Topological convolution algebras

    CERN Document Server

    Alpay, Daniel

    2012-01-01

    In this paper we introduce a new family of topological convolution algebras of the form $\bigcup_{p\in\mathbb{N}} L_2(S,\mu_p)$, where $S$ is a Borel semi-group in a locally compact group $G$, which carries an inequality of the type $\|f*g\|_p \le A_{p,q} \|f\|_q \|g\|_p$ for $p > q+d$, where $d$ is pre-assigned and $A_{p,q}$ is a constant. We give a sufficient condition on the measures $\mu_p$ for such an inequality to hold. We study the functional calculus and the spectrum of the elements of these algebras, and present two examples, one in the setting of non commutative stochastic distributions, and the other related to Dirichlet series.

  18. Subsampling Realised Kernels

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Hansen, Peter Reinhard; Lunde, Asger

    2011-01-01

    In a recent paper we have introduced the class of realised kernel estimators of the increments of quadratic variation in the presence of noise. We showed that this estimator is consistent and derived its limit distribution under various assumptions on the kernel weights. In this paper we extend our analysis, looking at the class of subsampled realised kernels, and we derive the limit theory for this class of estimators. We find that subsampling is highly advantageous for estimators based on discontinuous kernels, such as the truncated kernel. For kinked kernels, such as the Bartlett kernel, we show that subsampling is impotent, in the sense that subsampling has no effect on the asymptotic distribution. Perhaps surprisingly, for the efficient smooth kernels, such as the Parzen kernel, we show that subsampling is harmful as it increases the asymptotic variance. We also study the performance of subsampled...

  19. Off-resonance artifacts correction with convolution in k-space (ORACLE).

    Science.gov (United States)

    Lin, Wei; Huang, Feng; Simonotto, Enrico; Duensing, George R; Reykowski, Arne

    2012-06-01

    Off-resonance artifacts hinder the wider applicability of echo-planar imaging and non-Cartesian MRI methods such as radial and spiral. In this work, a general and rapid method is proposed for off-resonance artifacts correction based on data convolution in k-space. The acquired k-space is divided into multiple segments based on their acquisition times. Off-resonance-induced artifact within each segment is removed by applying a convolution kernel, which is the Fourier transform of an off-resonance correcting spatial phase modulation term. The field map is determined from the inverse Fourier transform of a basis kernel, which is calibrated from data fitting in k-space. The technique was demonstrated in phantom and in vivo studies for radial, spiral and echo-planar imaging datasets. For radial acquisitions, the proposed method allows the self-calibration of the field map from the imaging data, when an alternating view-angle ordering scheme is used. An additional advantage for off-resonance artifacts correction based on data convolution in k-space is the reusability of convolution kernels to images acquired with the same sequence but different contrasts. Copyright © 2011 Wiley-Liss, Inc.
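
    The identity the method builds on is the convolution theorem: multiplying an image by a spatial phase term is the same as circularly convolving its k-space with the Fourier transform of that term. A small numerical check of this equivalence, using numpy's unnormalised FFT conventions (the segment-wise correction and field-map calibration are beyond this sketch):

        import numpy as np

        rng = np.random.default_rng(0)
        img = rng.standard_normal((64, 64)) + 1j*rng.standard_normal((64, 64))
        phi = 2*np.pi*rng.random((64, 64))          # spatial phase map

        # k-space convolution kernel: FFT of the phase term, scaled by 1/N.
        kernel = np.fft.fft2(np.exp(1j*phi)) / img.size

        # Route 1: modulate in image space, then transform.
        k_direct = np.fft.fft2(img * np.exp(1j*phi))

        # Route 2: circularly convolve the k-space data with the kernel
        # (circular convolution computed via a further FFT pair).
        kspace = np.fft.fft2(img)
        k_conv = np.fft.ifft2(np.fft.fft2(kspace) * np.fft.fft2(kernel))

        assert np.allclose(k_direct, k_conv)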

  20. Regularization techniques for PSF-matching kernels - I. Choice of kernel basis

    Science.gov (United States)

    Becker, A. C.; Homrighausen, D.; Connolly, A. J.; Genovese, C. R.; Owen, R.; Bickerton, S. J.; Lupton, R. H.

    2012-09-01

    We review current methods for building point spread function (PSF)-matching kernels for the purposes of image subtraction or co-addition. Such methods use a linear decomposition of the kernel on a series of basis functions. The correct choice of these basis functions is fundamental to the efficiency and effectiveness of the matching - the chosen bases should represent the underlying signal using a reasonably small number of shapes, and/or have a minimum number of user-adjustable tuning parameters. We examine methods whose bases comprise multiple Gauss-Hermite polynomials, as well as a form-free basis composed of delta-functions. Kernels derived from delta-functions are unsurprisingly shown to be more expressive; they are able to take more general shapes and perform better in situations where sum-of-Gaussian methods are known to fail. However, due to its many degrees of freedom (the maximum number allowed by the kernel size) this basis tends to overfit the problem and yields noisy kernels having large variance. We introduce a new technique to regularize these delta-function kernel solutions, which bridges the gap between the generality of delta-function kernels and the compactness of sum-of-Gaussian kernels. Through this regularization we are able to create general kernel solutions that represent the intrinsic shape of the PSF-matching kernel with only one degree of freedom, the strength of the regularization λ. The role of λ is effectively to exchange variance in the resulting difference image with variance in the kernel itself. We examine considerations in choosing the value of λ, including statistical risk estimators and the ability of the solution to predict solutions for adjacent areas. Both of these suggest moderate strengths of λ between 0.1 and 1.0, although this optimization is likely data set dependent. This model allows for flexible representations of the convolution kernel that have significant predictive ability and will prove useful in implementing

  1. Blind recognition of punctured convolutional codes

    Institute of Scientific and Technical Information of China (English)

    LU Peizhong; LI Shen; ZOU Yan; LUO Xiangyang

    2005-01-01

    This paper presents an algorithm for blind recognition of punctured convolutional codes, which is an important problem in adaptive modulation and coding. For a given finite sequence of a convolutional code, the parity check matrix of the convolutional code is first computed by solving a linear system with adequate error tolerance. Then a minimal basic encoding matrix of the original convolutional code and its puncturing pattern are determined according to the known parity check matrix of the punctured convolutional code.

  2. Meda Inequality for Rearrangements of the Convolution on the Heisenberg Group and Some Applications

    Directory of Open Access Journals (Sweden)

    V. S. Guliyev

    2009-01-01

    Full Text Available The Meda inequality for rearrangements of the convolution operator on the Heisenberg group ℍn is proved. By using the Meda inequality, an O'Neil-type inequality for the convolution is obtained. As applications of these results, some sufficient and necessary conditions for the boundedness of the fractional maximal operator MΩ,α and fractional integral operator IΩ,α with rough kernels in the spaces Lp(ℍn) are found. Finally, we give some comments on the extension of our results to the case of homogeneous groups.

  3. Robust Kernel (Cross-) Covariance Operators in Reproducing Kernel Hilbert Space toward Kernel Methods

    OpenAIRE

    Alam, Md. Ashad; Fukumizu, Kenji; Wang, Yu-Ping

    2016-01-01

    To the best of our knowledge, there are no general well-founded robust methods for statistical unsupervised learning. Most of the unsupervised methods explicitly or implicitly depend on the kernel covariance operator (kernel CO) or kernel cross-covariance operator (kernel CCO). They are sensitive to contaminated data, even when using bounded positive definite kernels. First, we propose robust kernel covariance operator (robust kernel CO) and robust kernel crosscovariance operator (robust kern...

  4. Nonextensive Entropic Kernels

    Science.gov (United States)

    2008-08-01

    Semigroup kernels [Berg et al., 1984] have been used in a machine learning context by Cuturi and Vert [2005]. Definition 26: Let (X, +) be a semigroup. A function ϕ : X → R is called pd (in the semigroup sense) if k : X × X → R, defined as k(x, y) = ϕ(x + y), is a pd kernel. Likewise, ϕ is called nd if k is a nd kernel. Accordingly, these are called semigroup kernels. The Jensen-Shannon and Tsallis kernels build on this basic result, which allows deriving pd kernels based on...

  5. Approximate kernel competitive learning.

    Science.gov (United States)

    Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang

    2015-03-01

    Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large scale data processing, because (1) it has to calculate and store the full kernel matrix that is too large to be calculated and kept in the memory and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large scale dataset. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL), which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation modelling would work for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates the approximate kernel competitive learning for large scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL can perform comparably as KCL, with a large reduction on computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision against related approximate clustering approaches.

  6. Kernel-Based Semantic Relation Detection and Classification via Enriched Parse Tree Structure

    Institute of Scientific and Technical Information of China (English)

    Guo-Dong Zhou; Qiao-Ming Zhu

    2011-01-01

    This paper proposes a tree kernel method of semantic relation detection and classification (RDC) between named entities. It resolves two critical problems in previous tree kernel methods of RDC. First, a new tree kernel is presented to better capture the inherent structural information in a parse tree by enabling the standard convolution tree kernel with context-sensitiveness and approximate matching of sub-trees. Second, an enriched parse tree structure is proposed to well derive necessary structural information, e.g., proper latent annotations, from a parse tree. Evaluation on the ACE RDC corpora shows that both the new tree kernel and the enriched parse tree structure contribute significantly to RDC and our tree kernel method much outperforms the state-of-the-art ones.

  7. Convolutional Network Coding Based on Matrix Power Series Representation

    CERN Document Server

    Guo, Wangmei; Sun, Qifu Tyler

    2011-01-01

    In this paper, convolutional network coding is formulated by means of matrix power series representation of the local encoding kernel (LEK) matrices and global encoding kernel (GEK) matrices to establish its theoretical fundamentals for practical implementations. From the encoding perspective, the GEKs of a convolutional network code (CNC) are shown to be uniquely determined by its LEK matrix $K(z)$ if $K_0$, the constant coefficient matrix of $K(z)$, is nilpotent. This will simplify the CNC design because a nilpotent $K_0$ suffices to guarantee a unique set of GEKs. Besides, the relation between coding topology and $K(z)$ is also discussed. From the decoding perspective, the main theme is to justify that the first $L+1$ terms of the GEK matrix $F(z)$ at a sink $r$ suffice to check whether the code is decodable at $r$ with delay $L$ and to start decoding if so. The concomitant decoding scheme avoids dealing with $F(z)$, which may contain infinite terms, as a whole and hence reduces the complexity of decodabil...

  8. Matrix convolution operators on groups

    CERN Document Server

    Chu, Cho-Ho

    2008-01-01

    In the last decade, convolution operators of matrix functions have received unusual attention due to their diverse applications. This monograph presents some new developments in the spectral theory of these operators. The setting is the Lp spaces of matrix-valued functions on locally compact groups. The focus is on the spectra and eigenspaces of convolution operators on these spaces, defined by matrix-valued measures. Among various spectral results, the L2-spectrum of such an operator is completely determined and as an application, the spectrum of a discrete Laplacian on a homogeneous graph is computed using this result. The contractivity properties of matrix convolution semigroups are studied and applications to harmonic functions on Lie groups and Riemannian symmetric spaces are discussed. An interesting feature is the presence of Jordan algebraic structures in matrix-harmonic functions.

  9. A REMARK ON CERTAIN CONVOLUTION OPERATOR

    Institute of Scientific and Technical Information of China (English)

    刘金林

    1993-01-01

    A certain operator D(a+p-1) defined by convolutions (or Hadamard products) is introduced. The object of this paper is to give an application of the convolution operator D(a+p-1) to the differential inequalities.

  10. A Shortest Dependency Path Based Convolutional Neural Network for Protein-Protein Relation Extraction

    OpenAIRE

    2016-01-01

    The state-of-the-art methods for protein-protein interaction (PPI) extraction are primarily based on kernel methods, and their performance strongly depends on handcrafted features. In this paper, we tackle PPI extraction by using convolutional neural networks (CNN) and propose a shortest dependency path based CNN (sdpCNN) model. The proposed method (1) only takes the sdp and word embedding as input and (2) could avoid bias from feature selection by using CNN. We performed experiments on sta...

  11. Fast Algorithms for Convolutional Neural Networks

    OpenAIRE

    Lavin, Andrew; Gray, Scott

    2015-01-01

    Deep convolutional neural networks take GPU days of compute time to train on large data sets. Pedestrian detection for self driving cars requires very low latency. Image recognition for mobile phones is constrained by limited processing resources. The success of convolutional neural networks in these situations is limited by how fast we can compute them. Conventional FFT based convolution is fast for large filters, but state of the art convolutional neural networks use small, 3x3 filters. We ...

  12. Optimized Kernel Entropy Components.

    Science.gov (United States)

    Izquierdo-Verdiguier, Emma; Laparra, Valero; Jenssen, Robert; Gomez-Chova, Luis; Camps-Valls, Gustau

    2016-02-25

    This brief addresses two main issues of the standard kernel entropy component analysis (KECA) algorithm: the optimization of the kernel decomposition and the optimization of the Gaussian kernel parameter. KECA roughly reduces to a sorting of the importance of kernel eigenvectors by entropy instead of variance, as in the kernel principal components analysis. In this brief, we propose an extension of the KECA method, named optimized KECA (OKECA), that directly extracts the optimal features retaining most of the data entropy by means of compacting the information in very few features (often in just one or two). The proposed method produces features which have higher expressive power. In particular, it is based on the independent component analysis framework, and introduces an extra rotation to the eigen decomposition, which is optimized via gradient-ascent search. This maximum entropy preservation suggests that OKECA features are more efficient than KECA features for density estimation. In addition, a critical issue in both the methods is the selection of the kernel parameter, since it critically affects the resulting performance. Here, we analyze the most common kernel length-scale selection criteria. The results of both the methods are illustrated in different synthetic and real problems. Results show that OKECA returns projections with more expressive power than KECA, the most successful rule for estimating the kernel parameter is based on maximum likelihood, and OKECA is more robust to the selection of the length-scale parameter in kernel density estimation.

  13. Engineering Multirate Convolutions for Radar Imaging

    NARCIS (Netherlands)

    Bierens, L.H.J.; Deprettere, E.F.

    1996-01-01

    We present a schematic design methodology for multirate convolution systems, based on combined algorithmic development and architecture design. It allows us to map the algebraic specification of a long convolution algorithm directly onto efficient fast convolution hardware based on short FFT process

  14. Fast 2D Convolutions and Cross-Correlations Using Scalable Architectures.

    Science.gov (United States)

    Carranza, Cesar; Llamocca, Daniel; Pattichis, Marios

    2017-05-01

    The manuscript describes fast and scalable architectures and associated algorithms for computing convolutions and cross-correlations. The basic idea is to map 2D convolutions and cross-correlations to a collection of 1D convolutions and cross-correlations in the transform domain. This is accomplished through the use of the discrete periodic Radon transform for general kernels and the use of singular value decomposition (SVD) and LU decompositions for low-rank kernels. The approach uses scalable architectures that can be fitted into modern FPGA and Zynq-SOC devices. Based on different types of available resources, for P×P blocks, 2D convolutions and cross-correlations can be computed in as few as O(P) and up to O(P^2) clock cycles. Thus, there is a trade-off between performance and the required numbers and types of resources. We provide implementations of the proposed architectures using modern programmable devices (Virtex-7 and Zynq-SOC). Based on the amounts and types of required resources, we show that the proposed approaches significantly outperform current methods.

  15. Reed-Solomon convolutional codes

    NARCIS (Netherlands)

    Gluesing-Luerssen, H; Schmale, W

    2005-01-01

    In this paper we will introduce a specific class of cyclic convolutional codes. The construction is based on Reed-Solomon block codes. The algebraic parameters as well as the distance of these codes are determined. This shows that some of these codes are optimal or near optimal.

  16. Deconvolution of a linear combination of Gaussian kernels by an inhomogeneous Fredholm integral equation of second kind and applications to image processing

    CERN Document Server

    Ulmer, Waldemar

    2011-01-01

    Scatter processes of photons lead to blurring of images. Multiple scatter can usually be described by one Gaussian convolution kernel; this can be a crude approximation, and a linear combination of two or three Gaussian kernels is needed to account for tails. If image structures are recorded by appropriate measurements, these structures are always blurred. The ideal image (the source function without any blurring) is subjected to Gaussian convolutions to yield a blurred image, which is recorded by a detector array. The inverse problem of this procedure is the determination of the ideal source image from the actually recorded image. If the scatter parameters are known, we are able to calculate the idealistic source structure by a deconvolution. We extend this to linear combinations of two or three Gaussian convolution kernels in order to find applications to the aforementioned image processing, where a single Gaussian kernel would be crude. In this communication, we derive a new deconvolution method for a linear combination of...

  17. Regularization in kernel learning

    CERN Document Server

    Mendelson, Shahar; 10.1214/09-AOS728

    2010-01-01

    Under mild assumptions on the kernel, we obtain the best known error rates in a regularized learning scenario taking place in the corresponding reproducing kernel Hilbert space (RKHS). The main novelty in the analysis is a proof that one can use a regularization term that grows significantly slower than the standard quadratic growth in the RKHS norm.

  18. Iterative software kernels

    Energy Technology Data Exchange (ETDEWEB)

    Duff, I.

    1994-12-31

    This workshop focuses on kernels for iterative software packages. Specifically, the three speakers discuss various aspects of sparse BLAS kernels. Their topics are: `Current status of user level sparse BLAS`; `Current status of the sparse BLAS toolkit`; and `Adding matrix-matrix and matrix-matrix-matrix multiply to the sparse BLAS toolkit`.

  19. Kernel Affine Projection Algorithms

    Directory of Open Access Journals (Sweden)

    José C. Príncipe

    2008-05-01

    Full Text Available The combination of the famed kernel trick and affine projection algorithms (APAs yields powerful nonlinear extensions, named collectively here, KAPA. This paper is a follow-up study of the recently introduced kernel least-mean-square algorithm (KLMS. KAPA inherits the simplicity and online nature of KLMS while reducing its gradient noise, boosting performance. More interestingly, it provides a unifying model for several neural network techniques, including kernel least-mean-square algorithms, kernel adaline, sliding-window kernel recursive-least squares (KRLS, and regularization networks. Therefore, many insights can be gained into the basic relations among them and the tradeoff between computation complexity and performance. Several simulations illustrate its wide applicability.

  20. Kernel Affine Projection Algorithms

    Science.gov (United States)

    Liu, Weifeng; Príncipe, José C.

    2008-12-01

    The combination of the famed kernel trick and affine projection algorithms (APAs) yields powerful nonlinear extensions, named collectively here, KAPA. This paper is a follow-up study of the recently introduced kernel least-mean-square algorithm (KLMS). KAPA inherits the simplicity and online nature of KLMS while reducing its gradient noise, boosting performance. More interestingly, it provides a unifying model for several neural network techniques, including kernel least-mean-square algorithms, kernel adaline, sliding-window kernel recursive-least squares (KRLS), and regularization networks. Therefore, many insights can be gained into the basic relations among them and the tradeoff between computation complexity and performance. Several simulations illustrate its wide applicability.
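
    A minimal sketch of the precursor algorithm, kernel least-mean-square (KLMS), which KAPA generalises: an online, growing Gaussian radial-basis expansion updated by stochastic gradient descent (the step size eta and kernel width sigma are illustrative choices):

        import numpy as np

        def klms(X, d, eta=0.2, sigma=1.0):
            # Online prediction f(x) = sum_i alpha_i * k(c_i, x); each new
            # sample is appended as a centre with coefficient eta * error.
            centres, alphas, errors = [], [], []
            for x, target in zip(X, d):
                if centres:
                    diff = np.asarray(centres) - x
                    kvec = np.exp(-np.sum(diff**2, axis=1) / (2*sigma**2))
                    y = float(np.dot(alphas, kvec))
                else:
                    y = 0.0
                e = target - y
                centres.append(x)
                alphas.append(eta * e)
                errors.append(e)
            return np.asarray(centres), np.asarray(alphas), np.asarray(errors)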

  1. Data convolution and combination operation (COCOA) for motion ghost artifacts reduction.

    Science.gov (United States)

    Huang, Feng; Lin, Wei; Börnert, Peter; Li, Yu; Reykowski, Arne

    2010-07-01

    A novel method, data convolution and combination operation, is introduced for the reduction of ghost artifacts due to motion or flow during data acquisition. Since neighboring k-space data points from different coil elements have strong correlations, a new "synthetic" k-space with dispersed motion artifacts can be generated through convolution for each coil. The corresponding convolution kernel can be self-calibrated using the acquired k-space data. The synthetic and the acquired data sets can be checked for consistency to identify k-space areas that are motion corrupted. Subsequently, these two data sets can be combined appropriately to produce a k-space data set showing a reduced level of motion induced error. If the acquired k-space contains isolated error, the error can be completely eliminated through data convolution and combination operation. If the acquired k-space data contain widespread errors, the application of the convolution also significantly reduces the overall error. Results with simulated and in vivo data demonstrate that this self-calibrated method robustly reduces ghost artifacts due to swallowing, breathing, or blood flow, with a minimum impact on the image signal-to-noise ratio. (c) 2010 Wiley-Liss, Inc.

  2. Quantitative evaluation of convolution-based methods for medical image interpolation.

    Science.gov (United States)

    Meijering, E H; Niessen, W J; Viergever, M A

    2001-06-01

    Interpolation is required in a variety of medical image processing applications. Although many interpolation techniques are known from the literature, evaluations of these techniques for the specific task of applying geometrical transformations to medical images are still lacking. In this paper we present such an evaluation. We consider convolution-based interpolation methods and rigid transformations (rotations and translations). A large number of sinc-approximating kernels are evaluated, including piecewise polynomial kernels and a large number of windowed sinc kernels, with spatial supports ranging from two to ten grid intervals. In the evaluation we use images from a wide variety of medical image modalities. The results show that spline interpolation is to be preferred over all other methods, both for its accuracy and its relatively low computational cost.
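
    Of the windowed-sinc family examined in such evaluations, the Lanczos kernel is a common representative; a sketch (np.sinc is the normalised sinc, sin(pi*x)/(pi*x), and the spatial support is 2a grid intervals):

        import numpy as np

        def lanczos(s, a=3):
            # Sinc windowed by a wider sinc, zero outside |s| < a.
            s = np.asarray(s, dtype=float)
            return np.where(np.abs(s) < a, np.sinc(s)*np.sinc(s/a), 0.0)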

  3. Kernels in circulant digraphs

    Directory of Open Access Journals (Sweden)

    R. Lakshmi

    2014-06-01

    Full Text Available A kernel $J$ of a digraph $D$ is an independent set of vertices of $D$ such that for every vertex $w \in V(D) \setminus J$ there exists an arc from $w$ to a vertex in $J$. In this paper, among other results, a characterization of $2$-regular circulant digraphs having a kernel is obtained. This characterization is a partial solution to the following problem: characterize circulant digraphs which have kernels; it appeared in the book Digraphs: Theory, Algorithms and Applications, Second Edition, Springer-Verlag, 2009, by J. Bang-Jensen and G. Gutin.

  4. Kernels for structured data

    CERN Document Server

    Gärtner, Thomas

    2009-01-01

    This book provides a unique treatment of an important area of machine learning and answers the question of how kernel methods can be applied to structured data. Kernel methods are a class of state-of-the-art learning algorithms that exhibit excellent learning results in several application domains. Originally, kernel methods were developed with data in mind that can easily be embedded in a Euclidean vector space. Much real-world data does not have this property but is inherently structured. An example of such data, often consulted in the book, is the (2D) graph structure of molecules formed by

  5. SOME ASYMPTOTIC PROPERTIES OF THE CONVOLUTION TRANSFORMS OF FRACTAL MEASURES

    Institute of Scientific and Technical Information of China (English)

    Cao Li

    2012-01-01

    We study the asymptotic behavior near the boundary of u(x, y) = K_y * μ(x), defined on the half-space R_+ × R^N by the convolution of an approximate identity K_y(·) (y > 0) and a measure μ on R^N. The Poisson and the heat kernel are unified as special cases in our setting. We are mainly interested in the relationship between the rate of growth at the boundary of u and the s-density of a singular measure μ. A boundary limit theorem of Fatou's type for singular measures is then proved. Meanwhile, the asymptotic behavior of a quotient of Kμ and Kν is also studied, and the corresponding Fatou-Doob boundary relative limit is obtained. In particular, some results about the singular boundary behavior of harmonic and heat functions can be deduced simultaneously from ours. At the end, an application in fractal geometry is given.

  6. Locally linear approximation for Kernel methods : the Railway Kernel

    OpenAIRE

    Muñoz, Alberto; González, Javier

    2008-01-01

    In this paper we present a new kernel, the Railway Kernel, that works properly for general (nonlinear) classification problems, with the interesting property that acts locally as a linear kernel. In this way, we avoid potential problems due to the use of a general purpose kernel, like the RBF kernel, as the high dimension of the induced feature space. As a consequence, following our methodology the number of support vectors is much lower and, therefore, the generalization capability of the pr...

  7. Linux Kernel in a Nutshell

    CERN Document Server

    Kroah-Hartman, Greg

    2009-01-01

    Linux Kernel in a Nutshell covers the entire range of kernel tasks, starting with downloading the source and making sure that the kernel is in sync with the versions of the tools you need. In addition to configuration and installation steps, the book offers reference material and discussions of related topics such as control of kernel options at runtime.

  8. Data-variant kernel analysis

    CERN Document Server

    Motai, Yuichi

    2015-01-01

    Describes and discusses the variants of kernel analysis methods for data types that have been intensely studied in recent years This book covers kernel analysis topics ranging from the fundamental theory of kernel functions to its applications. The book surveys the current status, popular trends, and developments in kernel analysis studies. The author discusses multiple kernel learning algorithms and how to choose the appropriate kernels during the learning phase. Data-Variant Kernel Analysis is a new pattern analysis framework for different types of data configurations. The chapters include

  9. Mixture Density Mercer Kernels

    Data.gov (United States)

    National Aeronautics and Space Administration — We present a method of generating Mercer Kernels from an ensemble of probabilistic mixture models, where each mixture model is generated from a Bayesian mixture...

  10. Convolutional neural network architectures for predicting DNA–protein binding

    Science.gov (United States)

    Zeng, Haoyang; Edwards, Matthew D.; Liu, Ge; Gifford, David K.

    2016-01-01

    Motivation: Convolutional neural networks (CNN) have outperformed conventional methods in modeling the sequence specificity of DNA–protein binding. Yet inappropriate CNN architectures can yield poorer performance than simpler models. Thus an in-depth understanding of how to match CNN architecture to a given task is needed to fully harness the power of CNNs for computational biology applications. Results: We present a systematic exploration of CNN architectures for predicting DNA sequence binding using a large compendium of transcription factor datasets. We identify the best-performing architectures by varying CNN width, depth and pooling designs. We find that adding convolutional kernels to a network is important for motif-based tasks. We show the benefits of CNNs in learning rich higher-order sequence features, such as secondary motifs and local sequence context, by comparing network performance on multiple modeling tasks ranging in difficulty. We also demonstrate how careful construction of sequence benchmark datasets, using approaches that control potentially confounding effects like positional or motif strength bias, is critical in making fair comparisons between competing methods. We explore how to establish the sufficiency of training data for these learning tasks, and we have created a flexible cloud-based framework that permits the rapid exploration of alternative neural network architectures for problems in computational biology. Availability and Implementation: All the models analyzed are available at http://cnn.csail.mit.edu. Contact: gifford@mit.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27307608
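
    The first-layer operation these models share reduces to correlating a one-hot-encoded sequence with a (motif_length x 4) kernel; a bare numpy sketch of that single step, illustrative only and not any of the benchmarked architectures:

        import numpy as np

        def motif_scores(seq, kernel):
            # One-hot encode the sequence (rows: positions, cols: A,C,G,T).
            onehot = np.zeros((len(seq), 4))
            for i, base in enumerate(seq):
                onehot[i, 'ACGT'.index(base)] = 1.0
            m = kernel.shape[0]
            # Correlate the kernel along the sequence (valid positions only).
            return np.array([np.sum(onehot[i:i + m] * kernel)
                             for i in range(len(seq) - m + 1)])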

  11. Fluence-convolution broad-beam (FCBB) dose calculation.

    Science.gov (United States)

    Lu, Weiguo; Chen, Mingli

    2010-12-07

    IMRT optimization requires a fast yet relatively accurate algorithm to calculate the iteration dose with small memory demand. In this paper, we present a dose calculation algorithm that approaches these goals. By decomposing the infinitesimal pencil beam (IPB) kernel into the central axis (CAX) component and lateral spread function (LSF) and taking the beam's eye view (BEV), we established a non-voxel and non-beamlet-based dose calculation formula. Both LSF and CAX are determined by a commissioning procedure using the collapsed-cone convolution/superposition (CCCS) method as the standard dose engine. The proposed dose calculation involves a 2D convolution of a fluence map with LSF followed by ray tracing based on the CAX lookup table with radiological distance and divergence correction, resulting in complexity of O(N^3) both spatially and temporally. This simple algorithm is orders of magnitude faster than the CCCS method. Without pre-calculation of beamlets, its implementation is also orders of magnitude smaller than the conventional voxel-based beamlet-superposition (VBS) approach. We compared the presented algorithm with the CCCS method using simulated and clinical cases. The agreement was generally within 3% for a homogeneous phantom and 5% for heterogeneous and clinical cases. Combined with the 'adaptive full dose correction', the algorithm is well suited for calculating the iteration dose during IMRT optimization.
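
    The fluence-convolution component is a single 2-D convolution per beam; a sketch of that step alone, taking the commissioned LSF as given and omitting the CAX lookup with radiological-distance and divergence corrections:

        import numpy as np
        from scipy.signal import fftconvolve

        def fluence_convolution(fluence_map, lsf):
            # 2-D convolution of the fluence map with the lateral spread
            # function in the beam's eye view.
            return fftconvolve(fluence_map, lsf, mode='same')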

  12. An optimal nonorthogonal separation of the anisotropic Gaussian convolution filter.

    Science.gov (United States)

    Lampert, Christoph H; Wirjadi, Oliver

    2006-11-01

We give an analytical and geometrical treatment of what it means to separate a Gaussian kernel along arbitrary axes in ℝ^n, and we present a separation scheme that allows us to efficiently implement anisotropic Gaussian convolution filters for data of arbitrary dimensionality. Based on our previous analysis we show that this scheme is optimal with regard to the number of memory accesses and interpolation operations needed. The proposed method relies on nonorthogonal convolution axes and works completely in image space. Thus, it avoids the need for a fast Fourier transform (FFT) subroutine. Depending on the accuracy and speed requirements, different interpolation schemes and methods to implement the one-dimensional Gaussian (finite impulse response and infinite impulse response) can be integrated. Special emphasis is put on analyzing the performance and accuracy of the new method. In particular, we show that without any special optimization of the source code, it can perform anisotropic Gaussian filtering faster than methods relying on the FFT.
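    For orientation, the standard axis-aligned special case that the paper generalizes looks as follows; the nonorthogonal scheme replaces these orthogonal axes when the Gaussian covariance is rotated (toy image and sigmas assumed):

```python
# Background sketch: axis-aligned separability of the Gaussian, the special
# case that Lampert and Wirjadi generalize to nonorthogonal axes for
# arbitrary anisotropic covariances.
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(1)
img = rng.random((128, 128))

# n successive 1D convolutions replace one n-D convolution
out = gaussian_filter1d(img, sigma=3.0, axis=0)
out = gaussian_filter1d(out, sigma=1.5, axis=1)
# Equivalent (up to truncation) to one 2D Gaussian with diagonal covariance
# diag(3.0**2, 1.5**2); a rotated covariance is what requires the
# nonorthogonal separation scheme of the paper.
print(out.mean())
```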

  13. Remarks on kernel Bayes' rule

    OpenAIRE

    Johno, Hisashi; Nakamoto, Kazunori; Saigo, Tatsuhiko

    2015-01-01

    Kernel Bayes' rule has been proposed as a nonparametric kernel-based method to realize Bayesian inference in reproducing kernel Hilbert spaces. However, we demonstrate both theoretically and experimentally that the prediction result by kernel Bayes' rule is in some cases unnatural. We consider that this phenomenon is in part due to the fact that the assumptions in kernel Bayes' rule do not hold in general.

  14. Linearized Kernel Dictionary Learning

    Science.gov (United States)

    Golts, Alona; Elad, Michael

    2016-06-01

In this paper we present a new approach to incorporating kernels into dictionary learning. The kernel K-SVD algorithm (KKSVD), which has been introduced recently, shows an improvement in classification performance relative to its linear counterpart K-SVD. However, this algorithm requires the storage and handling of a very large kernel matrix, which leads to high computational cost, while also limiting its use to setups with a small number of training examples. We address these problems by combining two ideas: first we approximate the kernel matrix using a cleverly sampled subset of its columns using the Nyström method; secondly, as we wish to avoid using this matrix altogether, we decompose it by SVD to form new "virtual samples," on which any linear dictionary learning can be employed. Our method, termed "Linearized Kernel Dictionary Learning" (LKDL), can be seamlessly applied as a pre-processing stage on top of any efficient off-the-shelf dictionary learning scheme, effectively "kernelizing" it. We demonstrate the effectiveness of our method on several tasks of both supervised and unsupervised classification and show the efficiency of the proposed scheme, its easy integration and performance boosting properties.
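    A rough sketch of the Nyström step described above (the sample sizes, RBF kernel, and variable names are illustrative assumptions, not the LKDL implementation):

```python
# Minimal Nystrom sketch (illustrative): approximate an RBF kernel matrix
# from a sampled subset of its columns and derive "virtual samples" whose
# Gram matrix reproduces the approximation.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
m = 50                                  # number of sampled columns

def rbf(A, B, gamma=0.1):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

idx = rng.choice(len(X), m, replace=False)
C = rbf(X, X[idx])                      # n x m sampled columns
W = C[idx]                              # m x m intersection block
K_approx = C @ np.linalg.pinv(W) @ C.T  # Nystrom: K ~ C W^+ C^T

# Virtual samples: rows of V satisfy V V^T = K_approx
evals, evecs = np.linalg.eigh(np.linalg.pinv(W))
V = C @ evecs @ np.diag(np.sqrt(np.clip(evals, 0, None)))
print(np.abs(K_approx - V @ V.T).max())   # ~0: Gram matrix matches
```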

  15. GPU-accelerated non-uniform fast Fourier transform-based compressive sensing spectral domain optical coherence tomography.

    Science.gov (United States)

    Xu, Daguang; Huang, Yong; Kang, Jin U

    2014-06-16

We implemented the graphics processing unit (GPU) accelerated compressive sensing (CS) non-uniform in k-space spectral domain optical coherence tomography (SD OCT). The Kaiser-Bessel (KB) function and the Gaussian function are used independently as the convolution kernel in the gridding-based non-uniform fast Fourier transform (NUFFT) algorithm with different oversampling ratios and kernel widths. Our implementation is compared with the GPU-accelerated modified non-uniform discrete Fourier transform (MNUDFT) matrix-based CS SD OCT and the GPU-accelerated fast Fourier transform (FFT)-based CS SD OCT. It was found that our implementation has comparable performance to the GPU-accelerated MNUDFT-based CS SD OCT in terms of image quality while providing more than 5 times speed enhancement. When compared to the GPU-accelerated FFT-based CS SD OCT, it shows smaller background noise and fewer side lobes while eliminating the need for the cumbersome k-space grid filling and the k-linear calibration procedure. Finally, we demonstrated that by using a conventional desktop computer architecture having three GPUs, real-time B-mode imaging can be obtained in excess of 30 fps for the GPU-accelerated NUFFT-based CS SD OCT with frame size 2048 (axial) × 1000 (lateral).
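    The Kaiser-Bessel gridding kernel at the heart of such NUFFT implementations can be sketched as follows (the kernel width and shape parameter are illustrative choices, not the paper's optimized values):

```python
# Sketch of gridding with a Kaiser-Bessel convolution kernel (parameter
# choices are illustrative, not those of the cited implementation).
import numpy as np
from scipy.special import i0

def kb_kernel(u, width, beta):
    """Kaiser-Bessel interpolation kernel, supported on |u| <= width/2."""
    u = np.asarray(u, dtype=float)
    arg = 1 - (2 * u / width) ** 2
    vals = np.where(arg > 0, i0(beta * np.sqrt(np.clip(arg, 0, None))), 0.0)
    return vals / i0(beta)

def grid_1d(k_pos, samples, n_grid, width=4, beta=6.0):
    """Spread non-uniform k-space samples onto a uniform grid."""
    grid = np.zeros(n_grid, dtype=complex)
    for k, s in zip(k_pos, samples):
        lo = int(np.ceil(k - width / 2)); hi = int(np.floor(k + width / 2))
        for g in range(max(lo, 0), min(hi, n_grid - 1) + 1):
            grid[g] += s * kb_kernel(g - k, width, beta)
    return grid  # FFT + deapodization (divide by kernel transform) follow

k_pos = np.sort(np.random.default_rng(2).uniform(0, 256, 300))
print(np.abs(grid_1d(k_pos, np.exp(1j * k_pos), 256)).max())
```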

  16. Convolutive Blind Source Separation Methods

    DEFF Research Database (Denmark)

    Pedersen, Michael Syskind; Larsen, Jan; Kjems, Ulrik

    2008-01-01

During the past decades, much attention has been given to the separation of mixed sources, in particular for the blind case where both the sources and the mixing process are unknown and only recordings of the mixtures are available. In several situations it is desirable to recover all sources from ... This may help practitioners and researchers new to the area of convolutive source separation obtain a complete overview of the field. Hopefully those with more experience in the field can identify useful tools, or find inspiration for new algorithms.

  17. Commutators of Integral Operators with Variable Kernels on Hardy Spaces

    Indian Academy of Sciences (India)

    Pu Zhang; Kai Zhao

    2005-11-01

Let $T_{\Omega,\alpha}$ $(0 \leq \alpha < n)$ be the singular and fractional integrals with variable kernel $\Omega(x,z)$, and $[b, T_{\Omega,\alpha}]$ be the commutator generated by $T_{\Omega,\alpha}$ and a Lipschitz function $b$. In this paper, the authors study the boundedness of $[b, T_{\Omega,\alpha}]$ on the Hardy spaces, under some assumptions such as the $L^r$-Dini condition. Similar results and the weak type estimates at the end-point cases are also given for the homogeneous convolution operators $T_{\overline{\Omega},\alpha}$ $(0 \leq \alpha < n)$. The smoothness conditions imposed on $\overline{\Omega}$ are weaker than the corresponding known results.

  18. Contingent kernel density estimation.

    Directory of Open Access Journals (Sweden)

    Scott Fortmann-Roe

Full Text Available Kernel density estimation is a widely used method for estimating a distribution based on a sample of points drawn from that distribution. Generally, in practice some form of error contaminates the sample of observed points. Such error can be the result of imprecise measurements or observation bias. Often this error is negligible and may be disregarded in analysis. In cases where the error is non-negligible, estimation methods should be adjusted to reduce resulting bias. Several modifications of kernel density estimation have been developed to address specific forms of errors. One form of error that has not yet been addressed is the case where observations are nominally placed at the centers of areas from which the points are assumed to have been drawn, where these areas are of varying sizes. In this scenario, bias arises because the magnitude of the error varies among points: some subsets of points are known to have smaller error than others, or the form of the error may change from point to point. This paper proposes a "contingent kernel density estimation" technique to address this form of error. This new technique adjusts the standard kernel on a point-by-point basis in an adaptive response to changing structure and magnitude of error. In this paper, equations for our contingent kernel technique are derived, the technique is validated using numerical simulations, and an example using the geographic locations of social networking users is worked to demonstrate the utility of the method.
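    The underlying idea of adapting the kernel per observation can be sketched with per-point bandwidths (a simplification; the paper derives its contingent kernels from the geometry of each source area):

```python
# Sketch of the basic idea (per-point bandwidths; illustrative values):
import numpy as np

def variable_bw_kde(x_eval, points, bandwidths):
    """Gaussian KDE where each observation carries its own bandwidth,
    e.g. proportional to the radius of the area it was snapped to."""
    x_eval = np.asarray(x_eval)[:, None]
    h = np.asarray(bandwidths)[None, :]
    z = (x_eval - np.asarray(points)[None, :]) / h
    return (np.exp(-0.5 * z**2) / (h * np.sqrt(2 * np.pi))).mean(axis=1)

pts = np.array([0.0, 1.0, 1.2, 3.0])
hs  = np.array([0.2, 0.5, 0.1, 1.0])   # larger source area -> larger bandwidth
xs = np.linspace(-1, 5, 7)
print(variable_bw_kde(xs, pts, hs))
```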

  19. Incomplete convolutions in production and inventory models

    NARCIS (Netherlands)

    Houtum, van G.J.; Zijm, W.H.M.

    1997-01-01

    In this paper, we study incomplete convolutions of continuous distribution functions, as they appear in the analysis of (multi-stage) production and inventory systems. Three example systems are discussed where these incomplete convolutions naturally arise. We derive explicit, nonrecursive formulae f

  20. Independent Component Analysis in a convoluted world

    DEFF Research Database (Denmark)

    Dyrholm, Mads

    2006-01-01

instantaneous ICA, then select a physiologically interesting subspace, then remove the delayed temporal dependencies among the instantaneous ICA components by using convolutive ICA. By Bayesian model selection, in a real world EEG data set, it is shown that convolutive ICA is a better model for EEG than...

  1. Model structure selection in convolutive mixtures

    DEFF Research Database (Denmark)

    Dyrholm, Mads; Makeig, Scott; Hansen, Lars Kai

    2006-01-01

The CICAAR algorithm (convolutive independent component analysis with an auto-regressive inverse model) allows separation of white (i.i.d.) source signals from convolutive mixtures. We introduce a source color model as a simple extension to the CICAAR which allows for a more parsimonious representation in many practical mixtures. The new filter-CICAAR allows Bayesian model selection and can help answer questions like: 'Are we actually dealing with a convolutive mixture?'. We try to answer this question for EEG data.

  2. Model structure selection in convolutive mixtures

    DEFF Research Database (Denmark)

    Dyrholm, Mads; Makeig, S.; Hansen, Lars Kai

    2006-01-01

The CICAAR algorithm (convolutive independent component analysis with an auto-regressive inverse model) allows separation of white (i.i.d.) source signals from convolutive mixtures. We introduce a source color model as a simple extension to the CICAAR which allows for a more parsimonious representation in many practical mixtures. The new filter-CICAAR allows Bayesian model selection and can help answer questions like: ’Are we actually dealing with a convolutive mixture?’. We try to answer this question for EEG data.

  3. Revision of the theory of tracer transport and the convolution model of dynamic contrast enhanced magnetic resonance imaging.

    Science.gov (United States)

    Keeling, Stephen L; Bammer, Roland; Stollberger, Rudolf

    2007-09-01

    Counterexamples are used to motivate the revision of the established theory of tracer transport. Then dynamic contrast enhanced magnetic resonance imaging in particular is conceptualized in terms of a fully distributed convection-diffusion model from which a widely used convolution model is derived using, alternatively, compartmental discretizations or semigroup theory. On this basis, applications and limitations of the convolution model are identified. For instance, it is proved that perfusion and tissue exchange states cannot be identified on the basis of a single convolution equation alone. Yet under certain assumptions, particularly that flux is purely convective at the boundary of a tissue region, physiological parameters such as mean transit time, effective volume fraction, and volumetric flow rate per unit tissue volume can be deduced from the kernel.
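    A toy sketch of the convolution model in question, with an exponential residue kernel so that the mean transit time appears explicitly (all parameter values below are illustrative assumptions):

```python
# Sketch of the convolution model itself (toy kernel and arterial input
# function; parameter names are illustrative): tissue concentration as the
# AIF convolved with a residue kernel, scaled by flow per unit volume.
import numpy as np

dt = 0.5                                  # s
t = np.arange(0, 120, dt)
aif = (t / 10) * np.exp(-t / 10)          # toy arterial input function
F, mtt = 0.02, 8.0                        # flow per unit volume, mean transit time
residue = np.exp(-t / mtt)                # exponential residue kernel

c_tissue = F * np.convolve(aif, residue)[: len(t)] * dt
print(c_tissue.max())
```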

  4. Multidimensional kernel estimation

    CERN Document Server

    Milosevic, Vukasin

    2015-01-01

Kernel estimation is one of the non-parametric methods used for estimation of probability density functions. Its first ROOT implementation, as part of the RooFit package, has one major issue: its evaluation time is extremely slow, making it almost unusable. The goal of this project was to create a new class (TKNDTree) which follows the original idea of kernel estimation, greatly improves the evaluation time (using the TKTree class for storing the data and creating different user-controlled modes of evaluation) and adds an interpolation option, for the 2D case, with the help of the new Delaunay2D class.

  5. for palm kernel oil extraction

    African Journals Online (AJOL)

    user

OEE), ... designed (CRD) experimental approach with 4 factor levels and 2 replications was used to determine the effect of kernel .... palm kernels in either a continuous or batch mode ... are fed through the hopper; the screw conveys, crushes, ...

  6. Brain tumor grading based on Neural Networks and Convolutional Neural Networks.

    Science.gov (United States)

    Yuehao Pan; Weimin Huang; Zhiping Lin; Wanzheng Zhu; Jiayin Zhou; Wong, Jocelyn; Zhongxiang Ding

    2015-08-01

This paper studies brain tumor grading using multiphase MRI images and compares the results with various configurations of deep learning structures and baseline Neural Networks. The MRI images are fed directly into the learning machine, with some combination operations between multiphase MRIs. Compared to other studies, which involve additional effort to design and choose feature sets, the approach used in this paper leverages the learning capability of the deep learning machine. We present the grading performance on the testing data measured by sensitivity and specificity. The results show a maximum improvement of 18% in grading performance of Convolutional Neural Networks based on sensitivity and specificity compared to Neural Networks. We also visualize the kernels trained in different layers and display some self-learned features obtained from the Convolutional Neural Networks.

  7. Kernel bundle EPDiff

    DEFF Research Database (Denmark)

    Sommer, Stefan Horst; Lauze, Francois Bernard; Nielsen, Mads

    2011-01-01

    In the LDDMM framework, optimal warps for image registration are found as end-points of critical paths for an energy functional, and the EPDiff equations describe the evolution along such paths. The Large Deformation Diffeomorphic Kernel Bundle Mapping (LDDKBM) extension of LDDMM allows scale space...

  8. Multivariate realised kernels

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Hansen, Peter Reinhard; Lunde, Asger

    2011-01-01

    We propose a multivariate realised kernel to estimate the ex-post covariation of log-prices. We show this new consistent estimator is guaranteed to be positive semi-definite and is robust to measurement error of certain types and can also handle non-synchronous trading. It is the first estimator...

  9. Adaptive metric kernel regression

    DEFF Research Database (Denmark)

    Goutte, Cyril; Larsen, Jan

    2000-01-01

    regression by minimising a cross-validation estimate of the generalisation error. This allows to automatically adjust the importance of different dimensions. The improvement in terms of modelling performance is illustrated on a variable selection task where the adaptive metric kernel clearly outperforms...

  10. Adaptive Metric Kernel Regression

    DEFF Research Database (Denmark)

    Goutte, Cyril; Larsen, Jan

    1998-01-01

    by minimising a cross-validation estimate of the generalisation error. This allows one to automatically adjust the importance of different dimensions. The improvement in terms of modelling performance is illustrated on a variable selection task where the adaptive metric kernel clearly outperforms the standard...

  11. Cygrid: Cython-powered convolution-based gridding module for Python

    Science.gov (United States)

    Winkel, B.; Lenz, D.; Flöer, L.

    2016-06-01

The Python module Cygrid grids (resamples) data to any collection of spherical target coordinates, although its typical application involves FITS maps or data cubes. The module supports the FITS world coordinate system (WCS) standard; its underlying algorithm is based on the convolution of the original samples with a 2D Gaussian kernel. A lookup table scheme allows parallelization of the code and is combined with the HEALPix tessellation of the sphere for fast neighbor searches. Cygrid's runtime scales between O(n) and O(n log n), with n being the number of input samples.
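    A stripped-down sketch of convolution-based gridding on a flat patch (this is not the Cygrid API; real code works with spherical distances and the HEALPix lookup described above):

```python
# Generic sketch of convolution-based gridding: each input sample
# contributes to nearby target pixels with a Gaussian weight, and the
# weighted sums are normalized at the end. Flat-sky toy version.
import numpy as np

def grid_samples(lon, lat, val, lon_grid, lat_grid, sigma=0.1):
    glon, glat = np.meshgrid(lon_grid, lat_grid)
    num = np.zeros_like(glon); den = np.zeros_like(glon)
    for x, y, v in zip(lon, lat, val):
        w = np.exp(-((glon - x) ** 2 + (glat - y) ** 2) / (2 * sigma**2))
        num += w * v
        den += w
    return num / np.where(den > 0, den, np.nan)

rng = np.random.default_rng(3)
lon, lat = rng.uniform(0, 1, (2, 200))
print(grid_samples(lon, lat, np.sin(6 * lon), np.linspace(0, 1, 32),
                   np.linspace(0, 1, 32)).shape)
```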

  12. Robust and accurate transient light transport decomposition via convolutional sparse coding.

    Science.gov (United States)

    Hu, Xuemei; Deng, Yue; Lin, Xing; Suo, Jinli; Dai, Qionghai; Barsi, Christopher; Raskar, Ramesh

    2014-06-01

    Ultrafast sources and detectors have been used to record the time-resolved scattering of light propagating through macroscopic scenes. In the context of computational imaging, decomposition of this transient light transport (TLT) is useful for applications, such as characterizing materials, imaging through diffuser layers, and relighting scenes dynamically. Here, we demonstrate a method of convolutional sparse coding to decompose TLT into direct reflections, inter-reflections, and subsurface scattering. The method relies on the sparsity composition of the time-resolved kernel. We show that it is robust and accurate to noise during the acquisition process.

  13. Excursion sets of infinitely divisible random fields with convolution equivalent Lévy measure

    DEFF Research Database (Denmark)

    Rønn-Nielsen, Anders; Jensen, Eva B. Vedel

We consider a continuous, infinitely divisible random field in ℝ^d, d = 1, 2, 3, given as an integral of a kernel function with respect to a Lévy basis with convolution equivalent Lévy measure. For a large class of such random fields we compute the asymptotic probability that the excursion set at level x contains some rotation of an object with fixed radius as x → ∞. Our main result is that the asymptotic probability is equivalent to the right tail of the underlying Lévy measure...

  14. Convolutional Neural Network for Image Recognition

    CERN Document Server

    Seifnashri, Sahand

    2015-01-01

The aim of this project is to use machine learning techniques, especially Convolutional Neural Networks, for image processing. These techniques can be used for Quark-Gluon discrimination using calorimeter data, but unfortunately I didn’t manage to get the calorimeter data and I just used the Jet data from miniaodsim (ak4 chs). The Jet data was not good enough for a Convolutional Neural Network, which is designed for ’image’ recognition. This report is made of two main parts: part one is mainly about implementing a Convolutional Neural Network on non-physics data such as MNIST digits and the CIFAR-10 dataset, and part two is about the Jet data.

  15. Viscosity kernel of molecular fluids

    DEFF Research Database (Denmark)

    Puscasu, Ruslan; Todd, Billy; Daivis, Peter

    2010-01-01

The density, temperature, and chain length dependencies of the reciprocal and real-space viscosity kernels are presented. We find that the density has a major effect on the shape of the kernel. The temperature range and chain lengths considered here have by contrast less impact on the overall normalized shape. Functional forms that fit the wave-vector-dependent kernel data over a large density and wave-vector range have also been tested. Finally, a structural normalization of the kernels in physical space is considered. Overall, the real-space viscosity kernel has a width of roughly 3–6 atomic diameters, which means...

  16. A Shortest Dependency Path Based Convolutional Neural Network for Protein-Protein Relation Extraction

    Directory of Open Access Journals (Sweden)

    Lei Hua

    2016-01-01

Full Text Available The state-of-the-art methods for protein-protein interaction (PPI) extraction are primarily based on kernel methods, and their performances strongly depend on handcrafted features. In this paper, we tackle PPI extraction by using convolutional neural networks (CNN) and propose a shortest dependency path based CNN (sdpCNN) model. The proposed method (1) only takes the sdp and word embedding as input and (2) could avoid bias from feature selection by using CNN. We performed experiments on the standard Aimed and BioInfer datasets, and the experimental results demonstrated that our approach outperformed state-of-the-art kernel based methods. In particular, by tracking the sdpCNN model, we find that sdpCNN could extract key features automatically and it is verified that pretrained word embedding is crucial in the PPI task.

  17. A Shortest Dependency Path Based Convolutional Neural Network for Protein-Protein Relation Extraction.

    Science.gov (United States)

    Hua, Lei; Quan, Chanqin

    2016-01-01

The state-of-the-art methods for protein-protein interaction (PPI) extraction are primarily based on kernel methods, and their performances strongly depend on handcrafted features. In this paper, we tackle PPI extraction by using convolutional neural networks (CNN) and propose a shortest dependency path based CNN (sdpCNN) model. The proposed method (1) only takes the sdp and word embedding as input and (2) could avoid bias from feature selection by using CNN. We performed experiments on the standard Aimed and BioInfer datasets, and the experimental results demonstrated that our approach outperformed state-of-the-art kernel based methods. In particular, by tracking the sdpCNN model, we find that sdpCNN could extract key features automatically and it is verified that pretrained word embedding is crucial in the PPI task.

  18. Multiple Kernel Point Set Registration.

    Science.gov (United States)

    Nguyen, Thanh Minh; Wu, Q M Jonathan

    2016-06-01

The finite Gaussian mixture model with kernel correlation is a flexible tool that has recently received attention for point set registration. While there are many algorithms for point set registration presented in the literature, an important issue arising from these studies concerns the mapping of data with nonlinear relationships and the ability to select a suitable kernel. Kernel selection is crucial for effective point set registration. We focus here on multiple kernel point set registration. We make several contributions in this paper. First, each observation is modeled using the Student's t-distribution, which is heavily tailed and more robust than the Gaussian distribution. Second, by automatically adjusting the kernel weights, the proposed method allows us to prune the ineffective kernels. After parameter learning, the kernel saliencies of the irrelevant kernels go to zero; this makes the choice of kernels less crucial and makes it easy to include other kinds of kernels. Finally, we show empirically that our model outperforms state-of-the-art methods recently proposed in the literature.

  19. PROPERTIES OF THE CONVOLUTION WITH PRESTARLIKE FUNCTIONS

    Institute of Scientific and Technical Information of China (English)

    Jacek DZIOK

    2013-01-01

In the paper we investigate convolution properties related to the prestarlike functions and various inclusion relationships between defined classes of functions. Interesting applications involving the well-known classes of functions defined by linear operators are also considered.

  20. Inf-convolution of G-expectations

    Institute of Scientific and Technical Information of China (English)

    BUCKDAHN; Rainer

    2010-01-01

In this paper we will discuss the optimal risk transfer problems when risk measures are generated by G-expectations, and we present the relationship between the inf-convolution of G-expectations and the inf-convolution of drivers G.

  1. A comprehensive benchmark of kernel methods to extract protein-protein interactions from literature.

    Science.gov (United States)

    Tikk, Domonkos; Thomas, Philippe; Palaga, Peter; Hakenberg, Jörg; Leser, Ulf

    2010-07-01

    The most important way of conveying new findings in biomedical research is scientific publication. Extraction of protein-protein interactions (PPIs) reported in scientific publications is one of the core topics of text mining in the life sciences. Recently, a new class of such methods has been proposed - convolution kernels that identify PPIs using deep parses of sentences. However, comparing published results of different PPI extraction methods is impossible due to the use of different evaluation corpora, different evaluation metrics, different tuning procedures, etc. In this paper, we study whether the reported performance metrics are robust across different corpora and learning settings and whether the use of deep parsing actually leads to an increase in extraction quality. Our ultimate goal is to identify the one method that performs best in real-life scenarios, where information extraction is performed on unseen text and not on specifically prepared evaluation data. We performed a comprehensive benchmarking of nine different methods for PPI extraction that use convolution kernels on rich linguistic information. Methods were evaluated on five different public corpora using cross-validation, cross-learning, and cross-corpus evaluation. Our study confirms that kernels using dependency trees generally outperform kernels based on syntax trees. However, our study also shows that only the best kernel methods can compete with a simple rule-based approach when the evaluation prevents information leakage between training and test corpora. Our results further reveal that the F-score of many approaches drops significantly if no corpus-specific parameter optimization is applied and that methods reaching a good AUC score often perform much worse in terms of F-score. We conclude that for most kernels no sensible estimation of PPI extraction performance on new text is possible, given the current heterogeneity in evaluation data. Nevertheless, our study shows that three

  2. A comprehensive benchmark of kernel methods to extract protein-protein interactions from literature.

    Directory of Open Access Journals (Sweden)

    Domonkos Tikk

Full Text Available The most important way of conveying new findings in biomedical research is scientific publication. Extraction of protein-protein interactions (PPIs) reported in scientific publications is one of the core topics of text mining in the life sciences. Recently, a new class of such methods has been proposed - convolution kernels that identify PPIs using deep parses of sentences. However, comparing published results of different PPI extraction methods is impossible due to the use of different evaluation corpora, different evaluation metrics, different tuning procedures, etc. In this paper, we study whether the reported performance metrics are robust across different corpora and learning settings and whether the use of deep parsing actually leads to an increase in extraction quality. Our ultimate goal is to identify the one method that performs best in real-life scenarios, where information extraction is performed on unseen text and not on specifically prepared evaluation data. We performed a comprehensive benchmarking of nine different methods for PPI extraction that use convolution kernels on rich linguistic information. Methods were evaluated on five different public corpora using cross-validation, cross-learning, and cross-corpus evaluation. Our study confirms that kernels using dependency trees generally outperform kernels based on syntax trees. However, our study also shows that only the best kernel methods can compete with a simple rule-based approach when the evaluation prevents information leakage between training and test corpora. Our results further reveal that the F-score of many approaches drops significantly if no corpus-specific parameter optimization is applied and that methods reaching a good AUC score often perform much worse in terms of F-score. We conclude that for most kernels no sensible estimation of PPI extraction performance on new text is possible, given the current heterogeneity in evaluation data. Nevertheless, our study

  3. Gradient Flow Convolutive Blind Source Separation

    DEFF Research Database (Denmark)

    Pedersen, Michael Syskind; Nielsen, Chinton Møller

    2004-01-01

Experiments have shown that the performance of instantaneous gradient flow beamforming by Cauwenberghs et al. is reduced significantly in reverberant conditions. By expanding the gradient flow principle to convolutive mixtures, separation in a reverberant environment is possible. By use of a circ...

  4. SU-E-T-154: Calculation of Tissue Dose Point Kernels Using GATE Monte Carlo Simulation Toolkit to Compare with Water Dose Point Kernel

    Energy Technology Data Exchange (ETDEWEB)

    Khazaee, M [shahid beheshti university, Tehran, Tehran (Iran, Islamic Republic of); Asl, A Kamali [Shahid Beheshti University, Tehran, Iran., Tehran, Tehran (Iran, Islamic Republic of); Geramifar, P [Shariati Hospital, Tehran, Iran., Tehran, Tehran (Iran, Islamic Republic of)

    2015-06-15

Purpose: The objective of this study was to assess utilizing water dose point kernels (DPK) instead of tissue dose point kernels in convolution algorithms. To the best of our knowledge, in providing the 3D distribution of absorbed dose from a 3D distribution of activity, the human body is considered equivalent to water; as a result, tissue variations are not considered in patient-specific dosimetry. Methods: In this study GATE v7.0 was used to calculate tissue dose point kernels. The beta-emitter radionuclides taken into consideration in this simulation include Y-90, Lu-177 and P-32, which are commonly used in nuclear medicine. The comparison has been performed for dose point kernels of adipose, bone, breast, heart, intestine, kidney, liver, lung and spleen versus the water dose point kernel. Results: In order to validate the simulation, the results for the 90Y DPK in water were compared with published results of Papadimitroulas et al (Med. Phys., 2012). The results showed that the mean differences between the water DPK and the other soft-tissue DPKs range between 0.6% and 1.96% for 90Y, except for lung and bone, where the observed discrepancies are 6.3% and 12.19%, respectively. The range of DPK difference for 32P is between 1.74% for breast and 18.85% for bone. For 177Lu, the highest difference belongs to bone, which is equal to 16.91%. For the other soft tissues the least discrepancy is observed in kidney with 1.68%. Conclusion: In all tissues except for lung and bone, the results of GATE for the dose point kernel were comparable to the water dose point kernel, which demonstrates the appropriateness of applying the water dose point kernel instead of soft-tissue kernels in the field of nuclear medicine.
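    The convolution step this comparison feeds into can be sketched as follows (toy activity map and kernel, not GATE-derived DPK values):

```python
# Sketch of DPK-based dosimetry (toy data): absorbed dose as the 3D
# convolution of an activity distribution with a radial dose point kernel.
import numpy as np
from scipy.signal import fftconvolve

activity = np.zeros((32, 32, 32)); activity[12:20, 12:20, 12:20] = 1.0
r = np.linalg.norm(np.mgrid[-5:6, -5:6, -5:6], axis=0)
dpk = np.exp(-r) / (4 * np.pi * np.maximum(r, 0.5) ** 2)   # toy radial DPK
dpk /= dpk.sum()                                           # unit deposited energy

dose = fftconvolve(activity, dpk, mode="same")
print(dose.max())
```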

  5. Testing Monotonicity of Pricing Kernels

    OpenAIRE

    Timofeev, Roman

    2007-01-01

In this master thesis a mechanism to test monotonicity of empirical pricing kernels (EPK) is presented. By testing monotonicity of the pricing kernel we can determine whether the utility function is concave or not. A strictly decreasing pricing kernel corresponds to a concave utility function, while a non-decreasing EPK means that the utility function contains some non-concave regions. Risk averse behavior is usually described by a concave utility function and considered to be a cornerstone of classical behavioral ...

  6. 7 CFR 51.1415 - Inedible kernels.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Inedible kernels. 51.1415 Section 51.1415 Agriculture... Standards for Grades of Pecans in the Shell 1 Definitions § 51.1415 Inedible kernels. Inedible kernels means that the kernel or pieces of kernels are rancid, moldy, decayed, injured by insects or...

  7. 7 CFR 981.8 - Inedible kernel.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Inedible kernel. 981.8 Section 981.8 Agriculture... Regulating Handling Definitions § 981.8 Inedible kernel. Inedible kernel means a kernel, piece, or particle of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel,...

  8. 7 CFR 981.7 - Edible kernel.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Edible kernel. 981.7 Section 981.7 Agriculture... Regulating Handling Definitions § 981.7 Edible kernel. Edible kernel means a kernel, piece, or particle of almond kernel that is not inedible....

  9. 7 CFR 981.408 - Inedible kernel.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Inedible kernel. 981.408 Section 981.408 Agriculture... Administrative Rules and Regulations § 981.408 Inedible kernel. Pursuant to § 981.8, the definition of inedible kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored...

  10. Clustering via Kernel Decomposition

    DEFF Research Database (Denmark)

    Have, Anna Szynkowiak; Girolami, Mark A.; Larsen, Jan

    2006-01-01

Methods for spectral clustering have been proposed recently which rely on the eigenvalue decomposition of an affinity matrix. In this work it is proposed that the affinity matrix is created based on the elements of a non-parametric density estimator. This matrix is then decomposed to obtain posterior probabilities of class membership using an appropriate form of nonnegative matrix factorization. The troublesome selection of hyperparameters such as kernel width and number of clusters can be obtained using standard cross-validation methods as is demonstrated on a number of diverse data sets.

  11. Volterra series truncation and kernel estimation of nonlinear systems in the frequency domain

    Science.gov (United States)

    Zhang, B.; Billings, S. A.

    2017-02-01

The Volterra series model is a direct generalisation of the linear convolution integral and is capable of displaying the intrinsic features of a nonlinear system in a simple and easy to apply way. Nonlinear system analysis using Volterra series is normally based on the analysis of its frequency-domain kernels and a truncated description. But the estimation of Volterra kernels and the truncation of the Volterra series are coupled with each other. In this paper, a novel complex-valued orthogonal least squares algorithm is developed. The new algorithm provides a powerful tool to determine which terms should be included in the Volterra series expansion and to estimate the kernels, and thus solves the two problems together. The estimated results are compared with those determined using the analytical expressions of the kernels to validate the method. To further evaluate the effectiveness of the method, the physical parameters of the system are also extracted from the measured kernels. Simulation studies demonstrate that the new approach not only can truncate the Volterra series expansion and estimate the kernels of a weakly nonlinear system, but also can indicate the applicability of the Volterra series analysis in a severely nonlinear system case.
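    A discrete-time sketch of the truncated second-order Volterra model whose kernels such an algorithm estimates (memory length and kernel values below are arbitrary illustrations):

```python
# Sketch of a truncated (second-order) discrete-time Volterra series,
# the direct generalisation of linear convolution mentioned above.
import numpy as np

def volterra2(u, h1, h2):
    """y[n] = sum_i h1[i] u[n-i] + sum_{i,j} h2[i,j] u[n-i] u[n-j]."""
    M = len(h1); N = len(u)
    y = np.zeros(N)
    for n in range(N):
        past = np.array([u[n - i] if n - i >= 0 else 0.0 for i in range(M)])
        y[n] = h1 @ past + past @ h2 @ past
    return y

rng = np.random.default_rng(4)
u = rng.normal(size=200)
h1 = np.array([0.5, 0.3, 0.1])        # first-order (linear) kernel
h2 = 0.05 * np.eye(3)                 # weak quadratic memory
print(volterra2(u, h1, h2)[:5])
```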

  12. A multiple circular path convolution neural network system for detection of mammographic masses.

    Science.gov (United States)

    Lo, Shih-Chung B; Li, Huai; Wang, Yue; Kinnard, Lisa; Freedman, Matthew T

    2002-02-01

A multiple circular path convolution neural network (MCPCNN) architecture specifically designed for the analysis of tumor and tumor-like structures has been constructed. We first divided each suspected tumor area into sectors and computed the defined mass features for each sector independently. These sector features were used on the input layer and were coordinated by convolution kernels of different sizes that propagated signals to the second layer in the neural network system. The convolution kernels were trained, as required, by presenting the training cases to the neural network. In this study, randomly selected mammograms were processed by a dual morphological enhancement technique. Radiodense areas were isolated and were delineated using a region growing algorithm. The boundary of each region of interest was then divided into 36 sectors using 36 equi-angular dividers radiating from the center of the region. A total of 144 Breast Imaging-Reporting and Data System-based features (i.e., four features per sector for 36 sectors) were computed as input values for the evaluation of this newly invented neural network system. The overall performance was 0.78-0.80 for the areas (Az) under the receiver operating characteristic curves using the conventional feed-forward neural network in the detection of mammographic masses. The performance was markedly improved, with Az values ranging from 0.84 to 0.89, using the MCPCNN. This paper does not intend to claim the best mass detection system. Instead it reports a potentially better neural network structure for analyzing a set of the mass features defined by an investigator.

  13. Implementation of FFT convolution and multigrid superposition models in the FOCUS RTP system

    Science.gov (United States)

    Miften, Moyed; Wiesmeyer, Mark; Monthofer, Suzanne; Krippner, Ken

    2000-04-01

    In radiotherapy treatment planning, convolution/superposition algorithms currently represent the best practical approach for accurate photon dose calculation in heterogeneous tissues. In this work, the implementation, accuracy and performance of the FFT convolution (FFTC) and multigrid superposition (MGS) algorithms are presented. The FFTC and MGS models use the same `TERMA' calculation and are commissioned using the same parameters. Both models use the same spectra, incorporate the same off-axis softening and base incident lateral fluence on the same measurements. In addition, corrections are explicitly applied to the polyenergetic and parallel kernel approximations, and electron contamination is modelled. Spectra generated by Monte Carlo (MC) modelling of treatment heads are used. Calculations using the MC spectra were in excellent agreement with measurements for many linear accelerator types. To speed up the calculations, a number of calculation techniques were implemented, including separate primary and scatter dose calculation, the FFT technique which assumes kernel invariance for the convolution calculation and a multigrid (MG) acceleration technique for the superposition calculation. Timing results show that the FFTC model is faster than MGS by a factor of 4 and 8 for small and large field sizes, respectively. Comparisons with measured data and BEAM MC results for a wide range of clinical beam setups show that (a) FFTC and MGS doses match measurements to better than 2% or 2 mm in homogeneous media; (b) MGS is more accurate than FFTC in lung phantoms where MGS doses are within 3% or 3 mm of BEAM results and (c) FFTC overestimates the dose in lung by a maximum of 9% compared to BEAM.
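    The equivalence, and the speed argument, behind the FFT route can be illustrated with a toy TERMA-like array (the values and kernel below are placeholders, not commissioned beam data):

```python
# Sketch: direct vs. FFT-based convolution of a TERMA-like array with a
# dose kernel agree to numerical precision; the FFT route assumes a
# spatially invariant kernel, as noted in the abstract.
import numpy as np
from scipy.signal import convolve, fftconvolve

terma = np.random.default_rng(5).random((128, 128))
kernel = np.outer(np.hanning(31), np.hanning(31))
kernel /= kernel.sum()

d_direct = convolve(terma, kernel, mode="same", method="direct")
d_fft = fftconvolve(terma, kernel, mode="same")
print(np.abs(d_direct - d_fft).max())   # ~1e-15
```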

  14. Kernel Phase and Kernel Amplitude in Fizeau Imaging

    CERN Document Server

    Pope, Benjamin J S

    2016-01-01

    Kernel phase interferometry is an approach to high angular resolution imaging which enhances the performance of speckle imaging with adaptive optics. Kernel phases are self-calibrating observables that generalize the idea of closure phases from non-redundant arrays to telescopes with arbitrarily shaped pupils, by considering a matrix-based approximation to the diffraction problem. In this paper I discuss the recent history of kernel phase, in particular in the matrix-based study of sparse arrays, and propose an analogous generalization of the closure amplitude to kernel amplitudes. This new approach can self-calibrate throughput and scintillation errors in optical imaging, which extends the power of kernel phase-like methods to symmetric targets where amplitude and not phase calibration can be a significant limitation, and will enable further developments in high angular resolution astronomy.

  15. ESTIMATING LOSS SEVERITY DISTRIBUTION: CONVOLUTION APPROACH

    Directory of Open Access Journals (Sweden)

    Ro J. Pak

    2014-01-01

Full Text Available Financial loss can be classified into two types: expected loss and unexpected loss. A current definition seeks to separate the two losses from a total loss. In this article, however, we redefine a total loss as the sum of expected and unexpected losses; then the distribution of loss can be considered as the convolution of the distributions of both expected and unexpected losses. We propose to use a convolution of normal and exponential distributions for modelling a loss distribution. Subsequently, we compare its performance with other commonly used loss distributions. Examples of property insurance claim data are analyzed to show the applicability of this normal-exponential convolution model. Overall, we claim that the proposed model provides further useful information with regard to losses compared to existing models. We are able to provide new statistical quantities which are very critical and useful.
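    A numerical sketch of the proposed normal-exponential convolution (parameter values are arbitrary; the closed-form result of this convolution is the exponentially modified Gaussian):

```python
# Sketch: density of total loss as the numerical convolution of a normal
# (expected-loss) density with an exponential (unexpected-loss) density.
import numpy as np

dx = 0.01
x = np.arange(-10, 40, dx)
norm_pdf = np.exp(-0.5 * ((x - 5) / 1.5) ** 2) / (1.5 * np.sqrt(2 * np.pi))
expo_pdf = np.where(x >= 0, 0.25 * np.exp(-0.25 * x), 0.0)  # rate 0.25

total_pdf = np.convolve(norm_pdf, expo_pdf) * dx            # full convolution
x_total = np.arange(len(total_pdf)) * dx + 2 * x[0]         # support of the sum
print(np.trapz(total_pdf, dx=dx))                           # ~1.0
```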

  16. The Urbanik Generalized Convolutions in the Non-Commutative Probability and a Forgotten Method of Constructing Generalized Convolution

    Indian Academy of Sciences (India)

    Barbara Jasiulis-Gołdyn; Anna Kula

    2012-08-01

The paper deals with the notions of weak stability and weak generalized convolution with respect to a generalized convolution, introduced by Kucharczak and Urbanik. We study properties of such objects and give examples of weakly stable measures with respect to the Kendall convolution. Moreover, we show that in the context of non-commutative probability, two operations: the -convolution and the (,1)-convolution satisfy Urbanik’s conditions for a generalized convolution, interpreted on the set of moment sequences. The weak stability reveals the relation between the two operations.

  17. Global Polynomial Kernel Hazard Estimation

    DEFF Research Database (Denmark)

    Hiabu, Munir; Miranda, Maria Dolores Martínez; Nielsen, Jens Perch;

    2015-01-01

    This paper introduces a new bias reducing method for kernel hazard estimation. The method is called global polynomial adjustment (GPA). It is a global correction which is applicable to any kernel hazard estimator. The estimator works well from a theoretical point of view as it asymptotically...

  18. Global Polynomial Kernel Hazard Estimation

    DEFF Research Database (Denmark)

    Hiabu, Munir; Miranda, Maria Dolores Martínez; Nielsen, Jens Perch

    2015-01-01

    This paper introduces a new bias reducing method for kernel hazard estimation. The method is called global polynomial adjustment (GPA). It is a global correction which is applicable to any kernel hazard estimator. The estimator works well from a theoretical point of view as it asymptotically redu...

  19. Spectral classification using convolutional neural networks

    CERN Document Server

    Hála, Pavel

    2014-01-01

There is a great need for accurate and autonomous spectral classification methods in astrophysics. This thesis is about training a convolutional neural network (ConvNet) to recognize an object class (quasar, star or galaxy) from one-dimensional spectra only. The author developed several scripts and C programs for dataset preparation, preprocessing and postprocessing of the data. The EBLearn library (developed by Pierre Sermanet and Yann LeCun) was used to create the ConvNets. Application on a dataset of more than 60000 spectra yielded a success rate of nearly 95%. This thesis conclusively proved the great potential of convolutional neural networks and deep learning methods in astrophysics.

  20. SAR ATR Based on Convolutional Neural Network

    Directory of Open Access Journals (Sweden)

    Tian Zhuangzhuang

    2016-06-01

    Full Text Available This study presents a new method of Synthetic Aperture Radar (SAR image target recognition based on a convolutional neural network. First, we introduce a class separability measure into the cost function to improve this network’s ability to distinguish between categories. Then, we extract SAR image features using the improved convolutional neural network and classify these features using a support vector machine. Experimental results using moving and stationary target acquisition and recognition SAR datasets prove the validity of this method.

  1. Graph kernels between point clouds

    CERN Document Server

    Bach, Francis

    2007-01-01

Point clouds are sets of points in two or three dimensions. Most kernel methods for learning on sets of points have not yet dealt with the specific geometrical invariances and practical constraints associated with point clouds in computer vision and graphics. In this paper, we present extensions of graph kernels for point clouds, which allow the use of kernel methods for such objects as shapes, line drawings, or any three-dimensional point clouds. In order to design rich and numerically efficient kernels with as few free parameters as possible, we use kernels between covariance matrices and their factorizations on graphical models. We derive polynomial time dynamic programming recursions and present applications to recognition of handwritten digits and Chinese characters from few training examples.

  2. Kernel Generalized Noise Clustering Algorithm

    Institute of Scientific and Technical Information of China (English)

    WU Xiao-hong; ZHOU Jian-jiang

    2007-01-01

To deal with the nonlinear separable problem, the generalized noise clustering (GNC) algorithm is extended to a kernel generalized noise clustering (KGNC) model. Different from the fuzzy c-means (FCM) model and the GNC model, which are based on Euclidean distance, the presented model is based on kernel-induced distance obtained by the kernel method. By the kernel method the input data are nonlinearly and implicitly mapped into a high-dimensional feature space, where the nonlinear pattern appears linear and the GNC algorithm is performed. It is unnecessary to calculate in the high-dimensional feature space because the kernel function can do it just in input space. The effectiveness of the proposed algorithm is verified by experiments on three data sets. It is concluded that the KGNC algorithm has better clustering accuracy than FCM and GNC on data sets containing noisy data.
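    The kernel-induced distance that replaces the Euclidean one can be sketched directly from the kernel trick (the RBF kernel and its gamma below are illustrative choices):

```python
# Sketch: feature-space distance via the kernel trick only,
# ||phi(x) - phi(v)||^2 = K(x,x) - 2 K(x,v) + K(v,v).
import numpy as np

def rbf(a, b, gamma=0.5):
    return np.exp(-gamma * np.sum((a - b) ** 2))

def kernel_dist2(x, v, kernel=rbf):
    """Squared feature-space distance between x and a prototype v."""
    return kernel(x, x) - 2 * kernel(x, v) + kernel(v, v)

x = np.array([1.0, 2.0]); v = np.array([0.0, 0.0])
print(kernel_dist2(x, v))   # = 2 - 2*K(x, v) for an RBF kernel
```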

  3. On a Generalized Hankel Type Convolution of Generalized Functions

    Indian Academy of Sciences (India)

    S P Malgonde; G S Gaikawad

    2001-11-01

The classical generalized Hankel type convolution is defined and extended to a class of generalized functions. Algebraic properties of the convolution are explained and the existence and significance of an identity element are discussed.

  4. Semantic segmentation of bioimages using convolutional neural networks

    CSIR Research Space (South Africa)

    Wiehman, S

    2016-07-01

Full Text Available Convolutional neural networks have shown great promise in general image segmentation problems as well as bioimage segmentation. In this paper, the application of different convolutional network architectures is explored on the C. elegans live...

  5. A note on maximal estimates for stochastic convolutions

    NARCIS (Netherlands)

    Veraar, M.; Weis, L.

    2011-01-01

    In stochastic partial differential equations it is important to have pathwise regularity properties of stochastic convolutions. In this note we present a new sufficient condition for the pathwise continuity of stochastic convolutions in Banach spaces.

  6. Robotic intelligence kernel

    Science.gov (United States)

    Bruemmer, David J.

    2009-11-17

    A robot platform includes perceptors, locomotors, and a system controller. The system controller executes a robot intelligence kernel (RIK) that includes a multi-level architecture and a dynamic autonomy structure. The multi-level architecture includes a robot behavior level for defining robot behaviors, that incorporate robot attributes and a cognitive level for defining conduct modules that blend an adaptive interaction between predefined decision functions and the robot behaviors. The dynamic autonomy structure is configured for modifying a transaction capacity between an operator intervention and a robot initiative and may include multiple levels with at least a teleoperation mode configured to maximize the operator intervention and minimize the robot initiative and an autonomous mode configured to minimize the operator intervention and maximize the robot initiative. Within the RIK at least the cognitive level includes the dynamic autonomy structure.

  7. Flexible kernel memory.

    Science.gov (United States)

    Nowicki, Dimitri; Siegelmann, Hava

    2010-06-11

    This paper introduces a new model of associative memory, capable of both binary and continuous-valued inputs. Based on kernel theory, the memory model is on one hand a generalization of Radial Basis Function networks and, on the other, is in feature space, analogous to a Hopfield network. Attractors can be added, deleted, and updated on-line simply, without harming existing memories, and the number of attractors is independent of input dimension. Input vectors do not have to adhere to a fixed or bounded dimensionality; they can increase and decrease it without relearning previous memories. A memory consolidation process enables the network to generalize concepts and form clusters of input data, which outperforms many unsupervised clustering techniques; this process is demonstrated on handwritten digits from MNIST. Another process, reminiscent of memory reconsolidation is introduced, in which existing memories are refreshed and tuned with new inputs; this process is demonstrated on series of morphed faces.

  8. Flexible kernel memory.

    Directory of Open Access Journals (Sweden)

    Dimitri Nowicki

    Full Text Available This paper introduces a new model of associative memory, capable of both binary and continuous-valued inputs. Based on kernel theory, the memory model is on one hand a generalization of Radial Basis Function networks and, on the other, is in feature space, analogous to a Hopfield network. Attractors can be added, deleted, and updated on-line simply, without harming existing memories, and the number of attractors is independent of input dimension. Input vectors do not have to adhere to a fixed or bounded dimensionality; they can increase and decrease it without relearning previous memories. A memory consolidation process enables the network to generalize concepts and form clusters of input data, which outperforms many unsupervised clustering techniques; this process is demonstrated on handwritten digits from MNIST. Another process, reminiscent of memory reconsolidation is introduced, in which existing memories are refreshed and tuned with new inputs; this process is demonstrated on series of morphed faces.

  9. Convolutional Neural Networks for SAR Image Segmentation

    DEFF Research Database (Denmark)

    Malmgren-Hansen, David; Nobel-Jørgensen, Morten

    2015-01-01

    Segmentation of Synthetic Aperture Radar (SAR) images has several uses, but it is a difficult task due to a number of properties related to SAR images. In this article we show how Convolutional Neural Networks (CNNs) can easily be trained for SAR image segmentation with good results. Besides...

  10. Discrete Fresnel Transform and Its Circular Convolution

    CERN Document Server

    Ouyang, Xing; Gunning, Fatima; Zhang, Hongyu; Guan, Yong Liang

    2015-01-01

Discrete trigonometric transformations, such as the discrete Fourier and cosine/sine transforms, are important in a variety of applications due to their useful properties. For example, one well-known property is the convolution theorem for the Fourier transform. In this letter, we derive a discrete Fresnel transform (DFnT) from infinitely periodic optical gratings, as a linear trigonometric transform. Compared to previous formulations, the DFnT in this letter has no degeneracy due to destructive interferences, a degeneracy which hinders mathematical applications. The circular convolution property of the DFnT is studied for the first time. It is proved that the DFnT of a circular convolution of two sequences equals either sequence circularly convolved with the DFnT of the other. As circular convolution is a fundamental process in discrete systems, the DFnT not only gives the coefficients of the Talbot image, but can also be useful for optical and digital signal processing and numerical evaluation of the Fresnel ...
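    For comparison, the analogous and more familiar DFT property can be checked in a few lines (this demonstrates the Fourier circular convolution theorem, not the DFnT variant proved in the letter):

```python
# Sketch: the DFT of a circular convolution equals the pointwise product
# of the DFTs, the classical analogue of the DFnT property above.
import numpy as np

rng = np.random.default_rng(6)
a, b = rng.normal(size=(2, 16))
# circular convolution: c[k] = sum_n a[n] b[(k - n) mod N]
circ = np.array([np.sum(a * np.roll(b[::-1], k + 1)) for k in range(16)])
print(np.allclose(np.fft.fft(circ), np.fft.fft(a) * np.fft.fft(b)))  # True
```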

  11. Properties of derivations on some convolution algebras

    DEFF Research Database (Denmark)

    Pedersen, Thomas Vils

    2014-01-01

For all convolution algebras $L^1[0,1)$, $L^1_{\mathrm{loc}}$ and $A(\omega) = \bigcap_n L^1(\omega_n)$, the derivations are of the form $D_\mu f = Xf * \mu$ for suitable measures $\mu$, where $(Xf)(t) = tf(t)$. We describe the (weakly) compact as well as the (weakly) Montel derivations on these algebras in terms of properties of the measure $\mu$...

  12. Epileptiform spike detection via convolutional neural networks

    DEFF Research Database (Denmark)

    Johansen, Alexander Rosenberg; Jin, Jing; Maszczyk, Tomasz

    2016-01-01

    The EEG of epileptic patients often contains sharp waveforms called "spikes", occurring between seizures. Detecting such spikes is crucial for diagnosing epilepsy. In this paper, we develop a convolutional neural network (CNN) for detecting spikes in EEG of epileptic patients in an automated fash...

  13. Quasi-cyclic unit memory convolutional codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Paaske, Erik; Ballan, Mark

    1990-01-01

    Unit memory convolutional codes with generator matrices, which are composed of circulant submatrices, are introduced. This structure facilitates the analysis of efficient search for good codes. Equivalences among such codes and some of the basic structural properties are discussed. In particular...

  14. Convolutions with the Continuous Primitive Integral

    Directory of Open Access Journals (Sweden)

    Erik Talvila

    2009-01-01

I⊂ℝ. When g∈L¹, the estimate is ‖f∗g‖ ≤ ‖f‖‖g‖₁. There are results on differentiation and integration of convolutions. A type of Fubini theorem is proved for the continuous primitive integral.

  15. Mixture Density Mercer Kernels: A Method to Learn Kernels

    Data.gov (United States)

    National Aeronautics and Space Administration — This paper presents a method of generating Mercer Kernels from an ensemble of probabilistic mixture models, where each mixture model is generated from a Bayesian...

  16. Population Density Equations for Stochastic Processes with Memory Kernels

    CERN Document Server

    Lai, Yi Ming

    2016-01-01

    We present a novel method for solving population density equations, where the populations can be subject to non-Markov noise for arbitrary distributions of jump sizes. There are important advantages over earlier methods: instead of introducing an extra dimension, we find that the history of the noise process can always be accounted for by the convolution of a kernel of limited depth with a history of the density, rendering the method more efficient. Excitatory and inhibitory input contributions can be treated on equal footing. Transient results can be modeled accurately, which is of vital importance as population density methods are increasingly used to model neural circuits. This method can be used in network simulations where analytic results are not available. The method cleanly separates deterministic and stochastic processes, leaving only the evolution of the stochastic process to be solved. This allows for a direct incorporation of novel developments in the theory of random walks. We demonstrate this by...
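    A schematic of the central bookkeeping trick, convolving a finite-depth kernel with a ring buffer of past densities (the update rule and kernel below are toy stand-ins for the paper's population density equations):

```python
# Sketch: the influence of a non-Markov noise history enters as the
# convolution of a limited-depth kernel with stored past densities.
import numpy as np

depth, n_bins, n_steps = 20, 100, 200
kernel = np.exp(-np.arange(depth) / 5.0); kernel /= kernel.sum()
history = np.zeros((depth, n_bins))             # buffer of past densities
rho = np.zeros(n_bins); rho[50] = 1.0           # initial density

for step in range(n_steps):
    history = np.roll(history, 1, axis=0); history[0] = rho
    memory_term = kernel @ history              # convolution over the history
    rho = 0.9 * rho + 0.1 * memory_term         # toy relaxation update
    rho /= rho.sum()
print(rho.max())
```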

  17. 7 CFR 981.9 - Kernel weight.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Kernel weight. 981.9 Section 981.9 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Regulating Handling Definitions § 981.9 Kernel weight. Kernel weight means the weight of kernels,...

  18. (Pre)kernel catchers for cooperative games

    NARCIS (Netherlands)

    Chang, Chih; Driessen, Theo

    1995-01-01

    The paper provides a new (pre)kernel catcher in that the relevant set always contains the (pre)kernel. This new (pre)kernel catcher gives rise to a better lower bound ɛ*** such that the kernel is included in strong ɛ-cores for all real numbers ɛ not smaller than the relevant bound ɛ***.

  19. 7 CFR 51.2295 - Half kernel.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Half kernel. 51.2295 Section 51.2295 Agriculture... Standards for Shelled English Walnuts (Juglans Regia) Definitions § 51.2295 Half kernel. Half kernel means the separated half of a kernel with not more than one-eighth broken off....

  20. Alternative theory of diffraction grating spectral device and its application for calculation of convolution and correlation of optical pulse signals

    Science.gov (United States)

    Kazakov, Vasily I.; Moskaletz, Dmitry O.; Moskaletz, Oleg D.

    2016-04-01

    A new, alternative theory of the diffraction grating spectral device is proposed, based on mathematical analysis of the optical signal transformation from the input aperture of the spectral device to the result of photodetection. Exhaustive characteristics of the diffraction grating spectral device, namely its complex and power spread functions as the kernels of the corresponding integral operator describing the optical signal transformation by the spectral device, are obtained. On the basis of the proposed alternative theory, the possibility of using the diffraction grating spectral device for calculating the convolution and correlation of optical pulse signals is shown.

  1. A kernel version of spatial factor analysis

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2009-01-01

    Schölkopf et al. introduce kernel PCA. Shawe-Taylor and Cristianini provide an excellent reference for kernel methods in general. Bishop and Press et al. describe kernel methods among many other subjects. Nielsen and Canty use kernel PCA to detect change in univariate airborne digital camera images. The kernel version of PCA handles nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via the kernel function and then performing a linear analysis in that space. In this paper we shall apply kernel versions of PCA, maximum autocorrelation factor (MAF) analysis...
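
    For readers who want the mechanics spelled out, the kernel PCA idea described above can be sketched in a few lines of Python; the Gaussian kernel, the data, and all names below are illustrative assumptions, not taken from the record.

```python
# Minimal kernel PCA sketch (assumption: Gaussian/RBF kernel).
import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    """Project rows of X onto the leading kernel principal components."""
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))  # Gram matrix
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                       # centre the kernel in feature space
    vals, vecs = np.linalg.eigh(Kc)      # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:n_components]
    # Scores are eigenvectors scaled by the square root of their eigenvalues.
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

X = np.random.rand(100, 5)
print(kernel_pca(X).shape)  # (100, 2)
```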

  2. Kernel model-based diagnosis

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    The methods for computing the kernel consistency-based diagnoses and the kernel abductive diagnoses are suited only to the situation where part of the fault behavioral modes of the components are known. A characterization of kernel model-based diagnosis based on the general causal theory is proposed, which can overcome the limitation of the above methods when all behavioral modes of each component are known. Using this method, when the logically deduced observation subsets are assigned to the empty set or the whole observation set, respectively, the kernel consistency-based diagnoses and the kernel abductive diagnoses can deal with all situations. The direct relationship between this diagnostic procedure and the prime implicants/implicates is proved, thus linking the theoretical result with implementation.

  3. Decoding of Convolutional Codes over the Erasure Channel

    CERN Document Server

    Tomás, Virtudes; Smarandache, Roxana

    2010-01-01

    In this paper we study the decoding capabilities of convolutional codes over the erasure channel. Of special interest will be maximum distance profile (MDP) convolutional codes. These are codes which have a maximum possible column distance increase. We show how this strong minimum distance condition of MDP convolutional codes helps us to solve error situations that maximum distance separable (MDS) block codes fail to solve. Towards this goal, we define two subclasses of MDP codes: reverse-MDP convolutional codes and complete-MDP convolutional codes. Reverse-MDP codes have the capability to recover a maximum number of erasures using an algorithm which runs backward in time. Complete-MDP convolutional codes are both MDP and reverse-MDP codes. They are capable of recovering the state of the decoder under the mildest condition. We show that complete-MDP convolutional codes perform in a certain sense better than MDS block codes of the same rate over the erasure channel.

  4. Notes on the gamma kernel

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole E.

    The density function of the gamma distribution is used as shift kernel in Brownian semistationary processes modelling the timewise behaviour of the velocity in turbulent regimes. This report presents exact and asymptotic properties of the second order structure function under such a model......, and relates these to results of von Kármán and Howarth. But first it is shown that the gamma kernel is interpretable as a Green’s function....

  5. Kernel Rootkits Implement and Detection

    Institute of Scientific and Technical Information of China (English)

    LI Xianghe; ZHANG Liancheng; LI Shuo

    2006-01-01

    Rootkits, which unnoticeably reside in your computer, stealthily carry on remote control and software eavesdropping, and are a great threat to network and computer security. It is time to acquaint ourselves with their implementation and detection. This article pays particular attention to kernel rootkits, because they are more difficult to write and to detect than userland rootkits. The latest technologies used to write and detect kernel rootkits, along with their advantages and disadvantages, are presented in this article.

  6. A convolutional neural network neutrino event classifier

    Science.gov (United States)

    Aurisano, A.; Radovic, A.; Rocco, D.; Himmel, A.; Messier, M. D.; Niner, E.; Pawloski, G.; Psihas, F.; Sousa, A.; Vahle, P.

    2016-09-01

    Convolutional neural networks (CNNs) have been widely applied in the computer vision community to solve complex problems in image recognition and analysis. We describe an application of the CNN technology to the problem of identifying particle interactions in sampling calorimeters used commonly in high energy physics and high energy neutrino physics in particular. Following a discussion of the core concepts of CNNs and recent innovations in CNN architectures related to the field of deep learning, we outline a specific application to the NOvA neutrino detector. This algorithm, CVN (Convolutional Visual Network) identifies neutrino interactions based on their topology without the need for detailed reconstruction and outperforms algorithms currently in use by the NOvA collaboration.

  7. Transition Mean Values of Shifted Convolution Sums

    CERN Document Server

    Petrow, Ian

    2011-01-01

    Let f be a classical holomorphic cusp form for SL_2(Z) of weight k which is a normalized eigenfunction for the Hecke algebra, and let \\lambda(n) be its eigenvalues. In this paper we study "shifted convolution sums" of the eigenvalues \\lambda(n) after averaging over many shifts h and obtain asymptotic estimates. The result is somewhat surprising: one encounters a transition region depending on the ratio of the square of the length of the average over h to the length of the shifted convolution sum. The phenomenon is similar to that encountered by Conrey, Farmer and Soundararajan in their 2000 paper Transition Mean Values of Real Characters, and the connection of both results to Eisenstein series and multiple Dirichlet series is discussed.

  8. A Convolutional Neural Network Neutrino Event Classifier

    CERN Document Server

    Aurisano, A; Rocco, D; Himmel, A; Messier, M D; Niner, E; Pawloski, G; Psihas, F; Sousa, A; Vahle, P

    2016-01-01

    Convolutional neural networks (CNNs) have been widely applied in the computer vision community to solve complex problems in image recognition and analysis. We describe an application of the CNN technology to the problem of identifying particle interactions in sampling calorimeters used commonly in high energy physics and high energy neutrino physics in particular. Following a discussion of the core concepts of CNNs and recent innovations in CNN architectures related to the field of deep learning, we outline a specific application to the NOvA neutrino detector. This algorithm, CVN (Convolutional Visual Network) identifies neutrino interactions based on their topology without the need for detailed reconstruction and outperforms algorithms currently in use by the NOvA collaboration.

  9. A convolution-superposition dose calculation engine for GPUs

    Energy Technology Data Exchange (ETDEWEB)

    Hissoiny, Sami; Ozell, Benoit; Despres, Philippe [Departement de genie informatique et genie logiciel, Ecole polytechnique de Montreal, 2500 Chemin de Polytechnique, Montreal, Quebec H3T 1J4 (Canada); Departement de radio-oncologie, CRCHUM-Centre hospitalier de l' Universite de Montreal, 1560 rue Sherbrooke Est, Montreal, Quebec H2L 4M1 (Canada)

    2010-03-15

    Purpose: Graphics processing units (GPUs) are increasingly used for scientific applications, where their parallel architecture and unprecedented computing power density can be exploited to accelerate calculations. In this paper, a new GPU implementation of a convolution/superposition (CS) algorithm is presented. Methods: This new GPU implementation has been designed from the ground up to use the graphics card's strengths and to avoid its weaknesses. The CS GPU algorithm takes into account beam hardening, off-axis softening, and kernel tilting, and relies heavily on raytracing through patient imaging data. Implementation details are reported as well as a multi-GPU solution. Results: An overall single-GPU acceleration factor of 908x was achieved when compared to a nonoptimized version of the CS algorithm implemented in PlanUNC in single threaded central processing unit (CPU) mode, resulting in approximately 2.8 s per beam for a 3D dose computation on a 0.4 cm grid. A comparison to an established commercial system leads to an acceleration factor of approximately 29x, or 0.58 versus 16.6 s per beam in single threaded mode. An acceleration factor of 46x has been obtained for the total energy released per mass (TERMA) calculation and a 943x acceleration factor for the CS calculation compared to PlanUNC. Dose distributions also have been obtained for a simple water-lung phantom to verify that the implementation gives accurate results. Conclusions: These results suggest that GPUs are an attractive solution for radiation therapy applications and that careful design, taking the GPU architecture into account, is critical in obtaining significant acceleration factors. These results can potentially have a significant impact on complex dose delivery techniques requiring intensive dose calculations such as intensity-modulated radiation therapy (IMRT) and arc therapy. They are also relevant for adaptive radiation therapy where dose results must be obtained rapidly.

  10. Rational Convolution Roots of Isobaric Polynomials

    OpenAIRE

    Conci, Aura; Li, Huilan; MacHenry, Trueman

    2014-01-01

    In this paper, we exhibit two matrix representations of the rational roots of generalized Fibonacci polynomials (GFPs) under convolution product, in terms of determinants and permanents, respectively. The underlying root formulas for GFPs and for weighted isobaric polynomials (WIPs), which appeared in an earlier paper by MacHenry and Tudose, make use of two types of operators. These operators are derived from the generating functions for Stirling numbers of the first kind and second kind. Hen...

  11. A Generative Model for Deep Convolutional Learning

    OpenAIRE

    Pu, Yunchen; Yuan, Xin; Carin, Lawrence

    2015-01-01

    A generative model is developed for deep (multi-layered) convolutional dictionary learning. A novel probabilistic pooling operation is integrated into the deep model, yielding efficient bottom-up (pretraining) and top-down (refinement) probabilistic learning. Experimental results demonstrate powerful capabilities of the model to learn multi-layer features from images, and excellent classification results are obtained on the MNIST and Caltech 101 datasets.

  12. Convolutional Neural Network Based DEM Super Resolution

    Science.gov (United States)

    Chen, Zixuan; Wang, Xuewen; Xu, Zekai; Hou, Wenguang

    2016-06-01

    DEM super resolution was proposed in our previous publication to improve the resolution of a DEM on the basis of learning examples, and a nonlocal algorithm was introduced to implement it; many experiments showed that the strategy is feasible. In that work, the learning examples were defined as parts of the original DEM together with their corresponding high-resolution measurements, since this choice avoids incompatibility between the data to be processed and the learning examples. To further extend the applications of this new strategy, the learning examples should be diverse and easy to obtain, yet this may cause problems of incompatibility and lack of robustness. To overcome this, we investigate a convolutional neural network based method. The input of the convolutional neural network is a low resolution DEM and the output is expected to be its high resolution counterpart. A three layer model is adopted: the first layer detects features from the input, the second integrates the detected features into compressed ones, and the final layer transforms the compressed features into a new DEM. Given this designed structure, learning DEMs are used to train the network; specifically, the network is optimized by minimizing the error between the output and its expected high resolution DEM. In practical applications, a testing DEM is input to the convolutional neural network and a super resolution result is obtained. Many experiments show that the CNN based method obtains better reconstructions than many classic interpolation methods.
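
    A minimal sketch of a three-layer network of the kind described above (detect, integrate/compress, reconstruct), written in PyTorch; the layer widths and kernel sizes are assumptions for illustration, not the paper's values.

```python
# Minimal sketch of a three-layer super-resolution CNN
# (detect -> integrate/compress -> reconstruct). Layer widths and
# kernel sizes are illustrative, not taken from the paper.
import torch
import torch.nn as nn

class DEMSuperResolution(nn.Module):
    def __init__(self):
        super().__init__()
        self.detect = nn.Conv2d(1, 64, kernel_size=9, padding=4)       # detect features
        self.compress = nn.Conv2d(64, 32, kernel_size=1)               # integrate/compress
        self.reconstruct = nn.Conv2d(32, 1, kernel_size=5, padding=2)  # rebuild the DEM

    def forward(self, x):
        x = torch.relu(self.detect(x))
        x = torch.relu(self.compress(x))
        return self.reconstruct(x)

model = DEMSuperResolution()
lr_dem = torch.randn(1, 1, 64, 64)   # low-resolution DEM, upsampled to the target grid
hr_dem = torch.randn(1, 1, 64, 64)   # expected high-resolution DEM (training target)
loss = nn.functional.mse_loss(model(lr_dem), hr_dem)  # error minimized during training
```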

  13. Kernel versions of some orthogonal transformations

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    Kernel versions of orthogonal transformations such as principal components are based on a dual formulation, also termed Q-mode analysis, in which the data enter into the analysis via inner products in the Gram matrix only. In the kernel version the inner products of the original data are replaced by inner products between nonlinear mappings into higher dimensional feature space. Via kernel substitution, also known as the kernel trick, these inner products between the mappings are in turn replaced by a kernel function and all quantities needed in the analysis are expressed in terms of this kernel function. This means that we need not know the nonlinear mappings explicitly. Kernel principal component analysis (PCA) and kernel minimum noise fraction (MNF) analyses handle nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via the kernel function...

  14. An Approximate Approach to Automatic Kernel Selection.

    Science.gov (United States)

    Ding, Lizhong; Liao, Shizhong

    2016-02-02

    Kernel selection is a fundamental problem of kernel-based learning algorithms. In this paper, we propose an approximate approach to automatic kernel selection for regression from the perspective of kernel matrix approximation. We first introduce multilevel circulant matrices into automatic kernel selection, and develop two approximate kernel selection algorithms by exploiting the computational virtues of multilevel circulant matrices. The complexity of the proposed algorithms is quasi-linear in the number of data points. Then, we prove an approximation error bound to measure the effect of the approximation in kernel matrices by multilevel circulant matrices on the hypothesis and further show that the approximate hypothesis produced with multilevel circulant matrices converges to the accurate hypothesis produced with kernel matrices. Experimental evaluations on benchmark datasets demonstrate the effectiveness of approximate kernel selection.

  15. Model Selection in Kernel Ridge Regression

    DEFF Research Database (Denmark)

    Exterkate, Peter

    Kernel ridge regression is gaining popularity as a data-rich nonlinear forecasting tool, which is applicable in many different contexts. This paper investigates the influence of the choice of kernel and the setting of tuning parameters on forecast accuracy. We review several popular kernels, including polynomial kernels, the Gaussian kernel, and the Sinc kernel. We interpret the latter two kernels in terms of their smoothing properties, and we relate the tuning parameters associated to all these kernels to smoothness measures of the prediction function and to the signal-to-noise ratio. Based on these interpretations, we provide guidelines for selecting the tuning parameters from small grids using cross-validation. A Monte Carlo study confirms the practical usefulness of these rules of thumb. Finally, the flexible and smooth functional forms provided by the Gaussian and Sinc kernels make them widely...
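
    The recommended workflow, selecting the kernel scale and the ridge penalty from a small grid by cross-validation, can be sketched with scikit-learn; the grid values and data below are illustrative, not the paper's.

```python
# Minimal sketch: choose the Gaussian-kernel scale and ridge penalty
# from a small grid by cross-validation. Grid values are illustrative.
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.random((200, 3))
y = np.sin(4 * X[:, 0]) + 0.1 * rng.standard_normal(200)

grid = {"alpha": [1e-3, 1e-2, 1e-1, 1.0],   # ridge penalty (noise level)
        "gamma": [0.1, 1.0, 10.0]}          # kernel scale (smoothness)
search = GridSearchCV(KernelRidge(kernel="rbf"), grid, cv=5)
search.fit(X, y)
print(search.best_params_)
```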

  16. Integral equations with contrasting kernels

    Directory of Open Access Journals (Sweden)

    Theodore Burton

    2008-01-01

    Full Text Available In this paper we study integral equations of the form $x(t)=a(t)-\int^t_0 C(t,s)x(s)\,ds$ with sharply contrasting kernels typified by $C^*(t,s)=\ln(e+(t-s))$ and $D^*(t,s)=[1+(t-s)]^{-1}$. The kernel assigns a weight to $x(s)$ and these kernels have exactly opposite effects of weighting. Each type is well represented in the literature. Our first project is to show that for $a\in L^2[0,\infty)$, solutions are largely indistinguishable regardless of which kernel is used. This is a surprise and it leads us to study the essential differences. In fact, those differences become large as the magnitude of $a(t)$ increases. The form of the kernel alone projects necessary conditions concerning the magnitude of $a(t)$ which could result in bounded solutions. Thus, the next project is to determine how close we can come to proving that the necessary conditions are also sufficient. The third project is to show that solutions will be bounded for given conditions on $C$ regardless of whether $a$ is chosen large or small; this is important in real-world problems since we would like to have $a(t)$ as the sum of a bounded, but badly behaved function, and a large well behaved function.
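
    A minimal numerical sketch of the equation under study, discretizing $x(t)=a(t)-\int^t_0 C(t,s)x(s)\,ds$ with the trapezoidal rule for both kernels; the forcing $a(t)$ is an illustrative choice, not one from the paper.

```python
# Minimal numerical sketch of x(t) = a(t) - int_0^t C(t,s) x(s) ds,
# discretized with the trapezoidal rule, for the two contrasting
# kernels above. The forcing a(t) is an illustrative L^2 choice.
import numpy as np

def solve_volterra(a, C, T=10.0, n=1000):
    h = T / n
    t = np.linspace(0.0, T, n + 1)
    x = np.zeros(n + 1)
    x[0] = a(t[0])
    for i in range(1, n + 1):
        # Trapezoidal weights: half at the endpoints, full in between.
        s = 0.5 * C(t[i], t[0]) * x[0] + C(t[i], t[1:i]) @ x[1:i]
        x[i] = (a(t[i]) - h * s) / (1.0 + 0.5 * h * C(t[i], t[i]))
    return t, x

a = lambda t: np.exp(-t) * np.sin(t)            # illustrative forcing in L^2
C_star = lambda t, s: np.log(np.e + (t - s))    # slowly growing kernel
D_star = lambda t, s: 1.0 / (1.0 + (t - s))     # decaying kernel

t, x1 = solve_volterra(a, C_star)
_, x2 = solve_volterra(a, D_star)
print(np.max(np.abs(x1 - x2)))   # the two solutions stay close for mild a(t)
```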

  17. Model selection for Gaussian kernel PCA denoising

    DEFF Research Database (Denmark)

    Jørgensen, Kasper Winther; Hansen, Lars Kai

    2012-01-01

    We propose kernel Parallel Analysis (kPA) for automatic kernel scale and model order selection in Gaussian kernel PCA. Parallel Analysis [1] is based on a permutation test for covariance and has previously been applied for model order selection in linear PCA; we here augment the procedure to also tune the Gaussian kernel scale of radial basis function based kernel PCA. We evaluate kPA for denoising of simulated data and the US Postal data set of handwritten digits. We find that kPA outperforms other heuristics to choose the model order and kernel scale in terms of signal-to-noise ratio (SNR...
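
    A permutation test in the spirit of Parallel Analysis for Gaussian kernel PCA can be sketched as follows; this is a simplified reading of the procedure, and the kernel scale, number of permutations, and 95% threshold are all assumptions.

```python
# Minimal permutation-test sketch in the spirit of kPA: keep the
# kernel PCA components whose eigenvalues exceed those obtained from
# column-permuted (decorrelated) surrogate data.
import numpy as np

def centred_rbf_gram(X, gamma):
    sq = (X**2).sum(axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n
    return J @ K @ J

def model_order(X, gamma=1.0, n_perm=20, seed=0):
    rng = np.random.default_rng(seed)
    eig = np.sort(np.linalg.eigvalsh(centred_rbf_gram(X, gamma)))[::-1]
    null = np.empty((n_perm, len(eig)))
    for p in range(n_perm):
        # Permute each column independently to destroy correlations.
        Xp = np.column_stack([rng.permutation(col) for col in X.T])
        null[p] = np.sort(np.linalg.eigvalsh(centred_rbf_gram(Xp, gamma)))[::-1]
    # Retain components whose eigenvalue beats the 95th null percentile.
    return int(np.sum(eig > np.percentile(null, 95, axis=0)))

X = np.random.default_rng(1).standard_normal((100, 4))
print(model_order(X))
```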

  18. Kernel learning algorithms for face recognition

    CERN Document Server

    Li, Jun-Bao; Pan, Jeng-Shyang

    2013-01-01

    Kernel Learning Algorithms for Face Recognition covers the framework of kernel based face recognition. This book discusses advanced kernel learning algorithms and their application to face recognition. This book also focuses on the theoretical derivation, the system framework and experiments involving kernel based face recognition. Included within are algorithms of kernel based face recognition, and also the feasibility of the kernel based face recognition method. This book provides researchers in the pattern recognition and machine learning area with advanced face recognition methods and its new...

  19. The convolution theorem for two-dimensional continuous wavelet transform

    Institute of Scientific and Technical Information of China (English)

    ZHANG CHI

    2013-01-01

    In this paper, the application of the two-dimensional continuous wavelet transform to image processing is studied. We first show that the convolution and correlation of two continuous wavelets satisfy the required admissibility and regularity conditions, and then we derive the convolution and correlation theorem for the two-dimensional continuous wavelet transform. Finally, we present a numerical example showing the usefulness of applying the convolution theorem for the two-dimensional continuous wavelet transform to perform image restoration in the presence of additive noise.
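
    The Fourier-domain identity underlying such convolution theorems is easy to verify numerically: circular convolution in the spatial domain equals pointwise multiplication of transforms. A minimal 2-D check with NumPy (a discrete FFT analogue, not the continuous wavelet setting of the paper):

```python
# Numerical check of the identity behind convolution theorems:
# circular convolution in the spatial domain equals pointwise
# multiplication of the 2-D Fourier transforms.
import numpy as np

rng = np.random.default_rng(1)
f = rng.random((64, 64))
g = rng.random((64, 64))

conv_fft = np.real(np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(g)))

# Direct circular convolution at one test point, for verification.
i, j = 10, 20
direct = sum(f[a, b] * g[(i - a) % 64, (j - b) % 64]
             for a in range(64) for b in range(64))
print(np.isclose(conv_fft[i, j], direct))  # True
```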

  20. An Algorithm for the Convolution of Legendre Series

    KAUST Repository

    Hale, Nicholas

    2014-01-01

    An O(N²) algorithm for the convolution of compactly supported Legendre series is described. The algorithm is derived from the convolution theorem for Legendre polynomials and the recurrence relation satisfied by spherical Bessel functions. Combining with previous work yields an O(N²) algorithm for the convolution of Chebyshev series. Numerical results are presented to demonstrate the improved efficiency over the existing algorithm. © 2014 Society for Industrial and Applied Mathematics.

  1. BERNOULLI CONVOLUTIONS ASSOCIATED WITH CERTAIN NON-PISOT NUMBERS

    Institute of Scientific and Technical Information of China (English)

    Feng Dejun; Wang Yang

    2003-01-01

    The Bernoulli convolution measure νλ is shown to be absolutely continuous with L² density for almost all 1/2<λ<1, and singular if λ⁻¹ is a Pisot number. It is an open question whether the Pisot type Bernoulli convolutions are the only singular ones. In this paper, we construct a family of non-Pisot type Bernoulli convolutions νλ such that their density functions, if they exist, are not L². We also construct other Bernoulli convolutions whose density functions, if they exist, behave rather badly.

  2. Convolutions Induced Discrete Probability Distributions and a New Fibonacci Constant

    CERN Document Server

    Rajan, Arulalan; Rao, Vittal; Rao, Ashok

    2010-01-01

    This paper proposes another constant that can be associated with the Fibonacci sequence. In this work, we look at the probability distributions generated by the linear convolution of the Fibonacci sequence with itself, and the linear convolution of the symmetrized Fibonacci sequence with itself. We observe that for a distribution generated by the linear convolution of the standard Fibonacci sequence with itself, the variance converges to 8.4721359... . Also, for a distribution generated by the linear convolution of symmetrized Fibonacci sequences, the variance converges in an average sense to 17.1942..., which is approximately twice what we get with the common Fibonacci sequence.

  3. Semi-Supervised Kernel PCA

    DEFF Research Database (Denmark)

    Walder, Christian; Henao, Ricardo; Mørup, Morten

    We present three generalisations of Kernel Principal Components Analysis (KPCA) which incorporate knowledge of the class labels of a subset of the data points. The first, MV-KPCA, penalises within class variances similar to Fisher discriminant analysis. The second, LSKPCA, is a hybrid of least squares regression and kernel PCA. The final, LR-KPCA, is an iteratively reweighted version of the previous which achieves a sigmoid loss function on the labeled points. We provide a theoretical risk bound as well as illustrative experiments on real and toy data sets....

  4. Congruence Kernels of Orthoimplication Algebras

    Directory of Open Access Journals (Sweden)

    I. Chajda

    2007-10-01

    Full Text Available Abstracting from certain properties of the implication operation in Boolean algebras leads to so-called orthoimplication algebras. These are in a natural one-to-one correspondence with families of compatible orthomodular lattices. It is proved that congruence kernels of orthoimplication algebras are in a natural one-to-one correspondence with families of compatible p-filters on the corresponding orthomodular lattices. Finally, it is proved that the lattice of all congruence kernels of an orthoimplication algebra is relatively pseudocomplemented and a simple description of the relative pseudocomplement is given.

  5. Model selection in kernel ridge regression

    DEFF Research Database (Denmark)

    Exterkate, Peter

    2013-01-01

    Kernel ridge regression is a technique to perform ridge regression with a potentially infinite number of nonlinear transformations of the independent variables as regressors. This method is gaining popularity as a data-rich nonlinear forecasting tool, which is applicable in many different contexts. The influence of the choice of kernel and the setting of tuning parameters on forecast accuracy is investigated. Several popular kernels are reviewed, including polynomial kernels, the Gaussian kernel, and the Sinc kernel. The latter two kernels are interpreted in terms of their smoothing properties, and the tuning parameters associated to all these kernels are related to smoothness measures of the prediction function and to the signal-to-noise ratio. Based on these interpretations, guidelines are provided for selecting the tuning parameters from small grids using cross-validation. A Monte Carlo study...

  6. Bergman kernel on generalized exceptional Hua domain

    Institute of Scientific and Technical Information of China (English)

    YIN Weiping (殷慰萍); ZHAO Zhengang (赵振刚)

    2002-01-01

    We have computed the Bergman kernel functions explicitly for two types of generalized exceptional Hua domains, and also studied the asymptotic behavior of the Bergman kernel function of exceptional Hua domain near boundary points, based on Appell's multivariable hypergeometric function.

  7. Applications of convolution voltammetry in electroanalytical chemistry.

    Science.gov (United States)

    Bentley, Cameron L; Bond, Alan M; Hollenkamp, Anthony F; Mahon, Peter J; Zhang, Jie

    2014-02-18

    The robustness of convolution voltammetry for determining accurate values of the diffusivity (D), bulk concentration (C(b)), and stoichiometric number of electrons (n) has been demonstrated by applying the technique to a series of electrode reactions in molecular solvents and room temperature ionic liquids (RTILs). In acetonitrile, the relatively minor contribution of nonfaradaic current facilitates analysis with macrodisk electrodes, thus moderate scan rates can be used without the need to perform background subtraction to quantify the diffusivity of iodide [D = 1.75 (±0.02) × 10(-5) cm(2) s(-1)] in this solvent. In the RTIL 1-ethyl-3-methylimidazolium bis(trifluoromethanesulfonyl)imide, background subtraction is necessary at a macrodisk electrode but can be avoided at a microdisk electrode, thereby simplifying the analytical procedure and allowing the diffusivity of iodide [D = 2.70 (±0.03) × 10(-7) cm(2) s(-1)] to be quantified. Use of a convolutive procedure which simultaneously allows D and nC(b) values to be determined is also demonstrated. Three conditions under which a technique of this kind may be applied are explored and are related to electroactive species which display slow dissolution kinetics, undergo a single multielectron transfer step, or contain multiple noninteracting redox centers using ferrocene in an RTIL, 1,4-dinitro-2,3,5,6-tetramethylbenzene, and an alkynylruthenium trimer, respectively, as examples. The results highlight the advantages of convolution voltammetry over steady-state techniques such as rotating disk electrode voltammetry and microdisk electrode voltammetry, as it is not restricted by the mode of diffusion (planar or radial), hence removing limitations on solvent viscosity, electrode geometry, and voltammetric scan rate.
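
    The core operation here, semi-integration of the current (convolution with (πt)^(-1/2)), can be sketched numerically; this is a simplified reading of the technique, and the discretization and the synthetic Cottrell-like current below are illustrative assumptions, not the paper's data.

```python
# Minimal sketch of semi-integration: convolve the current I(t) with
# 1/sqrt(pi*t). For a Cottrell-like decaying current the semi-integral
# levels off at a plateau proportional to n*F*A*C(b)*sqrt(D).
import numpy as np

def semi_integral(I, dt):
    t_mid = (np.arange(len(I)) + 0.5) * dt     # midpoint times avoid t = 0
    kernel = 1.0 / np.sqrt(np.pi * t_mid)
    return np.convolve(I, kernel)[:len(I)] * dt

dt = 1e-3
t = np.arange(1, 2001) * dt
I = 1.0 / np.sqrt(np.pi * t)      # synthetic, normalized Cottrell-like decay
M = semi_integral(I, dt)
print(M[-1])                      # plateau, approximately 1 for this trace
```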

  8. Convolution neural networks for ship type recognition

    Science.gov (United States)

    Rainey, Katie; Reeder, John D.; Corelli, Alexander G.

    2016-05-01

    Algorithms to automatically recognize ship type from satellite imagery are desired for numerous maritime applications. This task is difficult, and example imagery accurately labeled with ship type is hard to obtain. Convolutional neural networks (CNNs) have shown promise in image recognition settings, but many of these applications rely on the availability of thousands of example images for training. This work attempts to understand for which types of ship recognition tasks CNNs might be well suited. We report the results of baseline experiments applying a CNN to several ship type classification tasks, and discuss many of the considerations that must be made in approaching this problem.

  9. Fourier transforms and convolutions for the experimentalist

    CERN Document Server

    Jennison, RC

    1961-01-01

    Fourier Transforms and Convolutions for the Experimentalist provides the experimentalist with a guide to the principles and practical uses of the Fourier transformation. It aims to bridge the gap between the more abstract account of a purely mathematical approach and the rule of thumb calculation and intuition of the practical worker. The monograph springs from a lecture course which the author has given in recent years and for which he has drawn upon a number of sources, including a set of notes compiled by the late Dr. I. C. Browne from a series of lectures given by Mr. J. A. Ratcliffe of t

  10. Zebrafish tracking using convolutional neural networks

    Science.gov (United States)

    XU, Zhiping; Cheng, Xi En

    2017-01-01

    Keeping identity for a long term after occlusion is still an open problem in the video tracking of zebrafish-like model animals, and accurate animal trajectories are the foundation of behaviour analysis. We utilize the highly accurate object recognition capability of a convolutional neural network (CNN) to distinguish fish of the same congener, even though these animals are indistinguishable to the human eye. We used data augmentation and an iterative CNN training method to optimize the accuracy of our classification task, achieving surprisingly accurate trajectories for zebrafish groups of different sizes and ages over different time spans. This work will make further behaviour analysis more reliable. PMID:28211462

  11. Zebrafish tracking using convolutional neural networks

    Science.gov (United States)

    Xu, Zhiping; Cheng, Xi En

    2017-02-01

    Keeping identity for a long term after occlusion is still an open problem in the video tracking of zebrafish-like model animals, and accurate animal trajectories are the foundation of behaviour analysis. We utilize the highly accurate object recognition capability of a convolutional neural network (CNN) to distinguish fish of the same congener, even though these animals are indistinguishable to the human eye. We used data augmentation and an iterative CNN training method to optimize the accuracy of our classification task, achieving surprisingly accurate trajectories for zebrafish groups of different sizes and ages over different time spans. This work will make further behaviour analysis more reliable.

  12. A kernel version of multivariate alteration detection

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Vestergaard, Jacob Schack

    2013-01-01

    Based on the established methods kernel canonical correlation analysis and multivariate alteration detection we introduce a kernel version of multivariate alteration detection. A case study with SPOT HRV data shows that the kMAD variates focus on extreme change observations.

  13. Random Feature Maps for Dot Product Kernels

    OpenAIRE

    Kar, Purushottam; Karnick, Harish

    2012-01-01

    Approximating non-linear kernels using feature maps has gained a lot of interest in recent years due to applications in reducing training and testing times of SVM classifiers and other kernel based learning algorithms. We extend this line of work and present low distortion embeddings for dot product kernels into linear Euclidean spaces. We base our results on a classical result in harmonic analysis characterizing all dot product kernels and use it to define randomized feature maps into explic...

  14. ks: Kernel Density Estimation and Kernel Discriminant Analysis for Multivariate Data in R

    Directory of Open Access Journals (Sweden)

    Tarn Duong

    2007-09-01

    Full Text Available Kernel smoothing is one of the most widely used non-parametric data smoothing techniques. We introduce a new R package ks for multivariate kernel smoothing. Currently it contains functionality for kernel density estimation and kernel discriminant analysis. It is a comprehensive package for bandwidth matrix selection, implementing a wide range of data-driven diagonal and unconstrained bandwidth selectors.

  15. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs.

    Science.gov (United States)

    Chen, Liang-Chieh; Papandreou, George; Kokkinos, Iasonas; Murphy, Kevin; Yuille, Alan L

    2017-04-27

    In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First, we highlight convolution with upsampled filters, or 'atrous convolution', as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-views, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed "DeepLab" system sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7% mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.
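
    A minimal sketch of atrous (dilated) convolution, which PyTorch exposes through the dilation argument; this illustrates the mechanism described above with illustrative sizes, not the DeepLab implementation itself.

```python
# Minimal sketch of atrous (dilated) convolution: the `dilation`
# argument enlarges the field of view with no extra parameters and no
# loss of feature-map resolution.
import torch
import torch.nn as nn

x = torch.randn(1, 256, 64, 64)   # an intermediate feature map

dense = nn.Conv2d(256, 256, kernel_size=3, padding=1)               # 3x3 field of view
atrous = nn.Conv2d(256, 256, kernel_size=3, padding=2, dilation=2)  # 5x5 field of view

print(dense(x).shape, atrous(x).shape)             # both keep 64x64 resolution
print(dense.weight.shape == atrous.weight.shape)   # True: same parameter count
```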

  16. Active contour external force using vector field convolution for image segmentation.

    Science.gov (United States)

    Li, Bing; Acton, Scott T

    2007-08-01

    Snakes, or active contours, have been widely used in image processing applications. Typical roadblocks to consistent performance include limited capture range, noise sensitivity, and poor convergence to concavities. This paper proposes a new external force for active contours, called vector field convolution (VFC), to address these problems. VFC is calculated by convolving the edge map generated from the image with the user-defined vector field kernel. We propose two structures for the magnitude function of the vector field kernel, and we provide an analytical method to estimate the parameter of the magnitude function. Mixed VFC is introduced to alleviate the possible leakage problem caused by choosing inappropriate parameters. We also demonstrate that the standard external force and the gradient vector flow (GVF) external force are special cases of VFC in certain scenarios. Examples and comparisons with GVF are presented in this paper to show the advantages of this innovation, including superior noise robustness, reduced computational cost, and the flexibility of tailoring the force field.
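
    A minimal sketch of computing a VFC external force field: the edge map is convolved with a vector field kernel whose vectors point toward the kernel centre. The inverse-power magnitude r^(-γ) is one of the magnitude functions proposed in the paper; all parameter values and names below are illustrative assumptions.

```python
# Minimal sketch of a VFC external force: convolve the edge map with a
# vector field kernel whose vectors point toward the kernel centre.
import numpy as np
from scipy.ndimage import convolve, sobel

def vfc_field(image, radius=15, gamma=1.7):
    f = np.hypot(sobel(image, axis=0), sobel(image, axis=1))  # edge map
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1].astype(float)
    r = np.hypot(x, y)
    r[radius, radius] = 1.0             # avoid division by zero at the centre
    m = r ** (-gamma)                   # magnitude decays with distance
    m[radius, radius] = 0.0
    kx, ky = -x / r * m, -y / r * m     # unit vectors toward the centre
    return convolve(f, kx), convolve(f, ky)

img = np.zeros((100, 100))
img[40:60, 40:60] = 1.0                 # a bright square as a toy image
fx, fy = vfc_field(img)
print(fx.shape, fy.shape)
```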

  17. Local Observed-Score Kernel Equating

    Science.gov (United States)

    Wiberg, Marie; van der Linden, Wim J.; von Davier, Alina A.

    2014-01-01

    Three local observed-score kernel equating methods that integrate methods from the local equating and kernel equating frameworks are proposed. The new methods were compared with their earlier counterparts with respect to such measures as bias--as defined by Lord's criterion of equity--and percent relative error. The local kernel item response…

  18. Computations of Bergman Kernels on Hua Domains

    Institute of Scientific and Technical Information of China (English)

    殷慰萍; 王安; 赵振刚; 赵晓霞; 管冰辛

    2001-01-01

    The Bergman kernel function plays an important role in several complex variables. The Bergman kernel function exists on any bounded domain in Cn, but explicit formulas for it can be obtained only for a few types of domains, for example the bounded homogeneous domains and, in some cases, the egg domains.

  19. Veto-Consensus Multiple Kernel Learning

    NARCIS (Netherlands)

    Y. Zhou; N. Hu; C.J. Spanos

    2016-01-01

    We propose Veto-Consensus Multiple Kernel Learning (VCMKL), a novel way of combining multiple kernels such that one class of samples is described by the logical intersection (consensus) of base kernelized decision rules, whereas the other classes by the union (veto) of their complements. The propose

  20. Accelerating the Original Profile Kernel.

    Directory of Open Access Journals (Sweden)

    Tobias Hamp

    Full Text Available One of the most accurate multi-class protein classification systems continues to be the profile-based SVM kernel introduced by the Leslie group. Unfortunately, its CPU requirements render it too slow for practical applications of large-scale classification tasks. Here, we introduce several software improvements that enable significant acceleration. Using various non-redundant data sets, we demonstrate that our new implementation reaches a maximal speed-up as high as 14-fold for calculating the same kernel matrix. Some predictions are over 200 times faster, rendering the kernel possibly the top contender in terms of the speed/performance trade-off. Additionally, we explain how to parallelize various computations and provide an integrative program that reduces creating a production-quality classifier to a single program call. The new implementation is available as a Debian package under a free academic license and does not depend on commercial software. For non-Debian based distributions, the source package ships with a traditional Makefile-based installer. Download and installation instructions can be found at https://rostlab.org/owiki/index.php/Fast_Profile_Kernel. Bugs and other issues may be reported at https://rostlab.org/bugzilla3/enter_bug.cgi?product=fastprofkernel.

  1. An Extreme Learning Machine Based on the Mixed Kernel Function of Triangular Kernel and Generalized Hermite Dirichlet Kernel

    Directory of Open Access Journals (Sweden)

    Senyue Zhang

    2016-01-01

    Full Text Available Since the kernel function of an extreme learning machine (ELM) and its performance are strongly correlated, a novel extreme learning machine based on a generalized triangle Hermitian kernel function is proposed in this paper. First, the generalized triangle Hermitian kernel function is constructed as the product of the triangular kernel and the generalized Hermite Dirichlet kernel, and the proposed kernel function is proved to be a valid kernel function for the extreme learning machine. Then, the learning methodology of the extreme learning machine based on the proposed kernel function is presented. The biggest advantage of the proposed kernel is that its kernel parameter values are chosen only from the natural numbers, which greatly shortens the computational time of parameter optimization and retains more of the sample data structure information. Experiments were performed on a number of binary classification, multiclassification, and regression datasets from the UCI benchmark repository. The experimental results demonstrate that the robustness and generalization performance of the proposed method outperform those of other extreme learning machines with different kernels. Furthermore, the learning speed of the proposed method is faster than that of support vector machine (SVM) methods.

  2. One dimensional Convolutional Goppa Codes over the projective line

    CERN Document Server

    Pérez, J A Domínguez; Sotelo, G Serrano

    2011-01-01

    We give a general method to construct MDS one-dimensional convolutional codes. Our method generalizes previous constructions of H. Gluesing-Luerssen and B. Langfeld. Moreover we give a classification of one-dimensional Convolutional Goppa Codes and propose a characterization of MDS codes of this type.

  3. Explicit solutions of fractional diffusion equations via Generalized Gamma Convolution

    CERN Document Server

    D'Ovidio, Mirko

    2010-01-01

    In this paper we deal with the Mellin convolution of generalized Gamma densities, which leads to integrals of modified Bessel functions of the second kind. Such convolutions allow us to write explicitly the solutions of the time-fractional diffusion equations involving the adjoint operators of a squared Bessel process and a Bessel process.

  4. Convolutive ICA for Spatio-Temporal Analysis of EEG

    DEFF Research Database (Denmark)

    Dyrholm, Mads; Makeig, Scott; Hansen, Lars Kai

    2007-01-01

    in the convolutive model can be correctly detected using Bayesian model selection. We demonstrate a framework for deconvolving an EEG ICA subspace. Initial results suggest that in some cases convolutive mixing may be a more realistic model for EEG signals than the instantaneous ICA model....

  5. Random Feature Maps for Dot Product Kernels

    CERN Document Server

    Kar, Purushottam

    2012-01-01

    Approximating non-linear kernels using feature maps has gained a lot of interest in recent years due to applications in reducing training and testing times of SVM classifiers and other kernel based learning algorithms. We extend this line of work and present low distortion embeddings for dot product kernels into linear Euclidean spaces. We base our results on a classical result in harmonic analysis characterizing all dot product kernels and use it to define randomized feature maps into explicit low dimensional Euclidean spaces in which the native dot product provides an approximation to the dot product kernel with high confidence.
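
    The flavour of such feature maps can be sketched for the homogeneous polynomial kernel k(x, y) = (x·y)^p: each random feature multiplies p independent Rademacher projections, so the dot product of two feature vectors estimates the kernel. This is a simplified sketch in the spirit of the construction above, with illustrative sizes, not the authors' exact embedding.

```python
# Minimal randomized feature map for the homogeneous polynomial dot
# product kernel k(x, y) = (x.y)**p: the dot product of the maps is an
# unbiased estimate of the kernel value.
import numpy as np

def random_product_features(X, p=2, D=5000, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    Z = np.ones((n, D))
    for _ in range(p):
        W = rng.choice([-1.0, 1.0], size=(d, D))  # Rademacher projections
        Z *= X @ W                                 # multiply the p projections
    return Z / np.sqrt(D)

rng = np.random.default_rng(1)
x, y = rng.random(10), rng.random(10)
Z = random_product_features(np.vstack([x, y]))
print(Z[0] @ Z[1], (x @ y) ** 2)  # approximately equal
```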

  6. Convolution of Lorentz Invariant Ultradistributions and Field Theory

    CERN Document Server

    Bollini, C G

    2003-01-01

    In this work, a general definition of convolution between two arbitrary four dimensional Lorentz invariant (fdLi) Tempered Ultradistributions is given, in both Minkowskian and Euclidean space (spherically symmetric tempered ultradistributions). The product of two arbitrary fdLi distributions of exponential type is defined via the convolution of their corresponding Fourier transforms. Several examples of the convolution of two fdLi Tempered Ultradistributions are given. In particular we calculate exactly the convolution of two Feynman massless propagators. An expression for the Fourier transform of a Lorentz invariant Tempered Ultradistribution in terms of modified Bessel distributions is obtained in this work (a generalization of Bochner's formula to Minkowskian space). At the same time, and in a previous step used for the deduction of the convolution formula, we obtain the generalization to Minkowskian space of the dimensional regularization of the perturbation theory of Green functions in the Euclidean conf...

  7. Testing Infrastructure for Operating System Kernel Development

    DEFF Research Database (Denmark)

    Walter, Maxwell; Karlsson, Sven

    2014-01-01

    Testing is an important part of system development, and to test effectively we require knowledge of the internal state of the system under test. Testing an operating system kernel is a challenge as it is the operating system that typically provides access to this internal state information. Multi-core kernels pose an even greater challenge due to concurrency and their shared kernel state. In this paper, we present a testing framework that addresses these challenges by running the operating system in a virtual machine, and using virtual machine introspection to both communicate with the kernel and obtain information about the system. We have also developed an in-kernel testing API that we can use to develop a suite of unit tests in the kernel. We are using our framework for the development of our own multi-core research kernel....

  8. Speech Enhancement Using Kernel and Normalized Kernel Affine Projection Algorithm

    Directory of Open Access Journals (Sweden)

    Bolimera Ravi

    2013-08-01

    Full Text Available The goal of this paper is to investigate speech signal enhancement using the Kernel Affine Projection Algorithm (KAPA) and Normalized KAPA. The removal of background noise is very important in many applications like speech recognition, telephone conversations, hearing aids, forensics, etc. Kernel adaptive filters have shown good performance for the removal of noise. If the background noise evolves more slowly than the speech, i.e., the noise signal is more stationary than the speech, we can easily estimate the noise during the pauses in speech. Otherwise it is more difficult to estimate the noise, which results in degradation of the speech. In order to improve the quality and intelligibility of speech, unlike the time and frequency domains, we can process the signal in a new domain like the Reproducing Kernel Hilbert Space (RKHS), mapping to high dimensions to yield more powerful nonlinear extensions. For the experiments, we have used the noisy speech corpus database (NOIZEUS). From the results, we observed that noise removal in RKHS achieves good signal-to-noise ratio values in comparison with conventional adaptive filters.

  9. Compressed imaging by sparse random convolution.

    Science.gov (United States)

    Marcos, Diego; Lasser, Theo; López, Antonio; Bourquard, Aurélien

    2016-01-25

    The theory of compressed sensing (CS) shows that signals can be acquired at sub-Nyquist rates if they are sufficiently sparse or compressible. Since many images bear this property, several acquisition models have been proposed for optical CS. An interesting approach is random convolution (RC). In contrast with single-pixel CS approaches, RC allows for the parallel capture of visual information on a sensor array as in conventional imaging approaches. Unfortunately, the RC strategy is difficult to implement as is in practical settings due to important contrast-to-noise-ratio (CNR) limitations. In this paper, we introduce a modified RC model circumventing such difficulties by considering measurement matrices involving sparse non-negative entries. We then implement this model based on a slightly modified microscopy setup using incoherent light. Our experiments demonstrate the suitability of this approach for dealing with distinct CS scenarios, including 1-bit CS.
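
    A minimal sketch of a random-convolution CS measurement with a sparse non-negative filter, in the spirit of the modified model above: circularly convolve the signal with the filter, then keep a random subset of samples. All sizes are illustrative, and no reconstruction step is shown.

```python
# Minimal sketch of a random-convolution measurement with a sparse,
# non-negative filter: circular convolution followed by random
# subsampling of the result.
import numpy as np

rng = np.random.default_rng(0)
n, m = 1024, 256                                       # signal length, measurements

x = np.zeros(n)
x[rng.choice(n, 10, replace=False)] = rng.random(10)   # sparse test signal

h = np.zeros(n)
h[rng.choice(n, 64, replace=False)] = rng.random(64)   # sparse non-negative filter

conv = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))  # circular convolution
y = conv[rng.choice(n, m, replace=False)]                    # sub-Nyquist samples
print(y.shape)  # (256,)
```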

  10. Relationships among transforms, convolutions, and first variations

    Directory of Open Access Journals (Sweden)

    Jeong Gyoo Kim

    1999-01-01

    Full Text Available In this paper, we establish several interesting relationships involving the Fourier-Feynman transform, the convolution product, and the first variation for functionals F on Wiener space of the form F(x) = f(〈α1,x〉, …, 〈αn,x〉), (*) where 〈αj,x〉 denotes the Paley-Wiener-Zygmund stochastic integral ∫0T αj(t) dx(t).

  11. Robust Convolutional Neural Networks for Image Recognition

    Directory of Open Access Journals (Sweden)

    Hayder M. Albeahdili

    2015-11-01

    Full Text Available Image recognition has recently become a vital task tackled by several methods. One of the most interesting is the convolutional neural network (CNN), which is widely used for this purpose. However, for tasks in which small features form an essential part, classification using a CNN can be inefficient because most of those features diminish before reaching the final stage of classification. In this work, we analyze and explore essential parameters that can influence model performance. Furthermore, different elegant prior contemporary models are drawn upon to introduce a new, improved model. Finally, a new CNN architecture is proposed which achieves state-of-the-art classification results on different challenge benchmarks. The experiments are conducted on the MNIST, CIFAR-10, and CIFAR-100 datasets. Experimental results show that the proposed architecture achieves superior results compared to the most contemporary approaches.

  12. An exactly solvable self-convolutive recurrence

    CERN Document Server

    Martin, Richard J

    2011-01-01

    We consider a self-convolutive recurrence whose solution is the sequence of coefficients in the asymptotic expansion of the logarithmic derivative of the confluent hypergeometric function $U(a,b,z)$. By application of the Hilbert transform we convert this expression into an explicit, non-recursive solution in which the $n$th coefficient is expressed as the $(n-1)$th moment of a measure, and also as the trace of the $(n-1)$th iterate of a linear operator. Applications of these sequences, and hence of the explicit solution provided, are found in quantum field theory as the number of Feynman diagrams of a certain type and order, in Brownian motion theory, and in combinatorics.

  13. Robust smile detection using convolutional neural networks

    Science.gov (United States)

    Bianco, Simone; Celona, Luigi; Schettini, Raimondo

    2016-11-01

    We present a fully automated approach for smile detection. Faces are detected using a multiview face detector and aligned and scaled using automatically detected eye locations. Then, we use a convolutional neural network (CNN) to determine whether it is a smiling face or not. To this end, we investigate different shallow CNN architectures that can be trained even when the amount of learning data is limited. We evaluate our complete processing pipeline on the largest publicly available image database for smile detection in an uncontrolled scenario. We investigate the robustness of the method to different kinds of geometric transformations (rotation, translation, and scaling) due to imprecise face localization, and to several kinds of distortions (compression, noise, and blur). To the best of our knowledge, this is the first time that this type of investigation has been performed for smile detection. Experimental results show that our proposal outperforms state-of-the-art methods on both high- and low-quality images.

  14. Nonlinear Deep Kernel Learning for Image Annotation.

    Science.gov (United States)

    Jiu, Mingyuan; Sahbi, Hichem

    2017-02-08

    Multiple kernel learning (MKL) is a widely used technique for kernel design. Its principle consists in learning, for a given support vector classifier, the most suitable convex (or sparse) linear combination of standard elementary kernels. However, these combinations are shallow and often powerless to capture the actual similarity between highly semantic data, especially for challenging classification tasks such as image annotation. In this paper, we redefine multiple kernels using deep multi-layer networks. In this new contribution, a deep multiple kernel is recursively defined as a multi-layered combination of nonlinear activation functions, each of which involves a combination of several elementary or intermediate kernels, and results in a positive semi-definite deep kernel. We propose four different frameworks in order to learn the weights of these networks: supervised, unsupervised, kernel-based semi-supervised and Laplacian-based semi-supervised. When plugged into support vector machines (SVMs), the resulting deep kernel networks show clear gains compared to several shallow kernels for the task of image annotation. Extensive experiments and analysis on the challenging ImageCLEF photo annotation benchmark, the COREL5k database and the Banana dataset validate the effectiveness of the proposed method.

  15. Terminated LDPC Convolutional Codes over GF(2^p)

    CERN Document Server

    Uchikawa, Hironori; Sakaniwa, Kohichi

    2010-01-01

    In this paper, we present a construction method for terminated non-binary low-density parity-check (LDPC) convolutional codes. Our construction method is an expansion of the Felstrom and Zigangirov construction to non-binary LDPC convolutional codes. The rate-compatibility of the non-binary LDPC convolutional codes is also discussed. The proposed rate-compatible code is designed from one single mother (2,4)-regular non-binary LDPC convolutional code of rate 1/2. Higher-rate codes are produced by puncturing the mother code and lower-rate codes are produced by multiplicatively repeating the mother code. For moderate values of the syndrome former memory, simulation results show that the mother non-binary LDPC convolutional code outperforms binary LDPC convolutional codes with comparable constraint bit length. The derived low-rate and high-rate non-binary LDPC convolutional codes also exhibit good decoding performance, without a large gap to the Shannon limits.

  16. Nonlinear projection trick in kernel methods: an alternative to the kernel trick.

    Science.gov (United States)

    Kwak, Nojun

    2013-12-01

    In kernel methods such as kernel principal component analysis (PCA) and support vector machines, the so-called kernel trick is used to avoid direct calculations in a high (virtually infinite) dimensional kernel space. In this brief, based on the fact that the effective dimensionality of a kernel space is less than the number of training samples, we propose an alternative to the kernel trick that explicitly maps the input data into a reduced dimensional kernel space. This is easily obtained by the eigenvalue decomposition of the kernel matrix. The proposed method is named the nonlinear projection trick in contrast to the kernel trick. With this technique, the applicability of the kernel methods is widened to arbitrary algorithms that do not use the dot product. The equivalence between the kernel trick and the nonlinear projection trick is shown for several conventional kernel methods. In addition, we extend PCA-L1, which uses the L1-norm instead of the L2-norm (or dot product), into a kernel version and show the effectiveness of the proposed approach.
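
    A minimal sketch of the explicit mapping described above: eigendecompose the kernel matrix and scale the eigenvectors so that ordinary dot products in the reduced space reproduce the kernel values. The linear kernel below is an assumption chosen only to make the check easy.

```python
# Minimal sketch of the nonlinear projection trick: build Y such that
# Y @ Y.T reproduces the (centred) kernel matrix K, with as many
# columns as the effective dimensionality of the kernel space.
import numpy as np

def nonlinear_projection(K, tol=1e-10):
    vals, vecs = np.linalg.eigh(K)              # K: kernel (Gram) matrix
    keep = vals > tol                           # effective dimensionality
    return vecs[:, keep] * np.sqrt(vals[keep])  # Y with Y @ Y.T == K

X = np.random.default_rng(0).random((50, 3))
K = X @ X.T                                     # linear kernel, for illustration
Y = nonlinear_projection(K)
print(np.allclose(Y @ Y.T, K))                  # True: dot products preserved
```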

  17. The Law of Large Numbers for the Free Multiplicative Convolution

    DEFF Research Database (Denmark)

    Haagerup, Uffe; Möller, Sören

    2013-01-01

    In classical probability the law of large numbers for the multiplicative convolution follows directly from the law for the additive convolution. In free probability this is not the case. The free additive law was proved by D. Voiculescu in 1986 for probability measures with bounded support...... for the case of bounded support. In contrast to the classical multiplicative convolution case, the limit measure for the free multiplicative law of large numbers is not a Dirac measure, unless the original measure is a Dirac measure. We also show that the mean value of ln x is additive with respect to the free...

  18. Theory of reproducing kernels and applications

    CERN Document Server

    Saitoh, Saburou

    2016-01-01

    This book provides a large extension of the general theory of reproducing kernels published by N. Aronszajn in 1950, with many concrete applications. In Chapter 1, many concrete reproducing kernels are first introduced with detailed information. Chapter 2 presents a general and global theory of reproducing kernels with basic applications in a self-contained way. Many fundamental operations among reproducing kernel Hilbert spaces are dealt with. Chapter 2 is the heart of this book. Chapter 3 is devoted to the Tikhonov regularization using the theory of reproducing kernels with applications to numerical and practical solutions of bounded linear operator equations. In Chapter 4, the numerical real inversion formulas of the Laplace transform are presented by applying the Tikhonov regularization, where the reproducing kernels play a key role in the results. Chapter 5 deals with ordinary differential equations; Chapter 6 includes many concrete results for various fundamental partial differential equations. In Chapt...

  19. Kernel principal component analysis for change detection

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Morton, J.C.

    2008-01-01

    region acquired at two different time points. If change over time does not dominate the scene, the projection of the original two bands onto the second eigenvector will show change over time. In this paper a kernel version of PCA is used to carry out the analysis. Unlike ordinary PCA, kernel PCA...... with a Gaussian kernel successfully finds the change observations in a case where nonlinearities are introduced artificially....

  20. Tame Kernels of Pure Cubic Fields

    Institute of Scientific and Technical Information of China (English)

    Xiao Yun CHENG

    2012-01-01

    In this paper, we study the p-rank of the tame kernels of pure cubic fields. In particular, we prove that for a fixed positive integer m, there exist infinitely many pure cubic fields whose tame kernels have 3-rank equal to m. As an application, we determine the 3-rank of the tame kernels of some special pure cubic fields.

  1. Kernel Factor Analysis Algorithm with Varimax

    Institute of Scientific and Technical Information of China (English)

    Xia Guoen; Jin Weidong; Zhang Gexiang

    2006-01-01

    Kernel factor analysis (KFA) with varimax rotation was proposed, using a Mercer kernel function to map the data from the original space to a high-dimensional feature space, and was compared with kernel principal component analysis (KPCA). The results show that the best error rate in handwritten digit recognition achieved by KFA with varimax (4.2%) was superior to that of KPCA (4.4%); KFA with varimax thus recognized handwritten digits more accurately.

  2. Convergence of barycentric coordinates to barycentric kernels

    KAUST Repository

    Kosinka, Jiří

    2016-02-12

    We investigate the close correspondence between barycentric coordinates and barycentric kernels from the point of view of the limit process when finer and finer polygons converge to a smooth convex domain. We show that any barycentric kernel is the limit of a set of barycentric coordinates and prove that the convergence rate is quadratic. Our convergence analysis extends naturally to barycentric interpolants and mappings induced by barycentric coordinates and kernels. We verify our theoretical convergence results numerically on several examples.

  3. Efficient classification for additive kernel SVMs.

    Science.gov (United States)

    Maji, Subhransu; Berg, Alexander C; Malik, Jitendra

    2013-01-01

    We show that a class of nonlinear kernel SVMs admits approximate classifiers with runtime and memory complexity that is independent of the number of support vectors. This class of kernels, which we refer to as additive kernels, includes widely used kernels for histogram-based image comparison like intersection and chi-squared kernels. Additive kernel SVMs can offer significant improvements in accuracy over linear SVMs on a wide variety of tasks while having the same runtime, making them practical for large-scale recognition or real-time detection tasks. We present experiments on a variety of datasets, including the INRIA person, Daimler-Chrysler pedestrians, UIUC Cars, Caltech-101, MNIST, and USPS digits, to demonstrate the effectiveness of our method for efficient evaluation of SVMs with additive kernels. Since its introduction, our method has become integral to various state-of-the-art systems for PASCAL VOC object detection/image classification, ImageNet Challenge, TRECVID, etc. The techniques we propose can also be applied to settings where evaluation of weighted additive kernels is required, which include kernelized versions of PCA, LDA, regression, k-means, as well as speeding up the inner loop of SVM classifier training algorithms.
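
    The key property can be sketched as follows (a simplified illustration assuming nonnegative histogram features, not the authors' code): for the intersection kernel K(x, z) = sum_d min(x_d, z_d), the SVM decision function collapses into a sum of one-dimensional functions h_d(s) = sum_i coef_i * min(s, x_id), each of which can be tabulated once and then evaluated by interpolation, independent of the number of support vectors.

      import numpy as np

      def build_tables(sv, coef, grid_size=100):
          # sv: (n_sv, d) support vectors; coef: (n_sv,) dual coefficients.
          n_sv, d = sv.shape
          grid = np.linspace(0.0, sv.max(), grid_size)
          tables = np.empty((d, grid_size))
          for j in range(d):
              # h_j(s) = sum_i coef_i * min(s, sv[i, j]), sampled on the grid.
              tables[j] = (coef[:, None]
                           * np.minimum(grid[None, :], sv[:, j][:, None])).sum(0)
          return grid, tables

      def decision(x, grid, tables, b=0.0):
          # O(d) evaluation, independent of the number of support vectors.
          return sum(np.interp(x[j], grid, tables[j]) for j in range(len(x))) + b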

  4. Interpolating and filtering decoding algorithm for convolution codes

    Directory of Open Access Journals (Sweden)

    O. O. Shpylka

    2010-01-01

    Full Text Available An interpolating and filtering decoding algorithm for convolutional codes has been synthesized under the maximum a posteriori probability criterion, in which filtering of the coder state is combined with interpolation of the information symbols over a sliding interval.

  5. FPGA-based digital convolution for wireless applications

    CERN Document Server

    Guan, Lei

    2017-01-01

    This book presents essential perspectives on digital convolutions in wireless communications systems and illustrates their corresponding efficient real-time field-programmable gate array (FPGA) implementations. Covering these digital convolutions from basic concept to vivid simulation/illustration, the book is also supplemented with MS PowerPoint presentations to aid in comprehension. FPGAs or generic all programmable devices will soon become widespread, serving as the “brains” of all types of real-time smart signal processing systems, like smart networks, smart homes and smart cities. The book examines digital convolution by bringing together the following main elements: the fundamental theory behind the mathematical formulae together with corresponding physical phenomena; virtualized algorithm simulation together with benchmark real-time FPGA implementations; and detailed, state-of-the-art case studies on wireless applications, including popular linear convolution in digital front ends (DFEs); nonlinear...

  6. Model Convolution: A Computational Approach to Digital Image Interpretation

    Science.gov (United States)

    Gardner, Melissa K.; Sprague, Brian L.; Pearson, Chad G.; Cosgrove, Benjamin D.; Bicek, Andrew D.; Bloom, Kerry; Salmon, E. D.

    2010-01-01

    Digital fluorescence microscopy is commonly used to track individual proteins and their dynamics in living cells. However, extracting molecule-specific information from fluorescence images is often limited by the noise and blur intrinsic to the cell and the imaging system. Here we discuss a method called “model-convolution,” which uses experimentally measured noise and blur to simulate the process of imaging fluorescent proteins whose spatial distribution cannot be resolved. We then compare model-convolution to the more standard approach of experimental deconvolution. In some circumstances, standard experimental deconvolution approaches fail to yield the correct underlying fluorophore distribution. In these situations, model-convolution removes the uncertainty associated with deconvolution and therefore allows direct statistical comparison of experimental and theoretical data. Thus, if there are structural constraints on molecular organization, the model-convolution method better utilizes information gathered via fluorescence microscopy, and naturally integrates experiment and theory. PMID:20461132

  7. Performing edge detection by Difference of Gaussians using q-Gaussian kernels

    CERN Document Server

    Assirati, Lucas; Berton, Lilian; Lopes, Alneu de A; Bruno, Odemir M

    2013-01-01

    In image processing, edge detection is a valuable tool to perform the extraction of features from an image. This detection reduces the amount of information to be processed, since the redundant information (considered less relevant) can be discarded. The technique of edge detection consists of determining the points of a digital image whose intensity changes sharply. These changes are due, for example, to discontinuities of the orientation on a surface. A well known method of edge detection is the Difference of Gaussians (DoG). The method consists of subtracting two Gaussians, where one kernel has a standard deviation smaller than the previous one. The convolution between the subtraction of kernels and the input image results in the edge detection of this image. This paper introduces a method of extracting edges using DoG with kernels based on the q-Gaussian probability distribution, derived from the q-statistic proposed by Constantino Tsallis. To demonstrate the method's potential, we compare the introduce...
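
    A compact sketch of the technique (the q-Gaussian normalization and all parameter values here are illustrative; q -> 1 recovers the standard Gaussian and hence ordinary DoG):

      import numpy as np
      from scipy.ndimage import convolve

      def q_gaussian_kernel(size, sigma, q):
          # 1D q-Gaussian: the Tsallis q-exponential of -x^2 / (2 sigma^2),
          # normalized to unit sum; q -> 1 gives the ordinary Gaussian.
          x = np.arange(size) - size // 2
          arg = -x.astype(float) ** 2 / (2.0 * sigma ** 2)
          if abs(q - 1.0) < 1e-12:
              k = np.exp(arg)
          else:
              k = np.maximum(1.0 + (1.0 - q) * arg, 0.0) ** (1.0 / (1.0 - q))
          return k / k.sum()

      def dog_edges(image, sigma1=1.0, sigma2=2.0, q=1.2, size=9):
          # Difference of (q-)Gaussians: smooth with two kernels, subtract.
          k1 = q_gaussian_kernel(size, sigma1, q)
          k2 = q_gaussian_kernel(size, sigma2, q)
          blur = lambda img, k: convolve(convolve(img, k[None, :]), k[:, None])
          return blur(image, k1) - blur(image, k2)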

  8. Accelerated SPECT Monte Carlo Simulation Using Multiple Projection Sampling and Convolution-Based Forced Detection

    Science.gov (United States)

    Liu, Shaoying; King, Michael A.; Brill, Aaron B.; Stabin, Michael G.; Farncombe, Troy H.

    2010-01-01

    Monte Carlo (MC) is a well-utilized tool for simulating photon transport in single photon emission computed tomography (SPECT) due to its ability to accurately model physical processes of photon transport. As a consequence of this accuracy, it suffers from a relatively low detection efficiency and long computation time. One technique used to improve the speed of MC modeling is the effective and well-established variance reduction technique (VRT) known as forced detection (FD). With this method, photons are followed as they traverse the object under study but are then forced to travel in the direction of the detector surface, whereby they are detected at a single detector location. Another method, called convolution-based forced detection (CFD), is based on the fundamental idea of FD with the exception that detected photons are detected at multiple detector locations and determined with a distance-dependent blurring kernel. In order to further increase the speed of MC, a method named multiple projection convolution-based forced detection (MP-CFD) is presented. Rather than forcing photons to hit a single detector, the MP-CFD method follows the photon transport through the object but then, at each scatter site, forces the photon to interact with a number of detectors at a variety of angles surrounding the object. This way, it is possible to simulate all the projection images of a SPECT simulation in parallel, rather than as independent projections. The result of this is vastly improved simulation time as much of the computation load of simulating photon transport through the object is done only once for all projection angles. The results of the proposed MP-CFD method agree well with experimental measurements of the point spread function (PSF), producing a correlation coefficient (r2) of 0.99. MP-CFD is shown to be about 60 times faster than a regular forced-detection MC program, with similar results. PMID:20811587

  9. Molecular hydrodynamics from memory kernels

    CERN Document Server

    Lesnicki, Dominika; Carof, Antoine; Rotenberg, Benjamin

    2016-01-01

    The memory kernel for a tagged particle in a fluid, computed from molecular dynamics simulations, decays algebraically as $t^{-3/2}$. We show how the hydrodynamic Basset-Boussinesq force naturally emerges from this long-time tail and generalize the concept of hydrodynamic added mass. This mass term is negative in the present case of a molecular solute, at odds with incompressible hydrodynamics predictions. We finally discuss the various contributions to the friction, the associated time scales and the cross-over between the molecular and hydrodynamic regimes upon increasing the solute radius.

  10. Hilbertian kernels and spline functions

    CERN Document Server

    Atteia, M

    1992-01-01

    In this monograph, which is an extensive study of Hilbertian approximation, the emphasis is placed on spline functions theory. The origin of the book was an effort to show that spline theory parallels Hilbertian kernel theory, not only for splines derived from minimization of a quadratic functional but more generally for splines considered as piecewise functions. Being as far as possible self-contained, the book may be used as a reference, with information about developments in linear approximation, convex optimization, mechanics and partial differential equations.

  11. Boosted learned kernels for data-driven vesselness measure

    Science.gov (United States)

    Grisan, E.

    2017-03-01

    Common vessel centerline extraction methods rely on the computation of a measure providing the likeness of the local appearance of the data to a curvilinear tube-like structure. The most popular techniques rely on empirically designed (hand-crafted) measurements, such as the widely used Hessian vesselness, the recent oriented flux tubeness, or filters (e.g. the Gaussian matched filter) developed to respond to local features, without exploiting any context information or the rich structural information embedded in the data. At variance with the previously proposed methods, we propose a completely data-driven approach for learning a vesselness measure from an expert-annotated dataset. For each data point (voxel or pixel), we extract the intensity values in a neighborhood region, and estimate the discriminative convolutional kernel yielding a positive response for vessel data and negative response for non-vessel data. The process is iterated within a boosting framework, providing a set of linear filters, whose combined response is the learned vesselness measure. We show the results of the proposed general-purpose method on the DRIVE retinal images dataset, comparing its performance against the Hessian-based vesselness, oriented flux antisymmetry tubeness, and vesselness learned with a probabilistic boosting tree or with a regression tree. We demonstrate the superiority of our approach, which yields a vessel detection accuracy of 0.95, with respect to 0.92 (Hessian), 0.90 (oriented flux) and 0.85 (boosting tree).
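
    A much-simplified sketch of the ingredient described above, i.e. learning a bank of linear filters whose combined response serves as the vesselness score (an L2-boosting stand-in under illustrative assumptions; the paper's boosting framework, features, and training details differ):

      import numpy as np

      def boost_linear_filters(patches, labels, rounds=10, lr=0.5):
          # patches: (n, d) vectorized pixel neighbourhoods;
          # labels: +1 for vessel, -1 for non-vessel.
          n, d = patches.shape
          filters = []
          F = np.zeros(n)                      # staged prediction
          for _ in range(rounds):
              residual = labels - F            # what is still unexplained
              w, *_ = np.linalg.lstsq(patches, residual, rcond=None)
              filters.append(lr * w)
              F += patches @ (lr * w)
          return np.stack(filters)             # vesselness(x) = sum_k x @ filters[k]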

  12. Multipath Convolutional-Recursive Neural Networks for Object Recognition

    OpenAIRE

    2014-01-01

    Extracting good representations from images is essential for many computer vision tasks. While progress in deep learning shows the importance of learning hierarchical features, it is also important to learn features through multiple paths. This paper presents Multipath Convolutional-Recursive Neural Networks (M-CRNNs), a novel scheme which aims to learn image features from multiple paths using models based on combination of convolutional and...

  13. Approximation of integral operators using product-convolution expansions

    OpenAIRE

    Escande, Paul; Weiss, Pierre

    2016-01-01

    We consider a class of linear integral operators with impulse responses varying regularly in time or space. These operators appear in a large number of applications ranging from signal/image processing to biology. Evaluating their action on functions is a computationally intensive problem necessary for many practical problems. We analyze a technique called product-convolution expansion: the operator is locally approximated by a convolution, allowing one to design fast numerical algorithms ba...

  14. Approximation of integral operators using convolution-product expansions

    OpenAIRE

    Escande, Paul; Weiss, Pierre

    2016-01-01

    We consider a class of linear integral operators with impulse responses varying regularly in time or space. These operators appear in a large number of applications ranging from signal/image processing to biology. Evaluating their action on functions is a computationally intensive problem necessary for many practical problems. We analyze a technique called convolution-product expansion: the operator is locally approximated by a convolution, allowing one to design fast numerical algorithms based ...

  15. Two-dimensional Block of Spatial Convolution Algorithm and Simulation

    OpenAIRE

    Mussa Mohamed Ahmed

    2012-01-01

    This paper proposes an algorithm based on a sub-image segmentation strategy. The proposed scheme divides a grayscale image into overlapped 6×6 blocks, each of which is segmented into four small 3×3 non-overlapped sub-images. A new spatial approach is presented for efficiently computing the 2-dimensional linear convolution or cross-correlation between the suitably flipped (or, for cross-correlation, fixed) filter coefficients and the corresponding input sub-image. Computation of convolution is itera...
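
    A generic sketch of the block-segmentation idea (block size, valid-mode output, and the plain Python inner loops are illustrative simplifications of the paper's 6×6/3×3 scheme): each output block needs only its input block plus a margin of overlap, so blocks can be processed independently.

      import numpy as np

      def conv2d_blockwise(image, kernel, block=32):
          kh, kw = kernel.shape
          fk = kernel[::-1, ::-1]                  # flip for true convolution
          H = image.shape[0] - kh + 1              # valid-mode output size
          W = image.shape[1] - kw + 1
          out = np.empty((H, W))
          for r0 in range(0, H, block):
              for c0 in range(0, W, block):
                  r1, c1 = min(r0 + block, H), min(c0 + block, W)
                  # Input tile including the (kh-1, kw-1) overlap margin.
                  tile = image[r0:r1 + kh - 1, c0:c1 + kw - 1]
                  for i in range(r1 - r0):
                      for j in range(c1 - c0):
                          out[r0 + i, c0 + j] = (tile[i:i + kh, j:j + kw] * fk).sum()
          return out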

  16. Traffic sign recognition with deep convolutional neural networks

    OpenAIRE

    KARAMATIĆ, BORIS

    2016-01-01

    Detection and recognition of traffic signs is becoming an important problem in the development of self-driving cars and advanced driver assistance systems. In this thesis we will develop a system for detection and recognition of traffic signs. For the problem of detection we will use aggregate channel features and for the problem of recognition we will use a deep convolutional neural network. We will describe how convolutional neural networks work, how they are co...

  17. Multiscale Convolutional Neural Networks for Hand Detection

    Directory of Open Access Journals (Sweden)

    Shiyang Yan

    2017-01-01

    Full Text Available Unconstrained hand detection in still images plays an important role in many hand-related vision problems, for example, hand tracking, gesture analysis, human action recognition and human-machine interaction, and sign language recognition. Although hand detection has been extensively studied for decades, it is still a challenging task with many problems to be tackled. The contributing factors for this complexity include heavy occlusion, low resolution, varying illumination conditions, different hand gestures, and the complex interactions between hands and objects or other hands. In this paper, we propose a multiscale deep learning model for unconstrained hand detection in still images. Deep learning models, and deep convolutional neural networks (CNNs) in particular, have achieved state-of-the-art performances in many vision benchmarks. Developed from the region-based CNN (R-CNN) model, we propose a hand detection scheme based on candidate regions generated by a generic region proposal algorithm, followed by multiscale information fusion from the popular VGG16 model. Two benchmark datasets were applied to validate the proposed method, namely, the Oxford Hand Detection Dataset and the VIVA Hand Detection Challenge. We achieved state-of-the-art results on the Oxford Hand Detection Dataset and had satisfactory performance in the VIVA Hand Detection Challenge.

  18. Metaheuristic Algorithms for Convolution Neural Network

    Science.gov (United States)

    Fanany, Mohamad Ivan; Arymurthy, Aniati Murni

    2016-01-01

    A typical modern optimization technique is usually either heuristic or metaheuristic. This technique has managed to solve some optimization problems in the research area of science, engineering, and industry. However, implementation strategy of metaheuristic for accuracy improvement on convolution neural networks (CNN), a famous deep learning method, is still rarely investigated. Deep learning relates to a type of machine learning technique, where its aim is to move closer to the goal of artificial intelligence of creating a machine that could successfully perform any intellectual tasks that can be carried out by a human. In this paper, we propose the implementation strategy of three popular metaheuristic approaches, that is, simulated annealing, differential evolution, and harmony search, to optimize CNN. The performances of these metaheuristic methods in optimizing CNN on classifying MNIST and CIFAR dataset were evaluated and compared. Furthermore, the proposed methods are also compared with the original CNN. Although the proposed methods show an increase in the computation time, their accuracy has also been improved (up to 7.14 percent). PMID:27375738

  19. Event Discrimination using Convolutional Neural Networks

    Science.gov (United States)

    Menon, Hareesh; Hughes, Richard; Daling, Alec; Winer, Brian

    2017-01-01

    Convolutional Neural Networks (CNNs) are computational models that have been shown to be effective at classifying different types of images. We present a method to use CNNs to distinguish events involving the production of a top quark pair and a Higgs boson from events involving the production of a top quark pair and several quark and gluon jets. To do this, we generate and simulate data using MADGRAPH and DELPHES for a general purpose LHC detector at 13 TeV. We produce images using a particle flow algorithm by binning the particles geometrically based on their position in the detector and weighting the bins by the energy of each particle within each bin, and by defining channels based on particle types (charged track, neutral hadronic, neutral EM, lepton, heavy flavor). Our classification results are competitive with standard machine learning techniques. We have also looked into the classification of the substructure of the events, in a process known as scene labeling. In this context, we look for the presence of boosted objects (such as top quarks) with substructure encompassed within single jets. Preliminary results on substructure classification will be presented.

  20. Do Convolutional Neural Networks Learn Class Hierarchy?

    Science.gov (United States)

    Alsallakh, Bilal; Jourabloo, Amin; Ye, Mao; Liu, Xiaoming; Ren, Liu

    2017-08-29

    Convolutional Neural Networks (CNNs) currently achieve state-of-the-art accuracy in image classification. With a growing number of classes, the accuracy usually drops as the possibilities of confusion increase. Interestingly, the class confusion patterns follow a hierarchical structure over the classes. We present visual-analytics methods to reveal and analyze this hierarchy of similar classes in relation with CNN-internal data. We found that this hierarchy not only dictates the confusion patterns between the classes, it furthermore dictates the learning behavior of CNNs. In particular, the early layers in these networks develop feature detectors that can separate high-level groups of classes quite well, even after a few training epochs. In contrast, the latter layers require substantially more epochs to develop specialized feature detectors that can separate individual classes. We demonstrate how these insights are key to significant improvement in accuracy by designing hierarchy-aware CNNs that accelerate model convergence and alleviate overfitting. We further demonstrate how our methods help in identifying various quality issues in the training data.

  1. Differentiable Kernels in Generalized Matrix Learning Vector Quantization

    NARCIS (Netherlands)

    Kästner, M.; Nebel, D.; Riedel, M.; Biehl, M.; Villmann, T.

    2013-01-01

    In the present paper we investigate the application of differentiable kernels in generalized matrix learning vector quantization as an alternative kernel-based classifier, which additionally provides classification-dependent data visualization. We show that the concept of differentiable kernels allo

  2. Kernel current source density method.

    Science.gov (United States)

    Potworowski, Jan; Jakuczun, Wit; Lȩski, Szymon; Wójcik, Daniel

    2012-02-01

    Local field potentials (LFP), the low-frequency part of extracellular electrical recordings, are a measure of the neural activity reflecting dendritic processing of synaptic inputs to neuronal populations. To localize synaptic dynamics, it is convenient, whenever possible, to estimate the density of transmembrane current sources (CSD) generating the LFP. In this work, we propose a new framework, the kernel current source density method (kCSD), for nonparametric estimation of CSD from LFP recorded from arbitrarily distributed electrodes using kernel methods. We test specific implementations of this framework on model data measured with one-, two-, and three-dimensional multielectrode setups. We compare these methods with the traditional approach through numerical approximation of the Laplacian and with the recently developed inverse current source density methods (iCSD). We show that iCSD is a special case of kCSD. The proposed method opens up new experimental possibilities for CSD analysis from existing or new recordings on arbitrarily distributed electrodes (not necessarily on a grid), which can be obtained in extracellular recordings of single unit activity with multiple electrodes.

  3. Filtering algorithms using shiftable kernels

    CERN Document Server

    Chaudhury, Kunal Narayan

    2011-01-01

    It was recently demonstrated in [4] [arxiv:1105.4204] that the non-linear bilateral filter of Tomasi and Manduchi can be efficiently implemented using an O(1) or constant-time algorithm. At the heart of this algorithm was the idea of approximating the Gaussian range kernel of the bilateral filter using trigonometric functions. In this letter, we explain how the idea in [4] can be extended to a few other linear and non-linear filters [18,21,2]. While some of these filters have received a lot of attention in recent years, they are known to be computationally intensive. To extend the idea in [4], we identify a central property of trigonometric functions, called shiftability, that allows us to exploit the redundancy inherent in the filtering operations. In particular, using shiftable kernels, we show how certain complex filtering can be reduced to simply that of computing the moving sum of a stack of images. Each image in the stack is obtained through an elementary pointwise transform of the input image. Thi...
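
    To make shiftability concrete, here is a sketch (assuming a float image scaled to [0, 1], a box spatial kernel in place of a Gaussian, and illustrative parameter values): the Gaussian range kernel is approximated by a power of a cosine, which the binomial theorem splits into terms of the form cos(a(x - y)) = cos(ax)cos(ay) + sin(ax)sin(ay), so the bilateral sums become moving sums of pointwise-transformed images.

      import numpy as np
      from math import comb, sqrt
      from scipy.ndimage import uniform_filter

      def shiftable_bilateral(img, sigma_r=0.15, window=9, N=16):
          # Range kernel: exp(-s^2 / (2 sigma_r^2)) ~ cos(s / (sigma_r sqrt(N)))**N,
          # expanded as 2**-N * sum_k C(N, k) * cos((2k - N) * gamma * s).
          gamma = 1.0 / (sigma_r * sqrt(N))
          box = lambda f: uniform_filter(f, size=window)  # box (moving-average) spatial filter
          num = np.zeros_like(img)
          den = np.zeros_like(img)
          for k in range(N + 1):
              a = (2 * k - N) * gamma
              w = comb(N, k) / 2.0 ** N
              c, s = np.cos(a * img), np.sin(a * img)
              # cos(a(I(x)-I(y))) = cos(aI(x))cos(aI(y)) + sin(aI(x))sin(aI(y))
              num += w * (c * box(c * img) + s * box(s * img))
              den += w * (c * box(c) + s * box(s))
          return num / np.maximum(den, 1e-12)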

  4. On uniqueness of semi-wavefronts (Diekmann-Kaper theory of a nonlinear convolution equation re-visited)

    CERN Document Server

    Aguerrea, Maitere; Trofimchuk, Sergei

    2010-01-01

    Motivated by the uniqueness problem for monostable semi-wavefronts, we propose a revised version of the Diekmann and Kaper theory of a nonlinear convolution equation. Our version of the Diekmann-Kaper theory allows us 1) to consider new types of models, including nonlocal KPP-type equations (with either symmetric or anisotropic dispersal), non-local lattice equations and delayed reaction-diffusion equations; 2) to incorporate the critical case (which corresponds to the slowest wavefronts) into the consideration; 3) to weaken or remove various restrictions on kernels and nonlinearities. The results are compared with those of Schumacher (J. Reine Angew. Math. 316: 54-70, 1980), Carr and Chmaj (Proc. Amer. Math. Soc. 132: 2433-2439, 2004), and other more recent studies.

  5. Kernel parameter dependence in spatial factor analysis

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2010-01-01

    feature space via the kernel function and then performing a linear analysis in that space. In this paper we shall apply a kernel version of maximum autocorrelation factor (MAF) [7, 8] analysis to irregularly sampled stream sediment geochemistry data from South Greenland and illustrate the dependence...

  6. Improving the Bandwidth Selection in Kernel Equating

    Science.gov (United States)

    Andersson, Björn; von Davier, Alina A.

    2014-01-01

    We investigate the current bandwidth selection methods in kernel equating and propose a method based on Silverman's rule of thumb for selecting the bandwidth parameters. In kernel equating, the bandwidth parameters have previously been obtained by minimizing a penalty function. This minimization process has been criticized by practitioners…
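
    For reference, a minimal implementation of Silverman's rule of thumb (the robust variant that uses the smaller of the sample standard deviation and the normalized interquartile range):

      import numpy as np

      def silverman_bandwidth(x):
          # h = 0.9 * min(std, IQR / 1.34) * n**(-1/5)
          x = np.asarray(x, dtype=float)
          iqr = np.subtract(*np.percentile(x, [75, 25]))
          scale = min(x.std(ddof=1), iqr / 1.34)
          return 0.9 * scale * x.size ** (-0.2)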

  7. Ranking Support Vector Machine with Kernel Approximation.

    Science.gov (United States)

    Chen, Kai; Li, Rongchun; Dou, Yong; Liang, Zhengfa; Lv, Qi

    2017-01-01

    Learning-to-rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, computational biology, and so forth. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation that avoids computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves a much faster training speed than kernel RankSVM and comparable or better performance than state-of-the-art ranking algorithms.
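
    A sketch of one of the two approximations, random Fourier features (Rahimi-Recht style; the feature dimension and bandwidth below are illustrative): after the mapping, a linear ranking model can be trained on pairwise difference vectors z(x_i) - z(x_j), so the kernel matrix is never formed.

      import numpy as np

      def random_fourier_features(X, D=200, gamma=1.0, seed=0):
          # Approximates k(x, z) = exp(-gamma * ||x - z||^2):
          # z(x) = sqrt(2/D) * cos(x @ W + b) with W ~ N(0, 2*gamma*I)
          # and b ~ U[0, 2*pi], so that z(x) . z(x') ~ k(x, x').
          rng = np.random.default_rng(seed)
          W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(X.shape[1], D))
          b = rng.uniform(0.0, 2.0 * np.pi, size=D)
          return np.sqrt(2.0 / D) * np.cos(X @ W + b)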

  8. Generalized Derivative Based Kernelized Learning Vector Quantization

    NARCIS (Netherlands)

    Schleif, Frank-Michael; Villmann, Thomas; Hammer, Barbara; Schneider, Petra; Biehl, Michael; Fyfe, Colin; Tino, Peter; Charles, Darryl; Garcia-Osoro, Cesar; Yin, Hujun

    2010-01-01

    We derive a novel derivative-based version of kernelized Generalized Learning Vector Quantization (KGLVQ) as an effective, easy-to-interpret, prototype-based and kernelized classifier. It is called D-KGLVQ and we provide generalization error bounds, experimental results on real world data, showing t

  9. PALM KERNEL SHELL AS AGGREGATE FOR LIGHT

    African Journals Online (AJOL)

    ... of cement, sand, gravel and palm kernel shells respectively gave the highest compressive strength of ... Keywords: Aggregate, Cement, Concrete, Sand, Palm Kernel Shell.

  10. Panel data specifications in nonparametric kernel regression

    DEFF Research Database (Denmark)

    Czekaj, Tomasz Gerard; Henningsen, Arne

    parametric panel data estimators to analyse the production technology of Polish crop farms. The results of our nonparametric kernel regressions generally differ from the estimates of the parametric models but they only slightly depend on the choice of the kernel functions. Based on economic reasoning, we...

  11. Kernel Model Applied in Kernel Direct Discriminant Analysis for the Recognition of Face with Nonlinear Variations

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    A kernel-based discriminant analysis method called kernel direct discriminant analysis is employed, which combines the merit of direct linear discriminant analysis with that of the kernel trick. In order to demonstrate its better robustness to the complex and nonlinear variations of real face images, such as illumination, facial expression, scale and pose variations, experiments are carried out on the Olivetti Research Laboratory, Yale and self-built face databases. The results indicate that, in contrast to kernel principal component analysis and kernel linear discriminant analysis, the method can achieve a lower error rate (7%) using only a very small set of features. Furthermore, a new corrected kernel model is proposed to improve the recognition performance. Experimental results confirm its superiority (by 1% in terms of recognition rate) to other polynomial kernel models.

  12. Parameter-Free Spectral Kernel Learning

    CERN Document Server

    Mao, Qi

    2012-01-01

    Due to the growing ubiquity of unlabeled data, learning with unlabeled data is attracting increasing attention in machine learning. In this paper, we propose a novel semi-supervised kernel learning method which can seamlessly combine manifold structure of unlabeled data and Regularized Least-Squares (RLS) to learn a new kernel. Interestingly, the new kernel matrix can be obtained analytically with the use of spectral decomposition of graph Laplacian matrix. Hence, the proposed algorithm does not require any numerical optimization solvers. Moreover, by maximizing kernel target alignment on labeled data, we can also learn model parameters automatically with a closed-form solution. For a given graph Laplacian matrix, our proposed method does not need to tune any model parameter including the tradeoff parameter in RLS and the balance parameter for unlabeled data. Extensive experiments on ten benchmark datasets show that our proposed two-stage parameter-free spectral kernel learning algorithm can obtain comparable...

  13. Deep-learning convolution neural network for computer-aided detection of microcalcifications in digital breast tomosynthesis

    Science.gov (United States)

    Samala, Ravi K.; Chan, Heang-Ping; Hadjiiski, Lubomir M.; Cha, Kenny; Helvie, Mark A.

    2016-03-01

    A deep learning convolution neural network (DLCNN) was designed to differentiate microcalcification candidates detected during the prescreening stage as true calcifications or false positives in a computer-aided detection (CAD) system for clustered microcalcifications. The microcalcification candidates were extracted from the planar projection image generated from the digital breast tomosynthesis volume reconstructed by a multiscale bilateral filtering regularized simultaneous algebraic reconstruction technique. For training and testing of the DLCNN, true microcalcifications were manually labeled for the data sets and false positives were obtained from the candidate objects identified by the CAD system at prescreening after exclusion of the true microcalcifications. The DLCNN architecture was selected by varying the number of filters, filter kernel sizes and gradient computation parameter in the convolution layers, resulting in a parameter space of 216 combinations. The exhaustive grid search method was used to select an optimal architecture within the parameter space studied, guided by the area under the receiver operating characteristic curve (AUC) as a figure-of-merit. The effects of varying different categories of the parameter space were analyzed. The selected DLCNN was compared with our previously designed CNN architecture for the test set. The AUCs of the CNN and DLCNN were 0.89 and 0.93, respectively. The improvement was statistically significant (p < 0.05).

  14. Scatter reduction for grid-less mammography using the convolution-based image post-processing technique

    Science.gov (United States)

    Marimón, Elena; Nait-Charif, Hammadi; Khan, Asmar; Marsden, Philip A.; Diaz, Oliver

    2017-03-01

    X-ray mammography examinations are highly affected by scattered radiation, as it degrades the quality of the image and complicates the diagnosis process. Anti-scatter grids are currently used in planar mammography examinations as the standard physical scattering reduction technique. This method has been found to be inefficient, as it increases the dose delivered to the patient, does not remove all the scattered radiation and increases the price of the equipment. Alternative scattering reduction methods, based on post-processing algorithms, are being investigated to substitute anti-scatter grids. Methods such as convolution-based scatter estimation have lately become attractive as they are quicker and more flexible than pure Monte Carlo (MC) simulations. In this study we make use of this specific method, which is based on the premise that the scatter in the system is spatially diffuse, and thus can be approximated by a two-dimensional low-pass convolution filter of the primary image. This algorithm uses the narrow pencil beam method to obtain the scatter kernel used to convolve an image acquired without an anti-scatter grid. The results obtained show an image quality comparable, in the worst case, to the grid image, in terms of uniformity and contrast-to-noise ratio. Further improvement is expected when using clinically representative phantoms.
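
    The essence of the convolution-based estimate can be sketched as follows (the Gaussian kernel and the scatter-to-primary factor below are illustrative stand-ins for the pencil-beam-derived kernel of the paper):

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def remove_scatter(image, sigma=20.0, spr=0.4):
          # Scatter is assumed spatially diffuse: model it as a low-pass
          # (here Gaussian) convolution of the grid-less image, scaled by
          # a scatter-to-primary ratio, and subtract the estimate.
          scatter = spr * gaussian_filter(image, sigma)
          return np.clip(image - scatter, 0.0, None)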

  15. Colonoscopic polyp detection using convolutional neural networks

    Science.gov (United States)

    Park, Sun Young; Sargent, Dusty

    2016-03-01

    Computer aided diagnosis (CAD) systems for medical image analysis rely on accurate and efficient feature extraction methods. Regardless of which type of classifier is used, the results will be limited if the input features are not diagnostically relevant and do not properly discriminate between the different classes of images. Thus, a large amount of research has been dedicated to creating feature sets that capture the salient features that physicians are able to observe in the images. Successful feature extraction reduces the semantic gap between the physician's interpretation and the computer representation of images, and helps to reduce the variability in diagnosis between physicians. Due to the complexity of many medical image classification tasks, feature extraction for each problem often requires domain-specific knowledge and a carefully constructed feature set for the specific type of images being classified. In this paper, we describe a method for automatic diagnostic feature extraction from colonoscopy images that may have general application and require a lower level of domain-specific knowledge. The work in this paper expands on our previous CAD algorithm for detecting polyps in colonoscopy video. In that work, we applied an eigenimage model to extract features representing polyps, normal tissue, diverticula, etc. from colonoscopy videos taken from various viewing angles and imaging conditions. Classification was performed using a conditional random field (CRF) model that accounted for the spatial and temporal adjacency relationships present in colonoscopy video. In this paper, we replace the eigenimage feature descriptor with features extracted from a convolutional neural network (CNN) trained to recognize the same image types in colonoscopy video. The CNN-derived features show greater invariance to viewing angles and image quality factors when compared to the eigenimage model. The CNN features are used as input to the CRF classifier as before. We report

  16. Noise-enhanced convolutional neural networks.

    Science.gov (United States)

    Audhkhasi, Kartik; Osoba, Osonde; Kosko, Bart

    2016-06-01

    Injecting carefully chosen noise can speed convergence in the backpropagation training of a convolutional neural network (CNN). The Noisy CNN algorithm speeds training on average because the backpropagation algorithm is a special case of the generalized expectation-maximization (EM) algorithm and because such carefully chosen noise always speeds up the EM algorithm on average. The CNN framework gives a practical way to learn and recognize images because backpropagation scales with training data. It has only linear time complexity in the number of training samples. The Noisy CNN algorithm finds a special separating hyperplane in the network's noise space. The hyperplane arises from the likelihood-based positivity condition that noise-boosts the EM algorithm. The hyperplane cuts through a uniform-noise hypercube or Gaussian ball in the noise space depending on the type of noise used. Noise chosen from above the hyperplane speeds training on average. Noise chosen from below slows it on average. The algorithm can inject noise anywhere in the multilayered network. Adding noise to the output neurons reduced the average per-iteration training-set cross entropy by 39% on a standard MNIST image test set of handwritten digits. It also reduced the average per-iteration training-set classification error by 47%. Adding noise to the hidden layers can also reduce these performance measures. The noise benefit is most pronounced for smaller data sets because the largest EM hill-climbing gains tend to occur in the first few iterations. This noise effect can assist random sampling from large data sets because it allows a smaller random sample to give the same or better performance than a noiseless sample gives.

  17. Introducing single-crystal scattering and optical potentials into MCNPX: Predicting neutron emission from a convoluted moderator

    Science.gov (United States)

    Gallmeier, F. X.; Iverson, E. B.; Lu, W.; Baxter, D. V.; Muhrer, G.; Ansell, S.

    2016-04-01

    Neutron transport simulation codes are indispensable tools for the design and construction of modern neutron scattering facilities and instrumentation. Recently, it has become increasingly clear that some neutron instrumentation has started to exploit physics that is not well-modeled by the existing codes. In particular, the transport of neutrons through single crystals and across interfaces in MCNP(X), Geant4, and other codes ignores scattering from oriented crystals and refractive effects, and yet these are essential phenomena for the performance of monochromators and ultra-cold neutron transport respectively (to mention but two examples). In light of these developments, we have extended the MCNPX code to include a single-crystal neutron scattering model and neutron reflection/refraction physics. We have also generated silicon scattering kernels for single crystals of definable orientation. As a first test of these new tools, we have chosen to model the recently developed convoluted moderator concept, in which a moderating material is interleaved with layers of perfect crystals to provide an exit path for neutrons moderated to energies below the crystal's Bragg cut-off from locations deep within the moderator. Studies of simple cylindrical convoluted moderator systems of 100 mm diameter and composed of polyethylene and single crystal silicon were performed with the upgraded MCNPX code and reproduced the magnitude of effects seen in experiments compared to homogeneous moderator systems. Applying different material properties for refraction and reflection, and by replacing the silicon in the models with voids, we show that the emission enhancements seen in recent experiments are primarily caused by the transparency of the silicon and void layers. Finally we simulated the convoluted moderator experiments described by Iverson et al. and found satisfactory agreement between the measurements and the simulations performed with the tools we have developed.

  18. Introducing single-crystal scattering and optical potentials into MCNPX: Predicting neutron emission from a convoluted moderator

    Energy Technology Data Exchange (ETDEWEB)

    Gallmeier, F.X., E-mail: gallmeierfz@ornl.gov [Spallation Neutron Source, Oak Ridge National Laboratory, Oak Ridge, TN 37831 (United States); Iverson, E.B.; Lu, W. [Spallation Neutron Source, Oak Ridge National Laboratory, Oak Ridge, TN 37831 (United States); Baxter, D.V. [Center for the Exploration of Energy and Matter, Indiana University, Bloomington, IN 47408 (United States); Muhrer, G.; Ansell, S. [European Spallation Source, ESS AB, Lund (Sweden)

    2016-04-01

    Neutron transport simulation codes are indispensable tools for the design and construction of modern neutron scattering facilities and instrumentation. Recently, it has become increasingly clear that some neutron instrumentation has started to exploit physics that is not well-modeled by the existing codes. In particular, the transport of neutrons through single crystals and across interfaces in MCNP(X), Geant4, and other codes ignores scattering from oriented crystals and refractive effects, and yet these are essential phenomena for the performance of monochromators and ultra-cold neutron transport respectively (to mention but two examples). In light of these developments, we have extended the MCNPX code to include a single-crystal neutron scattering model and neutron reflection/refraction physics. We have also generated silicon scattering kernels for single crystals of definable orientation. As a first test of these new tools, we have chosen to model the recently developed convoluted moderator concept, in which a moderating material is interleaved with layers of perfect crystals to provide an exit path for neutrons moderated to energies below the crystal's Bragg cut-off from locations deep within the moderator. Studies of simple cylindrical convoluted moderator systems of 100 mm diameter and composed of polyethylene and single crystal silicon were performed with the upgraded MCNPX code and reproduced the magnitude of effects seen in experiments compared to homogeneous moderator systems. Applying different material properties for refraction and reflection, and by replacing the silicon in the models with voids, we show that the emission enhancements seen in recent experiments are primarily caused by the transparency of the silicon and void layers. Finally we simulated the convoluted moderator experiments described by Iverson et al. and found satisfactory agreement between the measurements and the simulations performed with the tools we have developed.

  19. Evaluation of convolutional neural networks for visual recognition.

    Science.gov (United States)

    Nebauer, C

    1998-01-01

    Convolutional neural networks provide an efficient method to constrain the complexity of feedforward neural networks by weight sharing and restriction to local connections. This network topology has been applied in particular to image classification when sophisticated preprocessing is to be avoided and raw images are to be classified directly. In this paper two variations of convolutional networks--neocognitron and a modification of neocognitron--are compared with classifiers based on fully connected feedforward layers (i.e., multilayer perceptron, nearest neighbor classifier, auto-encoding network) with respect to their visual recognition performance. Besides the original neocognitron, a modification of the neocognitron is proposed which combines perceptron neurons with the localized network structure of the neocognitron. Instead of training convolutional networks by time-consuming error backpropagation, in this work a modular procedure is applied whereby layers are trained sequentially from the input to the output layer in order to recognize features of increasing complexity. For a quantitative experimental comparison with standard classifiers two very different recognition tasks have been chosen: handwritten digit recognition and face recognition. In the first example on handwritten digit recognition the generalization of convolutional networks is compared to fully connected networks. In several experiments the influence of variations of position, size, and orientation of digits is determined and the relation between training sample size and validation error is observed. In the second example recognition of human faces is investigated under constrained and variable conditions with respect to face orientation and illumination, and the limitations of convolutional networks are discussed.

  20. Brain and art: illustrations of the cerebral convolutions. A review.

    Science.gov (United States)

    Lazić, D; Marinković, S; Tomić, I; Mitrović, D; Starčević, A; Milić, I; Grujičić, M; Marković, B

    2014-08-01

    Aesthetics and the functional significance of the cerebral cortical relief gave us the idea to find out how often the convolutions are presented in fine art, in which techniques, and with what conceptual meaning and pathophysiological aspects. We examined 27,614 art works created by 2,856 authors and presented in the art literature and in Google image searches. The cerebral gyri were shown in 0.85% of the art works, created by 2.35% of the authors. The concept of the brain was first mentioned in ancient Egypt some 3,700 years ago. The first artistic drawing of the convolutions was made by Leonardo da Vinci, and the first colour picture by an unknown Italian author. Rembrandt van Rijn was the first to paint the gyri. Dozens of modern authors, who are professional artists, medical experts or designers, have presented the cerebral convolutions in drawings, paintings, digital works or sculptures, with various aesthetic, symbolic and metaphorical connotations. Some artistic compositions and natural forms show a gyral pattern. The convolutions, whose cortical layers enable the cognitive functions, can be affected by various disorders. Some artists suffered from those disorders, and some others presented them in their artworks. The cerebral convolutions or gyri, thanks to their extensive cortical mantle, are the specific morphological basis for the human mind, but also structures with their own aesthetics. Contemporary authors relatively often depict or model the cerebral convolutions, either from the aesthetic or the conceptual aspect. In this way, they make a connection between neuroscience and fine art.

  1. Heat-kernel approach for scattering

    CERN Document Server

    Li, Wen-Du

    2015-01-01

    An approach for solving scattering problems, based on two quantum field theory methods, the heat kernel method and the scattering spectral method, is constructed. This approach has a special advantage: it is not only one single approach; it is indeed a set of approaches for solving scattering problems. Concretely, we build a bridge between a scattering problem and the heat kernel method, so that each method of calculating heat kernels can be converted into a method of solving a scattering problem. As applications, we construct two approaches for solving scattering problems based on two heat-kernel expansions: the Seeley-DeWitt expansion and the covariant perturbation theory. In order to apply the heat kernel method to scattering problems, we also calculate two off-diagonal heat-kernel expansions in the frames of the Seeley-DeWitt expansion and the covariant perturbation theory, respectively. Moreover, as an alternative application of the relation between heat kernels and partial-wave phase shifts presented in...

  2. Ideal regularization for learning kernels from labels.

    Science.gov (United States)

    Pan, Binbin; Lai, Jianhuang; Shen, Lixin

    2014-08-01

    In this paper, we propose a new form of regularization that is able to utilize the label information of a data set for learning kernels. The proposed regularization, referred to as ideal regularization, is a linear function of the kernel matrix to be learned. The ideal regularization allows us to develop efficient algorithms to exploit labels. Three applications of the ideal regularization are considered. Firstly, we use the ideal regularization to incorporate the labels into a standard kernel, making the resulting kernel more appropriate for learning tasks. Next, we employ the ideal regularization to learn a data-dependent kernel matrix from an initial kernel matrix (which contains prior similarity information, geometric structures, and labels of the data). Finally, we incorporate the ideal regularization to some state-of-the-art kernel learning problems. With this regularization, these learning problems can be formulated as simpler ones which permit more efficient solvers. Empirical results show that the ideal regularization exploits the labels effectively and efficiently.
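
    A toy sketch of exploiting labels through a term that is linear in the kernel matrix (the blend below illustrates the "ideal kernel" y y^T idea on fully labeled binary data; the paper's regularization functional and optimization are more general):

      import numpy as np

      def ideal_regularized_kernel(K, y, eta=0.1):
          # y: labels in {-1, +1}. The ideal kernel y y^T is +1 for
          # same-class pairs and -1 otherwise; adding a multiple of it
          # pulls same-class points together in the implied feature space.
          T = np.outer(y, y).astype(float)
          return K + eta * T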

  3. SCENE SEMANTIC SEGMENTATION FROM INDOOR RGB-D IMAGES USING ENCODE-DECODER FULLY CONVOLUTIONAL NETWORKS

    Directory of Open Access Journals (Sweden)

    Z. Wang

    2017-09-01

    Full Text Available With increasing attention for the indoor environment and the development of low-cost RGB-D sensors, indoor RGB-D images are easily acquired. However, scene semantic segmentation is still an open area, which restricts indoor applications. The depth information can help to distinguish regions which are difficult to segment out from the RGB images with similar color or texture in indoor scenes. How to utilize the depth information is the key problem of semantic segmentation for RGB-D images. In this paper, we propose an Encode-Decoder Fully Convolutional Network for RGB-D image classification. We use Multiple Kernel Maximum Mean Discrepancy (MK-MMD) as a distance measure to find common and special features of RGB and D images in the network to enhance the performance of classification automatically. To explore better ways of applying MMD, we designed two strategies: the first calculates MMD for each feature map, and the other calculates MMD for the whole batch of features. Based on the result of classification, we use fully connected CRFs for the semantic segmentation. The experimental results show that our method can achieve a good performance on indoor RGB-D image semantic segmentation.
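
    A small sketch of a multi-kernel MMD estimate between two batches of features (the biased V-statistic form, with an illustrative set of RBF bandwidths):

      import numpy as np

      def mk_mmd2(X, Y, gammas=(0.5, 1.0, 2.0)):
          # Squared MMD with a sum-of-RBF-kernels witness:
          # MMD^2 = E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)].
          def k(A, B):
              d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
              return sum(np.exp(-g * d2) for g in gammas)
          return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()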

  4. Cygrid: A fast Cython-powered convolution-based gridding module for Python

    CERN Document Server

    Winkel, B; Flöer, L

    2016-01-01

    Data gridding is a common task in astronomy and many other science disciplines. It refers to the resampling of irregularly sampled data to a regular grid. We present cygrid, a library module for the general purpose programming language Python. Cygrid can be used to resample data to any collection of target coordinates, although its typical application involves FITS maps or data cubes. The FITS world coordinate system standard is supported. The regridding algorithm is based on the convolution of the original samples with a kernel of arbitrary shape. We introduce a lookup table scheme that allows us to parallelize the gridding and combine it with the HEALPix tessellation of the sphere for fast neighbor searches. We show that for n input data points, cygrid's runtime scales between O(n) and O(n log n) and analyze the performance gain that is achieved using multiple CPU cores. We also compare the gridding speed with other techniques, such as nearest-neighbor, and linear and cubic spline interpolation. Cygrid is ...

  5. Cygrid: A fast Cython-powered convolution-based gridding module for Python

    Science.gov (United States)

    Winkel, B.; Lenz, D.; Flöer, L.

    2016-06-01

    Context. Data gridding is a common task in astronomy and many other science disciplines. It refers to the resampling of irregularly sampled data to a regular grid. Aims: We present cygrid, a library module for the general purpose programming language Python. Cygrid can be used to resample data to any collection of target coordinates, although its typical application involves FITS maps or data cubes. The FITS world coordinate system standard is supported. Methods: The regridding algorithm is based on the convolution of the original samples with a kernel of arbitrary shape. We introduce a lookup table scheme that allows us to parallelize the gridding and combine it with the HEALPix tessellation of the sphere for fast neighbor searches. Results: We show that for n input data points, cygrid's runtime scales between O(n) and O(n log n) and analyze the performance gain that is achieved using multiple CPU cores. We also compare the gridding speed with other techniques, such as nearest-neighbor, and linear and cubic spline interpolation. Conclusions: Cygrid is a very fast and versatile gridding library that significantly outperforms other third-party Python modules, such as the linear and cubic spline interpolation provided by SciPy. https://github.com/bwinkel/cygrid
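
    A brute-force sketch of convolution-based gridding (Gaussian weighting kernel with normalization by the summed weights; none of cygrid's lookup-table or HEALPix acceleration, so it scales as O(n times grid size)):

      import numpy as np

      def grid_data(lon, lat, values, lon_grid, lat_grid, sigma=0.05):
          LON, LAT = np.meshgrid(lon_grid, lat_grid)
          num = np.zeros_like(LON)
          den = np.zeros_like(LON)
          for x, y, v in zip(lon, lat, values):
              # Spread each irregular sample onto the regular grid.
              w = np.exp(-((LON - x) ** 2 + (LAT - y) ** 2) / (2.0 * sigma ** 2))
              num += w * v
              den += w
          # Normalize by the accumulated kernel weights.
          return num / np.maximum(den, 1e-300)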

  6. Scene Semantic Segmentation from Indoor Rgb-D Images Using Encode-Decoder Fully Convolutional Networks

    Science.gov (United States)

    Wang, Z.; Li, T.; Pan, L.; Kang, Z.

    2017-09-01

    With increasing attention for the indoor environment and the development of low-cost RGB-D sensors, indoor RGB-D images are easily acquired. However, scene semantic segmentation is still an open area, which restricts indoor applications. The depth information can help to distinguish regions which are difficult to segment out from the RGB images with similar color or texture in indoor scenes. How to utilize the depth information is the key problem of semantic segmentation for RGB-D images. In this paper, we propose an Encode-Decoder Fully Convolutional Network for RGB-D image classification. We use Multiple Kernel Maximum Mean Discrepancy (MK-MMD) as a distance measure to find common and special features of RGB and D images in the network to enhance the performance of classification automatically. To explore better ways of applying MMD, we designed two strategies: the first calculates MMD for each feature map, and the other calculates MMD for the whole batch of features. Based on the result of classification, we use fully connected CRFs for the semantic segmentation. The experimental results show that our method can achieve a good performance on indoor RGB-D image semantic segmentation.

  7. Two projects in theoretical neuroscience: A convolution-based metric for neural membrane potentials and a combinatorial connectionist semantic network method

    Science.gov (United States)

    Evans, Garrett Nolan

    In this work, I present two projects that both contribute to the aim of discovering how intelligence manifests in the brain. The first project is a method for analyzing recorded neural signals, which takes the form of a convolution-based metric on neural membrane potential recordings. Relying only on integral and algebraic operations, the metric compares the timing and number of spikes within recordings as well as the recordings' subthreshold features: summarizing differences in these with a single "distance" between the recordings. Like van Rossum's (2001) metric for spike trains, the metric is based on a convolution operation that it performs on the input data. The kernel used for the convolution is carefully chosen such that it produces a desirable frequency space response and, unlike van Rossum's kernel, causes the metric to be first order both in differences between nearby spike times and in differences between same-time membrane potential values: an important trait. The second project is a combinatorial syntax method for connectionist semantic network encoding. Combinatorial syntax has been a point on which those who support a symbol-processing view of intelligent processing and those who favor a connectionist view have had difficulty seeing eye-to-eye. Symbol-processing theorists have persuasively argued that combinatorial syntax is necessary for certain intelligent mental operations, such as reasoning by analogy. Connectionists have focused on the versatility and adaptability offered by self-organizing networks of simple processing units. With this project, I show that there is a way to reconcile the two perspectives and to ascribe a combinatorial syntax to a connectionist network. The critical principle is to interpret nodes, or units, in the connectionist network as bound integrations of the interpretations for nodes that they share links with. Nodes need not correspond exactly to neurons and may correspond instead to distributed sets, or assemblies, of
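
    A minimal van-Rossum-style sketch of such a convolution-based distance (the causal exponential kernel, time step, and time constant are illustrative; the author's kernel is chosen specifically for its frequency response and first-order behaviour):

      import numpy as np

      def convolution_metric(v1, v2, dt=1e-4, tau=0.01):
          # Convolve each recording (assumed equal length) with a causal
          # exponential kernel and take the L2 norm of the difference of
          # the filtered traces.
          t = np.arange(0.0, 5.0 * tau, dt)
          kernel = np.exp(-t / tau)
          f1 = np.convolve(v1, kernel) * dt
          f2 = np.convolve(v2, kernel) * dt
          return np.sqrt(((f1 - f2) ** 2).sum() * dt)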

  8. Kernel score statistic for dependent data.

    Science.gov (United States)

    Malzahn, Dörthe; Friedrichs, Stefanie; Rosenberger, Albert; Bickeböller, Heike

    2014-01-01

    The kernel score statistic is a global covariance component test over a set of genetic markers. It provides a flexible modeling framework and does not collapse marker information. We generalize the kernel score statistic to allow for familial dependencies and to adjust for random confounder effects. With this extension, we adjust our analysis of real and simulated baseline systolic blood pressure for polygenic familial background. We find that the kernel score test gains appreciably in power through the use of sequencing compared to tag single-nucleotide polymorphisms for very rare single-nucleotide polymorphisms with <1% minor allele frequency.

  9. Kernel-based Maximum Entropy Clustering

    Institute of Scientific and Technical Information of China (English)

    JIANG Wei; QU Jiao; LI Benxi

    2007-01-01

    With the development of the Support Vector Machine (SVM), the "kernel method" has been studied in a general way. In this paper, we present a novel Kernel-based Maximum Entropy Clustering algorithm (KMEC). Using Mercer kernel functions, the proposed algorithm first maps the data from their original space to a high-dimensional space where the data are expected to be more separable, then performs MEC clustering in the feature space. The experimental results show that the proposed method has better performance on non-hyperspherical and complex data structures.

  10. Kernel adaptive filtering a comprehensive introduction

    CERN Document Server

    Liu, Weifeng; Haykin, Simon

    2010-01-01

    Online learning from a signal processing perspective There is increased interest in kernel learning algorithms in neural networks and a growing need for nonlinear adaptive algorithms in advanced signal processing, communications, and controls. Kernel Adaptive Filtering is the first book to present a comprehensive, unifying introduction to online learning algorithms in reproducing kernel Hilbert spaces. Based on research being conducted in the Computational Neuro-Engineering Laboratory at the University of Florida and in the Cognitive Systems Laboratory at McMaster University, O

  11. Multiple Operator-valued Kernel Learning

    CERN Document Server

    Kadri, Hachem; Bach, Francis; Preux, Philippe

    2012-01-01

    This paper addresses the problem of learning a finite linear combination of operator-valued kernels. We study this problem in the case of kernel ridge regression for functional responses with an $\ell_r$-norm constraint on the combination coefficients. We propose a multiple operator-valued kernel learning algorithm based on solving a system of linear operator equations by using a block coordinate descent procedure. We experimentally validate our approach on a functional regression task in the context of finger movement prediction in Brain-Computer Interface (BCI).

  12. Polynomial Kernelizations for MIN F⁺Π₁ and MAX NP

    CERN Document Server

    Kratsch, Stefan

    2009-01-01

    The relation of constant-factor approximability to fixed-parameter tractability and kernelization is a long-standing open question. We prove that two large classes of constant-factor approximable problems, namely MIN F⁺Π₁ and MAX NP, including the well-known subclass MAX SNP, admit polynomial kernelizations for their natural decision versions. This extends results of Cai and Chen (JCSS 1997), stating that the standard parameterizations of problems in MAX SNP and MIN F⁺Π₁ are fixed-parameter tractable, and complements recent research on problems that do not admit polynomial kernelizations (Bodlaender et al. ICALP 2008).

  13. Glaucoma detection based on deep convolutional neural network.

    Science.gov (United States)

    Xiangyu Chen; Yanwu Xu; Damon Wing Kee Wong; Tien Yin Wong; Jiang Liu

    2015-08-01

    Glaucoma is a chronic and irreversible eye disease, which leads to deterioration in vision and quality of life. In this paper, we develop a deep learning (DL) architecture with a convolutional neural network for automated glaucoma diagnosis. Deep learning systems, such as convolutional neural networks (CNNs), can infer a hierarchical representation of images to discriminate between glaucoma and non-glaucoma patterns for diagnostic decisions. The proposed DL architecture contains six learned layers: four convolutional layers and two fully-connected layers. Dropout and data augmentation strategies are adopted to further boost the performance of glaucoma diagnosis. Extensive experiments are performed on the ORIGA and SCES datasets. The results show an area under the curve (AUC) of the receiver operating characteristic of 0.831 and 0.887 on the two databases, much better than state-of-the-art algorithms. The method could be used for glaucoma detection.
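
    A sketch of a six-learned-layer network of the kind described (four convolutional plus two fully-connected layers, with dropout); the filter counts, kernel sizes and the 224x224 input are assumptions, since the abstract does not specify them:

        import torch
        import torch.nn as nn

        class GlaucomaNet(nn.Module):
            # four conv layers, then two fully-connected layers with dropout
            def __init__(self):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(3, 32, 5), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(32, 64, 5), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(64, 128, 3), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(128, 128, 3), nn.ReLU(), nn.MaxPool2d(2),
                )
                self.classifier = nn.Sequential(
                    nn.Flatten(),
                    nn.Dropout(0.5),
                    nn.LazyLinear(256), nn.ReLU(),
                    nn.Dropout(0.5),
                    nn.Linear(256, 2),   # glaucoma vs. non-glaucoma
                )
            def forward(self, x):
                return self.classifier(self.features(x))

        net = GlaucomaNet()
        print(net(torch.randn(1, 3, 224, 224)).shape)   # torch.Size([1, 2])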

  14. Two dimensional convolute integers for machine vision and image recognition

    Science.gov (United States)

    Edwards, Thomas R.

    1988-01-01

    Machine vision and image recognition require sophisticated image processing prior to the application of Artificial Intelligence. Two Dimensional Convolute Integer Technology is an innovative mathematical approach for addressing machine vision and image recognition. This new technology generates a family of digital operators for addressing optical images and related two dimensional data sets. The operators are regression-generated, integer-valued, zero-phase-shifting, convoluting, frequency-sensitive, two dimensional low pass, high pass and band pass filters that are mathematically equivalent to surface-fitted partial derivatives. These operators are applied non-recursively either as classical convolutions (replacement point values), interstitial point generators (bandwidth broadening or resolution enhancement), or as missing value calculators (compensation for dead array element values). The operators show frequency-sensitive, scale-invariant feature-selection properties. Tasks such as boundary/edge enhancement and removal of noise or small pixel disturbances can readily be accomplished. For feature selection, tight band pass operators are essential. Results from test cases are given.

  15. Improving displayed resolution in convolution reconstruction of digital holograms

    Institute of Scientific and Technical Information of China (English)

    FAN Qi; ZHAO Jian-lin; ZHANG Yan-cao; WANG Jun; DI Jiang-lei

    2006-01-01

    In digital holographic microscopy, when the object is placed near the CCD, the Fresnel approximation is no longer valid and the convolution approach has to be applied. With this approach, the sampling spacing of the reconstructed image plane is equal to the pixel size of the CCD. If the lateral resolution of the reconstructed image is higher than that of the CCD, the Nyquist sampling criterion is violated and aliasing errors will be introduced. In this Letter, a new method is proposed to solve this problem by investigating convolution reconstruction of digital holograms. By appending enough zeros to the angular spectrum between the two FFTs in convolution reconstruction of digital holograms, the displayed resolution of the reconstructed image can be improved. Experimental results show a good agreement with theoretical analysis.
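
    The core idea, appending zeros to the spectrum between the two FFTs, amounts to sinc interpolation of the reconstructed field. A 1-D numpy sketch under that reading (the function name and the factor are illustrative):

        import numpy as np

        def upsample_via_spectrum(signal, factor):
            # append zeros around the centered spectrum, then inverse-FFT:
            # the display grid becomes `factor` times finer (sinc interpolation)
            n = len(signal)
            spec = np.fft.fftshift(np.fft.fft(signal))
            pad = (factor - 1) * n // 2
            spec = np.pad(spec, pad)
            return np.fft.ifft(np.fft.ifftshift(spec)) * factor

        x = np.sin(2 * np.pi * 3 * np.arange(64) / 64)
        print(len(upsample_via_spectrum(x, 4)))   # 256 display samples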

  16. Extension of Wirtinger's Calculus in Reproducing Kernel Hilbert Spaces and the Complex Kernel LMS

    CERN Document Server

    Bouboulis, Pantelis

    2010-01-01

    Over the last decade, kernel methods for nonlinear processing have successfully been used in the machine learning community. The primary mathematical tool employed in these methods is the notion of the Reproducing Kernel Hilbert Space. However, so far, the emphasis has been on batch techniques. It is only recently that online techniques have been considered in the context of adaptive signal processing tasks. Moreover, these efforts have been focused only on real-valued data sequences. To the best of our knowledge, no kernel-based strategy has been developed, so far, that is able to deal with complex-valued signals. In this paper, we present a general framework to attack the problem of adaptive filtering of complex signals, using either real reproducing kernels, taking advantage of a technique called \textit{complexification} of real RKHSs, or complex reproducing kernels, highlighting the use of the complex Gaussian kernel. In order to derive gradients of operators that need to be defined on the associat...

  17. Kernel map compression for speeding the execution of kernel-based methods.

    Science.gov (United States)

    Arif, Omar; Vela, Patricio A

    2011-06-01

    The use of Mercer kernel methods in statistical learning theory provides for strong learning capabilities, as seen in kernel principal component analysis and support vector machines. Unfortunately, after learning, the computational complexity of execution through a kernel is of the order of the size of the training set, which is quite large for many applications. This paper proposes a two-step procedure for arriving at a compact and computationally efficient execution procedure. After learning in the kernel space, the proposed extension exploits the universal approximation capabilities of generalized radial basis function neural networks to efficiently approximate and replace the projections onto the empirical kernel map used during execution. Sample applications demonstrate significant compression of the kernel representation with graceful performance loss.

  18. The Linux kernel as flexible product-line architecture

    NARCIS (Netherlands)

    Jonge, M. de

    2002-01-01

    The Linux kernel source tree is huge (>125 MB) and inflexible (because it is difficult to add new kernel components). We propose to make this architecture more flexible by assembling kernel source trees dynamically from individual kernel components. Users then can select what component they real

  19. 7 CFR 51.2296 - Three-fourths half kernel.

    Science.gov (United States)

    2010-01-01

    7 CFR 51.2296 (2010), Agriculture Regulations of the Department of Agriculture, AGRICULTURAL MARKETING SERVICE (Standards...): Three-fourths half kernel. Three-fourths half kernel means a portion of a half of a kernel which has more...

  20. 7 CFR 981.401 - Adjusted kernel weight.

    Science.gov (United States)

    2010-01-01

    7 CFR 981.401 (2010), Administrative Rules and Regulations, § 981.401 Adjusted kernel weight. (a) Definition. Adjusted kernel weight... kernels in excess of five percent; less shells, if applicable; less processing loss of one percent...

  1. 7 CFR 51.1441 - Half-kernel.

    Science.gov (United States)

    2010-01-01

    7 CFR 51.1441 (2010), United States Standards for Grades of Shelled Pecans, Definitions, § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than one-eighth of its original volume...

  2. 7 CFR 51.1403 - Kernel color classification.

    Science.gov (United States)

    2010-01-01

    7 CFR 51.1403 (2010), United States Standards for Grades of Pecans in the Shell, Kernel Color Classification, § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the...

  3. NLO corrections to the Kernel of the BKP-equations

    Energy Technology Data Exchange (ETDEWEB)

    Bartels, J. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Fadin, V.S. [Budker Institute of Nuclear Physics, Novosibirsk (Russian Federation); Novosibirskij Gosudarstvennyj Univ., Novosibirsk (Russian Federation); Lipatov, L.N. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Petersburg Nuclear Physics Institute, Gatchina, St. Petersburg (Russian Federation); Vacca, G.P. [INFN, Sezione di Bologna (Italy)

    2012-10-02

    We present results for the NLO kernel of the BKP equations for composite states of three reggeized gluons in the Odderon channel, both in QCD and in N=4 SYM. The NLO kernel consists of the NLO BFKL kernel in the color octet representation and the connected 3→3 kernel, computed in the tree approximation.

  4. Reproducing Kernel for D²(Ω, ρ) and Metric Induced by Reproducing Kernel

    Institute of Scientific and Technical Information of China (English)

    ZHAO Zhen Gang

    2009-01-01

    An important property of the reproducing kernel of D²(Ω, ρ) is obtained and the reproducing kernels for D²(Ω, ρ) are calculated when Ω = Bⁿ × Bⁿ and ρ are some special functions. A reproducing kernel is used to construct a semi-positive definite matrix and a distance function defined on Ω×Ω. An inequality is obtained about the distance function and the pseudodistance induced by the matrix.

  5. The Existence of Strongly-MDS Convolutional Codes

    CERN Document Server

    Hutchinson, Ryan

    2008-01-01

    It is known that maximum distance separable and maximum distance profile convolutional codes exist over large enough finite fields of any characteristic for all parameters $(n,k,\delta)$. It has been conjectured that the same is true for convolutional codes that are strongly maximum distance separable. Using methods from linear systems theory, we resolve this conjecture by showing that, over a large enough finite field of any characteristic, codes which are simultaneously maximum distance profile and strongly maximum distance separable exist for all parameters $(n,k,\delta)$.

  6. Convolutional cylinder-type block-circulant cycle codes

    Directory of Open Access Journals (Sweden)

    Mohammad Gholami

    2013-06-01

    In this paper, we consider a class of column-weight-two quasi-cyclic low-density parity-check codes whose girth can be made arbitrarily large, as a multiple of 8. We then give a convolutional form of these codes, such that their generator matrix can be obtained by elementary row and column operations on the parity-check matrix. Finally, we show that the free distance of the convolutional codes is equal to the minimum distance of their block counterparts.

  7. Inferring low-dimensional microstructure representations using convolutional neural networks

    CERN Document Server

    Lubbers, Nicholas; Barros, Kipton

    2016-01-01

    We apply recent advances in machine learning and computer vision to a central problem in materials informatics: The statistical representation of microstructural images. We use activations in a pre-trained convolutional neural network to provide a high-dimensional characterization of a set of synthetic microstructural images. Next, we use manifold learning to obtain a low-dimensional embedding of this statistical characterization. We show that the low-dimensional embedding extracts the parameters used to generate the images. According to a variety of metrics, the convolutional neural network method yields dramatically better embeddings than the analogous method derived from two-point correlations alone.

  8. Detection of phase transition via convolutional neural network

    CERN Document Server

    Tanaka, Akinori

    2016-01-01

    We design a Convolutional Neural Network (CNN) which studies the correlation between the discretized inverse temperature and the spin configurations of the 2D Ising model, and show that it can find a signature of the phase transition without being given any a priori information about it. We also define a new order parameter via the CNN and show that it provides a well-approximated critical inverse temperature. In addition, we compare activation functions for the convolution layer and find that the Rectified Linear Unit (ReLU) is important for detecting the phase transition of the 2D Ising model.

  9. Discriminant Kernel Assignment for Image Coding.

    Science.gov (United States)

    Deng, Yue; Zhao, Yanyu; Ren, Zhiquan; Kong, Youyong; Bao, Feng; Dai, Qionghai

    2017-06-01

    This paper proposes discriminant kernel assignment (DKA) in the bag-of-features framework for image representation. DKA slightly modifies existing kernel assignment to learn width-variant Gaussian kernel functions to perform discriminant local feature assignment. When directly applying gradient-descent method to solve DKA, the optimization may contain multiple time-consuming reassignment implementations in iterations. Accordingly, we introduce a more practical way to locally linearize the DKA objective and the difficult task is cast as a sequence of easier ones. Since DKA only focuses on the feature assignment part, it seamlessly collaborates with other discriminative learning approaches, e.g., discriminant dictionary learning or multiple kernel learning, for even better performances. Experimental evaluations on multiple benchmark datasets verify that DKA outperforms other image assignment approaches and exhibits significant efficiency in feature coding.

  10. Multiple Kernel Spectral Regression for Dimensionality Reduction

    Directory of Open Access Journals (Sweden)

    Bing Liu

    2013-01-01

    Traditional manifold learning algorithms, such as locally linear embedding, Isomap, and Laplacian eigenmap, only provide the embedding results of the training samples. To solve the out-of-sample extension problem, spectral regression (SR) learns an embedding function by establishing a regression framework, which avoids the eigen-decomposition of dense matrices. Motivated by the effectiveness of SR, we incorporate multiple kernel learning (MKL) into SR for dimensionality reduction. The proposed approach (termed MKL-SR) seeks an embedding function in the Reproducing Kernel Hilbert Space (RKHS) induced by the multiple base kernels. An MKL-SR algorithm is proposed to further improve the performance of kernel-based SR (KSR). Furthermore, the proposed MKL-SR algorithm can be applied in supervised, unsupervised, and semi-supervised settings. Experimental results on supervised and semi-supervised classification demonstrate the effectiveness and efficiency of our algorithm.

  11. Quantum kernel applications in medicinal chemistry.

    Science.gov (United States)

    Huang, Lulu; Massa, Lou

    2012-07-01

    Progress in the quantum mechanics of biological molecules is being driven by computational advances. The notion of quantum kernels can be introduced to simplify the formalism of quantum mechanics, making it especially suitable for parallel computation of very large biological molecules. The essential idea is to mathematically break large biological molecules into smaller kernels that are calculationally tractable, and then to represent the full molecule by a summation over the kernels. The accuracy of the kernel energy method (KEM) is shown by systematic application to a great variety of molecular types found in biology. These include peptides, proteins, DNA and RNA. Examples are given that explore the KEM across a variety of chemical models, and to the outer limits of energy accuracy and molecular size. KEM represents an advance in quantum biology applicable to problems in medicine and drug design.
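
    In its simplest double-kernel form, the KEM reassembles the total energy from kernel energies as (with $n$ kernels; this is the commonly quoted form, not necessarily the exact variant used in every study summarized above):

    $E_{\mathrm{total}} \approx \sum_{a=1}^{n-1} \sum_{b=a+1}^{n} E_{ab} - (n-2) \sum_{a=1}^{n} E_{a}$

    where $E_{a}$ is the energy of single kernel $a$ and $E_{ab}$ that of the double kernel formed from kernels $a$ and $b$.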

  12. Kernel method-based fuzzy clustering algorithm

    Institute of Scientific and Technical Information of China (English)

    Wu Zhongdong; Gao Xinbo; Xie Weixin; Yu Jianping

    2005-01-01

    The fuzzy C-means clustering algorithm (FCM) is extended to the fuzzy kernel C-means clustering algorithm (FKCM) to effectively perform cluster analysis on diversiform structures, such as non-hyperspherical data, data with noise, data with a mixture of heterogeneous cluster prototypes, asymmetric data, etc. Based on the Mercer kernel, the FKCM clustering algorithm is derived from the FCM algorithm united with the kernel method. The results of experiments with synthetic and real data show that the FKCM clustering algorithm is universal and can effectively perform unsupervised analysis of datasets with variform structures, in contrast to the FCM algorithm. It can be expected that kernel-based clustering algorithms are an important research direction of fuzzy clustering analysis.

  13. Kernel representations for behaviors over finite rings

    NARCIS (Netherlands)

    Kuijper, M.; Pinto, R.; Polderman, J.W.; Yamamoto, Y.

    2006-01-01

    In this paper we consider dynamical systems over finite rings. The rings that we study are the integers modulo a power of a given prime. We study the theory of representations for such systems, in particular kernel representations.

  14. Ensemble Approach to Building Mercer Kernels

    Data.gov (United States)

    National Aeronautics and Space Administration — This paper presents a new methodology for automatic knowledge driven data mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive...

  15. Generalized Binomial Convolution of the mth Powers of the Consecutive Integers with the General Fibonacci Sequence

    Directory of Open Access Journals (Sweden)

    Kılıç Emrah

    2016-12-01

    In this paper, we consider Gauthier’s generalized convolution and then define its binomial analogue as well as an alternating binomial analogue. We formulate these convolutions and give some applications of them.

  16. Preparing UO2 kernels by gelcasting

    Institute of Scientific and Technical Information of China (English)

    GUO Wenli; LIANG Tongxiang; ZHAO Xingyu; HAO Shaochang; LI Chengliang

    2009-01-01

    A process named gelcasting has been developed for the production of dense UO2 kernels for the high-temperature gas-cooled reactor. Compared with the sol-gel process, the green microspheres can be obtained by dispersing the U3O8 slurry in the gelcasting process, which makes gelcasting a more convenient process with less waste for fabricating UO2 kernels. The heat treatment.

  17. The Bergman kernel functions on Hua domains

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    We obtain explicit formulas for the Bergman kernel functions on four types of Hua domain. There are two key steps: first, we give the holomorphic automorphism groups of the four types of Hua domain; second, we introduce the concept of a semi-Reinhardt domain and give their complete orthonormal systems. Based on these two aspects, we obtain the Bergman kernel functions in explicit formulas on Hua domains.

  18. Fractal Weyl law for Linux Kernel Architecture

    CERN Document Server

    Ermann, L; Shepelyansky, D L

    2010-01-01

    We study the properties of the spectrum and eigenstates of the Google matrix of a directed network formed by the procedure calls in the Linux Kernel. Our results obtained for various versions of the Linux Kernel show that the spectrum is characterized by the fractal Weyl law established recently for systems of quantum chaotic scattering and the Perron-Frobenius operators of dynamical maps. The fractal Weyl exponent is found to be...

  19. Varying kernel density estimation on ℝ+

    Science.gov (United States)

    Mnatsakanov, Robert; Sarkisian, Khachatur

    2015-01-01

    In this article a new nonparametric density estimator based on the sequence of asymmetric kernels is proposed. This method is natural when estimating an unknown density function of a positive random variable. The rates of Mean Squared Error, Mean Integrated Squared Error, and the L1-consistency are investigated. Simulation studies are conducted to compare a new estimator and its modified version with traditional kernel density construction. PMID:26740729
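
    The abstract does not name the kernel family, so purely as an illustration, here is one standard asymmetric-kernel construction on the positive half-line, the gamma-kernel estimator of Chen (2000), in which the kernel shape varies with the evaluation point:

        import numpy as np
        from scipy.stats import gamma

        def gamma_kde(x_grid, data, b=0.1):
            # at each x, average the gamma(shape = x/b + 1, scale = b) density
            # over the sample; the kernel is asymmetric and varies with x
            est = np.empty_like(x_grid, dtype=float)
            for i, x in enumerate(x_grid):
                est[i] = gamma.pdf(data, a=x / b + 1.0, scale=b).mean()
            return est

        data = np.random.default_rng(1).exponential(1.0, size=500)
        grid = np.linspace(0.01, 5.0, 100)
        print(gamma_kde(grid, data)[:5])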

  20. Adaptively Learning the Crowd Kernel

    CERN Document Server

    Tamuz, Omer; Belongie, Serge; Shamir, Ohad; Kalai, Adam Tauman

    2011-01-01

    We introduce an algorithm that, given n objects, learns a similarity matrix over all n^2 pairs, from crowdsourced data alone. The algorithm samples responses to adaptively chosen triplet-based relative-similarity queries. Each query has the form "is object 'a' more similar to 'b' or to 'c'?" and is chosen to be maximally informative given the preceding responses. The output is an embedding of the objects into Euclidean space (like MDS); we refer to this as the "crowd kernel." The runtime (empirically observed to be linear) and cost (about $0.15 per object) of the algorithm are small enough to permit its application to databases of thousands of objects. The distance matrix provided by the algorithm allows for the development of an intuitive and powerful sequential, interactive search algorithm which we demonstrate for a variety of visual stimuli. We present quantitative results that demonstrate the benefit in cost and time of our approach compared to a nonadaptive approach. We also show the ability of our appr...

  1. Evaluating the Gradient of the Thin Wire Kernel

    Science.gov (United States)

    Wilton, Donald R.; Champagne, Nathan J.

    2008-01-01

    Recently, a formulation for evaluating the thin wire kernel was developed that employed a change of variable to smooth the kernel integrand, canceling the singularity in the integrand. Hence, the typical expansion of the wire kernel in a series for use in the potential integrals is avoided. The new expression for the kernel is exact and may be used directly to determine the gradient of the wire kernel, which consists of components that are parallel and radial to the wire axis.

  2. On the Inclusion Relation of Reproducing Kernel Hilbert Spaces

    OpenAIRE

    Zhang, Haizhang; Zhao, Liang

    2011-01-01

    To help understand various reproducing kernels used in applied sciences, we investigate the inclusion relation of two reproducing kernel Hilbert spaces. Characterizations in terms of feature maps of the corresponding reproducing kernels are established. A full table of inclusion relations among widely-used translation invariant kernels is given. Concrete examples for Hilbert-Schmidt kernels are presented as well. We also discuss the preservation of such a relation under various operations of ...

  3. Convolution Models with Shift-invariant kernel based on Matlab-GPU platform for Fast Acoustic Imaging

    OpenAIRE

    Chu, Ning; Gac, Nicolas; Picheral, José; Mohammad-Djafari, Ali

    2014-01-01

    Acoustic imaging is an advanced technique for acoustic source localization and power reconstruction from limited noisy measurements at microphone sensors. This technique involves not only a forward model of acoustic propagation from sources to sensors, but also the numerical solution of an ill-posed inverse problem. Nowadays, Bayesian inference methods have been widely investigated for robust acoustic imaging, but most Bayesian methods are...

  4. Radio Signal Augmentation for Improved Training of a Convolutional Neural Network

    Science.gov (United States)

    2016-09-01

    parameters of the network. Examples of these parameters include:
    • Input data dimensions and channels (e.g., image size and colors)
    • Size of convolutional filters
    • Number of convolutional filters
    • Pooling/downsampling size and method (e.g., max-pool or average)
    • Number of convolution and pooling...

  5. Efficient Partitioning of Algorithms for Long Convolutions and their Mapping onto Architectures

    NARCIS (Netherlands)

    Bierens, L.; Deprettere, E.

    1998-01-01

    We present an efficient approach for the partitioning of algorithms implementing long convolutions. The dependence graph (DG) of a convolution algorithm is locally sequential globally parallel (LSGP) partitioned into smaller, less complex convolution algorithms. The LSGP partitioned DG is mapped ont

  6. General Purpose Convolution Algorithm in S4 Classes by Means of FFT

    Directory of Open Access Journals (Sweden)

    Peter Ruckdeschel

    2014-08-01

    By means of object orientation this default algorithm is overloaded by more specific algorithms where possible, in particular where explicit convolution formulae are available. Our focus is on the R package distr, which implements this approach, overloading the operator + for convolution; based on this convolution, we define a whole arithmetic of mathematical operations acting on distribution objects, comprising the operators +, -, *, /, and ^.
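
    The numeric core of such an FFT-based default convolution can be sketched in a few lines (numpy/scipy here rather than R's distr, purely for illustration): discretize the two densities on a common grid and convolve to obtain the density of the sum:

        import numpy as np
        from scipy.signal import fftconvolve

        dx = 0.01
        x = np.arange(-10.0, 10.0, dx)
        f = np.exp(-x**2 / 2.0) / np.sqrt(2.0 * np.pi)   # X ~ N(0, 1)
        g = np.where(x >= 0.0, np.exp(-x), 0.0)          # Y ~ Exp(1)

        h = fftconvolve(f, g) * dx               # density of the sum X + Y
        z = 2.0 * x[0] + dx * np.arange(len(h))  # support grid of X + Y
        print(h.sum() * dx)                      # ~1: h integrates to one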

  7. Kernel abortion in maize : I. Carbohydrate concentration patterns and Acid invertase activity of maize kernels induced to abort in vitro.

    Science.gov (United States)

    Hanft, J M; Jones, R J

    1986-06-01

    Kernels cultured in vitro were induced to abort by high temperature (35 degrees C) and by culturing six kernels/cob piece. Aborting kernels failed to enter a linear phase of dry mass accumulation and had a final mass that was less than 6% of nonaborting field-grown kernels. Kernels induced to abort by high temperature failed to synthesize starch in the endosperm and had elevated sucrose concentrations and low fructose and glucose concentrations in the pedicel during early growth compared to nonaborting kernels. Kernels induced to abort by high temperature also had much lower pedicel soluble acid invertase activities than did nonaborting kernels. These results suggest that high temperature during the lag phase of kernel growth may impair the process of sucrose unloading in the pedicel by indirectly inhibiting soluble acid invertase activity and prevent starch synthesis in the endosperm. Kernels induced to abort by culturing six kernels/cob piece had reduced pedicel fructose, glucose, and sucrose concentrations compared to kernels from field-grown ears. These aborting kernels also had a lower pedicel soluble acid invertase activity compared to nonaborting kernels from the same cob piece and from field-grown ears. The low invertase activity in pedicel tissue of the aborting kernels was probably caused by a lack of substrate (sucrose) for the invertase to cleave due to the intense competition for available assimilates. In contrast to kernels cultured at 35 degrees C, aborting kernels from cob pieces containing all six kernels accumulated starch in a linear fashion. These results indicate that kernels cultured six/cob piece abort because of an inadequate supply of sugar and are similar to apical kernels from field-grown ears that often abort prior to the onset of linear growth.

  8. Robust Fusion of Irregularly Sampled Data Using Adaptive Normalized Convolution

    NARCIS (Netherlands)

    Pham, T.Q.; Van Vliet, L.J.; Schutte, K.

    2006-01-01

    We present a novel algorithm for image fusion from irregularly sampled data. The method is based on the framework of normalized convolution (NC), in which the local signal is approximated through a projection onto a subspace. The use of polynomial basis functions in this paper makes NC equivalent to

  9. A single Chip Implementation for Fast Convolution of Long Sequences

    NARCIS (Netherlands)

    Zwartenkot, H.T.J.; Boerrigter, M.J.G.; Bierens, L.H.J.; Smit, J.

    1996-01-01

    Usually, long convolutions are computed by programmable DSP boards using long FFTs. Typical operational requirements such as minimum power dissipation, minimum volume and high dynamic range/accuracy, make this solution often inefficient and even unacceptable. In this paper we present a single chip f

  10. Unsupervised pre-training for fully convolutional neural networks

    NARCIS (Netherlands)

    Wiehman, Stiaan; Kroon, Steve; Villiers, De Hendrik

    2017-01-01

    Unsupervised pre-training of neural networks has been shown to act as a regularization technique, improving performance and reducing model variance. Recently, fully convolutional networks (FCNs) have shown state-of-the-art results on various semantic segmentation tasks. Unfortunately, there is no ef

  11. Face recognition: a convolutional neural-network approach.

    Science.gov (United States)

    Lawrence, S; Giles, C L; Tsoi, A C; Back, A D

    1997-01-01

    We present a hybrid neural-network for human face recognition which compares favourably with other methods. The system combines local image sampling, a self-organizing map (SOM) neural network, and a convolutional neural network. The SOM provides a quantization of the image samples into a topological space where inputs that are nearby in the original space are also nearby in the output space, thereby providing dimensionality reduction and invariance to minor changes in the image sample, and the convolutional neural network provides partial invariance to translation, rotation, scale, and deformation. The convolutional network extracts successively larger features in a hierarchical set of layers. We present results using the Karhunen-Loeve transform in place of the SOM, and a multilayer perceptron (MLP) in place of the convolutional network for comparison. We use a database of 400 images of 40 individuals which contains quite a high degree of variability in expression, pose, and facial details. We analyze the computational complexity and discuss how new classes could be added to the trained recognizer.

  12. CICAAR - Convolutive ICA with an Auto-Regressive Inverse Model

    DEFF Research Database (Denmark)

    Dyrholm, Mads; Hansen, Lars Kai

    2004-01-01

    We invoke an auto-regressive IIR inverse model for convolutive ICA and derive expressions for the likelihood and its gradient. We argue that optimization will give a stable inverse. When there are more sensors than sources the mixing model parameters are estimated in a second step by least squares...

  13. Real-time rendering of optical effects using spatial convolution

    Science.gov (United States)

    Rokita, Przemyslaw

    1998-03-01

    Simulation of special effects such as defocus, depth-of-field, raindrops or a water film falling on the windshield may be very useful in visual simulators and in all computer graphics applications that need realistic images of outdoor scenery. Those effects are especially important for rendering poor visibility conditions in flight and driving simulators, but can also be applied, for example, in compositing computer graphics and video sequences, i.e., in Augmented Reality systems. This paper proposes a new approach to the rendering of those optical effects by iterative adaptive filtering using spatial convolution. The advantage of this solution is that the adaptive convolution can be done in real time by existing hardware. The optical effects mentioned above can be introduced into an image computed using a conventional camera model by applying to the intensity of each pixel a convolution filter having an appropriate point spread function. The algorithms described in this paper can be easily implemented in the visualization pipeline: the final effect may be obtained by iterative filtering using a single hardware convolution filter or with a pipeline composed of identical 3 X 3 filters placed as the stages of this pipeline. Another advantage of the proposed solution is that the extension based on the proposed algorithm can be added to existing rendering systems as a final stage of the visualization pipeline.
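
    A sketch of the iterative-filtering idea: one small low-pass stage applied several times emulates a wider point spread function (the iterates of a 3x3 binomial kernel approach a Gaussian). The kernel and stage count below are illustrative assumptions:

        import numpy as np
        from scipy.ndimage import convolve

        k3 = np.array([[1.0, 2.0, 1.0],
                       [2.0, 4.0, 2.0],
                       [1.0, 2.0, 1.0]]) / 16.0   # 3x3 binomial low-pass stage

        def iterative_blur(image, stages):
            # n passes of one hardware-sized filter emulate a wider PSF
            out = image
            for _ in range(stages):
                out = convolve(out, k3, mode='nearest')
            return out

        img = np.random.rand(128, 128)
        print(iterative_blur(img, 8).shape)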

  14. Convolution operators defined by singular measures on the motion group

    CERN Document Server

    Brandolini, Luca; Thangavelu, Sundaram; Travaglini, Giancarlo

    2010-01-01

    This paper contains an $L^{p}$ improving result for convolution operators defined by singular measures associated to hypersurfaces on the motion group. This needs only mild geometric properties of the surfaces, and it extends earlier results on Radon type transforms on $\mathbb{R}^{n}$. The proof relies on the harmonic analysis on the motion group.

  15. Review of the convolution algorithm for evaluating service integrated systems

    DEFF Research Database (Denmark)

    Iversen, Villy Bæk

    1997-01-01

    In this paper we give a review of the applicability of the convolution algorithm. By this we are able to evaluate communication networks end-to-end with e.g. BPP multi-rate traffic models insensitive to the holding time distribution. Rearrangement, minimum allocation, and maximum allocation are ...

  16. Behaviour at infinity of solutions of twisted convolution equations

    Energy Technology Data Exchange (ETDEWEB)

    Volchkov, Valerii V; Volchkov, Vitaly V [Donetsk National University, Donetsk (Ukraine)

    2012-02-28

    We obtain a precise characterization of the minimal rate of growth at infinity of non-trivial solutions of twisted convolution equations in unbounded domains of $\mathbb{C}^{n}$. As an application, we obtain definitive versions of the two-radii theorem for twisted spherical means.

  17. Two-level convolution formula for nuclear structure function

    Science.gov (United States)

    Ma, Boqiang

    1990-05-01

    A two-level convolution formula for the nuclear structure function is derived by considering the nucleus as a composite system of baryons and mesons, which are themselves composite systems of quarks and gluons. The results show that the European Muon Collaboration effect cannot be explained by nuclear effects such as nucleon Fermi motion and nuclear binding contributions.

  18. Two-Dimensional Tail-Biting Convolutional Codes

    CERN Document Server

    Alfandary, Liam

    2011-01-01

    The multidimensional convolutional codes are an extension of the notion of convolutional codes (CCs) to several dimensions of time. This paper explores the class of two-dimensional convolutional codes (2D CCs) and 2D tail-biting convolutional codes (2D TBCCs), in particular, from several aspects. First, we derive several basic algebraic properties of these codes, applying algebraic methods in order to find bijective encoders, create parity check matrices and to inverse encoders. Next, we discuss the minimum distance and weight distribution properties of these codes. Extending an existing tree-search algorithm to two dimensions, we apply it to find codes with high minimum distance. Word-error probability asymptotes for sample codes are given and compared with other codes. The results of this approach suggest that 2D TBCCs can perform better than comparable 1D TBCCs or other codes. We then present several novel iterative suboptimal algorithms for soft decoding 2D CCs, which are based on belief propagation. Two ...

  19. Yetter-Drinfel‘d Module and Convolution Module

    Institute of Scientific and Technical Information of China (English)

    张良云; 王栓宏

    2002-01-01

    In this paper, we first give a sufficient and necessary condition for a Hopf algebra to be a Yetter-Drinfel'd module, and prove that the finite dual of a Yetter-Drinfel'd module is still a Yetter-Drinfel'd module. Finally, we introduce the concept of a convolution module.

  20. On the generalized Hamming weights of convolutional codes

    NARCIS (Netherlands)

    Rosenthal, J.; York, E.V.

    1995-01-01

    Motivated by applications in cryptology K. Wei introduced in 1991 the concept of a generalized Hamming weight for a linear block code. In this paper we define generalized Hamming weights for the class of convolutional codes and we derive several of their basic properties. By restricting to convoluti

  3. Effect of mixing scanner types and reconstruction kernels on the characterization of lung parenchymal pathologies: emphysema, interstitial pulmonary fibrosis and normal non-smokers

    Science.gov (United States)

    Xu, Ye; van Beek, Edwin J.; McLennan, Geoffrey; Guo, Junfeng; Sonka, Milan; Hoffman, Eric

    2006-03-01

    In this study we utilize our texture characterization software (3-D AMFM) to characterize interstitial lung diseases (including emphysema) based on MDCT-generated volumetric data using 3-dimensional texture features. We have sought to test whether the scanner and reconstruction filter (kernel) type affect the classification of lung diseases using the 3-D AMFM. We collected MDCT images in three subject groups: emphysema (n=9), interstitial pulmonary fibrosis (IPF) (n=10), and normal non-smokers (n=9). In each group, images were scanned either on a Siemens Sensation 16 or 64-slice scanner (B50f or B30 recon. kernel) or a Philips 4-slice scanner (B recon. kernel). A total of 1516 volumes of interest (VOIs; 21x21 pixels in plane) were marked by two chest imaging experts using the Iowa Pulmonary Analysis Software Suite (PASS). We calculated 24 volumetric features. Bayesian methods were used for classification. Images from different scanners/kernels were combined in all possible combinations to test how robust the tissue classification was relative to the differences in image characteristics. We used 10-fold cross validation for testing the result. Sensitivity, specificity and accuracy were calculated. One-way Analysis of Variance (ANOVA) was used to compare the classification results between the various combinations of scanner and reconstruction kernel types. This study yielded a sensitivity of 94%, 91%, 97%, and 93% for emphysema, ground-glass, honeycombing, and normal non-smoker patterns respectively using a mixture of all three subject groups. The specificity for these characterizations was 97%, 99%, 99%, and 98%, respectively. The F test of the ANOVA shows no significant difference (p < 0.05) between different combinations of data with respect to scanner and convolution kernel type. Since different MDCT and reconstruction kernel types did not show significant differences in regards to the classification result, this study suggests that the 3-D AMFM can

  4. Experimental validation of an optimized signal processing method to handle non-linearity in swept-source optical coherence tomography.

    Science.gov (United States)

    Vergnole, Sébastien; Lévesque, Daniel; Lamouche, Guy

    2010-05-10

    We evaluate various signal processing methods to handle the non-linearity in wavenumber space exhibited by most laser sources for swept-source optical coherence tomography. The following methods are compared for the same set of experimental data: non-uniform discrete Fourier transforms with a Vandermonde matrix or with a Lomb periodogram, resampling with linear or spline interpolation prior to the fast Fourier transform (FFT), and resampling with convolution prior to the FFT. By selecting an optimized Kaiser-Bessel window to perform the convolution, we show that convolution followed by the FFT is the most efficient method. It allows a small fractional oversampling factor between 1 and 2, and thus a minimal computational time, while retaining excellent image quality. (c) 2010 Optical Society of America.
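
    A sketch of convolution-based resampling with a Kaiser-Bessel kernel, in the spirit of the method compared above: each non-uniformly spaced wavenumber sample is spread onto a uniform grid with Kaiser-Bessel weights, after which a plain FFT applies. The kernel width and beta are assumptions, and a full implementation would also deapodize (divide the FFT result by the kernel's Fourier transform):

        import numpy as np

        def kb_kernel(u, width=4.0, beta=8.0):
            # Kaiser-Bessel interpolation kernel, nonzero for |u| <= width/2
            out = np.zeros_like(u, dtype=float)
            m = np.abs(u) <= width / 2.0
            out[m] = np.i0(beta * np.sqrt(1.0 - (2.0 * u[m] / width) ** 2)) / np.i0(beta)
            return out

        def kb_resample(k_src, s_src, k_uniform, width=4.0, beta=8.0):
            # spread each non-uniform sample onto the uniform wavenumber grid
            # with Kaiser-Bessel weights; normalize by the accumulated weights
            dk = k_uniform[1] - k_uniform[0]
            num = np.zeros(len(k_uniform), dtype=complex)
            den = np.zeros(len(k_uniform))
            for ki, si in zip(k_src, s_src):
                w = kb_kernel((k_uniform - ki) / dk, width, beta)
                num += w * si
                den += w
            return num / np.maximum(den, 1e-12)
        # an FFT of the resampled fringe data then yields the depth profile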

  5. Numerical integration of the extended variable generalized Langevin equation with a positive Prony representable memory kernel.

    Science.gov (United States)

    Baczewski, Andrew D; Bond, Stephen D

    2013-07-28

    Generalized Langevin dynamics (GLD) arise in the modeling of a number of systems, ranging from structured fluids that exhibit a viscoelastic mechanical response, to biological systems, and other media that exhibit anomalous diffusive phenomena. Molecular dynamics (MD) simulations that include GLD in conjunction with external and/or pairwise forces require the development of numerical integrators that are efficient, stable, and have known convergence properties. In this article, we derive a family of extended variable integrators for the Generalized Langevin equation with a positive Prony series memory kernel. Using stability and error analysis, we identify a superlative choice of parameters and implement the corresponding numerical algorithm in the LAMMPS MD software package. Salient features of the algorithm include exact conservation of the first and second moments of the equilibrium velocity distribution in some important cases, stable behavior in the limit of conventional Langevin dynamics, and the use of a convolution-free formalism that obviates the need for explicit storage of the time history of particle velocities. Capability is demonstrated with respect to accuracy in numerous canonical examples, stability in certain limits, and an exemplary application in which the effect of a harmonic confining potential is mapped onto a memory kernel.
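
    For reference, the generalized Langevin equation with a Prony-series memory kernel takes the form (conventions and prefactors vary across the literature; this is a standard way of writing it):

    $m\dot{v}(t) = F(t) - \int_{0}^{t} K(t-s)\,v(s)\,ds + R(t), \qquad K(t) = \sum_{k=1}^{N} c_{k}\, e^{-t/\tau_{k}}$

    Each exponential term can be traded for an auxiliary (extended) variable obeying memoryless stochastic dynamics, which is what yields the convolution-free formalism mentioned in the abstract.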

  7. Single pass kernel k-means clustering method

    Indian Academy of Sciences (India)

    T Hitendra Sarma; P Viswanath; B Eswara Reddy

    2013-06-01

    In unsupervised classification, the kernel k-means clustering method has been shown to perform better than the conventional k-means clustering method in identifying non-isotropic clusters in a data set. The space and time requirements of this method are $O(n^2)$, where $n$ is the data set size. Because of this quadratic time complexity, the kernel k-means method is not applicable to work with large data sets. The paper proposes a simple and faster version of the kernel k-means clustering method, called the single pass kernel k-means clustering method. The proposed method works as follows. First, a random sample $\mathcal{S}$ is selected from the data set $\mathcal{D}$. A partition $\Pi_{\mathcal{S}}$ is obtained by applying the conventional kernel k-means method on the random sample $\mathcal{S}$. The novelty of the paper is that, for each cluster in $\Pi_{\mathcal{S}}$, the exact cluster center in the input space is obtained using the gradient descent approach. Finally, each unsampled pattern is assigned to its closest exact cluster center to get a partition of the entire data set. The proposed method needs to scan the data set only once and it is much faster than the conventional kernel k-means method. The time complexity of this method is $O(s^2+t+nk)$, where $s$ is the size of the random sample $\mathcal{S}$, $k$ is the number of clusters required, and $t$ is the time taken by the gradient descent method (to find exact cluster centers). The space complexity of the method is $O(s^2)$. The proposed method can be easily implemented and is suitable for large data sets, like those in data mining applications. Experimental results show that, with a small loss of quality, the proposed method can significantly reduce the time taken compared with the conventional kernel k-means clustering method. The proposed method is also compared with other recent similar methods.
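
    The assignment step relies on the standard kernel expansion of the feature-space distance to a cluster mean. A numpy sketch of that step (the paper instead refines the means into exact input-space centers by gradient descent, which this sketch omits):

        import numpy as np

        def rbf(X, Y, gamma=1.0):
            sq = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2.0 * X @ Y.T
            return np.exp(-gamma * sq)

        def assign_to_kernel_means(X_new, sample, labels, n_clusters, gamma=1.0):
            # ||phi(x) - m_C||^2 = k(x,x) - (2/|C|) sum_i k(x, x_i)
            #                      + (1/|C|^2) sum_{i,j} k(x_i, x_j)
            # with k(x,x) = 1 for the RBF kernel
            K_new = rbf(X_new, sample, gamma)
            d = np.empty((len(X_new), n_clusters))
            for c in range(n_clusters):
                idx = np.where(labels == c)[0]
                d[:, c] = (1.0 - 2.0 * K_new[:, idx].mean(axis=1)
                           + rbf(sample[idx], sample[idx], gamma).mean())
            return d.argmin(axis=1)

        rng = np.random.default_rng(0)
        sample = rng.standard_normal((60, 2))
        labels = rng.integers(0, 3, 60)
        print(assign_to_kernel_means(rng.standard_normal((5, 2)), sample, labels, 3))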

  8. Kernel-Based Reconstruction of Graph Signals

    Science.gov (United States)

    Romero, Daniel; Ma, Meng; Giannakis, Georgios B.

    2017-02-01

    A number of applications in engineering, social sciences, physics, and biology involve inference over networks. In this context, graph signals are widely encountered as descriptors of vertex attributes or features in graph-structured data. Estimating such signals in all vertices given noisy observations of their values on a subset of vertices has been extensively analyzed in the literature of signal processing on graphs (SPoG). This paper advocates kernel regression as a framework generalizing popular SPoG modeling and reconstruction and expanding their capabilities. Formulating signal reconstruction as a regression task on reproducing kernel Hilbert spaces of graph signals permeates benefits from statistical learning, offers fresh insights, and allows for estimators to leverage richer forms of prior information than existing alternatives. A number of SPoG notions such as bandlimitedness, graph filters, and the graph Fourier transform are naturally accommodated in the kernel framework. Additionally, this paper capitalizes on the so-called representer theorem to devise simpler versions of existing Tikhonov regularized estimators, and offers a novel probabilistic interpretation of kernel methods on graphs based on graphical models. Motivated by the challenges of selecting the bandwidth parameter in SPoG estimators or the kernel map in kernel-based methods, the present paper further proposes two multi-kernel approaches with complementary strengths. Whereas the first enables estimation of the unknown bandwidth of bandlimited signals, the second allows for efficient graph filter selection. Numerical tests with synthetic as well as real data demonstrate the merits of the proposed methods relative to state-of-the-art alternatives.
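
    A minimal concrete instance of kernel regression on a graph, assuming a diffusion kernel K = expm(-beta L) built from the graph Laplacian; the kernel choice and parameters are illustrative, not the paper's prescription:

        import numpy as np
        from scipy.linalg import expm

        def graph_krr(L, obs_idx, y_obs, beta=1.0, mu=1e-2):
            # diffusion kernel on the graph; fit kernel ridge regression on
            # the observed vertices, then predict the signal on all vertices
            K = expm(-beta * L)
            Koo = K[np.ix_(obs_idx, obs_idx)]
            alpha = np.linalg.solve(Koo + mu * np.eye(len(obs_idx)), y_obs)
            return K[:, obs_idx] @ alpha

        A = np.diag(np.ones(4), 1)   # path graph on 5 vertices
        A = A + A.T
        L = np.diag(A.sum(1)) - A
        print(graph_krr(L, np.array([0, 4]), np.array([1.0, -1.0])))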

  9. A new Mercer sigmoid kernel for clinical data classification.

    Science.gov (United States)

    Carrington, André M; Fieguth, Paul W; Chen, Helen H

    2014-01-01

    In classification with Support Vector Machines, only Mercer kernels, i.e. valid kernels, such as the Gaussian RBF kernel, are widely accepted and thus suitable for clinical data. Practitioners would also like to use the sigmoid kernel, a non-Mercer kernel, but its range of validity is difficult to determine, and even within range its validity is in dispute. Despite these shortcomings the sigmoid kernel is used by some, and two kernels in the literature attempt to emulate and improve upon it. We propose the first Mercer sigmoid kernel, that is therefore trustworthy for the classification of clinical data. We show the similarity between the Mercer sigmoid kernel and the sigmoid kernel and, in the process, identify a normalization technique that improves the classification accuracy of the latter. The Mercer sigmoid kernel achieves the best mean accuracy on three clinical data sets, detecting melanoma in skin lesions better than the most popular kernels; while with non-clinical data sets it has no significant difference in median accuracy as compared with the Gaussian RBF kernel. It consistently classifies some points correctly that the Gaussian RBF kernel does not and vice versa.
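
    For context, the conventional sigmoid kernel discussed above is $k(x,y) = \tanh(a\,x^{\top}y + r)$; as the abstract notes, its range of validity as a Mercer kernel is difficult to determine and disputed even within that range, which is the gap a sigmoid-shaped kernel that is Mercer by construction closes.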

  10. Pattern Classification of Signals Using Fisher Kernels

    Directory of Open Access Journals (Sweden)

    Yashodhan Athavale

    2012-01-01

    The intention of this study is to gauge the performance of Fisher kernels for dimension simplification and classification of time-series signals. Our research indicates that Fisher kernels yield substantial improvement in signal classification by enabling clearer pattern visualization in three-dimensional space. In this paper, we exhibit the performance of Fisher kernels for two domains: financial and biomedical. The financial domain study involves identifying the possibility of collapse or survival of a company trading in the stock market. For assessing the fate of each company, we collected financial time-series composed of weekly closing stock prices in a common time frame, using Thomson Datastream software. The biomedical domain study involves knee signals collected using the vibration arthrometry technique. This study uses the severity of cartilage degeneration for classifying normal and abnormal knee joints. In both studies, we apply Fisher kernels incorporated with a Gaussian mixture model (GMM) for dimension transformation into a feature space, which is created as a three-dimensional plot for visualization and for further classification using support vector machines. From our experiments we observe that Fisher kernels fit really well for both kinds of signals, with low classification error rates.

  11. Analog forecasting with dynamics-adapted kernels

    Science.gov (United States)

    Zhao, Zhizhen; Giannakis, Dimitrios

    2016-09-01

    Analog forecasting is a nonparametric technique introduced by Lorenz in 1969 which predicts the evolution of states of a dynamical system (or observables defined on the states) by following the evolution of the sample in a historical record of observations which most closely resembles the current initial data. Here, we introduce a suite of forecasting methods which improve traditional analog forecasting by combining ideas from kernel methods developed in harmonic analysis and machine learning and state-space reconstruction for dynamical systems. A key ingredient of our approach is to replace single-analog forecasting with weighted ensembles of analogs constructed using local similarity kernels. The kernels used here employ a number of dynamics-dependent features designed to improve forecast skill, including Takens’ delay-coordinate maps (to recover information in the initial data lost through partial observations) and a directional dependence on the dynamical vector field generating the data. Mathematically, our approach is closely related to kernel methods for out-of-sample extension of functions, and we discuss alternative strategies based on the Nyström method and the multiscale Laplacian pyramids technique. We illustrate these techniques in applications to forecasting in a low-order deterministic model for atmospheric dynamics with chaotic metastability, and interannual-scale forecasting in the North Pacific sector of a comprehensive climate model. We find that forecasts based on kernel-weighted ensembles have significantly higher skill than the conventional approach following a single analog.
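
    A minimal sketch of the weighted-ensemble analog step: Gaussian kernel weights on (possibly delay-embedded) initial states give a kernel-weighted average of historical outcomes; the embedding and bandwidth choices below are placeholders for the dynamics-adapted kernels of the paper:

        import numpy as np

        def kernel_analog_forecast(history, targets, x0, epsilon=1.0):
            # Gaussian similarity kernel on initial states; the forecast is
            # the kernel-weighted average of the historical outcomes
            d2 = ((history - x0) ** 2).sum(axis=1)
            w = np.exp(-d2 / epsilon)
            w /= w.sum()
            return w @ targets

        rng = np.random.default_rng(0)
        history = rng.standard_normal((200, 3))   # e.g., delay-embedded states
        targets = history[:, 0] ** 2              # observable at the lead time
        print(kernel_analog_forecast(history, targets, np.zeros(3)))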

  12. Simultaneous Tumor Segmentation, Image Restoration, and Blur Kernel Estimation in PET Using Multiple Regularizations.

    Science.gov (United States)

    Li, Laquan; Wang, Jian; Lu, Wei; Tan, Shan

    2017-02-01

    Accurate tumor segmentation from PET images is crucial in many radiation oncology applications. Among others, the partial volume effect (PVE) is recognized as one of the most important factors degrading imaging quality and segmentation accuracy in PET. Taking into account that image restoration and tumor segmentation are tightly coupled and can promote each other, we proposed a variational method to solve both problems simultaneously in this study. The proposed method integrated total variation (TV) semi-blind deconvolution and Mumford-Shah segmentation with multiple regularizations. Unlike many existing energy minimization methods using either TV or L2 regularization, the proposed method employed TV regularization over tumor edges to preserve edge information, and L2 regularization inside tumor regions to preserve the smooth change of the metabolic uptake in a PET image. The blur kernel was modeled as an anisotropic Gaussian to address the resolution difference in the transverse and axial directions commonly seen in a clinical PET scanner. The energy functional was rephrased using the Γ-convergence approximation and was iteratively optimized using the alternating minimization (AM) algorithm. The performance of the proposed method was validated on a physical phantom and two clinical datasets with non-Hodgkin's lymphoma and esophageal cancer, respectively. Experimental results demonstrated that the proposed method had high performance for simultaneous image restoration, tumor segmentation and scanner blur kernel estimation. Particularly, the recovery coefficients (RC) of the restored images of the proposed method in the phantom study were close to 1, indicating an efficient recovery of the original blurred images; for segmentation the proposed method achieved average dice similarity indexes (DSIs) of 0.79 and 0.80 for the two clinical datasets, respectively; and the relative errors of the estimated blur kernel widths were less than 19% in the transversal direction and 7% in the axial

  13. An Implementation of Error Minimization Data Transmission in OFDM using Modified Convolutional Code

    Directory of Open Access Journals (Sweden)

    Hendy Briantoro

    2016-04-01

    Full Text Available This paper presents error minimization in an OFDM system. A conventional system usually uses channel coding such as a BCH code or a convolutional code, but the performance of these codes is poor in an implemented OFDM system. The bit error rate of the OFDM system without channel coding is 5.77%; a convolutional code with code rate 1/2 reduces the error bits only to 3.85%. We therefore propose an OFDM system with a Modified Convolutional Code. In this implementation, we used Software Defined Radio (SDR), namely the Universal Software Radio Peripheral (USRP) NI 2920, as the transmitter and receiver. The OFDM system using the Modified Convolutional Code is able to recover all received characters, reducing the error bits to 0%. The performance improvement of the Modified Convolutional Code is about 1 dB at a BER of 10^-4 over the BCH code and the convolutional code. Thus, the Modified Convolutional Code performs better than the BCH code or the conventional convolutional code. Keywords: OFDM, BCH Code, Convolutional Code, Modified Convolutional Code, SDR, USRP
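
    For readers unfamiliar with convolutional coding, the sketch below implements a standard rate-1/2 convolutional encoder (the common (7,5) octal generator pair, constraint length 3). It illustrates the baseline code the paper modifies, not the proposed Modified Convolutional Code itself:

```python
def conv_encode(bits, g1=0o7, g2=0o5, K=3):
    """Rate-1/2 convolutional encoder: two parity bits per input bit."""
    state, out = 0, []
    for b in bits:
        state = ((state << 1) | b) & ((1 << K) - 1)  # shift the new bit in
        out.append(bin(state & g1).count("1") % 2)   # parity from generator g1
        out.append(bin(state & g2).count("1") % 2)   # parity from generator g2
    return out

print(conv_encode([1, 0, 1, 1]))  # -> [1, 1, 1, 0, 0, 0, 0, 1]
```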

  14. Object classification and detection with context kernel descriptors

    DEFF Research Database (Denmark)

    Pan, Hong; Olsen, Søren Ingvor; Zhu, Yaping

    2014-01-01

    Context information is important in object representation. By embedding context cue of image attributes into kernel descriptors, we propose a set of novel kernel descriptors called Context Kernel Descriptors (CKD) for object classification and detection. The motivation of CKD is to use spatial...... consistency of image attributes or features defined within a neighboring region to improve the robustness of descriptor matching in kernel space. For feature selection, Kernel Entropy Component Analysis (KECA) is exploited to learn a subset of discriminative CKD. Different from Kernel Principal Component...

  15. OS X and iOS Kernel Programming

    CERN Document Server

    Halvorsen, Ole Henry

    2011-01-01

    OS X and iOS Kernel Programming combines essential operating system and kernel architecture knowledge with a highly practical approach that will help you write effective kernel-level code. You'll learn fundamental concepts such as memory management and thread synchronization, as well as the I/O Kit framework. You'll also learn how to write your own kernel-level extensions, such as device drivers for USB and Thunderbolt devices, including networking, storage and audio drivers. OS X and iOS Kernel Programming provides an incisive and complete introduction to the XNU kernel, which runs iPhones, i

  16. The scalar field kernel in cosmological spaces

    Energy Technology Data Exchange (ETDEWEB)

    Koksma, Jurjen F; Prokopec, Tomislav [Institute for Theoretical Physics (ITP) and Spinoza Institute, Utrecht University, Postbus 80195, 3508 TD Utrecht (Netherlands); Rigopoulos, Gerasimos I [Helsinki Institute of Physics, University of Helsinki, PO Box 64, FIN-00014 (Finland)], E-mail: J.F.Koksma@phys.uu.nl, E-mail: T.Prokopec@phys.uu.nl, E-mail: gerasimos.rigopoulos@helsinki.fi

    2008-06-21

    We construct the quantum-mechanical evolution operator in the functional Schroedinger picture, the kernel, for a scalar field in spatially homogeneous FLRW spacetimes when the field is (a) free and (b) coupled to a spacetime-dependent source term. The essential element in the construction is the causal propagator, linked to the commutator of two Heisenberg picture scalar fields. We show that the kernels can be expressed solely in terms of the causal propagator and derivatives of the causal propagator. Furthermore, we show that our kernel reveals the standard light cone structure in FLRW spacetimes. We finally apply the result to Minkowski spacetime and to de Sitter spacetime, and we calculate the forward time evolution of the vacuum in a general FLRW spacetime.

  17. Robust Visual Tracking via Fuzzy Kernel Representation

    Directory of Open Access Journals (Sweden)

    Zhiqiang Wen

    2013-05-01

    Full Text Available A robust visual kernel tracking approach is presented for solving the problem of background pixels present in the object model. First, after a definition of a fuzzy set on the image is given, a fuzzy factor is embedded into the object model to form the fuzzy kernel representation. Second, fuzzy membership functions are generated by a center-surround approach and the log likelihood ratio of feature distributions. Third, details of the fuzzy kernel tracking algorithm are provided. After that, methods of parameter selection and performance evaluation for the tracking algorithm are proposed. Finally, extensive experiments show that our method can reduce the influence of an incomplete representation of the object model by integrating both color features and background features.

  18. Fractal Weyl law for Linux Kernel architecture

    Science.gov (United States)

    Ermann, L.; Chepelianskii, A. D.; Shepelyansky, D. L.

    2011-01-01

    We study the properties of the spectrum and eigenstates of the Google matrix of a directed network formed by the procedure calls in the Linux Kernel. Our results, obtained for various versions of the Linux Kernel, show that the spectrum is characterized by the fractal Weyl law established recently for systems of quantum chaotic scattering and the Perron-Frobenius operators of dynamical maps. The fractal Weyl exponent is found to be ν ≈ 0.65, which corresponds to the fractal dimension of the network d ≈ 1.3. An independent computation of the fractal dimension by the cluster growing method, generalized for directed networks, gives a close value d ≈ 1.4. The eigenmodes of the Google matrix of the Linux Kernel are localized on certain principal nodes. We argue that the fractal Weyl law should be generic for directed networks with fractal dimension d < 2.

  19. Optoacoustic inversion via Volterra kernel reconstruction

    CERN Document Server

    Melchert, O; Roth, B

    2016-01-01

    In this letter we address the numeric inversion of optoacoustic signals to initial stress profiles. To this end, we scrutinize the optoacoustic kernel reconstruction problem in the paraxial approximation of the underlying wave equation. We apply a Fourier-series expansion of the optoacoustic Volterra kernel and obtain the respective expansion coefficients for a given "apparative" setup by performing a gauge procedure using synthetic input data. The resulting effective kernel is subsequently used to solve the optoacoustic source reconstruction problem for general signals. We verify the validity of the proposed inversion protocol for synthetic signals and explore the feasibility of our approach to also account for the diffraction transformation of signals beyond the paraxial approximation.

  20. Tile-Compressed FITS Kernel for IRAF

    Science.gov (United States)

    Seaman, R.

    2011-07-01

    The Flexible Image Transport System (FITS) is a ubiquitously supported standard of the astronomical community. Similarly, the Image Reduction and Analysis Facility (IRAF), developed by the National Optical Astronomy Observatory, is a widely used astronomical data reduction package. IRAF supplies compatibility with FITS format data through numerous tools and interfaces. The most integrated of these is IRAF's FITS image kernel that provides access to FITS from any IRAF task that uses the basic IMIO interface. The original FITS kernel is a complex interface of purpose-built procedures that presents growing maintenance issues and lacks recent FITS innovations. A new FITS kernel is being developed at NOAO that is layered on the CFITSIO library from the NASA Goddard Space Flight Center. The simplified interface will minimize maintenance headaches as well as add important new features such as support for the FITS tile-compressed (fpack) format.

  1. A kernel-based approach for biomedical named entity recognition.

    Science.gov (United States)

    Patra, Rakesh; Saha, Sujan Kumar

    2013-01-01

    Support vector machine (SVM) is one of the popular machine learning techniques used in various text processing tasks, including named entity recognition (NER). The performance of the SVM classifier largely depends on the appropriateness of the kernel function. In the last few years a number of task-specific kernel functions have been proposed and used in various text processing tasks, for example, the string kernel, graph kernel, and tree kernel. So far very few efforts have been devoted to the development of an NER-specific kernel. In the literature we found that the tree kernel has been used in the NER task only for entity boundary detection or reannotation. The conventional tree kernel is unable to execute the complete NER task on its own. In this paper we propose a kernel function, motivated by the tree kernel, which is able to perform the complete NER task. To examine the effectiveness of the proposed kernel, we applied the kernel function to the openly available JNLPBA 2004 data. Our kernel executes the complete NER task and achieves reasonable accuracy.

  2. Full Waveform Inversion Using Waveform Sensitivity Kernels

    Science.gov (United States)

    Schumacher, Florian; Friederich, Wolfgang

    2013-04-01

    We present a full waveform inversion concept for applications ranging from seismological to engineering contexts, in which the steps of forward simulation, computation of sensitivity kernels, and the actual inversion are kept separate from each other. We derive waveform sensitivity kernels from Born scattering theory, which for unit material perturbations are identical to the Born integrand for the considered path between source and receiver. The evaluation of such a kernel requires the calculation of Green functions and their strains for single forces at the receiver position, as well as displacement fields and strains originating at the seismic source. We compute these quantities in the frequency domain using the 3D spectral element code SPECFEM3D (Tromp, Komatitsch and Liu, 2008) and the 1D semi-analytical code GEMINI (Friederich and Dalkolmo, 1995) in both Cartesian and spherical frameworks. We developed and implemented the modularized software package ASKI (Analysis of Sensitivity and Kernel Inversion) to compute waveform sensitivity kernels from wavefields generated by any of the above methods (support for more methods is planned), and some examples will be shown. As the kernels can be computed independently of any data values, this approach allows a sensitivity and resolution analysis to be done first without inverting any data. In the context of active seismic experiments, this property may be used to investigate optimal acquisition geometry and expectable resolution before actually collecting any data, assuming the background model is known sufficiently well. The actual inversion step can then be repeated at relatively low cost with different (sub)sets of data, adding different smoothing conditions. Using the sensitivity kernels, we expect the waveform inversion to have better convergence properties compared with strategies that use gradients of a misfit function. Also the propagation of the forward wavefield and the backward propagation from the receiver

  3. Inverse of the String Theory KLT Kernel

    CERN Document Server

    Mizera, Sebastian

    2016-01-01

    The field theory Kawai-Lewellen-Tye (KLT) kernel, which relates scattering amplitudes of gravitons and gluons, turns out to be the inverse of a matrix whose components are bi-adjoint scalar partial amplitudes. In this note we propose an analogous construction for the string theory KLT kernel. We present simple diagrammatic rules for the computation of the $\\alpha'$-corrected bi-adjoint scalar amplitudes that are exact in $\\alpha'$. We find compact expressions in terms of graphs, where the standard Feynman propagators $1/p^2$ are replaced by either $1/\\sin (\\pi \\alpha' p^2)$ or $1/\\tan (\\pi \\alpha' p^2)$, which is determined by a recursive procedure.

  4. Reduced multiple empirical kernel learning machine.

    Science.gov (United States)

    Wang, Zhe; Lu, MingZhe; Gao, Daqi

    2015-02-01

    Multiple kernel learning (MKL) is demonstrated to be flexible and effective in depicting heterogeneous data sources since MKL can introduce multiple kernels rather than a single fixed kernel into applications. However, MKL incurs high time and space complexity in contrast to single kernel learning, which is not desirable in real-world applications. Meanwhile, the kernel mappings of MKL generally take two forms, implicit kernel mapping and empirical kernel mapping (EKM), where the latter has attracted less attention. In this paper, we focus on MKL with the EKM, and propose a reduced multiple empirical kernel learning machine, named RMEKLM for short. To the best of our knowledge, this is the first work to reduce both the time and space complexity of MKL with EKM. Different from existing MKL, the proposed RMEKLM adopts the Gauss Elimination technique to extract a set of feature vectors, and it is validated that doing so does not lose much information of the original feature space. RMEKLM then uses the extracted feature vectors to span a reduced orthonormal subspace of the feature space, which is visualized in terms of its geometric structure. It can be demonstrated that the spanned subspace is isomorphic to the original feature space, which means that the dot product of two vectors in the original feature space is equal to that of the two corresponding vectors in the generated orthonormal subspace. More importantly, the proposed RMEKLM brings simpler computation and meanwhile needs less storage space, especially in the processing of testing. Finally, the experimental results show that RMEKLM has an efficient and effective performance in terms of both complexity and classification. The contributions of this paper can be given as follows: (1) by mapping the input space into an orthonormal subspace, the geometry of the generated subspace is visualized; (2) this paper first reduces both the time and space complexity of the EKM-based MKL; (3

  5. Volatile compound formation during argan kernel roasting.

    Science.gov (United States)

    El Monfalouti, Hanae; Charrouf, Zoubida; Giordano, Manuela; Guillaume, Dominique; Kartah, Badreddine; Harhar, Hicham; Gharby, Saïd; Denhez, Clément; Zeppa, Giuseppe

    2013-01-01

    Virgin edible argan oil is prepared by cold-pressing argan kernels previously roasted at 110 degrees C for up to 25 minutes. The concentration of 40 volatile compounds in virgin edible argan oil was determined as a function of argan kernel roasting time. Most of the volatile compounds begin to be formed after 15 to 25 minutes of roasting. This suggests that a strictly controlled roasting time should allow the modulation of argan oil taste and thus satisfy different types of consumers. This could be of major importance considering the present booming use of edible argan oil.

  6. Learning Rates for -Regularized Kernel Classifiers

    Directory of Open Access Journals (Sweden)

    Hongzhi Tong

    2013-01-01

    Full Text Available We consider a family of classification algorithms generated from a regularization kernel scheme associated with -regularizer and convex loss function. Our main purpose is to provide an explicit convergence rate for the excess misclassification error of the produced classifiers. The error decomposition includes approximation error, hypothesis error, and sample error. We apply some novel techniques to estimate the hypothesis error and sample error. Learning rates are eventually derived under some assumptions on the kernel, the input space, the marginal distribution, and the approximation error.

  7. Face Recognition Using Kernel Discriminant Analysis

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Linear Discriminant Analysis (LDA) has demonstrated its success in face recognition, but LDA has difficulty handling highly nonlinear problems, such as large changes in viewpoint and illumination. In order to overcome these problems, we investigate Kernel Discriminant Analysis (KDA) for face recognition. This approach adopts kernel functions to replace the dot products of the nonlinear mapping in the high dimensional feature space, so that the nonlinear problem can be solved in the input space conveniently without an explicit mapping. Two face databases are used to test the KDA approach. The results show that our approach outperforms the conventional PCA (Eigenface) and LDA (Fisherface) approaches.
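
    The kernel substitution mentioned above can be stated in a few lines of code: instead of mapping points nonlinearly and taking dot products, one evaluates a kernel function directly. A minimal sketch with the common RBF kernel (the bandwidth here is arbitrary):

```python
import numpy as np

def rbf_gram(X, Y, gamma=1.0):
    """K[i, j] = exp(-gamma * ||x_i - y_j||^2), i.e. the dot product
    phi(x_i) . phi(y_j) of the nonlinear mappings, computed without
    ever forming phi explicitly."""
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * d2)
```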

  8. Image Super-Resolution Using Deep Convolutional Networks.

    Science.gov (United States)

    Dong, Chao; Loy, Chen Change; He, Kaiming; Tang, Xiaoou

    2016-02-01

    We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low/high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage. We explore different network structures and parameter settings to achieve trade-offs between performance and speed. Moreover, we extend our network to cope with three color channels simultaneously, and show better overall reconstruction quality.
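
    A minimal PyTorch sketch of the three-layer architecture described above (the 9-1-5 filter sizes and 64/32 channel widths follow the commonly cited configuration; treat them as assumptions rather than the paper's exact settings):

```python
import torch.nn as nn

class SRCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=9, padding=4),  # patch extraction/representation
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=1),            # nonlinear mapping
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, kernel_size=5, padding=2),  # reconstruction
        )

    def forward(self, x):  # x: bicubic-upscaled low-resolution image
        return self.net(x)
```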

  9. The analysis of VERITAS muon images using convolutional neural networks

    CERN Document Server

    Feng, Qi

    2016-01-01

    Imaging atmospheric Cherenkov telescopes (IACTs) are sensitive to rare gamma-ray photons, buried in the background of charged cosmic-ray (CR) particles, the flux of which is several orders of magnitude greater. The ability to separate gamma rays from CR particles is important, as it is directly related to the sensitivity of the instrument. This gamma-ray/CR-particle classification problem in IACT data analysis can be treated with the rapidly-advancing machine learning algorithms, which have the potential to outperform the traditional box-cut methods on image parameters. We present preliminary results of a precise classification of a small set of muon events using a convolutional neural networks model with the raw images as input features. We also show the possibility of using the convolutional neural networks model for regression problems, such as the radius and brightness measurement of muon events, which can be used to calibrate the throughput efficiency of IACTs.

  10. Convolution theorems for the linear canonical transform and their applications

    Institute of Scientific and Technical Information of China (English)

    DENG Bing; TAO Ran; WANG Yue

    2006-01-01

    As generalization of the fractional Fourier transform (FRFT), the linear canonical transform (LCT) has been used in several areas, including optics and signal processing. Many properties for this transform are already known, but the convolution theorems, similar to the version of the Fourier transform, are still to be determined. In this paper, the authors derive the convolution theorems for the LCT, and explore the sampling theorem and multiplicative filter for the band limited signal in the linear canonical domain. Finally, the sampling and reconstruction formulas are deduced, together with the construction methodology for the above mentioned multiplicative filter in the time domain based on fast Fourier transform (FFT), which has much lower computational load than the construction method in the linear canonical domain.
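
    For reference, the LCT with parameter matrix $(a, b; c, d)$, $ad - bc = 1$ and $b \neq 0$, is commonly written as

    $$L^{(a,b,c,d)}\{f\}(u) = \sqrt{\frac{1}{j2\pi b}}\; e^{\,j\frac{d}{2b}u^{2}} \int_{-\infty}^{\infty} f(t)\, e^{\,j\frac{a}{2b}t^{2}}\, e^{-j\frac{ut}{b}}\, dt,$$

    which reduces (up to a constant factor) to the Fourier transform for $(a,b;c,d) = (0,1;-1,0)$. The convolution theorems of the paper relate multiplication in such a linear canonical domain to a suitably chirp-modulated convolution in the time domain.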

  11. Star-galaxy Classification Using Deep Convolutional Neural Networks

    CERN Document Server

    Kim, Edward J

    2016-01-01

    Most existing star-galaxy classifiers use the reduced summary information from catalogs, requiring careful feature extraction and selection. The latest advances in machine learning that use deep convolutional neural networks allow a machine to automatically learn the features directly from data, minimizing the need for input from human experts. We present a star-galaxy classification framework that uses deep convolutional neural networks (ConvNets) directly on the reduced, calibrated pixel values. Using data from the Sloan Digital Sky Survey (SDSS) and the Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS), we demonstrate that ConvNets are able to produce accurate and well-calibrated probabilistic classifications that are competitive with conventional machine learning techniques. Future advances in deep learning may bring more success with current and forthcoming photometric surveys, such as the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope (LSST), because deep neural networks require...

  12. The analysis of VERITAS muon images using convolutional neural networks

    Science.gov (United States)

    Feng, Qi; Lin, Tony T. Y.; VERITAS Collaboration

    2017-06-01

    Imaging atmospheric Cherenkov telescopes (IACTs) are sensitive to rare gamma-ray photons, buried in the background of charged cosmic-ray (CR) particles, the flux of which is several orders of magnitude greater. The ability to separate gamma rays from CR particles is important, as it is directly related to the sensitivity of the instrument. This gamma-ray/CR-particle classification problem in IACT data analysis can be treated with the rapidly-advancing machine learning algorithms, which have the potential to outperform the traditional box-cut methods on image parameters. We present preliminary results of a precise classification of a small set of muon events using a convolutional neural networks model with the raw images as input features. We also show the possibility of using the convolutional neural networks model for regression problems, such as the radius and brightness measurement of muon events, which can be used to calibrate the throughput efficiency of IACTs.

  13. a Convolutional Network for Semantic Facade Segmentation and Interpretation

    Science.gov (United States)

    Schmitz, Matthias; Mayer, Helmut

    2016-06-01

    In this paper we present an approach for semantic interpretation of facade images based on a Convolutional Network. Our network processes the input images in a fully convolutional way and generates pixel-wise predictions. We show that there is no need for large datasets to train the network when transfer learning is employed, i.e., a part of an already existing network is used and fine-tuned, and when the available data is augmented by using deformed patches of the images for training. The network is trained end-to-end with patches of the images and each patch is augmented independently. To undo the downsampling for the classification, we add deconvolutional layers to the network. Outputs of different layers of the network are combined to achieve more precise pixel-wise predictions. We demonstrate the potential of our network based on results for the eTRIMS (Korč and Förstner, 2009) dataset reduced to facades.

  14. Self-Taught convolutional neural networks for short text clustering.

    Science.gov (United States)

    Xu, Jiaming; Xu, Bo; Wang, Peng; Zheng, Suncong; Tian, Guanhua; Zhao, Jun; Xu, Bo

    2017-01-12

    Short text clustering is a challenging problem due to the sparseness of its text representation. Here we propose a flexible Self-Taught Convolutional neural network framework for Short Text Clustering (dubbed STC(2)), which can flexibly and successfully incorporate more useful semantic features and learn non-biased deep text representations in an unsupervised manner. In our framework, the original raw text features are first embedded into compact binary codes using an existing unsupervised dimensionality reduction method. Then, word embeddings are explored and fed into convolutional neural networks to learn deep feature representations, while the output units are used to fit the pre-trained binary codes in the training process. Finally, we obtain the optimal clusters by employing K-means to cluster the learned representations. Extensive experimental results demonstrate that the proposed framework is effective and flexible, and outperforms several popular clustering methods when tested on three public short text datasets.

  15. Trajectory Generation Method with Convolution Operation on Velocity Profile

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Geon [Hanyang Univ., Seoul (Korea, Republic of); Kim, Doik [Korea Institute of Science and Technology, Daejeon (Korea, Republic of)

    2014-03-15

    The use of robots is no longer limited to the field of industrial robots and is now expanding into the fields of service and medical robots. In this light, a trajectory generation method that can respond instantaneously to the external environment is strongly required. Toward this end, this study proposes a method that enables a robot to change its trajectory in real time using a convolution operation. The proposed method generates a trajectory in real time and satisfies the physical limits of the robot system, such as velocity and acceleration limits. Moreover, a new way to improve the previous method, which generates inefficient trajectories in some cases owing to the trapezoidal shape of its trajectories, is proposed by introducing a triangular shape. The validity and effectiveness of the proposed method are shown through a numerical simulation and a comparison with the previous convolution method.
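
    A minimal numerical sketch of the basic operation (illustrative, not the authors' implementation): convolving a commanded velocity profile with a unit-area rectangular kernel smooths it while preserving the travelled distance, turning a step command into a trapezoidal profile.

```python
import numpy as np

def convolve_profile(v, m):
    """Convolve a sampled velocity profile with a length-m box filter.
    The unit-area kernel preserves the integral (distance travelled)
    while bounding the slope (acceleration) of the result."""
    return np.convolve(v, np.ones(m) / m)  # output is m - 1 samples longer

v_cmd = np.full(100, 1.0)             # step velocity command
v_traj = convolve_profile(v_cmd, 25)  # trapezoid: 25-sample ramps up and down
```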

  16. On New Bijective Convolution Operator Act for Analytic Functions

    Directory of Open Access Journals (Sweden)

    Oqlah Al-Refai

    2009-01-01

    Full Text Available Problem statement: We introduced a new bijective convolution linear operator defined on the class of normalized analytic functions. This operator was motivated by the work of many researchers, namely Srivastava, Owa, Ruscheweyh and others, and is essential for obtaining new classes of analytic functions. Approach: The simple technique of Ruscheweyh was used in our preliminary approach to create the new bijective convolution linear operator. The preliminary concept of Hadamard products was mentioned and the concept of subordination was used to give sharp proofs for certain sufficient conditions of the linear operator. In fact, the subordinating factor sequence was used to derive different types of subordination results. Results: Having the linear operator, subordination theorems were established by using the standard concept of subordination. The results reduce to well-known results studied by various researchers. Coefficient bounds and inclusion properties, growth and closure theorems for some subclasses were also obtained. Conclusion: Therefore, many interesting results could be obtained and some applications could be gathered.

  17. Fibonacci Sequence, Recurrence Relations, Discrete Probability Distributions and Linear Convolution

    CERN Document Server

    Rajan, Arulalan; Rao, Ashok; Jamadagni, H S

    2012-01-01

    The classical Fibonacci sequence is known to exhibit many fascinating properties. In this paper, we explore the Fibonacci sequence and integer sequences generated by second order linear recurrence relations with positive integer coefficients from the point of view of the probability distributions that they induce. We obtain generalizations of some of the known limiting properties of these probability distributions and present certain optimal properties of the classical Fibonacci sequence in this context. In addition, we also look at the self linear convolution of linear recurrence relations with positive integer coefficients. Analysis of self linear convolution is focused towards locating the maximum in the resulting sequence. This analysis also highlights the influence that the largest positive real root of the "characteristic equation" of the linear recurrence relations with positive integer coefficients has on the location of the maximum. In particular, when the largest positive real root is 2, the locatio...

  18. 3D Medical Image Interpolation Based on Parametric Cubic Convolution

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    In the display, manipulation, and analysis of biomedical image data, the data usually need to be converted to an isotropic discretization through interpolation, and cubic convolution interpolation is widely used due to its good tradeoff between computational cost and accuracy. In this paper, we present a whole concept for 3D medical image interpolation based on cubic convolution, and six methods, with different sharpness control parameters, are formulated in detail. Furthermore, we give an objective comparison of these methods using data sets with different slice spacings. Each slice in these data sets is estimated by each interpolation method and compared with the original slice using three measures: mean-squared difference, number of sites of disagreement, and largest difference. Based on the experimental results, we conclude with a recommendation for 3D medical images under different situations.
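
    The parametric kernel underlying such methods is, in the usual Keys form, shown below as a sketch; `a` plays the role of the sharpness control parameter (a = -0.5 is the common default):

```python
import numpy as np

def cubic_kernel(x, a=-0.5):
    """Parametric cubic convolution kernel (Keys' form)."""
    x = np.abs(np.asarray(x, dtype=float))
    out = np.zeros_like(x)
    near = x <= 1
    far = (x > 1) & (x < 2)
    out[near] = (a + 2) * x[near]**3 - (a + 3) * x[near]**2 + 1
    out[far] = a * (x[far]**3 - 5 * x[far]**2 + 8 * x[far] - 4)
    return out  # zero outside |x| < 2
```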

  19. Infimal Convolution Regularisation Functionals of BV and Lp Spaces

    KAUST Repository

    Burger, Martin

    2016-02-03

    We study a general class of infimal convolution type regularisation functionals suitable for applications in image processing. These functionals incorporate a combination of the total variation seminorm and Lp norms. A unified well-posedness analysis is presented and a detailed study of the one-dimensional model is performed, by computing exact solutions for the corresponding denoising problem and the case p=2. Furthermore, the dependency of the regularisation properties of this infimal convolution approach on the choice of p is studied. It turns out that in the case p=2 this regulariser is equivalent to the Huber-type variant of total variation regularisation. We provide numerical examples for image decomposition as well as for image denoising. We show that our model is capable of eliminating the staircasing effect, a well-known disadvantage of total variation regularisation. Moreover, as p increases we obtain almost piecewise affine reconstructions, leading also to a better preservation of hat-like structures.
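
    For orientation, the infimal convolution of two functionals $f$ and $g$ is defined as

    $$(f \,\Box\, g)(u) = \inf_{v}\, \{\, f(v) + g(u - v) \,\},$$

    so a TV-$L^p$ infimal convolution regulariser optimally splits an image into a component penalised by the total variation seminorm and a component penalised in $L^p$.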

  20. Kernel methods and minimum contrast estimators for empirical deconvolution

    CERN Document Server

    Delaigle, Aurore

    2010-01-01

    We survey classical kernel methods for providing nonparametric solutions to problems involving measurement error. In particular we outline kernel-based methodology in this setting, and discuss its basic properties. Then we point to close connections that exist between kernel methods and much newer approaches based on minimum contrast techniques. The connections are through use of the sinc kernel for kernel-based inference. This `infinite order' kernel is not often used explicitly for kernel-based deconvolution, although it has received attention in more conventional problems where measurement error is not an issue. We show that in a comparison between kernel methods for density deconvolution, and their counterparts based on minimum contrast, the two approaches give identical results on a grid which becomes increasingly fine as the bandwidth decreases. In consequence, the main numerical differences between these two techniques are arguably the result of different approaches to choosing smoothing parameters.
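
    For context, the standard deconvolution kernel density estimator referred to here takes the form

    $$\hat f(x) = \frac{1}{nh}\sum_{j=1}^{n} K_U\!\left(\frac{x - W_j}{h}\right), \qquad K_U(u) = \frac{1}{2\pi}\int e^{-itu}\, \frac{\varphi_K(t)}{\varphi_\varepsilon(t/h)}\, dt,$$

    where the $W_j = X_j + \varepsilon_j$ are the error-contaminated observations, $\varphi_K$ is the Fourier transform of the kernel, and $\varphi_\varepsilon$ is the characteristic function of the measurement error; the sinc kernel is the special case $\varphi_K = \mathbf{1}_{[-1,1]}$.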

  1. Kernel methods in orthogonalization of multi- and hypervariate data

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2009-01-01

    A kernel version of maximum autocorrelation factor (MAF) analysis is described very briefly and applied to change detection in remotely sensed hyperspectral image (HyMap) data. The kernel version is based on a dual formulation also termed Q-mode analysis in which the data enter into the analysis...... via inner products in the Gram matrix only. In the kernel version the inner products are replaced by inner products between nonlinear mappings into higher dimensional feature space of the original data. Via kernel substitution also known as the kernel trick these inner products between the mappings...... are in turn replaced by a kernel function and all quantities needed in the analysis are expressed in terms of this kernel function. This means that we need not know the nonlinear mappings explicitly. Kernel PCA and MAF analysis handle nonlinearities by implicitly transforming data into high (even infinite...

  2. Variable kernel density estimation in high-dimensional feature spaces

    CSIR Research Space (South Africa)

    Van der Walt, Christiaan M

    2017-02-01

    Full Text Available Estimating the joint probability density function of a dataset is a central task in many machine learning applications. In this work we address the fundamental problem of kernel bandwidth estimation for variable kernel density estimation in high...

  3. HEAT KERNEL AND HARDY'S THEOREM FOR JACOBI TRANSFORM

    Institute of Scientific and Technical Information of China (English)

    T. KAWAZOE; LIU JIANMING(刘建明)

    2003-01-01

    In this paper, the authors obtain sharp upper and lower bounds for the heat kernel associated with the Jacobi transform, and get some analogues of Hardy's Theorem for the Jacobi transform by using the sharp estimate of the heat kernel.

  4. Protein secondary structure prediction using deep convolutional neural fields

    OpenAIRE

    Sheng Wang; Jian Peng; Jianzhu Ma; Jinbo Xu

    2015-01-01

    Protein secondary structure (SS) prediction is important for studying protein structure and function. When only the sequence (profile) information is used as input feature, currently the best predictors can obtain ~80% Q3 accuracy, which has not been improved in the past decade. Here we present DeepCNF (Deep Convolutional Neural Fields) for protein SS prediction. DeepCNF is a Deep Learning extension of Conditional Neural Fields (CNF), which is an integration of Conditional Random Fields (CRF)...

  5. Interleaved Convolutional Code and Its Viterbi Decoder Architecture

    OpenAIRE

    2003-01-01

    We propose an area-efficient high-speed interleaved Viterbi decoder architecture, which is based on the state-parallel architecture with a register exchange path memory structure, for interleaved convolutional codes. The state-parallel architecture uses as many add-compare-select (ACS) units as the number of trellis states. By replacing each delay (or storage) element in the state metrics memory (or path metrics memory) and the path memory (or survival memory) with delays, the interleaved Viterbi decoder ...

  6. Design of SVD/SGK Convolution Filters for Image Processing

    Science.gov (United States)

    1980-01-01

    of filters by transforming one-dimensional linear phase filters into two-dimensional linear phase filters. By assuming that the prototype filter is a ... linear phase filter, his algorithm transforms a one-dimensional filter h(u) into a two-dimensional filter W(u,v) by means of a transformation given by ... The significance of their implementation of the designed filter is that a large two-dimensional convolution ... (A linear phase filter implies symmetry of the filter.)

  7. Efficient Convolutional Neural Network with Binary Quantization Layer

    OpenAIRE

    Ravanbakhsh, Mahdyar; Mousavi, Hossein; Nabi, Moin; Marcenaro, Lucio; Regazzoni, Carlo

    2016-01-01

    In this paper we introduce a novel method for segmentation that can benefit from the general semantics of a Convolutional Neural Network (CNN). Our segmentation proposes visually and semantically coherent image segments. We use binary encoding of CNN features to overcome the difficulty of clustering in the high-dimensional CNN feature space. This binary encoding can be embedded into the CNN as an extra layer at the end of the network, which results in real-time segmentation. To the best of our ...

  8. Fusing Deep Convolutional Networks for Large Scale Visual Concept Classification

    OpenAIRE

    Ergun, Hilal; Sert, Mustafa

    2016-01-01

    Deep learning architectures are showing great promise in various computer vision domains including image classification, object detection, event detection and action recognition. In this study, we investigate various aspects of convolutional neural networks (CNNs) from the big data perspective. We analyze recent studies and different network architectures both in terms of running time and accuracy. We present extensive empirical information along with best practices for big data practitioners...

  9. An obstruction for q-deformation of the convolution product

    CERN Document Server

    van Leeuwen, Hans; Maassen, Hans

    1995-01-01

    We consider two independent q-Gaussian random variables X and Y and a function f chosen in such a way that f(X) and X have the same distribution. For 0 < q < 1 we find that at least the fourth moments of X + Y and f(X) + Y are different. We conclude that no q-deformed convolution product can exist for functions of independent q-Gaussian random variables.

  10. Contour Detection Using Cost-Sensitive Convolutional Neural Networks

    OpenAIRE

    Hwang, Jyh-Jing; Liu, Tyng-Luh

    2014-01-01

    We address the problem of contour detection via per-pixel classification of edge points. To facilitate the process, the proposed approach leverages DenseNet, an efficient implementation of multiscale convolutional neural networks (CNNs), to extract an informative feature vector for each pixel, and uses an SVM classifier to accomplish contour detection. The main challenge lies in adapting a pre-trained per-image CNN model for yielding per-pixel image features. We propose to base on the Den...

  11. Analysis of maize ( Zea mays ) kernel density and volume using microcomputed tomography and single-kernel near-infrared spectroscopy.

    Science.gov (United States)

    Gustin, Jeffery L; Jackson, Sean; Williams, Chekeria; Patel, Anokhee; Armstrong, Paul; Peter, Gary F; Settles, A Mark

    2013-11-20

    Maize kernel density affects milling quality of the grain. Kernel density of bulk samples can be predicted by near-infrared reflectance (NIR) spectroscopy, but no accurate method to measure individual kernel density has been reported. This study demonstrates that individual kernel density and volume are accurately measured using X-ray microcomputed tomography (μCT). Kernel density was significantly correlated with kernel volume, air space within the kernel, and protein content. Embryo density and volume did not influence overall kernel density. Partial least-squares (PLS) regression of μCT traits with single-kernel NIR spectra gave stable predictive models for kernel density (R² = 0.78, SEP = 0.034 g/cm³) and volume (R² = 0.86, SEP = 2.88 cm³). Density and volume predictions were accurate for data collected over 10 months based on kernel weights calculated from predicted density and volume (R² = 0.83, SEP = 24.78 mg). Kernel density was significantly correlated with bulk test weight (r = 0.80), suggesting that selection of dense kernels can translate to improved agronomic performance.

  12. Nuclear norm regularized convolutional Max Pos@Top machine

    KAUST Repository

    Li, Qinfeng

    2016-11-18

    In this paper, we propose a novel classification model for multiple instance data, which aims to maximize the number of positive instances ranked before the top-ranked negative instances. This method targets a recently emerged performance measure named Pos@Top. Our proposed classification model has a convolutional structure composed of four layers, i.e., the convolutional layer, the activation layer, the max-pooling layer, and the full connection layer. We propose an algorithm to learn the convolutional filters and the full connection weights to maximize the Pos@Top measure over the training set. Also, we try to minimize the rank of the filter matrix to explore the low-dimensional space of the instances in conjunction with the classification results. The rank minimization is conducted via nuclear norm minimization of the filter matrix. In addition, we develop an iterative algorithm to solve the corresponding optimization problem. We test our method on several benchmark datasets; the experimental results show the superiority of our method compared with other state-of-the-art Pos@Top maximization methods.

  13. Fine-grained representation learning in convolutional autoencoders

    Science.gov (United States)

    Luo, Chang; Wang, Jie

    2016-03-01

    Convolutional autoencoders (CAEs) have been widely used as unsupervised feature extractors for high-resolution images. As a key component in CAEs, pooling is a biologically inspired operation to achieve scale and shift invariances, and the pooled representation directly affects the CAEs' performance. Fine-grained pooling, which uses small and dense pooling regions, encodes fine-grained visual cues and enhances local characteristics. However, it tends to be sensitive to spatial rearrangements. In most previous works, pooled features were obtained by empirically modulating parameters in CAEs. We see the CAE as a whole and propose a fine-grained representation learning law to extract better fine-grained features. This representation learning law suggests two directions for improvement. First, we probabilistically evaluate the discrimination-invariance tradeoff with fine-grained granularity in the pooled feature maps, and suggest the proper filter scale in the convolutional layer and appropriate whitening parameters in preprocessing step. Second, pooling approaches are combined with the sparsity degree in pooling regions, and we propose the preferable pooling approach. Experimental results on two independent benchmark datasets demonstrate that our representation learning law could guide CAEs to extract better fine-grained features and performs better in multiclass classification task. This paper also provides guidance for selecting appropriate parameters to obtain better fine-grained representation in other convolutional neural networks.

  14. Deep Convolutional Neural Network for Inverse Problems in Imaging

    Science.gov (United States)

    Jin, Kyong Hwan; McCann, Michael T.; Froustey, Emmanuel; Unser, Michael

    2017-09-01

    In this paper, we propose a novel deep convolutional neural network (CNN)-based algorithm for solving ill-posed inverse problems. Regularized iterative algorithms have emerged as the standard approach to ill-posed inverse problems in the past few decades. These methods produce excellent results, but can be challenging to deploy in practice due to factors including the high computational cost of the forward and adjoint operators and the difficulty of hyper parameter selection. The starting point of our work is the observation that unrolled iterative methods have the form of a CNN (filtering followed by point-wise non-linearity) when the normal operator (H*H, the adjoint of H times H) of the forward model is a convolution. Based on this observation, we propose using direct inversion followed by a CNN to solve normal-convolutional inverse problems. The direct inversion encapsulates the physical model of the system, but leads to artifacts when the problem is ill-posed; the CNN combines multiresolution decomposition and residual learning in order to learn to remove these artifacts while preserving image structure. We demonstrate the performance of the proposed network in sparse-view reconstruction (down to 50 views) on parallel beam X-ray computed tomography in synthetic phantoms as well as in real experimental sinograms. The proposed network outperforms total variation-regularized iterative reconstruction for the more realistic phantoms and requires less than a second to reconstruct a 512 x 512 image on GPU.
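
    The two-stage structure described above can be sketched as follows (an illustrative stand-in, not the paper's network, which combines a multiresolution decomposition with residual learning):

```python
import torch.nn as nn

class ResidualArtifactRemover(nn.Module):
    """Small residual CNN applied after a direct inversion such as FBP:
    the network predicts an artifact correction that is added back to
    the directly inverted image."""
    def __init__(self, ch=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 1, 3, padding=1),
        )

    def forward(self, fbp_image):
        return fbp_image + self.body(fbp_image)  # residual learning
```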

  15. A model of traffic signs recognition with convolutional neural network

    Science.gov (United States)

    Hu, Haihe; Li, Yujian; Zhang, Ting; Huo, Yi; Kuang, Wenqing

    2016-10-01

    In real traffic scenes, the quality of captured images is generally low due to factors such as lighting conditions and occlusion. All of these factors are challenging for automated recognition algorithms for traffic signs. Deep learning has recently provided a new way to solve this kind of problem: a deep network can automatically learn features from a large number of data samples and obtain an excellent recognition performance. We therefore approach the task of recognizing traffic signs as a general vision problem, with few assumptions related to road signs. We propose a model of a Convolutional Neural Network (CNN) and apply the model to the task of traffic sign recognition. The proposed model adopts a deep CNN as the supervised learning model, directly takes the collected traffic sign images as the input, alternates convolutional and subsampling layers, and automatically extracts the features for the recognition of the traffic sign images. The proposed model includes an input layer, three convolutional layers, three subsampling layers, a fully-connected layer, and an output layer. To validate the proposed model, experiments are implemented using the public dataset of the China competition of fuzzy image processing. Experimental results show that the proposed model produces a recognition accuracy of 99.01% on the training dataset and achieves 92% accuracy in the preliminary contest, ranking fourth.
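
    A sketch mirroring the described layout (three convolution + subsampling stages, one fully-connected layer, and an output layer); the channel widths, input size, and class count are illustrative assumptions, not values from the paper:

```python
import torch.nn as nn

model = nn.Sequential(                                 # assumes 32x32 RGB inputs
    nn.Conv2d(3, 16, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(64 * 4 * 4, 128), nn.ReLU(),             # fully-connected layer
    nn.Linear(128, 43),                                # e.g. 43 sign classes
)
```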

  16. Kernel Partial Least Squares for Nonlinear Regression and Discrimination

    Science.gov (United States)

    Rosipal, Roman; Clancy, Daniel (Technical Monitor)

    2002-01-01

    This paper summarizes recent results on applying the method of partial least squares (PLS) in a reproducing kernel Hilbert space (RKHS). A previously proposed kernel PLS regression model was proven to be competitive with other regularized regression methods in RKHS. The family of nonlinear kernel-based PLS models is extended by considering the kernel PLS method for discrimination. Theoretical and experimental results on a two-class discrimination problem indicate usefulness of the method.

  17. Mitigation of artifacts in rtm with migration kernel decomposition

    KAUST Repository

    Zhan, Ge

    2012-01-01

    The migration kernel for reverse-time migration (RTM) can be decomposed into four component kernels using Born scattering and migration theory, each with a unique physical interpretation. In this paper, we present a generalized diffraction-stack migration approach for reducing RTM artifacts via decomposition of the migration kernel. The decomposition leads to an improved understanding of migration artifacts and therefore presents us with opportunities for improving the quality of RTM images.

  18. Sparse Event Modeling with Hierarchical Bayesian Kernel Methods

    Science.gov (United States)

    2016-01-05

    ... the kernel function, which depends on the application and the model user. This research uses the most popular kernel function, the radial basis ... an important role in the nation's economy. Unfortunately, the system's reliability is declining due to the aging components of the network [Grier ... kernel function. Gaussian Bayesian kernel models became very popular recently and were extended and applied to a number of classification problems. An

  19. Calculation of the absorbed dose distribution due to irregularly shaped photon beams using pencil beam kernels derived from basic beam data

    Science.gov (United States)

    Storchi, Pascal; Woudstra, Evert

    1996-04-01

    In radiotherapy, accurately calculated dose distributions of irregularly shaped photon beams are needed. In this paper, an algorithm is presented which enables the calculation of dose distributions due to irregular fields using pencil beam kernels derived from simple basic beam data usually measured on treatment units, i.e. central-axis depth-dose curves and profiles. The only extra data that are needed, and are not currently measured, is the phantom scatter factor curve at the reference depth. The algorithm has been developed as an extension to a previously developed algorithm for rectangular fields which is based on the Milan-Bentley storage model. In the case of an irregular field, the depth dose and the boundary function are computed by convolution of a field intensity function with pencil beam kernels. The depth dose is computed by using a 'scatter' kernel, which is derived from the stored depth-dose curves and from the phantom scatter factor curve. The boundary function is computed by using a 'boundary' kernel, which is derived from the boundary profile of a number of large square fields. Because of the simplicity of the data used and the underlying concepts, which for instance do not separate the head scatter from the primary beam, this algorithm presents some shortcomings. On the other hand, this simplicity is also of great advantage and the inaccuracy is acceptable for most clinical situations.
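
    The algorithm's central operation at a given depth is a 2D convolution of the field intensity (fluence) function with the pencil beam kernel. A minimal sketch (hypothetical function name, assuming SciPy's FFT-based convolution):

```python
import numpy as np
from scipy.signal import fftconvolve

def dose_at_depth(fluence, scatter_kernel):
    """Dose contribution at one depth: convolution of the field intensity
    map with the depth-specific pencil beam 'scatter' kernel."""
    return fftconvolve(fluence, scatter_kernel, mode="same")
```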

  20. An Extended Ockham Algebra with Endomorphism Kernel Property

    Institute of Scientific and Technical Information of China (English)

    Jie FANG

    2007-01-01

    An algebraic structure A is said to have the endomorphism kernel property if every congruence on A, other than the universal congruence, is the kernel of an endomorphism on A. In this paper, we consider the EKP (that is, the endomorphism kernel property) for an extended Ockham algebra A. In particular, we describe the structure of the finite symmetric extended de Morgan algebras having EKP.

  1. End-use quality of soft kernel durum wheat

    Science.gov (United States)

    Kernel texture is a major determinant of end-use quality of wheat. Durum wheat has very hard kernels. We developed soft kernel durum wheat via Ph1b-mediated homoeologous recombination. The Hardness locus was transferred from Chinese Spring to Svevo durum wheat via back-crossing. ‘Soft Svevo’ had SKC...

  2. 7 CFR 981.61 - Redetermination of kernel weight.

    Science.gov (United States)

    2010-01-01

    7 CFR § 981.61 (Almonds Grown in California; Order Regulating Handling; Volume Regulation), Redetermination of kernel weight: The Board, on the basis of reports by handlers, shall redetermine the kernel weight of...

  3. Multiple spectral kernel learning and a Gaussian complexity computation.

    Science.gov (United States)

    Reyhani, Nima

    2013-07-01

    Multiple kernel learning (MKL) partially solves the kernel selection problem in support vector machines and similar classifiers by minimizing the empirical risk over a subset of the linear combinations of given kernel matrices. For large sample sets, the size of the kernel matrices becomes a numerical issue. In many cases, the kernel matrix is of low effective rank. However, the low-rank property is not efficiently utilized in MKL algorithms. Here, we suggest multiple spectral kernel learning, which efficiently uses the low-rank property by finding a kernel matrix from a set of Gram matrices of a few eigenvectors from all given kernel matrices, called a spectral kernel set. We provide a new bound for the Gaussian complexity of the proposed kernel set, which depends on both the geometry of the kernel set and the number of Gram matrices. This characterization of the complexity implies that in an MKL setting, adding more kernels may not monotonically increase the complexity, while previous bounds suggest otherwise.
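
    The construction of the spectral kernel set can be sketched in a few lines (illustrative; taking `r` top eigenvectors per kernel is an assumed parameterisation):

```python
import numpy as np

def spectral_kernel_set(kernel_matrices, r):
    """Rank-one Gram matrices built from the top-r eigenvectors of each
    given kernel matrix; MKL then combines members of this set."""
    grams = []
    for K in kernel_matrices:
        _, V = np.linalg.eigh(K)          # eigenvalues in ascending order
        for i in range(1, r + 1):
            v = V[:, -i]                  # one of the top-r eigenvectors
            grams.append(np.outer(v, v))  # its rank-one Gram matrix
    return grams
```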

  4. A Fast and Simple Graph Kernel for RDF

    NARCIS (Netherlands)

    de Vries, G.K.D.; de Rooij, S.

    2013-01-01

    In this paper we study a graph kernel for RDF based on constructing a tree for each instance and counting the number of paths in that tree. In our experiments this kernel shows comparable classification performance to the previously introduced intersection subtree kernel, but is significantly faster.

  5. 7 CFR 981.60 - Determination of kernel weight.

    Science.gov (United States)

    2010-01-01

    7 CFR § 981.60 (Almonds Grown in California; Order Regulating Handling; Volume Regulation), Determination of kernel weight: (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which...

  6. 21 CFR 176.350 - Tamarind seed kernel powder.

    Science.gov (United States)

    2010-04-01

    21 CFR § 176.350 (Substances for Use Only as Components of Paper and Paperboard), Tamarind seed kernel powder: Tamarind seed kernel powder may be safely used as a component of articles intended for use in...

  7. Heat kernel analysis for Bessel operators on symmetric cones

    DEFF Research Database (Denmark)

    Möllers, Jan

    2014-01-01

    ... The heat kernel is explicitly given in terms of a multivariable $I$-Bessel function on $Ω$. Its corresponding heat kernel transform defines a continuous linear operator between $L^p$-spaces. The unitary image of the $L^2$-space under the heat kernel transform is characterized as a weighted Bergman space...

  8. Stable Kernel Representations as Nonlinear Left Coprime Factorizations

    NARCIS (Netherlands)

    Paice, A.D.B.; Schaft, A.J. van der

    1994-01-01

    A representation of nonlinear systems based on the idea of representing the input-output pairs of the system as elements of the kernel of a stable operator has been recently introduced. This has been denoted the kernel representation of the system. In this paper it is demonstrated that the kernel

  9. Kernel Temporal Differences for Neural Decoding

    Directory of Open Access Journals (Sweden)

    Jihye Bae

    2015-01-01

    Full Text Available We study the feasibility and capability of the kernel temporal difference (KTD(λ)) algorithm for neural decoding. KTD(λ) is an online, kernel-based learning algorithm, which has been introduced to estimate value functions in reinforcement learning. This algorithm combines kernel-based representations with the temporal difference approach to learning. One of our key observations is that by using strictly positive definite kernels, the algorithm's convergence can be guaranteed for policy evaluation. The algorithm's nonlinear functional approximation capabilities are shown in both simulations of policy evaluation and neural decoding problems (policy improvement). KTD can handle high-dimensional neural states containing spatial-temporal information at a reasonable computational complexity, allowing real-time applications. When the algorithm seeks a proper mapping between a monkey's neural states and desired positions of a computer cursor or a robot arm, in both open-loop and closed-loop experiments, it can effectively learn the neural state to action mapping. Finally, a visualization of the coadaptation process between the decoder and the subject shows the algorithm's capabilities in reinforcement learning brain machine interfaces.
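
    The flavour of the algorithm can be conveyed with a toy KTD(0)-style sketch (a simplification: no eligibility traces or sparsification, and every visited state becomes a kernel centre):

```python
import numpy as np

class KernelTD:
    def __init__(self, eta=0.1, gamma=0.9, eps=1.0):
        self.eta, self.gamma, self.eps = eta, gamma, eps
        self.centers, self.coeffs = [], []

    def kernel(self, x, y):  # a strictly positive definite (Gaussian) kernel
        return np.exp(-np.sum((np.asarray(x) - np.asarray(y)) ** 2) / self.eps)

    def value(self, x):      # value estimate as a kernel expansion
        return sum(a * self.kernel(c, x) for c, a in zip(self.centers, self.coeffs))

    def update(self, x, reward, x_next):
        td_error = reward + self.gamma * self.value(x_next) - self.value(x)
        self.centers.append(x)                    # grow the representation
        self.coeffs.append(self.eta * td_error)   # TD-scaled coefficient
```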

  10. Bergman kernel and complex singularity exponent

    Institute of Scientific and Technical Information of China (English)

    LEE; HanJin

    2009-01-01

    We give a precise estimate of the Bergman kernel for the model domain defined by $\Omega_F = \{(z,w) \in \mathbb{C}^{n+1} : \operatorname{Im} w - |F(z)|^2 > 0\}$, where $F = (f_1,\dots,f_m)$ is a holomorphic map from $\mathbb{C}^n$ to $\mathbb{C}^m$, in terms of the complex singularity exponent of $F$.

  11. Kernel based subspace projection of hyperspectral images

    DEFF Research Database (Denmark)

    Larsen, Rasmus; Nielsen, Allan Aasbjerg; Arngren, Morten

    In hyperspectral image analysis an exploratory approach to analyse the image data is to conduct subspace projections. As linear projections often fail to capture the underlying structure of the data, we present kernel based subspace projections of PCA and Maximum Autocorrelation Factors (MAF...

  12. Analytic properties of the Virasoro modular kernel

    CERN Document Server

    Nemkov, Nikita

    2016-01-01

    On the space of generic conformal blocks the modular transformation of the underlying surface is realized as a linear integral transformation. We show that the analytic properties of conformal block implied by Zamolodchikov's formula are shared by the kernel of the modular transformation and illustrate this by explicit computation in the case of the one-point toric conformal block.

  13. A Cubic Kernel for Feedback Vertex Set

    NARCIS (Netherlands)

    Bodlaender, H.L.

    2006-01-01

    The FEEDBACK VERTEX SET problem on unweighted, undirected graphs is considered. Improving upon a result by Burrage et al. [7], we show that this problem has a kernel with O(κ³) vertices, i.e., there is a polynomial-time algorithm that, given a graph G and an integer κ, finds a graph G' and integer

  14. Analytic properties of the Virasoro modular kernel

    Energy Technology Data Exchange (ETDEWEB)

    Nemkov, Nikita [Moscow Institute of Physics and Technology (MIPT), Dolgoprudny (Russian Federation); Institute for Theoretical and Experimental Physics (ITEP), Moscow (Russian Federation); National University of Science and Technology MISIS, The Laboratory of Superconducting metamaterials, Moscow (Russian Federation)

    2017-06-15

    On the space of generic conformal blocks the modular transformation of the underlying surface is realized as a linear integral transformation. We show that the analytic properties of conformal block implied by Zamolodchikov's formula are shared by the kernel of the modular transformation and illustrate this by explicit computation in the case of the one-point toric conformal block. (orig.)

  15. Hyperbolic L2-modules with Reproducing Kernels

    Institute of Scientific and Technical Information of China (English)

    David EELBODE; Frank SOMMEN

    2006-01-01

    In this paper, the Dirac operator on the Klein model for the hyperbolic space is considered. A function space containing $L^2$-functions on the sphere $S^{m-1}$ in $\mathbb{R}^m$, which are boundary values of solutions for this operator, is defined, and it is proved that this gives rise to a Hilbert module with a reproducing kernel.

  16. Protein Structure Prediction Using String Kernels

    Science.gov (United States)

    2006-03-03

    ...Prediction using String Kernels... The dataset consists of 4352 sequences from SCOP version 1.53 extracted from the Astral database, grouped into families and superfamilies. The dataset is processed

  17. Bergman kernel and complex singularity exponent

    Institute of Scientific and Technical Information of China (English)

    CHEN BoYong; LEE HanJin

    2009-01-01

    We give a precise estimate of the Bergman kernel for the model domain defined by $Ω_F = \{(z,w) ∈ \mathbb{C}^{n+1} : \operatorname{Im} w - |F(z)|^2 > 0\}$, where $F = (f_1,\ldots,f_m)$ is a holomorphic map from $\mathbb{C}^n$ to $\mathbb{C}^m$, in terms of the complex singularity exponent of $F$.

  18. Symbol recognition with kernel density matching.

    Science.gov (United States)

    Zhang, Wan; Wenyin, Liu; Zhang, Kun

    2006-12-01

    We propose a novel approach to similarity assessment for graphic symbols. Symbols are represented as 2D kernel densities and their similarity is measured by the Kullback-Leibler divergence. Symbol orientation is found by gradient-based angle searching or independent component analysis. Experimental results show the outstanding performance of this approach in various situations.
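
    To illustrate the matching step, the sketch below represents two toy symbols as 2D kernel densities evaluated on a grid and scores their similarity with the Kullback-Leibler divergence. The grid size, bandwidth and synthetic point sets are illustrative assumptions, and the orientation search mentioned in the abstract is omitted.

    ```python
    import numpy as np

    def kde2d(points, grid, bandwidth=0.05):
        """Gaussian kernel density of a 2D point set on a flattened grid (G, 2)."""
        d2 = ((grid[:, None, :] - points[None, :, :]) ** 2).sum(-1)
        dens = np.exp(-d2 / (2 * bandwidth ** 2)).sum(1)
        return dens / dens.sum()          # normalise to a discrete distribution

    def kl_divergence(p, q, eps=1e-12):
        """Discrete Kullback-Leibler divergence D(p || q)."""
        p, q = p + eps, q + eps
        return float(np.sum(p * np.log(p / q)))

    rng = np.random.default_rng(0)
    sym_a = rng.uniform(0, 1, (200, 2))                 # one symbol's points
    sym_b = sym_a + rng.normal(0, 0.01, sym_a.shape)    # a perturbed copy

    xs, ys = np.meshgrid(np.linspace(0, 1, 32), np.linspace(0, 1, 32))
    grid = np.stack([xs.ravel(), ys.ravel()], axis=1)

    print(kl_divergence(kde2d(sym_a, grid), kde2d(sym_b, grid)))  # small = similar
    ```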

  19. Developing Linux kernel space device driver

    Institute of Scientific and Technical Information of China (English)

    Zheng Wei; Wang Qinruo; Wu Naiyou

    2003-01-01

    This thesis describes in detail how to develop kernel-level device drivers on the Linux platform. After comparing the proc file system with the dev file system, we take PCI and USB devices as examples to introduce the method of writing device drivers for character devices using these two file systems.

  20. Heat Kernel Renormalization on Manifolds with Boundary

    OpenAIRE

    Albert, Benjamin I.

    2016-01-01

    In the monograph Renormalization and Effective Field Theory, Costello gave an inductive position space renormalization procedure for constructing an effective field theory that is based on heat kernel regularization of the propagator. In this paper, we extend Costello's renormalization procedure to a class of manifolds with boundary. In addition, we reorganize the presentation of the preexisting material, filling in details and strengthening the results.

  1. Covariant derivative expansion of the heat kernel

    Energy Technology Data Exchange (ETDEWEB)

    Salcedo, L.L. [Universidad de Granada, Departamento de Fisica Moderna, Granada (Spain)

    2004-11-01

    Using the technique of labeled operators, compact explicit expressions are given for all traced heat kernel coefficients containing zero, two, four and six covariant derivatives, and for diagonal coefficients with zero, two and four derivatives. The results apply to boundaryless flat space-times and arbitrary non-Abelian scalar and gauge background fields. (orig.)

  2. From prompt gamma distribution to dose: a novel approach combining an evolutionary algorithm and filtering based on Gaussian-powerlaw convolutions

    Science.gov (United States)

    Schumann, A.; Priegnitz, M.; Schoene, S.; Enghardt, W.; Rohling, H.; Fiedler, F.

    2016-10-01

    Range verification and dose monitoring in proton therapy is considered as highly desirable. Different methods have been developed worldwide, like particle therapy positron emission tomography (PT-PET) and prompt gamma imaging (PGI). In general, these methods allow for a verification of the proton range. However, quantification of the dose from these measurements remains challenging. For the first time, we present an approach for estimating the dose from prompt γ-ray emission profiles. It combines a filtering procedure based on Gaussian-powerlaw convolution with an evolutionary algorithm. By means of convolving depth dose profiles with an appropriate filter kernel, prompt γ-ray depth profiles are obtained. In order to reverse this step, the evolutionary algorithm is applied. The feasibility of this approach is demonstrated for a spread-out Bragg-peak in a water target.
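
    The forward model described here, from depth dose to prompt γ-ray profile, is a 1D convolution, sketched below on a toy Bragg-peak-like profile. The kernel used is a Gaussian-smoothed one-sided power law with placeholder shape parameters, standing in for the paper's fitted Gaussian-powerlaw kernel, and the evolutionary inversion step is not shown.

    ```python
    import numpy as np

    # toy depth-dose profile: entrance plateau plus a Bragg-peak-like bump
    depth = np.linspace(0.0, 150.0, 300)                 # depth in mm
    dose = 0.3 * (depth < 102) + np.exp(-(depth - 100.0) ** 2 / (2 * 3.0 ** 2))

    def filter_kernel(x, sigma=3.0, alpha=1.5):
        """Illustrative stand-in kernel: one-sided power-law tail smoothed
        by a Gaussian; sigma and alpha are placeholders, not fitted values."""
        tail = np.zeros_like(x)
        tail[x >= 0] = (x[x >= 0] + 1.0) ** (-alpha)
        gauss = np.exp(-x ** 2 / (2 * sigma ** 2))
        k = np.convolve(tail, gauss, mode="same")
        return k / k.sum()

    kernel = filter_kernel(np.linspace(-20.0, 20.0, 81))
    prompt_gamma = np.convolve(dose, kernel, mode="same")   # forward model only;
    # the paper recovers `dose` from such a profile with an evolutionary algorithm
    ```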

  3. A Kernel Approach to Multi-Task Learning with Task-Specific Kernels

    Institute of Scientific and Technical Information of China (English)

    Wei Wu; Hang Li; Yun-Hua Hu; Rong Jin

    2012-01-01

    Several kernel-based methods for multi-task learning have been proposed, which leverage relations among tasks as regularization to enhance the overall learning accuracies. These methods assume that the tasks share the same kernel, which could limit their applications because in practice different tasks may need different kernels. The main challenge of introducing multiple kernels into multiple tasks is that models from different reproducing kernel Hilbert spaces (RKHSs) are not comparable, making it difficult to exploit relations among tasks. This paper addresses the challenge by formalizing the problem in the square integrable space (SIS). Specifically, it proposes a kernel-based method which makes use of a regularization term defined in SIS to represent task relations. We prove a new representer theorem for the proposed approach in SIS. We further derive a practical method for solving the learning problem and conduct consistency analysis of the method. We discuss the relationship between our method and an existing method. We also give an SVM (support vector machine)-based implementation of our method for multi-label classification. Experiments on an artificial example and two real-world datasets show that the proposed method performs better than the existing method.

  4. Kernel based orthogonalization for change detection in hyperspectral images

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    Kernel versions of principal component analysis (PCA) and minimum noise fraction (MNF) analysis are applied to change detection in hyperspectral image (HyMap) data. The kernel versions are based on so-called Q-mode analysis in which the data enter into the analysis via inner products in the Gram...... the kernel function and then performing a linear analysis in that space. An example shows the successful application of (kernel PCA and) kernel MNF analysis to change detection in HyMap data covering a small agricultural area near Lake Waging-Taching, Bavaria, in Southern Germany. In the change detection...

  5. Geodesic exponential kernels: When Curvature and Linearity Conflict

    DEFF Research Database (Denmark)

    Feragen, Aase; Lauze, François; Hauberg, Søren

    2015-01-01

    We consider kernel methods on general geodesic metric spaces and provide both negative and positive results. First we show that the common Gaussian kernel can only be generalized to a positive definite kernel on a geodesic metric space if the space is flat. As a result, for data on a Riemannian...... Laplacian kernel can be generalized while retaining positive definiteness. This implies that geodesic Laplacian kernels can be generalized to some curved spaces, including spheres and hyperbolic spaces. Our theoretical results are verified empirically....

  6. The pre-image problem in kernel methods.

    Science.gov (United States)

    Kwok, James Tin-yau; Tsang, Ivor Wai-hung

    2004-11-01

    In this paper, we address the problem of finding the pre-image of a feature vector in the feature space induced by a kernel. This is of central importance in some kernel applications, such as on using kernel principal component analysis (PCA) for image denoising. Unlike the traditional method which relies on nonlinear optimization, our proposed method directly finds the location of the pre-image based on distance constraints in the feature space. It is noniterative, involves only linear algebra and does not suffer from numerical instability or local minimum problems. Evaluations on performing kernel PCA and kernel clustering on the USPS data set show much improved performance.

  7. Generalization Performance of Regularized Ranking With Multiscale Kernels.

    Science.gov (United States)

    Zhou, Yicong; Chen, Hong; Lan, Rushi; Pan, Zhibin

    2016-05-01

    The regularized kernel method for the ranking problem has attracted increasing attention in machine learning. Previous regularized ranking algorithms are usually based on reproducing kernel Hilbert spaces with a single kernel. In this paper, we go beyond this framework by investigating the generalization performance of regularized ranking with multiscale kernels. A novel ranking algorithm with multiscale kernels is proposed and its representer theorem is proved. We establish the upper bound of the generalization error in terms of the complexity of hypothesis spaces. It shows that the multiscale ranking algorithm can achieve satisfactory learning rates under mild conditions. Experiments demonstrate the effectiveness of the proposed method for drug discovery and recommendation tasks.

  8. Correction for scatter and septal penetration using convolution subtraction methods and model-based compensation in ¹²³I brain SPECT imaging - a Monte Carlo study

    Energy Technology Data Exchange (ETDEWEB)

    Larsson, Anne [Department of Radiation Sciences, Radiation Physics, Umeaa University, SE-901 87 Umeaa (Sweden); Ljungberg, Michael [Medical Radiation Physics, Department of Clinical Sciences, Lund, Lund University, SE-221 85 Lund (Sweden); Mo, Susanna Jakobson [Department of Radiation Sciences, Diagnostic Radiology, Umeaa University, SE-901 87 Umeaa (Sweden); Riklund, Katrine [Department of Radiation Sciences, Diagnostic Radiology, Umeaa University, SE-901 87 Umeaa (Sweden); Johansson, Lennart [Department of Radiation Sciences, Radiation Physics, Umeaa University, SE-901 87 Umeaa (Sweden)

    2006-11-21

    Scatter and septal penetration deteriorate contrast and quantitative accuracy in single photon emission computed tomography (SPECT). In this study four different correction techniques for scatter and septal penetration are evaluated for ¹²³I brain SPECT. One of the methods is a form of model-based compensation which uses the effective source scatter estimation (ESSE) for modelling scatter, and collimator-detector response (CDR) including both geometric and penetration components. The other methods, which operate on the 2D projection images, are convolution scatter subtraction (CSS) and two versions of transmission dependent convolution subtraction (TDCS), one of them proposed by us. This method uses CSS for correction for septal penetration, with a separate kernel, and TDCS for scatter correction. The corrections are evaluated for a dopamine transporter (DAT) study and a study of the regional cerebral blood flow (rCBF), performed with ¹²³I. The images are produced using a recently developed Monte Carlo collimator routine added to the program SIMIND which can include interactions in the collimator. The results show that the method included in the iterative reconstruction is preferable to the other methods and that the new TDCS version gives better results compared with the other 2D methods.
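
    As a rough illustration of the simplest of these 2D methods, the sketch below applies convolution scatter subtraction (CSS) to a projection image. A Gaussian stands in for a measured scatter kernel, and the scatter fraction, kernel width and iteration count are illustrative values rather than the paper's fitted parameters; the transmission-dependent weighting of TDCS is not included.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def convolution_scatter_subtraction(projection, k=0.3, sigma_px=8.0, iters=3):
        """Estimate scatter as the current primary image convolved with a broad
        kernel, scaled by a scatter fraction k, and subtract it iteratively."""
        primary = projection.astype(float)
        for _ in range(iters):
            scatter = k * gaussian_filter(primary, sigma_px)
            primary = np.clip(projection - scatter, 0.0, None)
        return primary
    ```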

  9. Correction for scatter and septal penetration using convolution subtraction methods and model-based compensation in 123I brain SPECT imaging-a Monte Carlo study.

    Science.gov (United States)

    Larsson, Anne; Ljungberg, Michael; Mo, Susanna Jakobson; Riklund, Katrine; Johansson, Lennart

    2006-11-21

    Scatter and septal penetration deteriorate contrast and quantitative accuracy in single photon emission computed tomography (SPECT). In this study four different correction techniques for scatter and septal penetration are evaluated for 123I brain SPECT. One of the methods is a form of model-based compensation which uses the effective source scatter estimation (ESSE) for modelling scatter, and collimator-detector response (CDR) including both geometric and penetration components. The other methods, which operate on the 2D projection images, are convolution scatter subtraction (CSS) and two versions of transmission dependent convolution subtraction (TDCS), one of them proposed by us. This method uses CSS for correction for septal penetration, with a separate kernel, and TDCS for scatter correction. The corrections are evaluated for a dopamine transporter (DAT) study and a study of the regional cerebral blood flow (rCBF), performed with 123I. The images are produced using a recently developed Monte Carlo collimator routine added to the program SIMIND which can include interactions in the collimator. The results show that the method included in the iterative reconstruction is preferable to the other methods and that the new TDCS version gives better results compared with the other 2D methods.

  10. Correction for scatter and septal penetration using convolution subtraction methods and model-based compensation in 123I brain SPECT imaging—a Monte Carlo study

    Science.gov (United States)

    Larsson, Anne; Ljungberg, Michael; Jakobson Mo, Susanna; Riklund, Katrine; Johansson, Lennart

    2006-11-01

    Scatter and septal penetration deteriorate contrast and quantitative accuracy in single photon emission computed tomography (SPECT). In this study four different correction techniques for scatter and septal penetration are evaluated for 123I brain SPECT. One of the methods is a form of model-based compensation which uses the effective source scatter estimation (ESSE) for modelling scatter, and collimator-detector response (CDR) including both geometric and penetration components. The other methods, which operate on the 2D projection images, are convolution scatter subtraction (CSS) and two versions of transmission dependent convolution subtraction (TDCS), one of them proposed by us. This method uses CSS for correction for septal penetration, with a separate kernel, and TDCS for scatter correction. The corrections are evaluated for a dopamine transporter (DAT) study and a study of the regional cerebral blood flow (rCBF), performed with 123I. The images are produced using a recently developed Monte Carlo collimator routine added to the program SIMIND which can include interactions in the collimator. The results show that the method included in the iterative reconstruction is preferable to the other methods and that the new TDCS version gives better results compared with the other 2D methods.

  11. The double Mellin-Barnes type integrals and their applications to convolution theory

    CERN Document Server

    Hai, Nguyen Thanh

    1992-01-01

    This book presents new results in the theory of the double Mellin-Barnes integrals popularly known as the general H-function of two variables. A general integral convolution is constructed by the authors; it contains the Laplace convolution as a particular case and possesses a factorization property for the one-dimensional H-transform. Many examples of convolutions for classical integral transforms are obtained and they can be applied to the evaluation of series and integrals.

  12. An area-efficient 2-D convolution implementation on FPGA for space applications

    OpenAIRE

    Gambardella, Giulio; Tiotto, Gabriele; Prinetto, Paolo Ernesto; Di Carlo, Stefano; Indaco, Marco; Rolfo, Daniele

    2011-01-01

    The 2-D Convolution is an algorithm widely used in image and video processing. Although its computation is simple, its implementation requires a high computational power and an intensive use of memory. Field Programmable Gate Arrays (FPGA) architectures were proposed to accelerate calculations of 2-D Convolution and the use of buffers implemented on FPGAs are used to avoid direct memory access. In this paper we present an implementation of the 2-D Convolution algorithm on a FPGA architecture ...
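
    For reference, the sketch below spells out the multiply-accumulate structure of a direct 2-D convolution. On an FPGA the two inner loops become a parallel MAC array, and the padded row accesses are served from line buffers rather than repeated memory reads.

    ```python
    import numpy as np

    def conv2d(image, kernel):
        """Direct 2-D convolution with zero padding (kernel flipped, as in
        true convolution rather than correlation)."""
        kh, kw = kernel.shape
        ph, pw = kh // 2, kw // 2
        padded = np.pad(image, ((ph, ph), (pw, pw)))
        flipped = kernel[::-1, ::-1]
        out = np.zeros(image.shape, dtype=float)
        for i in range(image.shape[0]):          # maps to a pipelined row stream
            for j in range(image.shape[1]):      # maps to a parallel MAC array
                out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * flipped)
        return out

    img = np.random.rand(64, 64)
    sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    edges = conv2d(img, sobel_x)
    ```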

  13. Kernel Methods for Machine Learning with Life Science Applications

    DEFF Research Database (Denmark)

    Abrahamsen, Trine Julie

    Kernel methods refer to a family of widely used nonlinear algorithms for machine learning tasks like classification, regression, and feature extraction. By exploiting the so-called kernel trick, straightforward extensions of classical linear algorithms are enabled as long as the data only appear...... models to kernel learning, and means for restoring the generalizability in both kernel Principal Component Analysis and the Support Vector Machine are proposed. Viability is proved on a wide range of benchmark machine learning data sets....... as inner products in the model formulation. This dissertation presents research on improving the performance of standard kernel methods like kernel Principal Component Analysis and the Support Vector Machine. Moreover, the goal of the thesis has been two-fold. The first part focuses on the use of kernel Principal...

  14. Efficient $\chi^2$ Kernel Linearization via Random Feature Maps.

    Science.gov (United States)

    Yuan, Xiao-Tong; Wang, Zhenzhen; Deng, Jiankang; Liu, Qingshan

    2016-11-01

    Explicit feature mapping is an appealing way to linearize additive kernels, such as the χ² kernel, for training large-scale support vector machines (SVMs). Although accurate in approximation, feature mapping could pose computational challenges in high-dimensional settings as it expands the original features to a higher dimensional space. To handle this issue in the context of χ² kernel SVM learning, we introduce a simple yet efficient method to approximately linearize the χ² kernel through random feature maps. The main idea is to use sparse random projection to reduce the dimensionality of the feature maps while preserving their approximation capability to the original kernel. We provide an approximation error bound for the proposed method. Furthermore, we extend our method to χ² multiple kernel SVM learning. Extensive experiments on large-scale image classification tasks confirm that the proposed approach is able to significantly speed up the training process of the χ² kernel SVMs at almost no cost of testing accuracy.
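
    A rough sketch of the two-stage idea follows: expand the data with an explicit feature map approximating the additive χ² kernel (here a sampled homogeneous-kernel-map construction with illustrative sampling choices), then compress with a sparse random projection. The sampling parameters and dimensions are assumptions, and the paper's error bounds and SVM training are not reproduced.

    ```python
    import numpy as np
    from sklearn.random_projection import SparseRandomProjection

    def chi2_feature_map(X, n_freq=3, L=0.5):
        """Explicit map whose inner products approximate the chi-square kernel
        k(x, y) = sum_d 2 x_d y_d / (x_d + y_d), using the sampled spectrum
        sech(pi * lambda); n_freq and L are illustrative sampling choices."""
        X = np.maximum(X, 1e-8)                  # guard log(0)
        feats = [np.sqrt(L * X)]                 # zero-frequency component
        for j in range(1, n_freq + 1):
            scale = np.sqrt(2 * L * X / np.cosh(np.pi * j * L))
            feats.append(scale * np.cos(j * L * np.log(X)))
            feats.append(scale * np.sin(j * L * np.log(X)))
        return np.concatenate(feats, axis=1)     # dim = D * (2 * n_freq + 1)

    X = np.random.rand(500, 300)                 # e.g. L1-normalised histograms
    Phi = chi2_feature_map(X)                    # expanded features, (500, 2100)
    Phi_low = SparseRandomProjection(n_components=256).fit_transform(Phi)
    # inner products of Phi_low rows approximate k(x, y) at a low dimension
    ```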

  15. Multiple Kernel Learning in Fisher Discriminant Analysis for Face Recognition

    Directory of Open Access Journals (Sweden)

    Xiao-Zhang Liu

    2013-02-01

    Recent applications and developments based on support vector machines (SVMs) have shown that using multiple kernels instead of a single one can enhance classifier performance. However, there are few reports on the performance of the kernel-based Fisher discriminant analysis (kernel-based FDA) method with multiple kernels. This paper proposes a multiple kernel construction method for kernel-based FDA. The constructed kernel is a linear combination of several base kernels with a constraint on their weights. By maximizing the margin maximization criterion (MMC), we present an iterative scheme for weight optimization. The experiments on the FERET and CMU PIE face databases show that our multiple kernel Fisher discriminant analysis (MKFD) achieves high recognition performance, compared with single-kernel-based FDA. The experiments also show that the constructed kernel relaxes parameter selection for kernel-based FDA to some extent.
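
    The combined-kernel construction itself is easy to sketch: a convex combination of base Gram matrices. Below, the base kernels are RBF kernels at several widths and the weights are fixed by hand; the paper instead optimises the weights iteratively under the margin maximization criterion (MMC), which this sketch does not implement.

    ```python
    import numpy as np

    def rbf_gram(X, gamma):
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    def combined_kernel(X, gammas=(0.1, 1.0, 10.0), weights=None):
        """Gram matrix of K = sum_m w_m K_m with w_m >= 0 and sum_m w_m = 1."""
        grams = [rbf_gram(X, g) for g in gammas]
        if weights is None:
            weights = np.full(len(grams), 1.0 / len(grams))
        weights = np.clip(np.asarray(weights, dtype=float), 0.0, None)
        weights = weights / weights.sum()        # enforce the simplex constraint
        return sum(w * K for w, K in zip(weights, grams))
    ```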

  16. A Novel Framework for Learning Geometry-Aware Kernels.

    Science.gov (United States)

    Pan, Binbin; Chen, Wen-Sheng; Xu, Chen; Chen, Bo

    2016-05-01

    Data from the real world usually have a nonlinear geometric structure and are often assumed to lie on or close to a low-dimensional manifold in a high-dimensional space. How to detect this nonlinear geometric structure of the data is important for learning algorithms. Recently, there has been a surge of interest in utilizing kernels to exploit the manifold structure of the data. Such kernels are called geometry-aware kernels and are widely used in machine learning algorithms. The performance of these algorithms critically relies on the choice of the geometry-aware kernels. Intuitively, a good geometry-aware kernel should utilize additional information other than the geometric information. In many applications, it is required to compute the out-of-sample data directly. However, most of the geometry-aware kernel methods are restricted to the available data given beforehand, with no straightforward extension for out-of-sample data. In this paper, we propose a framework for more general geometry-aware kernel learning. The proposed framework integrates multiple sources of information and enables us to develop flexible and effective kernel matrices. Then, we theoretically show how the learned kernel matrices are extended to the corresponding kernel functions, in which the out-of-sample data can be computed directly. Under our framework, a novel family of geometry-aware kernels is developed. Especially, some existing geometry-aware kernels can be viewed as instances of our framework. The performance of the kernels is evaluated on dimensionality reduction, classification, and clustering tasks. The empirical results show that our kernels significantly improve the performance.

  17. Kernel Density Estimation, Kernel Methods, and Fast Learning in Large Data Sets.

    Science.gov (United States)

    Wang, Shitong; Wang, Jun; Chung, Fu-lai

    2014-01-01

    Kernel methods such as the standard support vector machine and support vector regression trainings take O(N³) time and O(N²) space complexities in their naïve implementations, where N is the training set size. It is thus computationally infeasible to apply them to large data sets, and a replacement of the naive method for finding the quadratic programming (QP) solutions is highly desirable. By observing that many kernel methods can be linked up with kernel density estimation (KDE), which can be efficiently implemented by some approximation techniques, a new learning method called fast KDE (FastKDE) is proposed to scale up kernel methods. It is based on establishing a connection between KDE and the QP problems formulated for kernel methods using an entropy-based integrated-squared-error criterion. As a result, FastKDE approximation methods can be applied to solve these QP problems. In this paper, the latest advance in fast data reduction via KDE is exploited. With just a simple sampling strategy, the resulting FastKDE method can be used to scale up various kernel methods with a theoretical guarantee that their performance does not degrade much. It has a time complexity of O(m³) where m is the number of data points sampled from the training set. Experiments on different benchmarking data sets demonstrate that the proposed method has comparable performance with the state-of-the-art method and is effective for a wide range of kernel methods to achieve fast learning in large data sets.

  18. Finding strong lenses in CFHTLS using convolutional neural networks

    Science.gov (United States)

    Jacobs, C.; Glazebrook, K.; Collett, T.; More, A.; McCarthy, C.

    2017-10-01

    We train and apply convolutional neural networks, a machine learning technique developed to learn from and classify image data, to Canada-France-Hawaii Telescope Legacy Survey (CFHTLS) imaging for the identification of potential strong lensing systems. An ensemble of four convolutional neural networks was trained on images of simulated galaxy-galaxy lenses. The training sets consisted of a total of 62 406 simulated lenses and 64 673 non-lens negative examples generated with two different methodologies. An ensemble of trained networks was applied to all of the 171 deg² of the CFHTLS wide field image data, identifying 18 861 candidates including 63 known and 139 other potential lens candidates. A second search of 1.4 million early-type galaxies selected from the survey catalogue as potential deflectors, identified 2465 candidates including 117 previously known lens candidates, 29 confirmed lenses/high-quality lens candidates, 266 novel probable or potential lenses and 2097 candidates we classify as false positives. For the catalogue-based search we estimate a completeness of 21-28 per cent with respect to detectable lenses and a purity of 15 per cent, with a false-positive rate of 1 in 671 images tested. We predict a human astronomer reviewing candidates produced by the system would identify 20 probable lenses and 100 possible lenses per hour in a sample selected by the robot. Convolutional neural networks are therefore a promising tool for use in the search for lenses in current and forthcoming surveys such as the Dark Energy Survey and the Large Synoptic Survey Telescope.

  19. Learning Contextual Dependence With Convolutional Hierarchical Recurrent Neural Networks

    Science.gov (United States)

    Zuo, Zhen; Shuai, Bing; Wang, Gang; Liu, Xiao; Wang, Xingxing; Wang, Bing; Chen, Yushi

    2016-07-01

    Existing deep convolutional neural networks (CNNs) have shown great success on image classification. CNNs mainly consist of convolutional and pooling layers, both of which are performed on local image areas without considering the dependencies among different image regions. However, such dependencies are very important for generating explicit image representations. In contrast, recurrent neural networks (RNNs) are well known for their ability to encode contextual information among sequential data, and they only require a limited number of network parameters. General RNNs can hardly be directly applied on non-sequential data. Thus, we propose hierarchical RNNs (HRNNs), in which each RNN layer focuses on modeling spatial dependencies among image regions from the same scale but different locations, while the cross-scale RNN connections model scale dependencies among regions from the same location but different scales. Specifically, we propose two recurrent neural network models: 1) the hierarchical simple recurrent network (HSRN), which is fast and has low computational cost; and 2) the hierarchical long short-term memory recurrent network (HLSTM), which performs better than HSRN at the price of more computational cost. In this manuscript, we integrate CNNs with HRNNs, and develop end-to-end convolutional hierarchical recurrent neural networks (C-HRNNs). C-HRNNs not only make use of the representation power of CNNs, but also efficiently encode spatial and scale dependencies among different image regions. On four of the most challenging object/scene image classification benchmarks, our C-HRNNs achieve state-of-the-art results on Places 205, SUN 397 and MIT indoor, and competitive results on ILSVRC 2012.

  20. Design and Implementation of an Image Recognition Algorithm Based on Convolutional Neural Networks

    Institute of Scientific and Technical Information of China (English)

    王振; 高茂庭

    2015-01-01

    Convolutional neural networks have achieved great success in image recognition. The structure of the network has a great impact on the performance and accuracy of image recognition. To improve the performance of the algorithm, we design and implement a new convolutional neural network architecture that repeatedly uses convolutional layers with small kernels, which effectively reduces the number of training parameters and increases recognition accuracy. Comparative experiments against state-of-the-art results from the ILSVRC challenge demonstrate the effectiveness of the new network architecture.
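
    The parameter saving from repeating small kernels is easy to verify. The PyTorch sketch below compares two stacked 3×3 convolutions, which cover the same 5×5 receptive field while adding an extra non-linearity, against a single 5×5 convolution; the channel width is an arbitrary illustrative choice.

    ```python
    import torch
    import torch.nn as nn

    c = 64  # channel width, chosen only for illustration
    stacked = nn.Sequential(
        nn.Conv2d(c, c, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c, c, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    )
    single = nn.Conv2d(c, c, kernel_size=5, padding=2)

    count = lambda m: sum(p.numel() for p in m.parameters())
    print(count(stacked), "vs", count(single))   # 73856 vs 102464 for c = 64

    x = torch.randn(1, c, 32, 32)
    assert stacked(x).shape == single(x).shape   # same spatial output size
    ```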

  1. Continuous speech recognition based on convolutional neural network

    Science.gov (United States)

    Zhang, Qing-qing; Liu, Yong; Pan, Jie-lin; Yan, Yong-hong

    2015-07-01

    Convolutional Neural Networks (CNNs), which showed success in achieving translation invariance for many image processing tasks, are investigated for continuous speech recognition in this paper. Compared to Deep Neural Networks (DNNs), which have been proven successful in many speech recognition tasks, CNNs can reduce the NN model sizes significantly and at the same time achieve even better recognition accuracy. Experiments on the standard speech corpus TIMIT showed that CNNs outperformed DNNs in terms of accuracy when CNNs had an even smaller model size.

  2. Training Convolutional Neural Networks for Translational Invariance on SAR ATR

    DEFF Research Database (Denmark)

    Malmgren-Hansen, David; Engholm, Rasmus; Østergaard Pedersen, Morten

    2016-01-01

    In this paper we present a comparison of the robustness of Convolutional Neural Networks (CNN) to other classifiers in the presence of uncertainty in the object's localization in SAR images. We present a framework for simulating simple SAR images, translating the object of interest systematically...... and testing the classification performance. Our results show that where other classification methods are very sensitive to even small translations, CNN is quite robust to translational variance, making it much more useful in relation to Automatic Target Recognition (ATR) in a real-life context.

  3. Convolution Algebra for Fluid Modes with Finite Energy

    Science.gov (United States)

    1992-04-01

    This technical report ... with finite spatial and temporal extents. At Boston University, we have developed a full form of wavelet expansion which has the advantage over more ... The convolution of two

  4. Plant species classification using deep convolutional neural network

    DEFF Research Database (Denmark)

    Dyrmann, Mads; Karstoft, Henrik; Midtiby, Henrik Skov

    2016-01-01

    Information on which weed species are present within agricultural fields is important for site-specific weed management. This paper presents a method that is capable of recognising plant species in colour images by using a convolutional neural network. The network is built from scratch and trained...... stabilisation and illumination, and images shot with hand-held mobile phones in fields with changing lighting conditions and different soil types. For these 22 species, the network is able to achieve a classification accuracy of 86.2%.

  5. Convolutional neural networks for synthetic aperture radar classification

    Science.gov (United States)

    Profeta, Andrew; Rodriguez, Andres; Clouse, H. Scott

    2016-05-01

    For electro-optical object recognition, convolutional neural networks (CNNs) are the state-of-the-art. For large datasets, CNNs are able to learn meaningful features used for classification. However, their application to synthetic aperture radar (SAR) has been limited. In this work we experimented with various CNN architectures on the MSTAR SAR dataset. As the input to the CNN we used the magnitude and phase (2 channels) of the SAR imagery. We used the deep learning toolboxes CAFFE and Torch7. Our results show that we can achieve 93% accuracy on the MSTAR dataset using CNNs.

  6. A Fortran 90 code for magnetohydrodynamics. Part 1, Banded convolution

    Energy Technology Data Exchange (ETDEWEB)

    Walker, D.W.

    1992-03-01

    This report describes progress in developing a Fortran 90 version of the KITE code for studying plasma instabilities in Tokamaks. In particular, the evaluation of convolution terms appearing in the numerical solution is discussed, and timing results are presented for runs performed on an 8k processor Connection Machine (CM-2). Estimates of the performance on a full-size 64k CM-2 are given, and range between 100 and 200 Mflops. The advantages of having a Fortran 90 version of the KITE code are stressed, and the future use of such a code on the newly announced CM5 and Paragon computers, from Thinking Machines Corporation and Intel, is considered.

  7. Image reconstruction of simulated specimens using convolution back projection

    Directory of Open Access Journals (Sweden)

    Mohd. Farhan Manzoor

    2001-04-01

    This paper reports on the reconstruction of cross-sections of composite structures. The convolution back projection (CBP) algorithm has been used to capture the attenuation field over the specimen. Five different test cases have been taken up for evaluation. These cases represent varying degrees of complexity. In addition, the role of filters on the nature of the reconstruction errors has also been discussed. Numerical results obtained in the study reveal that the CBP algorithm is a useful tool for qualitative as well as quantitative assessment of composite regions encountered in engineering applications.
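
    A minimal sketch of the algorithm: each projection is convolved with a ramp (Ram-Lak) filter via the FFT and then smeared back across the image grid. The Ram-Lak choice and nearest-pixel interpolation are simplifying assumptions; the paper's comparison of different filters is not reproduced here.

    ```python
    import numpy as np

    def convolution_back_projection(sinogram, thetas):
        """sinogram: (n_angles, n_detectors) parallel-beam projections;
        thetas: view angles in radians."""
        n_ang, n_det = sinogram.shape
        ramp = np.abs(np.fft.fftfreq(n_det))          # Ram-Lak filter kernel
        filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp,
                                       axis=1))
        recon = np.zeros((n_det, n_det))
        centre = n_det // 2
        ys, xs = np.mgrid[0:n_det, 0:n_det] - centre
        for proj, theta in zip(filtered, thetas):
            # detector bin hit by each image pixel at this view angle
            t = np.round(xs * np.cos(theta) + ys * np.sin(theta)).astype(int) + centre
            inside = (t >= 0) & (t < n_det)
            recon[inside] += proj[t[inside]]
        return recon * np.pi / (2 * n_ang)
    ```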

  8. Faster GPU-based convolutional gridding via thread coarsening

    CERN Document Server

    Merry, Bruce

    2016-01-01

    Convolutional gridding is a processor-intensive step in interferometric imaging. While it is possible to use graphics processing units (GPUs) to accelerate this operation, existing methods use only a fraction of the available flops. We apply thread coarsening to improve the efficiency of an existing algorithm, and observe performance gains of up to $3.2\\times$ for single-polarization gridding and $1.9\\times$ for quad-polarization gridding on a GeForce GTX 980, and smaller but still significant gains on a Radeon R9 290X.

  9. Convolution seal for transition duct in turbine system

    Energy Technology Data Exchange (ETDEWEB)

    Flanagan, James Scott; LeBegue, Jeffrey Scott; McMahan, Kevin Weston; Dillard, Daniel Jackson; Pentecost, Ronnie Ray

    2015-03-10

    A turbine system is disclosed. In one embodiment, the turbine system includes a transition duct. The transition duct includes an inlet, an outlet, and a passage extending between the inlet and the outlet and defining a longitudinal axis, a radial axis, and a tangential axis. The outlet of the transition duct is offset from the inlet along the longitudinal axis and the tangential axis. The transition duct further includes an interface member for interfacing with a turbine section. The turbine system further includes a convolution seal contacting the interface member to provide a seal between the interface member and the turbine section.

  10. Convolution seal for transition duct in turbine system

    Energy Technology Data Exchange (ETDEWEB)

    Flanagan, James Scott; LeBegue, Jeffrey Scott; McMahan, Kevin Weston; Dillard, Daniel Jackson; Pentecost, Ronnie Ray

    2015-05-26

    A turbine system is disclosed. In one embodiment, the turbine system includes a transition duct. The transition duct includes an inlet, an outlet, and a passage extending between the inlet and the outlet and defining a longitudinal axis, a radial axis, and a tangential axis. The outlet of the transition duct is offset from the inlet along the longitudinal axis and the tangential axis. The transition duct further includes an interface feature for interfacing with an adjacent transition duct. The turbine system further includes a convolution seal contacting the interface feature to provide a seal between the interface feature and the adjacent transition duct.

  11. Tandem mass spectrometry data quality assessment by self-convolution

    Directory of Open Access Journals (Sweden)

    Tham Wai

    2007-09-01

    Background: Many algorithms have been developed for deciphering tandem mass spectrometry (MS) data sets. They can be essentially clustered into two classes. The first performs searches on a theoretical mass spectrum database, while the second bases itself on de novo sequencing from raw mass spectrometry data. It was noted that the quality of mass spectra affects the protein identification processes significantly in both instances. This prompted the authors to explore ways to measure the quality of MS data sets before subjecting them to the protein identification algorithms, thus allowing for more meaningful searches and an increased confidence level for the proteins identified. Results: The proposed method measures the quality of MS data sets based on the symmetric property of b- and y-ion peaks present in an MS spectrum. Self-convolution on the MS data and its time-reversal copy was employed. Due to the symmetric nature of b-ion and y-ion peaks, the self-convolution result of a good spectrum produces the highest intensity peak at the mid point. To reduce processing time, self-convolution was achieved using the Fast Fourier Transform and its inverse transform, followed by the removal of the "DC" (Direct Current) component and the normalisation of the data set. The quality score was defined as the ratio of the intensity at the mid point to the remaining peaks of the convolution result. The method was validated using both theoretical mass spectra, with various permutations, and several real MS data sets. The results were encouraging, revealing a high percentage of positive prediction rates for spectra with good quality scores. Conclusion: We have demonstrated in this work a method for determining the quality of tandem MS data sets. By pre-determining the quality of tandem MS data before subjecting them to protein identification algorithms, spurious protein predictions due to poor tandem MS data are avoided, giving scientists greater confidence in the
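
    The scoring procedure lends itself to a compact sketch: convolve the binned spectrum with its time-reversed copy via the FFT, remove the DC component, normalise, and compare the mid-point peak against the rest. Taking the mean of the remaining values as the denominator is an assumption standing in for the paper's exact definition of "the remaining peaks".

    ```python
    import numpy as np

    def spectrum_quality_score(intensities):
        """Self-convolution quality score for a binned tandem MS spectrum:
        symmetric b-/y-ion pairs pile up at the mid point of the convolution
        of the spectrum with its time-reversed copy."""
        s = np.asarray(intensities, dtype=float)
        s = s - s.mean()                       # remove the "DC" component
        n = 2 * len(s) - 1                     # full linear convolution length
        conv = np.real(np.fft.ifft(np.fft.fft(s, n) * np.fft.fft(s[::-1], n)))
        conv = np.abs(conv) / np.abs(conv).max()   # normalise
        mid = len(s) - 1                       # mid point of the full convolution
        return conv[mid] / np.delete(conv, mid).mean()
    ```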

  12. Visualizing Vector Fields Using Line Integral Convolution and Dye Advection

    Science.gov (United States)

    Shen, Han-Wei; Johnson, Christopher R.; Ma, Kwan-Liu

    1996-01-01

    We present local and global techniques to visualize three-dimensional vector field data. Using the Line Integral Convolution (LIC) method to image the global vector field, our new algorithm allows the user to introduce colored 'dye' into the vector field to highlight local flow features. A fast algorithm is proposed that quickly recomputes the dyed LIC images. In addition, we introduce volume rendering methods that can map the LIC texture on any contour surface and/or translucent region defined by additional scalar quantities, and can follow the advection of colored dye throughout the volume.

  13. Medical image fusion using the convolution of Meridian distributions.

    Science.gov (United States)

    Agrawal, Mayank; Tsakalides, Panagiotis; Achim, Alin

    2010-01-01

    The aim of this paper is to introduce a novel non-Gaussian statistical model-based approach for medical image fusion based on the Meridian distribution. The paper also includes a new approach to estimate the parameters of generalized Cauchy distribution. The input images are first decomposed using the Dual-Tree Complex Wavelet Transform (DT-CWT) with the subband coefficients modelled as Meridian random variables. Then, the convolution of Meridian distributions is applied as a probabilistic prior to model the fused coefficients, and the weights used to combine the source images are optimised via Maximum Likelihood (ML) estimation. The superior performance of the proposed method is demonstrated using medical images.

  14. Faster GPU-based convolutional gridding via thread coarsening

    Science.gov (United States)

    Merry, B.

    2016-07-01

    Convolutional gridding is a processor-intensive step in interferometric imaging. While it is possible to use graphics processing units (GPUs) to accelerate this operation, existing methods use only a fraction of the available flops. We apply thread coarsening to improve the efficiency of an existing algorithm, and observe performance gains of up to 3.2 × for single-polarization gridding and 1.9 × for quad-polarization gridding on a GeForce GTX 980, and smaller but still significant gains on a Radeon R9 290X.

  15. Low-dose CT denoising with convolutional neural network

    CERN Document Server

    Chen, Hu; Zhang, Weihua; Liao, Peixi; Li, Ke; Zhou, Jiliu; Wang, Ge

    2016-01-01

    To reduce the potential radiation risk, low-dose CT has attracted much attention. However, simply lowering the radiation dose will lead to significant deterioration of the image quality. In this paper, we propose a noise reduction method for low-dose CT via deep neural network without accessing original projection data. A deep convolutional neural network is trained to transform low-dose CT images towards normal-dose CT images, patch by patch. Visual and quantitative evaluation demonstrates a competing performance of the proposed method.

  16. Wilson Dslash Kernel From Lattice QCD Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Joo, Balint [Jefferson Lab, Newport News, VA; Smelyanskiy, Mikhail [Parallel Computing Lab, Intel Corporation, California, USA; Kalamkar, Dhiraj D. [Parallel Computing Lab, Intel Corporation, India; Vaidyanathan, Karthikeyan [Parallel Computing Lab, Intel Corporation, India

    2015-07-01

    Lattice Quantum Chromodynamics (LQCD) is a numerical technique used for calculations in Theoretical Nuclear and High Energy Physics. LQCD is traditionally one of the first applications ported to many new high performance computing architectures and indeed LQCD practitioners have been known to design and build custom LQCD computers. Lattice QCD kernels are frequently used as benchmarks (e.g. 168.wupwise in the SPEC suite) and are generally well understood, and as such are ideal to illustrate several optimization techniques. In this chapter we will detail our work in optimizing the Wilson-Dslash kernels for Intel Xeon Phi, however, as we will show the technique gives excellent performance on regular Xeon Architecture as well.

  17. Learning Potential Energy Landscapes using Graph Kernels

    CERN Document Server

    Ferré, G; Barros, K

    2016-01-01

    Recent machine learning methods make it possible to model potential energy of atomic configurations with chemical-level accuracy (as calculated from ab-initio calculations) and at speeds suitable for molecular dynamics simulation. Best performance is achieved when the known physical constraints are encoded in the machine learning models. For example, the atomic energy is invariant under global translations and rotations; it is also invariant to permutations of same-species atoms. Although simple to state, these symmetries are complicated to encode into machine learning algorithms. In this paper, we present a machine learning approach based on graph theory that naturally incorporates translation, rotation, and permutation symmetries. Specifically, we use a random walk graph kernel to measure the similarity of two adjacency matrices, each of which represents a local atomic environment. We show on a standard benchmark that our Graph Approximated Energy (GRAPE) method is competitive with state of the art kernel m...
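
    For a flavour of the similarity measure, here is a minimal random walk graph kernel between two adjacency matrices, computed on their direct (Kronecker) product graph. The decay parameter and the tiny example graphs are illustrative, and GRAPE's atomic-environment encoding and efficiency tricks are not reproduced.

    ```python
    import numpy as np

    def random_walk_kernel(A1, A2, lam=0.05):
        """k(G1, G2) = 1^T (I - lam * A_x)^{-1} 1 with A_x = kron(A1, A2):
        a decayed count of matching walks; lam must keep the spectral radius
        of lam * A_x below 1 for the geometric series to converge."""
        Ax = np.kron(A1, A2)
        n = Ax.shape[0]
        x = np.linalg.solve(np.eye(n) - lam * Ax, np.ones(n))
        return float(np.ones(n) @ x)

    A1 = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)  # triangle
    A2 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # path
    print(random_walk_kernel(A1, A2))
    ```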

  18. Viability Kernel for Ecosystem Management Models

    CERN Document Server

    Anaya, Eladio Ocana; Oliveros--Ramos, Ricardo; Tam, Jorge

    2009-01-01

    We consider sustainable management issues formulated within the framework of control theory. The problem is one of controlling a discrete-time dynamical system (e.g. population model) in the presence of state and control constraints, representing conflicting economic and ecological issues for instance. The viability kernel is known to play a basic role for the analysis of such problems and the design of viable control feedbacks, but its computation is not an easy task in general. We study the viability of nonlinear generic ecosystem models under preservation and production constraints. Under simple conditions on the growth rates at the boundary constraints, we provide an explicit description of the viability kernel. A numerical illustration is given for the hake-anchovy couple in the Peruvian upwelling ecosystem.

  19. A kernel version of spatial factor analysis

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2009-01-01

    of PCA and related techniques. An interesting dilemma in reduction of dimensionality of data is the desire to obtain simplicity for better understanding, visualization and interpretation of the data on the one hand, and the desire to retain sufficient detail for adequate representation on the other hand......Based on work by Pearson in 1901, Hotelling in 1933 introduced principal component analysis (PCA). PCA is often used for general feature generation and linear orthogonalization or compression by dimensionality reduction of correlated multivariate data, see Jolliffe for a comprehensive description...... version of PCA handles nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via the kernel function and then performing a linear analysis in that space. In this paper we shall apply kernel versions of PCA, maximum autocorrelation factor (MAF) analysis...

  20. Quark-hadron duality: pinched kernel approach

    CERN Document Server

    Dominguez, C A; Schilcher, K; Spiesberger, H

    2016-01-01

    Hadronic spectral functions measured by the ALEPH collaboration in the vector and axial-vector channels are used to study potential quark-hadron duality violations (DV). This is done entirely in the framework of pinched kernel finite energy sum rules (FESR), i.e. in a model independent fashion. The kinematical range of the ALEPH data is effectively extended up to $s = 10\,\mathrm{GeV}^2$ by using an appropriate kernel, and assuming that in this region the spectral functions are given by perturbative QCD. Support for this assumption is obtained by using $e^+ e^-$ annihilation data in the vector channel. Results in both channels show a good saturation of the pinched FESR, without further need of explicit models of DV.

  1. Analog Forecasting with Dynamics-Adapted Kernels

    CERN Document Server

    Zhao, Zhizhen

    2014-01-01

    Analog forecasting is a non-parametric technique introduced by Lorenz in 1969 which predicts the evolution of states of a dynamical system (or observables defined on the states) by following the evolution of the sample in a historical record of observations which most closely resembles the current initial data. Here, we introduce a suite of forecasting methods which improve traditional analog forecasting by combining ideas from state-space reconstruction for dynamical systems and kernel methods developed in harmonic analysis and machine learning. The first improvement is to augment the dimension of the initial data using Takens' delay-coordinate maps to recover information in the initial data lost through partial observations. Then, instead of using Euclidean distances between the states, weighted ensembles of analogs are constructed according to similarity kernels in delay-coordinate space, featuring an explicit dependence on the dynamical vector field generating the data. The eigenvalues and eigenfunctions ...

  2. The Solutions of the -Dimensional Bessel Diamond Operator and the Fourier–Bessel Transform of their Convolution

    Indian Academy of Sciences (India)

    Hüseyin Yildirim; M Zeki Sarikaya; Sermin Öztürk

    2004-11-01

    In this article, the operator $\Diamond^k_B$ is introduced and named as the Bessel diamond operator iterated $k$ times and is defined by $$\Diamond^k_B=\left[(B_{x_1}+B_{x_2}+\cdots +B_{x_p})^2-(B_{x_{p+1}}+\cdots +B_{x_{p+q}})^2\right]^k,$$ where $p + q = n$, $B_{x_i} = \frac{\partial^2}{\partial x_i^2}+\frac{2\nu_i}{x_i}\frac{\partial}{\partial x_i}$, with $2\nu_i = 2\alpha_i + 1$, $\alpha_i > -\frac{1}{2}$ [8], $x_i > 0$, $i = 1, 2,\ldots ,n$; $k$ is a non-negative integer and $n$ is the dimension of $\mathbb{R}^+_n$. In this work we study the elementary solution of the Bessel diamond operator; the elementary solution of the operator $\Diamond^k_B$ is called the Bessel diamond kernel of Riesz. Then, we study the Fourier–Bessel transform of the elementary solution and also the Fourier–Bessel transform of their convolution.

  3. Searching and Indexing Genomic Databases via Kernelization

    Directory of Open Access Journals (Sweden)

    Travis eGagie

    2015-02-01

    The rapid advance of DNA sequencing technologies has yielded databases of thousands of genomes. To search and index these databases effectively, it is important that we take advantage of the similarity between those genomes. Several authors have recently suggested searching or indexing only one reference genome and the parts of the other genomes where they differ. In this paper we survey the twenty-year history of this idea and discuss its relation to kernelization in parameterized complexity.

  4. Kernel based subspace projection of hyperspectral images

    DEFF Research Database (Denmark)

    Larsen, Rasmus; Nielsen, Allan Aasbjerg; Arngren, Morten

    In hyperspectral image analysis an exploratory approach to analyse the image data is to conduct subspace projections. As linear projections often fail to capture the underlying structure of the data, we present kernel based subspace projections of PCA and Maximum Autocorrelation Factors (MAF)...... The MAF projection exploits the fact that interesting phenomena in images typically exhibit spatial autocorrelation. The analysis is based on near-infrared hyperspectral images of maize grains, demonstrating the superiority of the kernel-based MAF method.

  5. Wheat kernel dimensions: how do they contribute to kernel weight at an individual QTL level?

    Indian Academy of Sciences (India)

    Fa Cui; Anming Ding; Jun Li; Chunhua Zhao; Xingfeng Li; Deshun Feng; Xiuqin Wang; Lin Wang; Jurong Gao; Honggang Wang

    2011-12-01

    Kernel dimensions (KD) contribute greatly to thousand-kernel weight (TKW) in wheat. In the present study, quantitative trait loci (QTL) for TKW, kernel length (KL), kernel width (KW) and kernel diameter ratio (KDR) were detected by both conditional and unconditional QTL mapping methods. Two related F8:9 recombinant inbred line (RIL) populations, comprising 485 and 229 lines, respectively, were used in this study, and the trait phenotypes were evaluated in four environments. Unconditional QTL mapping analysis detected 77 additive QTL for the four traits in the two populations. Of these, 24 QTL were verified in at least three trials, and five of them were major QTL, thus being of great value for marker-assisted selection in breeding programmes. Conditional QTL mapping analysis, compared with unconditional QTL mapping analysis, resulted in a reduction in the number of QTL for TKW due to the elimination of TKW variations caused by its conditional traits; based on this, we dissected for the first time the genetic control system relating TKW and KD at the individual QTL level. Results indicated that, at the QTL level, KW had the strongest influence on TKW, followed by KL, while KDR contributed the least to TKW. In addition, the present study showed that determining the genetic relationship between a pair of QTL for two related/causal traits based solely on whether they are co-located is not conclusive. Thus, the conditional QTL mapping method should be used to evaluate possible genetic relationships between two related/causal traits.

  6. Absolute Orientation Based on Distance Kernel Functions

    Directory of Open Access Journals (Sweden)

    Yanbiao Sun

    2016-03-01

    The classical absolute orientation method is capable of transforming tie points (TPs) from a local coordinate system to a global (geodetic) coordinate system. The method is based on a single set of similarity transformation parameters estimated by minimizing the total difference between all ground control points (GCPs) and the fitted points. Nevertheless, it often yields a transformation with poor accuracy, especially in large-scale study cases. To address this problem, this study proposes a novel absolute orientation method based on distance kernel functions, in which various sets of similarity transformation parameters are calculated instead of only one set. When estimating the similarity transformation parameters for TPs using the iterative solution of a non-linear least squares problem, we assigned larger weighting matrices to the GCPs that lie at short distances from the point. The weighting matrices can be evaluated using a distance kernel function of the distances between the GCPs and the TPs. Furthermore, we used the exponential function and the Gaussian function to describe the distance kernel functions in this study. To validate and verify the proposed method, six synthetic and two real datasets were tested. The accuracy was significantly improved by the proposed method when compared to the classical method, although a higher computational complexity is experienced.
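
    The heart of the method, distance-dependent weighting of the GCPs, can be sketched as below: for each tie point every GCP receives a weight from a Gaussian or exponential kernel of its distance, and these weights form the diagonal of that point's weighting matrix in the least-squares similarity transformation. The bandwidth value is an illustrative assumption.

    ```python
    import numpy as np

    def distance_weights(gcp_xyz, tp_xyz, kernel="gaussian", scale=500.0):
        """Per-GCP weights for estimating the transformation at one tie point;
        `scale` is a bandwidth in the coordinate units (illustrative value)."""
        d = np.linalg.norm(gcp_xyz - tp_xyz, axis=1)
        if kernel == "gaussian":
            w = np.exp(-(d / scale) ** 2)
        else:                                   # exponential distance kernel
            w = np.exp(-d / scale)
        return w / w.sum()

    # the weights then populate the diagonal of the GCP weighting matrix of a
    # per-tie-point weighted least-squares (Helmert-type) similarity solution
    ```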

  7. Physicochemical Properties of Palm Kernel Oil

    Directory of Open Access Journals (Sweden)

    Amira P. Olaniyi

    2014-09-01

    Physicochemical analyses were carried out on palm kernel oil (Adin) and the following results were obtained: saponification value, 280.5±56.1 mg KOH/g; acid value, 2.7±0.3 mg KOH/g; free fatty acid (FFA), 1.35±0.15 mg KOH/g; ester value, 277.8±56.4 mg KOH/g; peroxide value, 14.3±0.8 mEq/kg; iodine value, 15.86±4.02 mg KOH/g; specific gravity (S.G.), 0.904; refractive index, 1.412; and inorganic materials, 1.05%. Its odour and colour were a heavy burnt smell and burnt brown, respectively. These values were compared with those obtained for groundnut and coconut oils. It was found that the physicochemical properties of palm kernel oil are comparable to those of groundnut and coconut oils except for the peroxide value (14.3±0.8 mEq/kg), which was not detectable in groundnut and coconut oils. Also, the odours of both groundnut and coconut oils were pleasant while that of palm kernel oil was not as pleasant (a heavy burnt smell).

  8. A Fast Reduced Kernel Extreme Learning Machine.

    Science.gov (United States)

    Deng, Wan-Yu; Ong, Yew-Soon; Zheng, Qing-Hua

    2016-04-01

    In this paper, we present a fast and accurate kernel-based supervised algorithm referred to as the Reduced Kernel Extreme Learning Machine (RKELM). In contrast to the work on the Support Vector Machine (SVM) or Least Squares SVM (LS-SVM), which identify the support vectors or weight vectors iteratively, the proposed RKELM randomly selects a subset of the available data samples as support vectors (or mapping samples). By avoiding the iterative steps of SVM, significant cost savings in the training process can be readily attained, especially on big datasets. RKELM is established based on a rigorous proof of universal learning involving a reduced kernel-based SLFN. In particular, we prove that RKELM can approximate any nonlinear function accurately under the condition of support-vector sufficiency. Experimental results on a wide variety of real-world small-instance-size and large-instance-size applications in the context of binary classification, multi-class problems and regression are then reported to show that RKELM can perform at a competitive level of generalization performance to the SVM/LS-SVM at only a fraction of the computational effort incurred.
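
    A compact sketch of the training recipe: sample a random subset of the data as mapping points, form the rectangular kernel matrix, and solve a regularised least-squares problem for the output weights. The Gaussian kernel and this particular regularised closed form are one standard instantiation, assumed here for illustration rather than taken from the paper.

    ```python
    import numpy as np

    def rkelm_train(X, T, n_support=200, gamma=1.0, C=100.0, seed=0):
        """X: (N, d) inputs, T: (N, outputs) targets (one-hot for classes).
        Randomly chosen support samples replace iterative SV selection."""
        rng = np.random.default_rng(seed)
        idx = rng.choice(len(X), size=min(n_support, len(X)), replace=False)
        Xs = X[idx]
        K = np.exp(-gamma * ((X[:, None, :] - Xs[None, :, :]) ** 2).sum(-1))
        beta = np.linalg.solve(K.T @ K + np.eye(len(Xs)) / C, K.T @ T)
        return Xs, beta

    def rkelm_predict(Xnew, Xs, beta, gamma=1.0):
        K = np.exp(-gamma * ((Xnew[:, None, :] - Xs[None, :, :]) ** 2).sum(-1))
        return K @ beta
    ```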

  9. Kernel methods for phenotyping complex plant architecture.

    Science.gov (United States)

    Kawamura, Koji; Hibrand-Saint Oyant, Laurence; Foucher, Fabrice; Thouroude, Tatiana; Loustau, Sébastien

    2014-02-07

    The Quantitative Trait Loci (QTL) mapping of plant architecture is a critical step towards understanding the genetic determinism of plant architecture. Previous studies adopted simple measurements, such as plant height, stem diameter, and branching intensity, for QTL mapping of plant architecture. Many of these quantitative traits are correlated with each other, which gives rise to statistical problems in the detection of QTL. We aim to test the applicability of kernel methods to phenotyping inflorescence architecture and its QTL mapping. We first test Kernel Principal Component Analysis (KPCA) and Support Vector Machines (SVM) on an artificial dataset of simulated inflorescences with different types of flower distribution, coded as a sequence of flower number per node along a shoot. The ability of SVM and KPCA to discriminate the different inflorescence types is illustrated. We then apply the KPCA representation to a real dataset of rose inflorescence shoots (n=1460) obtained from a mapping population of 98 F1 hybrids. We find kernel principal components with high heritability (>0.7), and the QTL analysis identifies a new QTL that was not detected by a trait-by-trait analysis of simple architectural measurements. The main tools developed in this paper could be used to tackle the general problem of QTL mapping of complex (sequences, 3D structures, graphs) phenotypic traits.
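
    To illustrate the phenotyping step, the sketch below applies kernel PCA to shoots coded as flower counts per node; the resulting kernel principal components could then serve as quantitative traits for QTL mapping. The data, kernel choice, and parameters are hypothetical.

        import numpy as np
        from sklearn.decomposition import KernelPCA

        # Hypothetical data: each row codes one shoot as flower counts per node.
        rng = np.random.default_rng(0)
        shoots = rng.poisson(lam=2.0, size=(100, 20)).astype(float)

        kpca = KernelPCA(n_components=3, kernel="rbf", gamma=0.05)
        scores = kpca.fit_transform(shoots)  # kernel principal components,
                                             # candidate traits for QTL analysis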

  10. Laguerre Kernels –Based SVM for Image Classification

    Directory of Open Access Journals (Sweden)

    Ashraf Afifi

    2014-01-01

    Full Text Available Support vector machines (SVMs) have been promising methods for classification and regression analysis because of their solid mathematical foundations, which convey several salient properties that other methods hardly provide. However, the performance of SVMs is very sensitive to how the kernel function is selected; the challenge is to choose a kernel function that yields accurate data classification. In this paper, we introduce a set of new kernel functions derived from the generalized Laguerre polynomials. The proposed kernels can improve the classification accuracy of SVMs for both linear and nonlinear data sets. The proposed kernel functions satisfy Mercer's condition and orthogonality properties, which are important and useful in applications where the number of support vectors matters, as in feature selection. The performance of the generalized Laguerre kernels is evaluated in comparison with the existing kernels. It was found that the choice of the kernel function, and the values of its parameters, are critical for a given amount of data. The proposed kernels give good classification accuracy on nearly all the data sets, especially those of high dimension.
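
    The abstract does not reproduce the kernel definition itself, but one standard way to turn an orthogonal-polynomial family into a valid Mercer kernel is a truncated sum of products of the polynomials, multiplied across coordinates. The toy construction below follows that pattern with generalized Laguerre polynomials; the degree, the parameter alpha, and the coordinate-wise product are assumptions, not the paper's exact kernel.

        import numpy as np
        from scipy.special import eval_genlaguerre
        from sklearn.svm import SVC

        def laguerre_kernel(X, Z, degree=3, alpha=0.0):
            # K_j(x, z) = sum_n L_n^alpha(x_j) L_n^alpha(z_j) per coordinate j;
            # each factor is a finite-dimensional inner product, hence PSD,
            # and the coordinate-wise product of PSD kernels stays PSD.
            K = np.ones((len(X), len(Z)))
            for j in range(X.shape[1]):
                s = np.zeros((len(X), len(Z)))
                for n in range(degree + 1):
                    lx = eval_genlaguerre(n, alpha, X[:, j])
                    lz = eval_genlaguerre(n, alpha, Z[:, j])
                    s += np.outer(lx, lz)
                K *= s
            return K

        clf = SVC(kernel=laguerre_kernel)  # scikit-learn accepts a callable kernel
        # clf.fit(X_train, y_train); clf.predict(X_test)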

  11. Identification of Fusarium damaged wheat kernels using image analysis

    Directory of Open Access Journals (Sweden)

    Ondřej Jirsa

    2011-01-01

    Full Text Available Visual evaluation of kernels damaged by Fusarium spp. pathogens is labour intensive, and its subjective nature can lead to inconsistencies. Digital imaging technology combined with appropriate statistical methods can provide a much faster and more accurate evaluation of the proportion of visually scabby kernels. The aim of the present study was to develop a discrimination model to identify wheat kernels infected by Fusarium spp. using digital image analysis and statistical methods. Winter wheat kernels from field experiments were evaluated visually as healthy or damaged. Deoxynivalenol (DON) content was determined in individual kernels using an ELISA method. Images of individual kernels were produced using a digital camera on a dark background. Colour and shape descriptors were obtained by image analysis from the area representing the kernel. Healthy and damaged kernels differed significantly in DON content and kernel weight. Various combinations of individual shape and colour descriptors were examined during the development of the model using linear discriminant analysis. In addition to the basic descriptors of the RGB colour model (red, green, blue), very good classification was also obtained using hue from the HSL colour model (hue, saturation, luminance). The accuracy of classification using the developed discrimination model based on RGBH descriptors was 85 %. The shape descriptors themselves were not specific enough to distinguish individual kernels.

  12. Implementing Kernel Methods Incrementally by Incremental Nonlinear Projection Trick.

    Science.gov (United States)

    Kwak, Nojun

    2016-05-20

    Recently, the nonlinear projection trick (NPT) was introduced, enabling direct computation of the coordinates of samples in a reproducing kernel Hilbert space. With NPT, any machine learning algorithm can be extended to a kernel version without relying on the so-called kernel trick. However, NPT is inherently difficult to implement incrementally, because an ever-growing kernel matrix must be handled as additional training samples are introduced. In this paper, an incremental version of the NPT (INPT) is proposed, based on the observation that the centerization step in NPT is unnecessary. Because the proposed INPT does not change the coordinates of the old data, the coordinates obtained by INPT can directly be used in any incremental method to implement its kernel version. The effectiveness of the INPT is shown by applying it to implement incremental versions of kernel methods such as kernel singular value decomposition, kernel principal component analysis, and kernel discriminant analysis, which are utilized for problems of kernel matrix reconstruction, letter classification, and face image retrieval, respectively.
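
    A minimal sketch of the linear algebra behind NPT (not the authors' INPT procedure): training coordinates X are chosen so that X Xᵀ equals the kernel matrix, and a new sample is embedded from its kernel values against the training set without altering the old coordinates. Centering is omitted here, consistent with the observation above that it is unnecessary.

        import numpy as np

        def npt_coordinates(K):
            # Coordinates X with X @ X.T == K, via eigendecomposition of K.
            w, U = np.linalg.eigh(K)
            w = np.clip(w, 0.0, None)   # guard against tiny negative eigenvalues
            return U * np.sqrt(w)

        def npt_embed_new(X, k_new):
            # Least-squares coordinates x solving X @ x = k_new, where k_new
            # holds the kernel values of the new sample against the training set.
            return np.linalg.pinv(X) @ k_new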

  13. Digital image correlation based on a fast convolution strategy

    Science.gov (United States)

    Yuan, Yuan; Zhan, Qin; Xiong, Chunyang; Huang, Jianyong

    2017-10-01

    In recent years, the efficiency of digital image correlation (DIC) methods has attracted increasing attention because of their growing importance for many engineering applications. Based on the classical affine optical flow (AOF) algorithm and the well-established inverse compositional Gauss-Newton algorithm, which is essentially a natural extension of the AOF algorithm under a nonlinear iterative framework, this paper develops a set of fast convolution-based DIC algorithms for high-efficiency subpixel image registration. Using a well-developed fast convolution technique, the set of algorithms establishes a series of global data tables (GDTs) over the digital images, which significantly reduces the computational complexity of DIC. Using the pre-calculated GDTs, the subpixel registration calculations can be implemented efficiently in a look-up-table fashion. Both numerical simulation and experimental verification indicate that the set of algorithms significantly enhances the computational efficiency of DIC, especially in the case of dense data sampling for the digital images. Because the GDTs need to be computed only once, the algorithms are also suitable for efficiently coping with image sequences that record the time-varying dynamics of specimen deformations.
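
    The global-data-table idea can be illustrated with an off-the-shelf fast convolution: local sums of the image and of its square are precomputed once over the whole image, after which the per-point terms of a correlation criterion reduce to table look-ups. The box-kernel formulation and subset size below are illustrative, not the paper's exact tables.

        import numpy as np
        from scipy.signal import fftconvolve

        def local_sum_tables(img, subset=21):
            # Precompute global data tables of local sums of I and I^2 with a
            # single FFT-based convolution against a box kernel.
            img = np.asarray(img, dtype=float)
            box = np.ones((subset, subset))
            s1 = fftconvolve(img, box, mode="same")        # sum of intensities
            s2 = fftconvolve(img * img, box, mode="same")  # sum of squares
            return s1, s2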

  14. Discriminative Unsupervised Feature Learning with Exemplar Convolutional Neural Networks.

    Science.gov (United States)

    Dosovitskiy, Alexey; Fischer, Philipp; Springenberg, Jost Tobias; Riedmiller, Martin; Brox, Thomas

    2016-09-01

    Deep convolutional networks have proven to be very successful in learning task specific features that allow for unprecedented performance on various computer vision tasks. Training of such networks follows mostly the supervised learning paradigm, where sufficiently many input-output pairs are required for training. Acquisition of large training sets is one of the key challenges, when approaching a new task. In this paper, we aim for generic feature learning and present an approach for training a convolutional network using only unlabeled data. To this end, we train the network to discriminate between a set of surrogate classes. Each surrogate class is formed by applying a variety of transformations to a randomly sampled 'seed' image patch. In contrast to supervised network training, the resulting feature representation is not class specific. It rather provides robustness to the transformations that have been applied during training. This generic feature representation allows for classification results that outperform the state of the art for unsupervised learning on several popular datasets (STL-10, CIFAR-10, Caltech-101, Caltech-256). While features learned with our approach cannot compete with class specific features from supervised training on a classification task, we show that they are advantageous on geometric matching problems, where they also outperform the SIFT descriptor.

  15. Multichannel Convolutional Neural Network for Biological Relation Extraction

    Science.gov (United States)

    Quan, Chanqin; Sun, Xiao; Bai, Wenjun

    2016-01-01

    The plethora of biomedical relations embedded in medical logs (records) demands researchers' attention. Previous theoretical and practical work focused largely on traditional machine learning techniques. However, these methods are susceptible to the issues of “vocabulary gap” and data sparseness, and their feature extraction cannot be automated. To address the aforementioned issues, in this work we propose a multichannel convolutional neural network (MCCNN) for automated biomedical relation extraction. The proposed model makes the following two contributions: (1) it enables the fusion of multiple (e.g., five) versions of word embeddings; (2) the need for manual feature engineering is obviated by automated feature learning with a convolutional neural network (CNN). We evaluated our model on two biomedical relation extraction tasks: drug-drug interaction (DDI) extraction and protein-protein interaction (PPI) extraction. For the DDI task, our system achieved an overall f-score of 70.2% compared to the standard linear SVM based system (67.0%) on the DDIExtraction 2013 challenge dataset. For the PPI task, we evaluated our system on the Aimed and BioInfer PPI corpora; our system exceeded the state-of-the-art ensemble SVM system by 2.7% and 5.6% in f-score. PMID:28053977

  16. Robust Visual Tracking via Convolutional Networks Without Training.

    Science.gov (United States)

    Kaihua Zhang; Qingshan Liu; Yi Wu; Ming-Hsuan Yang

    2016-04-01

    Deep networks have been successfully applied to visual tracking by learning a generic representation offline from numerous training images. However, the offline training is time-consuming, and the learned generic representation may be less discriminative for tracking specific objects. In this paper, we show that, even without offline training with a large amount of auxiliary data, simple two-layer convolutional networks can be powerful enough to learn robust representations for visual tracking. In the first frame, we extract a set of normalized patches from the target region as fixed filters, which integrate a series of adaptive contextual filters surrounding the target to define a set of feature maps in the subsequent frames. These maps measure similarities between each filter and useful local intensity patterns across the target, thereby encoding its local structural information. Furthermore, all the maps together form a global representation, via which the inner geometric layout of the target is also preserved. A simple soft shrinkage method that suppresses noisy values below an adaptive threshold is employed to de-noise the global representation. Our convolutional networks have a lightweight structure and perform favorably against several state-of-the-art methods on the recent tracking benchmark data set with 50 challenging videos.
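
    A minimal sketch of the training-free filter construction described above; the patch size, filter count, and normalization details are assumptions.

        import numpy as np
        from scipy.signal import correlate2d

        def patch_filters(target, size=8, n=10, seed=0):
            # Sample n normalized patches from the first-frame target region.
            rng = np.random.default_rng(seed)
            H, W = target.shape
            filters = []
            for _ in range(n):
                y = rng.integers(0, H - size)
                x = rng.integers(0, W - size)
                p = target[y:y + size, x:x + size].astype(float)
                p -= p.mean()
                p /= np.linalg.norm(p) + 1e-8   # normalize the patch
                filters.append(p)
            return filters

        def feature_maps(frame, filters):
            # Correlate each fixed filter with the new frame to obtain maps
            # that encode local structural similarity to the target.
            return [correlate2d(frame, f, mode="same") for f in filters]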

  17. Convolutional neural network features based change detection in satellite images

    Science.gov (United States)

    Mohammed El Amin, Arabi; Liu, Qingjie; Wang, Yunhong

    2016-07-01

    With the popular use of high resolution remote sensing (HRRS) satellite images, considerable research effort has been devoted to the change detection (CD) problem. An effective feature selection method can significantly boost the final result. While it has proven difficult to hand-design features that effectively capture high- and mid-level representations, recent developments in machine learning (deep learning) sidestep this problem by learning hierarchical representations in an unsupervised manner directly from data without human intervention. In this letter, we propose approaching the change detection problem from a feature learning perspective. A novel change detection method for HR satellite images based on deep Convolutional Neural Network (CNN) features is proposed. The main guideline is to produce a change detection map directly from two images using a pretrained CNN. This method avoids the limited performance of hand-crafted features. Firstly, CNN features are extracted through different convolutional layers. Then, a concatenation step follows a normalization step, resulting in a unique higher-dimensional feature map. Finally, a change map is computed using the pixel-wise Euclidean distance. Our method has been validated on real bitemporal HRRS satellite images through qualitative and quantitative analyses. The results obtained confirm the interest of the proposed method.
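
    Given two co-registered feature maps extracted by a pretrained CNN, the final step reduces to a pixel-wise Euclidean distance; a minimal sketch, with array shapes assumed and a normalization added for display:

        import numpy as np

        def change_map(feat_t1, feat_t2):
            # feat_t1, feat_t2: (H, W, C) feature maps of the two dates;
            # large distances indicate likely change.
            d = np.sqrt(((feat_t1 - feat_t2) ** 2).sum(axis=-1))
            return (d - d.min()) / (d.max() - d.min() + 1e-12)  # scaled to [0, 1]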

  18. Convolution approach to the piNN system

    CERN Document Server

    Blankleider, B

    1994-01-01

    The unitary NN-piNN model contains a serious theoretical flaw: unitarity is obtained at the price of having to use an effective piNN coupling constant that is smaller than the experimental one. This is but one aspect of a more general renormalization problem whose origin lies in the truncation of Hilbert space used to derive the equations. Here we present a new theoretical approach to the piNN problem where unitary equations are obtained without having to truncate Hilbert space. Indeed, the only approximation made is the neglect of connected three-body forces. As all possible dressings of one-particle propagators and vertices are retained in our model, we overcome the renormalization problems inherent in previous piNN theories. The key element of our derivation is the use of convolution integrals that have enabled us to sum all the possible disconnected time-ordered graphs. We also discuss how the convolution method can be extended to sum all the time orderings of a connected graph. This has enabled us to cal...

  19. Real-Time Video Convolutional Face Finder on Embedded Platforms

    Directory of Open Access Journals (Sweden)

    Franck Mamalet

    2007-03-01

    Full Text Available A high-level optimization methodology is applied for implementing the well-known convolutional face finder (CFF algorithm for real-time applications on mobile phones, such as teleconferencing, advanced user interfaces, image indexing, and security access control. CFF is based on a feature extraction and classification technique which consists of a pipeline of convolutions and subsampling operations. The design of embedded systems requires a good trade-off between performance and code size due to the limited amount of available resources. The followed methodology copes with the main drawbacks of the original implementation of CFF such as floating-point computation and memory allocation, in order to allow parallelism exploitation and perform algorithm optimizations. Experimental results show that our embedded face detection system can accurately locate faces with less computational load and memory cost. It runs on a 275 MHz Starcore DSP at 35 QCIF images/s with state-of-the-art detection rates and very low false alarm rates.

  20. A Mathematical Motivation for Complex-Valued Convolutional Networks.

    Science.gov (United States)

    Tygert, Mark; Bruna, Joan; Chintala, Soumith; LeCun, Yann; Piantino, Serkan; Szlam, Arthur

    2016-05-01

    A complex-valued convolutional network (convnet) implements the repeated application of the following composition of three operations, recursively applying the composition to an input vector of nonnegative real numbers: (1) convolution with complex-valued vectors, followed by (2) taking the absolute value of every entry of the resulting vectors, followed by (3) local averaging. For processing real-valued random vectors, complex-valued convnets can be viewed as data-driven multiscale windowed power spectra, data-driven multiscale windowed absolute spectra, data-driven multiwavelet absolute values, or (in their most general configuration) data-driven nonlinear multiwavelet packets. Indeed, complex-valued convnets can calculate multiscale windowed spectra when the convnet filters are windowed complex-valued exponentials. Standard real-valued convnets, using rectified linear units (ReLUs), sigmoidal (e.g., logistic or tanh) nonlinearities, or max pooling, for example, do not obviously exhibit the same exact correspondence with data-driven wavelets (whereas for complex-valued convnets, the correspondence is much more than just a vague analogy). Courtesy of the exact correspondence, the remarkably rich and rigorous body of mathematical analysis for wavelets applies directly to (complex-valued) convnets.
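
    A minimal one-dimensional sketch of the three-operation composition; the Hann-windowed complex exponential filter follows the windowed-spectra remark above, while the pooling length and test signal are assumptions.

        import numpy as np

        def complex_convnet_layer(x, filt, pool=4):
            # (1) convolution with a complex-valued filter,
            # (2) entry-wise absolute value,
            # (3) local averaging over non-overlapping windows.
            y = np.convolve(x, filt, mode="valid")
            y = np.abs(y)
            n = len(y) // pool * pool
            return y[:n].reshape(-1, pool).mean(axis=1)

        t = np.arange(16)
        filt = np.hanning(16) * np.exp(2j * np.pi * 0.1 * t)  # windowed exponential
        x = np.random.default_rng(0).standard_normal(256)     # real-valued input
        out = complex_convnet_layer(x, filt)  # a windowed spectral magnitude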

  1. Classification of Histology Sections via Multispectral Convolutional Sparse Coding.

    Science.gov (United States)

    Zhou, Yin; Chang, Hang; Barner, Kenneth; Spellman, Paul; Parvin, Bahram

    2014-06-01

    Image-based classification of histology sections plays an important role in predicting clinical outcomes. However, this task is very challenging due to the presence of large technical variations (e.g., fixation, staining) and biological heterogeneities (e.g., cell type, cell state). In the field of biomedical imaging, for the purposes of visualization and/or quantification, different stains are typically used for different targets of interest (e.g., cellular/subcellular events), which generates multi-spectrum data (images) through various types of microscopes and, as a result, provides the possibility of learning biological-component-specific features by exploiting multispectral information. We propose a multispectral feature learning model that automatically learns a set of convolution filter banks from separate spectra to efficiently discover the intrinsic tissue morphometric signatures, based on convolutional sparse coding (CSC). The learned feature representations are then aggregated through the spatial pyramid matching framework (SPM) and finally classified using a linear SVM. The proposed system has been evaluated using two large-scale tumor cohorts collected from The Cancer Genome Atlas (TCGA). Experimental results show that the proposed model 1) outperforms systems utilizing sparse coding for unsupervised feature learning (e.g., PSD-SPM [5]); and 2) is competitive with systems built upon features with biological prior knowledge (e.g., SMLSPM [4]).

  2. Convolutional Neural Network Based Fault Detection for Rotating Machinery

    Science.gov (United States)

    Janssens, Olivier; Slavkovikj, Viktor; Vervisch, Bram; Stockman, Kurt; Loccufier, Mia; Verstockt, Steven; Van de Walle, Rik; Van Hoecke, Sofie

    2016-09-01

    Vibration analysis is a well-established technique for condition monitoring of rotating machines, as the vibration patterns differ depending on the fault or machine condition. Currently, mainly manually engineered features, such as the ball pass frequencies of the raceway, RMS, kurtosis, and crest, are used for automatic fault detection. Unfortunately, engineering and interpreting such features requires a significant level of human expertise. To enable non-experts in vibration analysis to perform condition monitoring, the overhead of feature engineering for specific faults needs to be reduced as much as possible. Therefore, in this article we propose a feature learning model for condition monitoring based on convolutional neural networks. The goal of this approach is to autonomously learn useful features for bearing fault detection from the data itself. Several types of bearing faults such as outer-raceway faults and lubrication degradation are considered, but healthy bearings and rotor imbalance are also included. For each condition, several bearings are tested to ensure generalization of the fault-detection system. Furthermore, the feature-learning based approach is compared to a feature-engineering based approach using the same data to objectively quantify their performance. The results indicate that the feature-learning system, based on convolutional neural networks, significantly outperforms the classical feature-engineering based approach, which uses manually engineered features and a random forest classifier. The former achieves an accuracy of 93.61 percent and the latter an accuracy of 87.25 percent.

  3. Asymptotic formula for the moments of Bernoulli convolutions

    Directory of Open Access Journals (Sweden)

    E. A. Timofeev

    2016-01-01

    Full Text Available For each λ, 0 < λ < 1, we define a random variable Y_λ = (1 − λ) ∑_{n=0}^∞ ξ_n λ^n, where the ξ_n are independent random variables with P{ξ_n = 0} = P{ξ_n = 1} = 1/2. The distribution of Y_λ is called a symmetric Bernoulli convolution. The main result of this paper is the asymptotic formula M_n = E Y_λ^n = n^{log_λ 2} · 2^{log_λ(1−λ) + 0.5 log_λ 2 − 0.5} · e^{τ(−log_λ n)} · (1 + O(n^{−0.99})), where τ is a 1-periodic function, τ(x) = ∑_{k≠0} α(k/(−ln λ)) e^{2πikx}, with α(t) = −(1/(2i sh(π²t))) · (1 − λ)^{2πit} (1 − 2^{2πit}) π^{−2πit} 2^{−2πit} ζ(2πit), and ζ(z) is the Riemann zeta function. The article is published in the author's wording.

  4. Transforming Musical Signals through a Genre Classifying Convolutional Neural Network

    Science.gov (United States)

    Geng, S.; Ren, G.; Ogihara, M.

    2017-05-01

    Convolutional neural networks (CNNs) have been successfully applied on both discriminative and generative modeling for music-related tasks. For a particular task, the trained CNN contains information representing the decision making or the abstracting process. One can hope to manipulate existing music based on this 'informed' network and create music with new features corresponding to the knowledge obtained by the network. In this paper, we propose a method to utilize the stored information from a CNN trained on musical genre classification task. The network was composed of three convolutional layers, and was trained to classify five-second song clips into five different genres. After training, randomly selected clips were modified by maximizing the sum of outputs from the network layers. In addition to the potential of such CNNs to produce interesting audio transformation, more information about the network and the original music could be obtained from the analysis of the generated features since these features indicate how the network 'understands' the music.

  5. Single-Cell Phenotype Classification Using Deep Convolutional Neural Networks.

    Science.gov (United States)

    Dürr, Oliver; Sick, Beate

    2016-10-01

    Deep learning methods are currently outperforming traditional state-of-the-art computer vision algorithms in diverse applications and recently even surpassed human performance in object recognition. Here we demonstrate the potential of deep learning methods for high-content screening-based phenotype classification. We trained a deep learning classifier in the form of convolutional neural networks with approximately 40,000 publicly available single-cell images from samples treated with compounds from four classes known to lead to different phenotypes. The input data consisted of multichannel images. The construction of appropriate feature definitions was part of the training and carried out by the convolutional network, without the need for expert knowledge or handcrafted features. We compare our results against the recent state-of-the-art pipeline in which predefined features are extracted from each cell using specialized software and then fed into various machine learning algorithms (support vector machine, Fisher linear discriminant, random forest) for classification. The performance of all classification approaches is evaluated on an untouched test image set with known phenotype classes. Compared to the best reference machine learning algorithm, the misclassification rate is reduced from 8.9% to 6.6%.

  6. Enhancing Neutron Beam Production with a Convoluted Moderator

    Energy Technology Data Exchange (ETDEWEB)

    Iverson, Erik B [ORNL; Baxter, David V [Center for the Exploration of Energy and Matter, Indiana University; Muhrer, Guenter [Los Alamos National Laboratory (LANL); Ansell, Stuart [ISIS Facility, Rutherford Appleton Laboratory (ISIS); Gallmeier, Franz X [ORNL; Dalgliesh, Robert [ISIS Facility, Rutherford Appleton Laboratory (ISIS); Lu, Wei [ORNL; Kaiser, Helmut [Center for the Exploration of Energy and Matter, Indiana University

    2014-10-01

    We describe a new concept for a neutron moderating assembly resulting in the more efficient production of slow neutron beams. The Convoluted Moderator, a heterogeneous stack of interleaved moderating material and nearly transparent single-crystal spacers, is a directionally-enhanced neutron beam source, improving beam effectiveness over an angular range comparable to the range accepted by neutron beam lines and guides. We have demonstrated gains of 50% in slow neutron intensity for a given fast neutron production rate while simultaneously reducing the wavelength-dependent emission time dispersion by 25%, both coming from a geometric effect in which the neutron beam lines view a large surface area of moderating material in a relatively small volume. Additionally, we have confirmed a Bragg-enhancement effect arising from coherent scattering within the single-crystal spacers. We have not observed hypothesized refractive effects leading to additional gains at long wavelength. In addition to confirmation of the validity of the Convoluted Moderator concept, our measurements provide a series of benchmark experiments suitable for developing simulation and analysis techniques for practical optimization and eventual implementation at slow neutron source facilities.

  7. Multiple deep convolutional neural networks averaging for face alignment

    Science.gov (United States)

    Zhang, Shaohua; Yang, Hua; Yin, Zhouping

    2015-05-01

    Face alignment is critical for face recognition, and deep learning-based methods show promise for solving such issues, given that competitive results are achieved on benchmarks with additional benefits, such as dispensing with handcrafted features and initial shape estimates. However, most existing deep learning-based approaches are complicated and quite time-consuming to train. We propose a compact face alignment method that trains fast without decreasing accuracy. Rectified linear units are employed, which allow the networks to converge approximately five times faster than with tanh neurons. An eight-learnable-layer deep convolutional neural network (DCNN) based on local response normalization and a padding convolutional layer (PCL) is designed to provide reliable initial values during prediction. A model combination scheme is presented to further reduce errors, while showing that only two network architectures and hyperparameter selection procedures are required in our approach. A three-level cascaded system is ultimately built based on the DCNNs and the model combination scheme. Extensive experiments validate the effectiveness of our method and demonstrate accuracy comparable with state-of-the-art methods on the BioID, labeled face parts in the wild, and Helen datasets.

  8. Classifications of multispectral colorectal cancer tissues using convolution neural network

    Directory of Open Access Journals (Sweden)

    Hawraa Haj-Hassan

    2017-01-01

    Full Text Available Background: Colorectal cancer (CRC) is the third most common cancer among men and women. Its diagnosis in early stages, typically done through the analysis of colon biopsy images, can greatly improve the chances of a successful treatment. This paper proposes to use convolutional neural networks (CNNs) to predict three tissue types related to the progression of CRC: benign hyperplasia (BH), intraepithelial neoplasia (IN), and carcinoma (Ca). Methods: Multispectral biopsy images of thirty CRC patients were retrospectively analyzed. Images of tissue samples were divided into three groups based on their type (10 BH, 10 IN, and 10 Ca). An active contour model was used to segment image regions containing pathological tissues. Tissue samples were classified using a CNN containing convolution, max-pooling, and fully-connected layers. Available tissue samples were split into a training set, for learning the CNN parameters, and a test set, for evaluating its performance. Results: An accuracy of 99.17% was obtained from segmented image regions, outperforming existing approaches based on traditional feature extraction and classification techniques. Conclusions: Experimental results demonstrate the effectiveness of CNNs for the classification of CRC tissue types, in particular when using presegmented regions of interest.
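
    A minimal sketch of a CNN with the three named layer types for the three tissue classes, written with the Keras API; the input size, channel count, and layer widths are illustrative, not the paper's architecture.

        import tensorflow as tf
        from tensorflow.keras import layers

        model = tf.keras.Sequential([
            layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 3)),
            layers.MaxPooling2D(2),
            layers.Conv2D(32, 3, activation="relu"),
            layers.MaxPooling2D(2),
            layers.Flatten(),
            layers.Dense(64, activation="relu"),
            layers.Dense(3, activation="softmax"),   # BH / IN / Ca
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])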

  9. Thermalnet: a Deep Convolutional Network for Synthetic Thermal Image Generation

    Science.gov (United States)

    Kniaz, V. V.; Gorbatsevich, V. S.; Mizginov, V. A.

    2017-05-01

    Deep convolutional neural networks have dramatically changed the landscape of modern computer vision. Nowadays, methods based on deep neural networks show the best performance among image recognition and object detection algorithms. While the refinement of network architectures has received a lot of scholarly attention, from a practical point of view the preparation of a large image dataset for successful training of a neural network has become one of the major challenges. This challenge is particularly profound for image recognition in wavelengths lying outside the visible spectrum. For example, no infrared or radar image datasets large enough for successful training of a deep neural network are available to date in the public domain. Recent advances in deep neural networks prove that they are also capable of arbitrary image transformations such as super-resolution image generation, grayscale image colorisation, and imitation of the style of a given artist. Thus a natural question arises: how could deep neural networks be used to augment existing large image datasets? This paper focuses on the development of the Thermalnet deep convolutional neural network for augmenting existing large visible image datasets with synthetic thermal images. The Thermalnet network architecture is inspired by colorisation deep neural networks.

  10. Video-based face recognition via convolutional neural networks

    Science.gov (United States)

    Bao, Tianlong; Ding, Chunhui; Karmoshi, Saleem; Zhu, Ming

    2017-06-01

    Face recognition has been widely studied recently, while video-based face recognition remains a challenging task because of the low quality and large intra-class variation of face images captured from video. In this paper, we focus on two scenarios of video-based face recognition: 1) Still-to-Video (S2V) face recognition, i.e., querying a still face image against a gallery of video sequences; and 2) Video-to-Still (V2S) face recognition, the converse of the S2V scenario. A novel method is proposed in this paper to map still and video face images into a Euclidean space via a carefully designed convolutional neural network; Euclidean metrics are then used to measure the distance between still and video images. Identities of still and video images grouped as pairs are used as supervision. In the training stage, a joint loss function that measures the Euclidean distance between the predicted features of training pairs and expanding vectors of still images is optimized to minimize the intra-class variation, while the inter-class variation is guaranteed by the large margin of still images. Transferred features are finally learned via the designed convolutional neural network. Experiments are performed on the COX face dataset. Experimental results show that our method achieves reliable performance compared with other state-of-the-art methods.

  11. Real-Time Video Convolutional Face Finder on Embedded Platforms

    Directory of Open Access Journals (Sweden)

    Mamalet Franck

    2007-01-01

    Full Text Available A high-level optimization methodology is applied for implementing the well-known convolutional face finder (CFF algorithm for real-time applications on mobile phones, such as teleconferencing, advanced user interfaces, image indexing, and security access control. CFF is based on a feature extraction and classification technique which consists of a pipeline of convolutions and subsampling operations. The design of embedded systems requires a good trade-off between performance and code size due to the limited amount of available resources. The followed methodology copes with the main drawbacks of the original implementation of CFF such as floating-point computation and memory allocation, in order to allow parallelism exploitation and perform algorithm optimizations. Experimental results show that our embedded face detection system can accurately locate faces with less computational load and memory cost. It runs on a 275 MHz Starcore DSP at 35 QCIF images/s with state-of-the-art detection rates and very low false alarm rates.

  12. Deep Convolutional Neural Networks for large-scale speech tasks.

    Science.gov (United States)

    Sainath, Tara N; Kingsbury, Brian; Saon, George; Soltau, Hagen; Mohamed, Abdel-rahman; Dahl, George; Ramabhadran, Bhuvana

    2015-04-01

    Convolutional Neural Networks (CNNs) are an alternative type of neural network that can be used to reduce spectral variations and model spectral correlations which exist in signals. Since speech signals exhibit both of these properties, we hypothesize that CNNs are a more effective model for speech compared to Deep Neural Networks (DNNs). In this paper, we explore applying CNNs to large vocabulary continuous speech recognition (LVCSR) tasks. First, we determine the appropriate architecture to make CNNs effective compared to DNNs for LVCSR tasks; specifically, we focus on how many convolutional layers are needed, the appropriate number of hidden units, and the best pooling strategy. Second, we investigate how to incorporate speaker-adapted features, which cannot directly be modeled by CNNs as they do not obey locality in frequency, into the CNN framework. Third, given the importance of sequence training for speech tasks, we introduce a strategy to use ReLU+dropout during Hessian-free sequence training of CNNs. Experiments on 3 LVCSR tasks indicate that a CNN with the proposed speaker-adapted and ReLU+dropout ideas allows for a 12%-14% relative improvement in WER over a strong DNN system, achieving state-of-the-art results on these 3 tasks.

  13. Coronary artery calcification (CAC) classification with deep convolutional neural networks

    Science.gov (United States)

    Liu, Xiuming; Wang, Shice; Deng, Yufeng; Chen, Kuan

    2017-03-01

    Coronary artery calcification (CAC) is a typical marker of coronary artery disease, which is one of the biggest causes of mortality in the U.S. This study evaluates the feasibility of using a deep convolutional neural network (DCNN) to automatically detect CAC in X-ray images. 1768 posteroanterior (PA) view chest X-ray images from Sichuan Province People's Hospital, China were collected retrospectively. Each image is associated with a corresponding diagnostic report written by a trained radiologist (907 normal, 861 diagnosed with CAC). One quarter of the images were randomly selected as test samples; the rest were used as training samples. DCNN models consisting of 2, 4, 6, and 8 convolutional layers were designed using blocks of pre-designed CNN layers. Each block was implemented in Theano with Graphics Processing Units (GPUs). Human-in-the-loop learning was also performed on a subset of 165 images with arteries framed by trained physicians. The results from the DCNN models were compared to the diagnostic reports. The average diagnostic accuracies for models with 2, 4, 6, and 8 layers were 0.85, 0.87, 0.88, and 0.89, respectively. The areas under the curve (AUC) were 0.92, 0.95, 0.95, and 0.96. As the model grows deeper, neither the AUC nor the diagnostic accuracy changed in a statistically significant way. The results of this study indicate that DCNN models have promising potential in the field of intelligent medical image diagnosis practice.

  14. MULTI-VIEW FACE DETECTION BASED ON KERNEL PRINCIPAL COMPONENT ANALYSIS AND KERNEL SUPPORT VECTOR TECHNIQUES

    Directory of Open Access Journals (Sweden)

    Muzhir Shaban Al-Ani

    2011-05-01

    Full Text Available Detecting faces across multiple views is more challenging than in a frontal view. To address this problem, an efficient kernel-machine-based approach is presented in this paper for learning such nonlinear mappings and providing an effective view-based representation for multi-view face detection. In this paper, Kernel Principal Component Analysis (KPCA) is used to project data into the view-subspaces, from which view-based features are computed. Multi-view face detection is performed by classifying each input image into the face or non-face class using a two-class Kernel Support Vector Classifier (KSVC). Experimental results demonstrate successful face detection over a wide range of facial variation in color, illumination conditions, position, scale, orientation, 3D pose, and expression in images from several photo collections.
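
    A minimal sketch of the KPCA-plus-KSVC pipeline using off-the-shelf components; the number of components, kernel choices, and parameters are illustrative.

        from sklearn.pipeline import make_pipeline
        from sklearn.decomposition import KernelPCA
        from sklearn.svm import SVC

        # KPCA projects image windows into kernel view-subspaces; a kernel SVC
        # then separates the face class from the non-face class.
        detector = make_pipeline(
            KernelPCA(n_components=40, kernel="rbf", gamma=1e-4),
            SVC(kernel="rbf", C=10.0),
        )
        # detector.fit(X_train, y_train); detector.predict(X_windows)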

  15. Comparing Alternative Kernels for the Kernel Method of Test Equating: Gaussian, Logistic, and Uniform Kernels. Research Report. ETS RR-08-12

    Science.gov (United States)

    Lee, Yi-Hsuan; von Davier, Alina A.

    2008-01-01

    The kernel equating method (von Davier, Holland, & Thayer, 2004) is based on a flexible family of equipercentile-like equating functions that use a Gaussian kernel to continuize the discrete score distributions. While the classical equipercentile, or percentile-rank, equating method carries out the continuization step by linear interpolation,…

  16. Using convolutional decoding to improve time delay and phase estimation in digital communications

    Energy Technology Data Exchange (ETDEWEB)

    Ormesher, Richard C. (Albuquerque, NM); Mason, John J. (Albuquerque, NM)

    2010-01-26

    The time delay and/or phase of a communication signal received by a digital communication receiver can be estimated based on a convolutional decoding operation that the communication receiver performs on the received communication signal. If the original transmitted communication signal has been spread according to a spreading operation, a corresponding despreading operation can be integrated into the convolutional decoding operation.

  17. An upper bound on the number of errors corrected by a convolutional code

    DEFF Research Database (Denmark)

    Justesen, Jørn

    2000-01-01

    The number of errors that a convolutional code can correct in a segment of the encoded sequence is upper bounded by the number of distinct syndrome sequences of the relevant length.

  18. Embedded Analytical Solutions Improve Accuracy in Convolution-Based Particle Tracking Models using Python

    Science.gov (United States)

    Starn, J. J.

    2013-12-01

    Particle tracking often is used to generate particle-age distributions that are used as impulse-response functions in convolution. A typical application is to produce groundwater solute breakthrough curves (BTC) at endpoint receptors such as pumping wells or streams. The commonly used semi-analytical particle-tracking algorithm based on the assumption of linear velocity gradients between opposing cell faces is computationally very fast when used in combination with finite-difference models. However, large gradients near pumping wells in regional-scale groundwater-flow models often are not well represented because of cell-size limitations. This leads to inaccurate velocity fields, especially at weak sinks. Accurate analytical solutions for velocity near a pumping well are available, and various boundary conditions can be imposed using image-well theory. Python can be used to embed these solutions into existing semi-analytical particle-tracking codes, thereby maintaining the integrity and quality-assurance of the existing code. Python (and associated scientific computational packages NumPy, SciPy, and Matplotlib) is an effective tool because of its wide ranging capability. Python text processing allows complex and database-like manipulation of model input and output files, including binary and HDF5 files. High-level functions in the language include ODE solvers to solve first-order particle-location ODEs, Gaussian kernel density estimation to compute smooth particle-age distributions, and convolution. The highly vectorized nature of NumPy arrays and functions minimizes the need for computationally expensive loops. A modular Python code base has been developed to compute BTCs using embedded analytical solutions at pumping wells based on an existing well-documented finite-difference groundwater-flow simulation code (MODFLOW) and a semi-analytical particle-tracking code (MODPATH). The Python code base is tested by comparing BTCs with highly discretized synthetic steady
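
    A minimal sketch of the convolution step described above, using the named tools (Gaussian kernel density estimation and discrete convolution); the particle ages and input history are hypothetical.

        import numpy as np
        from scipy.stats import gaussian_kde

        ages = np.array([12.0, 15.5, 19.0, 23.5, 31.0, 44.0])  # particle ages, years

        t = np.linspace(0.0, 100.0, 1001)    # time axis, years
        dt = t[1] - t[0]
        age_pdf = gaussian_kde(ages)(t)      # smooth particle-age distribution

        c_in = np.where(t < 30.0, 1.0, 0.0)  # hypothetical input concentration
        # Convolve the input history with the age distribution (impulse response)
        # to obtain the breakthrough curve at the receptor.
        btc = np.convolve(c_in, age_pdf, mode="full")[:len(t)] * dt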

  19. Kernels for Vector-Valued Functions: a Review

    CERN Document Server

    Alvarez, Mauricio A; Lawrence, Neil D

    2011-01-01

    Kernel methods are among the most popular techniques in machine learning. From a frequentist/discriminative perspective, they play a central role in regularization theory, as they provide a natural choice for the hypothesis space and the regularization functional through the notion of reproducing kernel Hilbert spaces. From a Bayesian/generative perspective, they are the key in the context of Gaussian processes, where the kernel function is also known as the covariance function. Traditionally, kernel methods have been used in supervised learning problems with scalar outputs, and indeed there has been a considerable amount of work devoted to designing and learning kernels. More recently there has been an increasing interest in methods that deal with multiple outputs, motivated partly by frameworks like multitask learning. In this paper, we review different methods to design or learn valid kernel functions for multiple outputs, paying particular attention to the connection between probabilistic and functional method...

  20. The effect of whitening transformation on pooling operations in convolutional autoencoders

    Science.gov (United States)

    Li, Zuhe; Fan, Yangyu; Liu, Weihua

    2015-12-01

    Convolutional autoencoders (CAEs) are unsupervised feature extractors for high-resolution images. In the pre-processing step, whitening transformation has widely been adopted to remove redundancy by making adjacent pixels less correlated. Pooling is a biologically inspired operation that reduces the resolution of feature maps and achieves spatial invariance in convolutional neural networks. In most previous work, pooling methods have been determined empirically. Our main purpose is therefore to study the relationship between whitening processing and pooling operations in convolutional autoencoders for image classification. We propose an adaptive pooling approach based on the concept of information entropy to test the effect of whitening on pooling in different conditions. Experimental results on benchmark datasets indicate that the performance of pooling strategies is associated with the distribution of feature activations, which can be affected by whitening processing. This provides guidance for the selection of pooling methods in convolutional autoencoders and other convolutional neural networks.
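
    For reference, a minimal ZCA whitening sketch of the kind used in the pre-processing step; the epsilon and data layout are conventional choices, not the paper's exact settings.

        import numpy as np

        def zca_whiten(X, eps=1e-5):
            # X: one flattened image patch per row. ZCA whitening decorrelates
            # adjacent pixels while staying close to the original pixel space.
            Xc = X - X.mean(axis=0)
            cov = Xc.T @ Xc / len(Xc)
            U, S, _ = np.linalg.svd(cov)
            W = U @ np.diag(1.0 / np.sqrt(S + eps)) @ U.T
            return Xc @ W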