WorldWideScience

Sample records for sparsity regularization approach

  1. Sparsity regularization for parameter identification problems

    International Nuclear Information System (INIS)

    Jin, Bangti; Maass, Peter

    2012-01-01

    The investigation of regularization schemes with sparsity promoting penalty terms has been one of the dominant topics in the field of inverse problems in recent years, and Tikhonov functionals with ℓp-penalty terms for 1 ⩽ p ⩽ 2 have been studied extensively. The first investigations focused on regularization properties of the minimizers of such functionals with linear operators and on iteration schemes for approximating the minimizers. These results were quickly transferred to nonlinear operator equations, including nonsmooth operators and more general function space settings. The latest results on regularization properties additionally assume a sparse representation of the true solution as well as generalized source conditions, which yield some surprising and optimal convergence rates. The regularization theory with ℓp sparsity constraints is relatively complete in this setting; see the first part of this review. In contrast, the development of efficient numerical schemes for approximating minimizers of Tikhonov functionals with sparsity constraints for nonlinear operators is still ongoing. The basic iterated soft shrinkage approach has been extended in several directions and semi-smooth Newton methods are becoming applicable in this field. In particular, the extension to more general non-convex, non-differentiable functionals by variational principles leads to a variety of generalized iteration schemes. We focus on such iteration schemes in the second part of this review. A major part of this survey is devoted to applying sparsity constrained regularization techniques to parameter identification problems for partial differential equations, which we regard as the prototypical setting for nonlinear inverse problems. Parameter identification problems exhibit different levels of complexity and we aim at characterizing a hierarchy of such problems. The operator defining these inverse problems is the parameter-to-state mapping. We first summarize some
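
    The iterated soft shrinkage scheme mentioned above is, for a linear forward operator, proximal gradient descent on the ℓ1-penalized Tikhonov functional. A minimal illustrative sketch (not the authors' implementation; the operator A, data y and penalty weight below are toy placeholders):

```python
import numpy as np

def soft_threshold(x, t):
    """Soft shrinkage: proximal operator of t*||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, alpha, n_iter=200):
    """Iterated soft shrinkage for min_x 0.5*||A x - y||^2 + alpha*||x||_1."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                # gradient of the data-fit term
        x = soft_threshold(x - step * grad, step * alpha)
    return x

# Toy usage: recover a sparse vector from a few noisy linear measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 200))
x_true = np.zeros(200)
x_true[rng.choice(200, 5, replace=False)] = 1.0
y = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat = ista(A, y, alpha=0.05)
```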

  2. A new approach to global seismic tomography based on regularization by sparsity in a novel 3D spherical wavelet basis

    Science.gov (United States)

    Loris, Ignace; Simons, Frederik J.; Daubechies, Ingrid; Nolet, Guust; Fornasier, Massimo; Vetter, Philip; Judd, Stephen; Voronin, Sergey; Vonesch, Cédric; Charléty, Jean

    2010-05-01

    Global seismic wavespeed models are routinely parameterized in terms of spherical harmonics, networks of tetrahedral nodes, rectangular voxels, or spherical splines. Up to now, Earth model parametrizations by wavelets on the three-dimensional ball remain uncommon. Here we propose such a procedure with the following three goals in mind: (1) The multiresolution character of a wavelet basis allows for the models to be represented with an effective spatial resolution that varies as a function of position within the Earth. (2) This property can be used to great advantage in the regularization of seismic inversion schemes by seeking the most sparse solution vector, in wavelet space, through iterative minimization of a combination of the ℓ2 (to fit the data) and ℓ1 norms (to promote sparsity in wavelet space). (3) With the continuing increase in high-quality seismic data, our focus is also on numerical efficiency and the ability to use parallel computing in reconstructing the model. In this presentation we propose a new wavelet basis to take advantage of these three properties. To form the numerical grid we begin with a surface tessellation known as the 'cubed sphere', a construction popular in fluid dynamics and computational seismology, coupled with a semi-regular radial subdivision that honors the major seismic discontinuities between the core-mantle boundary and the surface. This mapping first divides the volume of the mantle into six portions. In each 'chunk' two angular and one radial variable are used for parametrization. In the new variables standard 'cartesian' algorithms can more easily be used to perform the wavelet transform (or other common transforms). Edges between chunks are handled by special boundary filters. We highlight the benefits of this construction and use it to analyze the information present in several published seismic compressional-wavespeed models of the mantle, paying special attention to the statistics of wavelet and scaling coefficients

  3. SparseBeads data: benchmarking sparsity-regularized computed tomography

    Science.gov (United States)

    Jørgensen, Jakob S.; Coban, Sophia B.; Lionheart, William R. B.; McDonald, Samuel A.; Withers, Philip J.

    2017-12-01

    Sparsity regularization (SR) such as total variation (TV) minimization allows accurate image reconstruction in x-ray computed tomography (CT) from fewer projections than analytical methods. Exactly how few projections suffice and how this number may depend on the image remain poorly understood. Compressive sensing connects the critical number of projections to the image sparsity, but does not cover CT; however, empirical results suggest a similar connection. The present work establishes for real CT data a connection between gradient sparsity and the sufficient number of projections for accurate TV-regularized reconstruction. A collection of 48 x-ray CT datasets called SparseBeads was designed for benchmarking SR reconstruction algorithms. Beadpacks comprising glass beads of five different sizes as well as mixtures were scanned in a micro-CT scanner to provide structured datasets with variable image sparsity levels, numbers of projections and noise levels to allow the systematic assessment of parameters affecting performance of SR reconstruction algorithms. Using the SparseBeads data, TV-regularized reconstruction quality was assessed as a function of numbers of projections and gradient sparsity. The critical number of projections for satisfactory TV-regularized reconstruction increased almost linearly with the gradient sparsity. This establishes a quantitative guideline from which one may predict how few projections to acquire based on the expected sample sparsity level, as an aid in planning dose- or time-critical experiments. The results are expected to hold for samples of similar characteristics, i.e. consisting of few, distinct phases with relatively simple structure. Such cases are plentiful in porous media, composite materials, foams, as well as non-destructive testing and metrology. For samples of other characteristics the proposed methodology may be used to investigate similar relations.
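
    The gradient sparsity referred to above is simply the number of image pixels with a nonzero discrete gradient magnitude. A small sketch of how one might measure it for a candidate image (the thresholding tolerance is an assumption, not a value from the paper):

```python
import numpy as np

def gradient_sparsity(image, tol=1e-6):
    """Count pixels whose discrete gradient magnitude is nonzero; this is the
    quantity the SparseBeads study relates to the critical number of projections."""
    gx = np.diff(image, axis=0, append=image[-1:, :])
    gy = np.diff(image, axis=1, append=image[:, -1:])
    return int(np.count_nonzero(np.hypot(gx, gy) > tol))

# Example: a piecewise-constant disc phantom is gradient-sparse
# (only boundary pixels contribute).
yy, xx = np.mgrid[-64:64, -64:64]
phantom = (xx**2 + yy**2 < 40**2).astype(float)
print(gradient_sparsity(phantom))
```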

  4. Efficient regularization with wavelet sparsity constraints in photoacoustic tomography

    Science.gov (United States)

    Frikel, Jürgen; Haltmeier, Markus

    2018-02-01

    In this paper, we consider the reconstruction problem of photoacoustic tomography (PAT) with a flat observation surface. We develop a direct reconstruction method that employs regularization with wavelet sparsity constraints. To that end, we derive a wavelet-vaguelette decomposition (WVD) for the PAT forward operator and a corresponding explicit reconstruction formula in the case of exact data. In the case of noisy data, we combine the WVD reconstruction formula with soft-thresholding, which yields a spatially adaptive estimation method. We demonstrate that our method is statistically optimal for white random noise if the unknown function is assumed to lie in any Besov-ball. We present generalizations of this approach and, in particular, we discuss the combination of PAT-vaguelette soft-thresholding with a total variation (TV) prior. We also provide an efficient implementation of the PAT-vaguelette transform that leads to fast image reconstruction algorithms supported by numerical results.
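
    The soft-thresholding step described here acts coefficient-wise in a wavelet-type (vaguelette) domain. Below is a generic wavelet soft-thresholding sketch using PyWavelets, applied directly to an image rather than to PAT vaguelette coefficients; the wavelet, decomposition level and threshold are placeholders:

```python
import numpy as np
import pywt

def wavelet_soft_threshold(image, wavelet="db4", level=3, thresh=0.5):
    """Decompose, soft-threshold the detail coefficients, reconstruct."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    new_coeffs = [coeffs[0]]                     # keep the approximation band untouched
    for details in coeffs[1:]:
        new_coeffs.append(tuple(pywt.threshold(d, thresh, mode="soft")
                                for d in details))
    return pywt.waverec2(new_coeffs, wavelet)

noisy = np.random.default_rng(1).standard_normal((128, 128))
denoised = wavelet_soft_threshold(noisy)
```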

  5. SparseBeads data: benchmarking sparsity-regularized computed tomography

    DEFF Research Database (Denmark)

    Jørgensen, Jakob Sauer; Coban, Sophia B.; Lionheart, William R. B.

    2017-01-01

    -regularized reconstruction. A collection of 48 x-ray CT datasets called SparseBeads was designed for benchmarking SR reconstruction algorithms. Beadpacks comprising glass beads of five different sizes as well as mixtures were scanned in a micro-CT scanner to provide structured datasets with variable image sparsity levels...

  6. Basis Expansion Approaches for Regularized Sequential Dictionary Learning Algorithms With Enforced Sparsity for fMRI Data Analysis.

    Science.gov (United States)

    Seghouane, Abd-Krim; Iqbal, Asif

    2017-09-01

    Sequential dictionary learning algorithms have been successfully applied to functional magnetic resonance imaging (fMRI) data analysis. fMRI data sets are, however, structured data matrices with a notion of temporal smoothness in the column direction. This prior information, which can be converted into a constraint of smoothness on the learned dictionary atoms, has seldom been included in classical dictionary learning algorithms when applied to fMRI data analysis. In this paper, we tackle this problem by proposing two new sequential dictionary learning algorithms dedicated to fMRI data analysis by accounting for this prior information. These algorithms differ from the existing ones in their dictionary update stage. The steps of this stage are derived as a variant of the power method for computing the SVD. The proposed algorithms generate regularized dictionary atoms via the solution of a left regularized rank-one matrix approximation problem where temporal smoothness is enforced via regularization through basis expansion and sparse basis expansion in the dictionary update stage. Applications on synthetic data experiments and real fMRI data sets illustrating the performance of the proposed algorithms are provided.

  7. Sparsity-regularized HMAX for visual recognition.

    Directory of Open Access Journals (Sweden)

    Xiaolin Hu

    About ten years ago, HMAX was proposed as a simple and biologically feasible model for object recognition, based on how the visual cortex processes information. However, the model does not encompass sparse firing, which is a hallmark of neurons at all stages of the visual pathway. The current paper presents an improved model, called sparse HMAX, which integrates sparse firing. This model is able to learn higher-level features of objects on unlabeled training images. Unlike most other deep learning models that explicitly address global structure of images in every layer, sparse HMAX addresses local to global structure gradually along the hierarchy by applying patch-based learning to the output of the previous layer. As a consequence, the learning method can be standard sparse coding (SSC) or independent component analysis (ICA), two techniques deeply rooted in neuroscience. What makes SSC and ICA applicable at higher levels is the introduction of linear higher-order statistical regularities by max pooling. After training, high-level units display sparse, invariant selectivity for particular individuals or for image categories like those observed in human inferior temporal cortex (ITC) and medial temporal lobe (MTL). Finally, on an image classification benchmark, sparse HMAX outperforms the original HMAX by a large margin, suggesting its great potential for computer vision.

  8. A reconstruction algorithm for electrical impedance tomography based on sparsity regularization

    KAUST Repository

    Jin, Bangti

    2011-08-24

    This paper develops a novel sparse reconstruction algorithm for the electrical impedance tomography problem of determining a conductivity parameter from boundary measurements. The sparsity of the 'inhomogeneity' with respect to a certain basis is a priori assumed. The proposed approach is motivated by a Tikhonov functional incorporating a sparsity-promoting ℓ1-penalty term, and it allows us to obtain quantitative results when the assumption is valid. A novel iterative algorithm of soft shrinkage type was proposed. Numerical results for several two-dimensional problems with both single and multiple convex and nonconvex inclusions were presented to illustrate the features of the proposed algorithm and were compared with one conventional approach based on smoothness regularization. © 2011 John Wiley & Sons, Ltd.

  9. Stochastically Estimating Modular Criticality in Large-Scale Logic Circuits Using Sparsity Regularization and Compressive Sensing

    Directory of Open Access Journals (Sweden)

    Mohammed Alawad

    2015-03-01

    This paper considers the problem of how to efficiently measure a large and complex information field with optimally few observations. Specifically, we investigate how to stochastically estimate modular criticality values in a large-scale digital circuit with a very limited number of measurements in order to minimize the total measurement efforts and time. We prove that, through sparsity-promoting transform domain regularization and by strategically integrating compressive sensing with Bayesian learning, more than 98% of the overall measurement accuracy can be achieved with fewer than 10% of the measurements required by a conventional approach that uses exhaustive measurements. Furthermore, we illustrate that the obtained criticality results can be utilized to selectively fortify large-scale digital circuits for operation with narrow voltage headrooms and in the presence of soft errors arising at near-threshold voltage levels, without excessive hardware overheads. Our numerical simulation results show that, by optimally allocating only 10% circuit redundancy, for some large-scale benchmark circuits, we can achieve more than a three-fold reduction in overall error probability, whereas randomly distributing the same 10% hardware resource yields less than a 2% improvement in the target circuit's overall robustness. Finally, we conjecture that our proposed approach can be readily applied to estimate other essential properties of digital circuits that are critical to designing and analyzing them, such as the observability measure in reliability analysis and the path delay estimation in stochastic timing analysis. The only key requirement of our proposed methodology is that these global information fields exhibit a certain degree of smoothness, which is universally true for almost any physical phenomenon.

  10. On the interplay of basis smoothness and specific range conditions occurring in sparsity regularization

    International Nuclear Information System (INIS)

    Anzengruber, Stephan W; Hofmann, Bernd; Ramlau, Ronny

    2013-01-01

    The convergence rates results in ℓ1-regularization when the sparsity assumption is narrowly missed, presented by Burger et al (2013 Inverse Problems 29 025013), are based on a crucial condition which requires that all basis elements belong to the range of the adjoint of the forward operator. Partly it was conjectured that such a condition is very restrictive. In this context, we study sparsity-promoting varieties of Tikhonov regularization for linear ill-posed problems with respect to an orthonormal basis in a separable Hilbert space using ℓ1 and sublinear penalty terms. In particular, we show that the corresponding range condition is always satisfied for all basis elements if the problems are well-posed in a certain weaker topology and the basis elements are chosen appropriately related to an associated Gelfand triple. The Radon transform, Symm's integral equation and linear integral operators of Volterra type are examples for such behaviour, which allows us to apply convergence rates results for non-sparse solutions, and we further extend these results also to the case of non-convex ℓq-regularization with 0 < q < 1. (paper)
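
    For reference, the ℓ1-penalized Tikhonov functional and the range condition at issue in this record can be written as follows (a standard formulation rather than a quotation from the paper; A denotes the forward operator, {e_k} the orthonormal basis and y^δ the noisy data):

```latex
T_\alpha(x) \;=\; \tfrac{1}{2}\,\|A x - y^{\delta}\|^{2} \;+\; \alpha \sum_{k} \bigl|\langle x, e_k \rangle\bigr| ,
\qquad \text{range condition: } e_k \in \mathcal{R}(A^{*}) \ \text{for all } k .
```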

  11. Limited-angle multi-energy CT using joint clustering prior and sparsity regularization

    Science.gov (United States)

    Zhang, Huayu; Xing, Yuxiang

    2016-03-01

    In this article, we present an easy-to-implement multi-energy CT scanning strategy and a corresponding reconstruction method, which facilitate spectral CT imaging by improving the data efficiency by a factor equal to the number of energy channels, without introducing the visible limited-angle artifacts that would otherwise be caused by reducing projection views. Leveraging the structure coherence at different energies, we first pre-reconstruct a prior structure information image using projection data from all energy channels. Then, we perform a k-means clustering on the prior image to generate a sparse dictionary representation for the image, which serves as a structure information constraint. We combine this constraint with a conventional compressed sensing method and propose a new model which we refer to as Joint Clustering Prior and Sparsity Regularization (CPSR). CPSR is a convex problem and we solve it by the Alternating Direction Method of Multipliers (ADMM). We verify our CPSR reconstruction method with a numerical simulation experiment. A dental phantom with complicated structures of teeth and soft tissues is used. X-ray beams from three spectra of different peak energies (120 kVp, 90 kVp, 60 kVp) irradiate the phantom to form tri-energy projections. Projection data covering only 75° from each energy spectrum are collected for reconstruction. Independent reconstruction for each energy will cause severe limited-angle artifacts even with the help of compressed sensing approaches. Our CPSR provides images free of limited-angle artifacts. All edge details are well preserved in our experimental study.
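
    The clustering prior can be illustrated with a small sketch: the prior image reconstructed from all energy channels is clustered with k-means, and each pixel of a per-channel reconstruction is then encouraged to stay near its cluster centroid. The number of clusters and the penalty weight below are illustrative assumptions, not values from the paper:

```python
import numpy as np
from sklearn.cluster import KMeans

def clustering_prior(prior_image, n_clusters=4):
    """k-means on the pixel values of the prior image; returns the label map
    and centroids that define a piecewise-constant structure constraint."""
    pixels = prior_image.reshape(-1, 1)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(pixels)
    labels = km.labels_.reshape(prior_image.shape)
    return labels, km.cluster_centers_.ravel()

def prior_penalty(image, labels, centroids, weight=1.0):
    """Quadratic penalty pulling each pixel toward its cluster centroid."""
    return weight * np.sum((image - centroids[labels]) ** 2)
```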

  12. A sparsity-regularized Born iterative method for reconstruction of two-dimensional piecewise continuous inhomogeneous domains

    KAUST Repository

    Sandhu, Ali Imran; Desmal, Abdulla; Bagci, Hakan

    2016-01-01

    A sparsity-regularized Born iterative method (BIM) is proposed for efficiently reconstructing two-dimensional piecewise-continuous inhomogeneous dielectric profiles. Such profiles are typically not spatially sparse, which reduces the efficiency of the sparsity-promoting regularization. To overcome this problem, scattered fields are represented in terms of the spatial derivative of the dielectric profile and reconstruction is carried out over samples of the dielectric profile's derivative. Then, like the conventional BIM, the nonlinear problem is iteratively converted into a sequence of linear problems (in derivative samples) and sparsity constraint is enforced on each linear problem using the thresholded Landweber iterations. Numerical results, which demonstrate the efficiency and accuracy of the proposed method in reconstructing piecewise-continuous dielectric profiles, are presented.

  13. A sparsity-regularized Born iterative method for reconstruction of two-dimensional piecewise continuous inhomogeneous domains

    KAUST Repository

    Sandhu, Ali Imran

    2016-04-10

    A sparsity-regularized Born iterative method (BIM) is proposed for efficiently reconstructing two-dimensional piecewise-continuous inhomogeneous dielectric profiles. Such profiles are typically not spatially sparse, which reduces the efficiency of the sparsity-promoting regularization. To overcome this problem, scattered fields are represented in terms of the spatial derivative of the dielectric profile and reconstruction is carried out over samples of the dielectric profile's derivative. Then, like the conventional BIM, the nonlinear problem is iteratively converted into a sequence of linear problems (in derivative samples) and sparsity constraint is enforced on each linear problem using the thresholded Landweber iterations. Numerical results, which demonstrate the efficiency and accuracy of the proposed method in reconstructing piecewise-continuous dielectric profiles, are presented.

  14. Block matching sparsity regularization-based image reconstruction for incomplete projection data in computed tomography

    Science.gov (United States)

    Cai, Ailong; Li, Lei; Zheng, Zhizhong; Zhang, Hanming; Wang, Linyuan; Hu, Guoen; Yan, Bin

    2018-02-01

    In medical imaging many conventional regularization methods, such as total variation or total generalized variation, impose strong prior assumptions which can only account for very limited classes of images. A more flexible sparse representation framework for images is still badly needed. Visually understandable images contain meaningful patterns, and combinations or collections of these patterns can be utilized to form sparse and redundant representations which promise to facilitate image reconstruction. In this work, we propose and study block matching sparsity regularization (BMSR) and devise an optimization program using BMSR for computed tomography (CT) image reconstruction from an incomplete projection set. The program is built as a constrained optimization, minimizing the L1-norm of the coefficients of the image in the transformed domain subject to data observation and positivity of the image itself. To solve the program efficiently, a practical method based on the proximal point algorithm is developed and analyzed. In order to accelerate the convergence rate, a practical strategy for tuning the BMSR parameter is proposed and applied. Experimental results for various settings, including real CT scans, verify that the proposed reconstruction method offers promising capabilities compared with conventional regularization.

  15. How little data is enough? Phase-diagram analysis of sparsity-regularized X-ray computed tomography

    DEFF Research Database (Denmark)

    Jørgensen, Jakob Sauer; Sidky, E. Y.

    2015-01-01

    We introduce phase-diagram analysis, a standard tool in compressed sensing (CS), to the X-ray computed tomography (CT) community as a systematic method for determining how few projections suffice for accurate sparsity-regularized reconstruction. In CS, a phase diagram is a convenient way to study and express certain theoretical relations between sparsity and sufficient sampling. We adapt phase-diagram analysis for empirical use in X-ray CT for which the same theoretical results do not hold. We demonstrate in three case studies the potential of phase-diagram analysis for providing quantitative answers ... measurements does not lead to improved performance compared with standard structured sampling patterns. Finally, we show preliminary results of how well phase-diagram analysis can predict the sufficient number of projections for accurately reconstructing a large-scale image of a given sparsity by means ...

  16. Accelerated fast iterative shrinkage thresholding algorithms for sparsity-regularized cone-beam CT image reconstruction

    International Nuclear Information System (INIS)

    Xu, Qiaofeng; Sawatzky, Alex; Anastasio, Mark A.; Yang, Deshan; Tan, Jun

    2016-01-01

    accelerated FISTAs for use with two nonsmooth penalty functions that will lead to further reductions in image reconstruction times while preserving image quality. Moreover, with the help of a mixed sparsity-regularization, better preservation of soft-tissue structures can be potentially obtained. The algorithms were systematically evaluated by use of computer-simulated and clinical data sets.
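
    The acceleration referred to in this record follows the FISTA pattern: an extrapolation (momentum) step is added to the basic iterative shrinkage/thresholding update. A generic sketch for an ℓ1 penalty; the cone-beam system matrix is replaced by a dense placeholder A, and this is not the authors' implementation:

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista(A, y, alpha, n_iter=200):
    """FISTA for min_x 0.5*||A x - y||^2 + alpha*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2                       # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    z = x.copy()
    t = 1.0
    for _ in range(n_iter):
        x_new = soft_threshold(z - (A.T @ (A @ z - y)) / L, alpha / L)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum / extrapolation step
        x, t = x_new, t_new
    return x
```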

  17. Accelerated fast iterative shrinkage thresholding algorithms for sparsity-regularized cone-beam CT image reconstruction

    Science.gov (United States)

    Xu, Qiaofeng; Yang, Deshan; Tan, Jun; Sawatzky, Alex; Anastasio, Mark A.

    2016-01-01

    accelerated FISTAs for use with two nonsmooth penalty functions that will lead to further reductions in image reconstruction times while preserving image quality. Moreover, with the help of a mixed sparsity-regularization, better preservation of soft-tissue structures can be potentially obtained. The algorithms were systematically evaluated by use of computer-simulated and clinical data sets. PMID:27036582

  18. How little data is enough? Phase-diagram analysis of sparsity-regularized X-ray computed tomography.

    Science.gov (United States)

    Jørgensen, J S; Sidky, E Y

    2015-06-13

    We introduce phase-diagram analysis, a standard tool in compressed sensing (CS), to the X-ray computed tomography (CT) community as a systematic method for determining how few projections suffice for accurate sparsity-regularized reconstruction. In CS, a phase diagram is a convenient way to study and express certain theoretical relations between sparsity and sufficient sampling. We adapt phase-diagram analysis for empirical use in X-ray CT for which the same theoretical results do not hold. We demonstrate in three case studies the potential of phase-diagram analysis for providing quantitative answers to questions of undersampling. First, we demonstrate that there are cases where X-ray CT empirically performs comparably with a near-optimal CS strategy, namely taking measurements with Gaussian sensing matrices. Second, we show that, in contrast to what might have been anticipated, taking randomized CT measurements does not lead to improved performance compared with standard structured sampling patterns. Finally, we show preliminary results of how well phase-diagram analysis can predict the sufficient number of projections for accurately reconstructing a large-scale image of a given sparsity by means of total-variation regularization.

  19. Travel time tomography with local image regularization by sparsity constrained dictionary learning

    Science.gov (United States)

    Bianco, M.; Gerstoft, P.

    2017-12-01

    We propose a regularization approach for 2D seismic travel time tomography which models small rectangular groups of slowness pixels, within an overall or 'global' slowness image, as sparse linear combinations of atoms from a dictionary. The groups of slowness pixels are referred to as patches, and a dictionary corresponds to a collection of functions or 'atoms' describing the slowness in each patch. These functions could, for example, be wavelets. The patch regularization is incorporated into the global slowness image. The global image models the broad features, while the local patch images incorporate prior information from the dictionary. Further, high resolution slowness within patches is permitted if the travel times from the global estimates support it. The proposed approach is formulated as an algorithm, which is repeated until convergence is achieved: 1) From travel times, find the global slowness image with a minimum energy constraint on the pixel variance relative to a reference. 2) Find the patch level solutions to fit the global estimate as a sparse linear combination of dictionary atoms. 3) Update the reference as the weighted average of the patch level solutions. This approach relies on the redundancy of the patches in the seismic image. Redundancy means that the patches are repetitions of a finite number of patterns, which are described by the dictionary atoms. Redundancy in the earth's structure was demonstrated in previous work in seismics, where dictionaries of wavelet functions regularized the inversion. We further exploit redundancy of the patches by using dictionary learning algorithms, a form of unsupervised machine learning, to estimate optimal dictionaries from the data in parallel with the inversion. We demonstrate our approach on densely, but irregularly sampled synthetic seismic images.
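
    Step 2 of the loop above (fitting each patch as a sparse combination of learned atoms) can be sketched with off-the-shelf dictionary learning. The patch size, number of atoms and sparsity level below are placeholders, and the slowness image stands in for the global estimate, so this is only an illustration of the idea, not the authors' implementation:

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import (extract_patches_2d,
                                              reconstruct_from_patches_2d)

def patch_sparse_fit(slowness, patch_size=(8, 8), n_atoms=64, n_nonzero=3):
    """Learn a dictionary from patches of the global slowness image and
    re-express every patch as a sparse combination of the learned atoms."""
    patches = extract_patches_2d(slowness, patch_size)
    X = patches.reshape(patches.shape[0], -1).copy()
    means = X.mean(axis=1, keepdims=True)
    X -= means                                           # learn on zero-mean patches
    dico = MiniBatchDictionaryLearning(n_components=n_atoms,
                                       transform_algorithm="omp",
                                       transform_n_nonzero_coefs=n_nonzero,
                                       random_state=0)
    codes = dico.fit_transform(X)                        # sparse patch coefficients
    approx = codes @ dico.components_ + means            # patch-level solutions
    return reconstruct_from_patches_2d(approx.reshape(patches.shape), slowness.shape)
```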

  20. Seismic data two-step recovery approach combining sparsity-promoting and hyperbolic Radon transform methods

    International Nuclear Information System (INIS)

    Wang, Hanchuang; Chen, Shengchang; Ren, Haoran; Liang, Donghui; Zhou, Huamin; She, Deping

    2015-01-01

    In current research on seismic data recovery problems, the sparsity-promoting method usually produces an insufficient recovery result at the locations of null traces. The HRT (hyperbolic Radon transform) method can be applied to problems of seismic data recovery with approximately hyperbolic events. Influenced by deviations of hyperbolic characteristics between real and ideal travel-time curves, some spurious events are usually introduced and the recovery effect of intermediate and far-offset traces is worse than that of near-offset traces. Sparsity-promoting recovery is primarily dependent on the sparsity of the seismic data in the sparse transform domain (i.e. on the local waveform characteristics), whereas HRT recovery is severely affected by the global characteristics of the seismic events. Inspired by these observations, a two-step recovery approach combining sparsity-promoting and time-invariant HRT methods is proposed, which is based on both the local and global characteristics of the seismic data. Two implementation strategies are presented in detail, and the selection criteria for the relevant strategies are also discussed. Numerical examples with synthetic and real data verify that the new approach can achieve a better recovery effect by simultaneously overcoming the shortcomings of sparsity-promoting recovery and HRT recovery. (paper)

  1. Wavelet-sparsity based regularization over time in the inverse problem of electrocardiography.

    Science.gov (United States)

    Cluitmans, Matthijs J M; Karel, Joël M H; Bonizzi, Pietro; Volders, Paul G A; Westra, Ronald L; Peeters, Ralf L M

    2013-01-01

    Noninvasive, detailed assessment of electrical cardiac activity at the level of the heart surface has the potential to revolutionize diagnostics and therapy of cardiac pathologies. Due to the requirement of noninvasiveness, body-surface potentials are measured and have to be projected back to the heart surface, yielding an ill-posed inverse problem. Ill-posedness implies that solutions to this problem are non-unique, resulting in a problem of choice. In the current paper, it is proposed to restrict this choice by requiring that the time series of reconstructed heart-surface potentials is sparse in the wavelet domain. A local search technique is introduced that pursues a sparse solution, using an orthogonal wavelet transform. Epicardial potentials reconstructed with this method are compared to those from existing methods, and validated with actual intracardiac recordings. The new technique improves the reconstructions in terms of smoothness and recovers physiologically meaningful details. Additionally, reconstruction of activation timing seems to be improved when pursuing sparsity of the reconstructed signals in the wavelet domain.

  2. Sparsity-based shrinkage approach for practicability improvement of H-LBP-based edge extraction

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Chenyi [School of Physics, Northeast Normal University, Changchun 130024 (China); Qiao, Shuang, E-mail: qiaos810@nenu.edu.cn [School of Physics, Northeast Normal University, Changchun 130024 (China); Sun, Jianing, E-mail: sunjn118@nenu.edu.cn [School of Mathematics and Statistics, Northeast Normal University, Changchun 130024 (China); Zhao, Ruikun; Wu, Wei [Jilin Cancer Hospital, Changchun 130021 (China)

    2016-07-21

    The local binary pattern with H function (H-LBP) technique enables fast and efficient edge extraction in digital radiography. In this paper, we reformulate the model of H-LBP and propose a novel sparsity-based shrinkage approach, in which the threshold can be adapted to the data sparsity. Using this model, we upgrade the fast H-LBP framework and apply it to real digital radiography. The experiments show that the method improved with the new shrinkage approach avoids elaborate manual tuning of parameters and possesses greater robustness in edge extraction compared with other current methods, without increasing processing time. - Highlights: • A novel sparsity-based shrinkage approach for edge extraction in digital radiography is proposed. • The threshold of SS-LBP can adapt to the data sparsity. • SS-LBP is a development of AH-LBP and H-LBP. • Without increasing processing time or losing processing efficiency, SS-LBP avoids elaborate manual tuning of parameters. • SS-LBP has more robust performance in edge extraction compared with existing methods.

  3. A Psychoacoustic-Based Multiple Audio Object Coding Approach via Intra-Object Sparsity

    Directory of Open Access Journals (Sweden)

    Maoshen Jia

    2017-12-01

    Rendering spatial sound scenes via audio objects has become popular in recent years, since it can provide more flexibility for different auditory scenarios, such as 3D movies, spatial audio communication and virtual classrooms. To facilitate high-quality bitrate-efficient distribution for spatial audio objects, an encoding scheme based on intra-object sparsity (approximate k-sparsity of the audio object itself) is proposed in this paper. A statistical analysis is presented to validate the notion that the audio object has a stronger sparseness in the Modified Discrete Cosine Transform (MDCT) domain than in the Short Time Fourier Transform (STFT) domain. By exploiting intra-object sparsity in the MDCT domain, multiple simultaneously occurring audio objects are compressed into a mono downmix signal with side information. To ensure a balanced perception quality of audio objects, a psychoacoustic-based time-frequency instants sorting algorithm and an energy-equalized Number of Preserved Time-Frequency Bins (NPTF) allocation strategy are proposed, which are employed in the underlying compression framework. The downmix signal can be further encoded via the Scalar Quantized Vector Huffman Coding (SQVH) technique at a desirable bitrate, and the side information is transmitted in a lossless manner. Both objective and subjective evaluations show that the proposed encoding scheme outperforms the Sparsity Analysis (SPA) approach and Spatial Audio Object Coding (SAOC) in cases where eight objects were jointly encoded.

  4. Sparsity reconstruction in electrical impedance tomography: An experimental evaluation

    KAUST Repository

    Gehre, Matthias; Kluth, Tobias; Lipponen, Antti; Jin, Bangti; Seppänen, Aku; Kaipio, Jari P.; Maass, Peter

    2012-01-01

    smoothness regularization approach. The results verify that the adoption of ℓ1-type constraints can enhance the quality of EIT reconstructions: in most of the test cases the reconstructions with sparsity constraints are both qualitatively and quantitatively

  5. A reconstruction algorithm for electrical impedance tomography based on sparsity regularization

    KAUST Repository

    Jin, Bangti; Khan, Taufiquar; Maass, Peter

    2011-01-01

    type was proposed. Numerical results for several two-dimensional problems with both single and multiple convex and nonconvex inclusions were presented to illustrate the features of the proposed algorithm and were compared with one conventional approach

  6. SU-E-I-93: Improved Imaging Quality for Multislice Helical CT Via Sparsity Regularized Iterative Image Reconstruction Method Based On Tensor Framelet

    International Nuclear Information System (INIS)

    Nam, H; Guo, M; Lee, K; Li, R; Xing, L; Gao, H

    2014-01-01

    Purpose: Inspired by compressive sensing, sparsity regularized iterative reconstruction methods have been extensively studied. However, their utility for multislice helical 4D CT for radiotherapy with respect to imaging quality, dose, and time has not been thoroughly addressed. As the beginning of such an investigation, this work carries out an initial comparison of reconstructed imaging quality between a sparsity regularized iterative method and analytic methods through static phantom studies using a state-of-the-art 128-channel multi-slice Siemens helical CT scanner. Methods: In our iterative method, tensor framelet (TF) is chosen as the regularization method for its superior performance over total variation regularization in terms of reduced piecewise-constant artifacts and improved imaging quality, as demonstrated in our prior work. On the other hand, the X-ray transform and its adjoint are computed on the fly through a GPU implementation using our previously developed fast parallel algorithms with O(1) complexity per computing thread. For comparison, both the FDK algorithm (approximate analytic method) and the Katsevich algorithm (exact analytic method) are used for multislice helical CT image reconstruction. Results: The phantom experimental data with different imaging doses were acquired using a state-of-the-art 128-channel multi-slice Siemens helical CT scanner. The reconstructed image quality was compared between the TF-based iterative method, FDK and the Katsevich algorithm, with quantitative analysis characterizing signal-to-noise ratio, image contrast, and spatial resolution of high-contrast and low-contrast objects. Conclusion: The experimental results suggest that our tensor framelet regularized iterative reconstruction algorithm improves helical CT imaging quality over the FDK and Katsevich algorithms for the static experimental phantom studies that were performed.

  7. Efficient operator splitting algorithm for joint sparsity-regularized SPIRiT-based parallel MR imaging reconstruction.

    Science.gov (United States)

    Duan, Jizhong; Liu, Yu; Jing, Peiguang

    2018-02-01

    Self-consistent parallel imaging (SPIRiT) is an auto-calibrating model for the reconstruction of parallel magnetic resonance imaging, which can be formulated as a regularized SPIRiT problem. The Projection Over Convex Sets (POCS) method was used to solve the formulated regularized SPIRiT problem. However, the quality of the reconstructed image still needs to be improved. Though methods such as NonLinear Conjugate Gradients (NLCG) can achieve higher spatial resolution, these methods always demand very complex computation and converge slowly. In this paper, we propose a new algorithm to solve the formulated Cartesian SPIRiT problem with the JTV and JL1 regularization terms. The proposed algorithm uses the operator splitting (OS) technique to decompose the problem into a gradient problem and a denoising problem with two regularization terms, which is solved by our proposed split Bregman based denoising algorithm, and adopts the Barzilai-Borwein method to update the step size. Simulation experiments on two in vivo data sets demonstrate that the proposed algorithm is 1.3 times faster than ADMM for datasets with 8 channels. In particular, our proposal is 2 times faster than ADMM for the dataset with 32 channels. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. Sparsity reconstruction in electrical impedance tomography: An experimental evaluation

    KAUST Repository

    Gehre, Matthias

    2012-02-01

    We investigate the potential of sparsity constraints in the electrical impedance tomography (EIT) inverse problem of inferring the distributed conductivity based on boundary potential measurements. In sparsity reconstruction, inhomogeneities of the conductivity are a priori assumed to be sparse with respect to a certain basis. This prior information is incorporated into a Tikhonov-type functional by including a sparsity-promoting ℓ1-penalty term. The functional is minimized with an iterative soft shrinkage-type algorithm. In this paper, the feasibility of the sparsity reconstruction approach is evaluated by experimental data from water tank measurements. The reconstructions are computed both with sparsity constraints and with a more conventional smoothness regularization approach. The results verify that the adoption of ℓ1-type constraints can enhance the quality of EIT reconstructions: in most of the test cases the reconstructions with sparsity constraints are both qualitatively and quantitatively more feasible than that with the smoothness constraint. © 2011 Elsevier B.V. All rights reserved.

  9. Dynamic SPECT reconstruction from few projections: a sparsity enforced matrix factorization approach

    Science.gov (United States)

    Ding, Qiaoqiao; Zan, Yunlong; Huang, Qiu; Zhang, Xiaoqun

    2015-02-01

    The reconstruction of dynamic images from few projection data is a challenging problem, especially when noise is present and when the dynamic images vary rapidly. In this paper, we propose a variational model, sparsity enforced matrix factorization (SEMF), based on low rank matrix factorization of unknown images and enforced sparsity constraints for representing both coefficients and bases. The proposed model is solved via an alternating iterative scheme for which each subproblem is convex and involves the efficient alternating direction method of multipliers (ADMM). The convergence of the overall alternating scheme for the nonconvex problem relies upon the Kurdyka-Łojasiewicz property, recently studied by Attouch et al (2010 Math. Oper. Res. 35 438) and Attouch et al (2013 Math. Program. 137 91). Finally, our proof-of-concept simulation on 2D dynamic images shows the advantage of the proposed method compared to conventional methods.
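
    ADMM, used here for the convex subproblems, alternates a quadratic solve, a proximal (soft-thresholding) step and a dual update. Below is a generic ADMM sketch for an ℓ1-regularized least-squares subproblem, not the authors' SEMF code; the penalty parameter rho and the iteration count are placeholders:

```python
import numpy as np

def admm_lasso(A, y, alpha, rho=1.0, n_iter=100):
    """ADMM for min_x 0.5*||A x - y||^2 + alpha*||x||_1, using the split x = z."""
    n = A.shape[1]
    AtA, Aty = A.T @ A, A.T @ y
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)   # u is the scaled dual variable
    M = np.linalg.inv(AtA + rho * np.eye(n))            # cached system matrix inverse
    for _ in range(n_iter):
        x = M @ (Aty + rho * (z - u))                                      # quadratic step
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - alpha / rho, 0.0)  # soft threshold
        u = u + x - z                                                      # dual update
    return z
```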

  10. Sparsity-based fast CGH generation using layer-based approach for 3D point cloud model

    Science.gov (United States)

    Kim, Hak Gu; Jeong, Hyunwook; Ro, Yong Man

    2017-03-01

    Computer generated hologram (CGH) technology is becoming increasingly important for 3D displays in various applications, including virtual reality. In CGH, holographic fringe patterns are generated by numerically calculating them on computer simulation systems. However, a heavy computational cost is required to calculate the complex amplitude on the CGH plane for all points of 3D objects. This paper proposes a new fast CGH generation method based on the sparsity of the CGH for 3D point cloud models. The aim of the proposed method is to significantly reduce computational complexity while maintaining the quality of the holographic fringe patterns. To that end, we present a new layer-based approach for calculating the complex amplitude distribution on the CGH plane by using the sparse FFT (sFFT). We observe that the CGH of a layer of a 3D object is sparse, so that the dominant CGH components can be rapidly generated from a small set of signals by sFFT. Experimental results show that the proposed method is one order of magnitude faster than recently reported fast CGH generation methods.

  11. Distributed Unmixing of Hyperspectral Data with Sparsity Constraint

    Science.gov (United States)

    Khoshsokhan, S.; Rajabi, R.; Zayyani, H.

    2017-09-01

    Spectral unmixing (SU) is a data processing problem in hyperspectral remote sensing. The significant challenge in the SU problem is how to identify endmembers and their weights accurately. For estimation of the signature and fractional abundance matrices in a blind problem, nonnegative matrix factorization (NMF) and its extensions are widely used in the SU problem. One of the constraints added to NMF is a sparsity constraint, regularized by the L1/2 norm. In this paper, a new algorithm based on distributed optimization is used for spectral unmixing. In the proposed algorithm, a network including single-node clusters is employed. Each pixel in the hyperspectral image is considered as a node in this network. The distributed unmixing with sparsity constraint is optimized with the diffusion LMS strategy, and the update equations for the fractional abundance and signature matrices are obtained. Simulation results based on defined performance metrics illustrate the advantage of the proposed algorithm in spectral unmixing of hyperspectral data compared with other methods. The results show that the AAD and SAD of the proposed approach are improved by about 6 and 27 percent, respectively, compared with distributed unmixing, at SNR = 25 dB.

  12. Incremental projection approach of regularization for inverse problems

    Energy Technology Data Exchange (ETDEWEB)

    Souopgui, Innocent, E-mail: innocent.souopgui@usm.edu [The University of Southern Mississippi, Department of Marine Science (United States); Ngodock, Hans E., E-mail: hans.ngodock@nrlssc.navy.mil [Naval Research Laboratory (United States); Vidard, Arthur, E-mail: arthur.vidard@imag.fr; Le Dimet, François-Xavier, E-mail: ledimet@imag.fr [Laboratoire Jean Kuntzmann (France)

    2016-10-15

    This paper presents an alternative approach to the regularized least squares solution of ill-posed inverse problems. Instead of solving a minimization problem with an objective function composed of a data term and a regularization term, the regularization information is used to define a projection onto a convex subspace of regularized candidate solutions. The objective function is modified to include the projection of each iterate in the place of the regularization. Numerical experiments based on the problem of motion estimation for geophysical fluid images show the improvement of the proposed method compared with regularization methods. For the presented test case, the incremental projection method uses 7 times less computation time than the regularization method to reach the same error target. Moreover, at convergence, the incremental projection is two orders of magnitude more accurate than the regularization method.

  13. A new approach to nonlinear constrained Tikhonov regularization

    KAUST Repository

    Ito, Kazufumi; Jin, Bangti

    2011-01-01

    operator. The approach is exploited to derive convergence rate results for a priori as well as a posteriori choice rules, e.g., discrepancy principle and balancing principle, for selecting the regularization parameter. The idea is further illustrated on a

  14. Probabilistic models for structured sparsity

    DEFF Research Database (Denmark)

    Andersen, Michael Riis

    sparse solutions to linear inverse problems. In this part, the sparsity promoting prior known as the spike-and-slab prior (Mitchell and Beauchamp, 1988) is generalized to the structured sparsity setting. An expectation propagation algorithm is derived for approximate posterior inference. The proposed...

  15. A new approach to nonlinear constrained Tikhonov regularization

    KAUST Repository

    Ito, Kazufumi

    2011-09-16

    We present a novel approach to nonlinear constrained Tikhonov regularization from the viewpoint of optimization theory. A second-order sufficient optimality condition is suggested as a nonlinearity condition to handle the nonlinearity of the forward operator. The approach is exploited to derive convergence rate results for a priori as well as a posteriori choice rules, e.g., discrepancy principle and balancing principle, for selecting the regularization parameter. The idea is further illustrated on a general class of parameter identification problems, for which (new) source and nonlinearity conditions are derived and the structural property of the nonlinearity term is revealed. A number of examples including identifying distributed parameters in elliptic differential equations are presented. © 2011 IOP Publishing Ltd.

  16. MRI reconstruction with joint global regularization and transform learning.

    Science.gov (United States)

    Tanc, A Korhan; Eksioglu, Ender M

    2016-10-01

    Sparsity-based regularization has been a popular approach to remedy the measurement scarcity in image reconstruction. Recently, sparsifying transforms learned from image patches have been utilized as an effective regularizer for Magnetic Resonance Imaging (MRI) reconstruction. Here, we infuse additional global regularization terms into the patch-based transform learning. We develop an algorithm to solve the resulting novel cost function, which includes both patchwise and global regularization terms. Extensive simulation results indicate that the introduced mixed approach has improved MRI reconstruction performance, when compared to algorithms which use either the patchwise transform learning or the global regularization terms alone. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Shakeout: A New Approach to Regularized Deep Neural Network Training.

    Science.gov (United States)

    Kang, Guoliang; Li, Jun; Tao, Dacheng

    2018-05-01

    Recent years have witnessed the success of deep neural networks in dealing with a variety of practical problems. Dropout has played an essential role in many successful deep neural networks, by inducing regularization in the model training. In this paper, we present a new regularized training approach: Shakeout. Instead of randomly discarding units as Dropout does at the training stage, Shakeout randomly chooses to enhance or reverse each unit's contribution to the next layer. This minor modification of Dropout has the statistical trait that the regularizer induced by Shakeout adaptively combines L0, L1 and L2 regularization terms. Our classification experiments with representative deep architectures on the image datasets MNIST, CIFAR-10 and ImageNet show that Shakeout deals with over-fitting effectively and outperforms Dropout. We empirically demonstrate that Shakeout leads to sparser weights under both unsupervised and supervised settings. Shakeout also leads to a grouping effect of the input units in a layer. Considering that the weights reflect the importance of connections, Shakeout is superior to Dropout, which is valuable for deep model compression. Moreover, we demonstrate that Shakeout can effectively reduce the instability of the training process of deep architectures.
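
    The core idea described above, randomly enhancing or reversing each unit's contribution instead of dropping it, can be illustrated with a loose sketch. This is not the paper's exact Shakeout formula; the enhancement and reversal factors and the flip probability below are illustrative assumptions:

```python
import numpy as np

def perturbation_mask(n_units, p_reverse=0.3, enhance=1.5, reverse=-0.5, rng=None):
    """Per-unit multiplicative factors: each unit's contribution to the next
    layer is either enhanced or reversed at random (illustration only)."""
    if rng is None:
        rng = np.random.default_rng()
    flip = rng.random(n_units) < p_reverse
    return np.where(flip, reverse, enhance)

# Training-time forward pass of one toy fully connected layer.
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 128))
x = rng.standard_normal(64)
h = (x * perturbation_mask(64, rng=rng)) @ W    # perturb unit contributions
```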

  18. Controlled wavelet domain sparsity for x-ray tomography

    Science.gov (United States)

    Purisha, Zenith; Rimpeläinen, Juho; Bubba, Tatiana; Siltanen, Samuli

    2018-01-01

    Tomographic reconstruction is an ill-posed inverse problem that calls for regularization. One possibility is to require sparsity of the unknown in an orthonormal wavelet basis. This, in turn, can be achieved by variational regularization, where the penalty term is the sum of the absolute values of the wavelet coefficients. The primal-dual fixed point algorithm showed that the minimizer of the variational regularization functional can be computed iteratively using a soft-thresholding operation. Choosing the soft-thresholding parameter

  19. Robust regularized least-squares beamforming approach to signal estimation

    KAUST Repository

    Suliman, Mohamed Abdalla Elhag

    2017-05-12

    In this paper, we address the problem of robust adaptive beamforming of signals received by a linear array. The challenge associated with the beamforming problem is twofold. Firstly, the process requires the inversion of the usually ill-conditioned covariance matrix of the received signals. Secondly, the steering vector pertaining to the direction of arrival of the signal of interest is not known precisely. To tackle these two challenges, the standard Capon beamformer is manipulated into a form where the beamformer output is obtained as a scaled version of the inner product of two vectors. The two vectors are linearly related to the steering vector and the received signal snapshot, respectively. The linear operator, in both cases, is the square root of the covariance matrix. A regularized least-squares (RLS) approach is proposed to estimate these two vectors and to provide robustness without exploiting prior information. Simulation results show that the RLS beamformer using the proposed regularization algorithm outperforms state-of-the-art beamforming algorithms, as well as other RLS beamformers using standard regularization approaches.
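
    A regularized least-squares estimate of the kind referred to above has a simple closed form; here is a generic ridge-type sketch (the paper's specific regularization parameter is derived without prior information and is not reproduced here):

```python
import numpy as np

def regularized_ls(A, y, lam):
    """Regularized least squares: argmin_x ||A x - y||^2 + lam * ||x||^2."""
    n = A.shape[1]
    # conj().T handles complex-valued array snapshots, as in beamforming.
    return np.linalg.solve(A.conj().T @ A + lam * np.eye(n), A.conj().T @ y)
```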

  20. Phase reconstruction from velocity-encoded MRI measurements – A survey of sparsity-promoting variational approaches

    KAUST Repository

    Benning, Martin

    2014-01-01

    In recent years there have been significant developments in the reconstruction of magnetic resonance velocity images from sub-sampled k-space data. While showing a strong improvement in reconstruction quality compared to classical approaches, the vast number of different methods, and the challenges in setting them up, often leave the user with the difficult task of choosing the correct approach, or more importantly, not selecting a poor approach. In this paper, we survey variational approaches for the reconstruction of phase-encoded magnetic resonance velocity images from sub-sampled k-space data. We are particularly interested in regularisers that correctly treat both smooth and geometric features of the image. These features are common to velocity imaging, where the flow field will be smooth but interfaces between the fluid and surrounding material will be sharp, but are challenging to represent sparsely. As an example we demonstrate the variational approaches on velocity imaging of water flowing through a packed bed of solid particles. We evaluate Wavelet regularisation against Total Variation and the relatively recent second order Total Generalised Variation regularisation. We combine these regularisation schemes with a contrast enhancement approach called Bregman iteration. We verify for a variety of sampling patterns that Morozov's discrepancy principle provides a good criterion for stopping the iterations. Therefore, given only the noise level, we present a robust guideline for setting up a variational reconstruction scheme for MR velocity imaging. © 2013 Elsevier Inc. All rights reserved.

  1. Image deblurring using a perturbation-based regularization approach

    KAUST Repository

    Alanazi, Abdulrahman

    2017-11-02

    The image restoration problem deals with images in which information has been degraded by blur or noise. In this work, we present a new method for image deblurring by solving a regularized linear least-squares problem. In the proposed method, a synthetic perturbation matrix with a bounded norm is forced into the discrete ill-conditioned model matrix. This perturbation is added to enhance the singular-value structure of the matrix and hence to provide an improved solution. A method is proposed to find a near-optimal value of the regularization parameter for the proposed approach. To reduce the computational complexity, we present a technique based on the bootstrapping method to estimate the regularization parameter for both low and high-resolution images. Experimental results on the image deblurring problem are presented. Comparisons are made with three benchmark methods and the results demonstrate that the proposed method clearly outperforms the other methods in terms of both the output PSNR and SSIM values.

  2. Image deblurring using a perturbation-based regularization approach

    KAUST Repository

    Alanazi, Abdulrahman; Ballal, Tarig; Masood, Mudassir; Al-Naffouri, Tareq Y.

    2017-01-01

    The image restoration problem deals with images in which information has been degraded by blur or noise. In this work, we present a new method for image deblurring by solving a regularized linear least-squares problem. In the proposed method, a synthetic perturbation matrix with a bounded norm is forced into the discrete ill-conditioned model matrix. This perturbation is added to enhance the singular-value structure of the matrix and hence to provide an improved solution. A method is proposed to find a near-optimal value of the regularization parameter for the proposed approach. To reduce the computational complexity, we present a technique based on the bootstrapping method to estimate the regularization parameter for both low and high-resolution images. Experimental results on the image deblurring problem are presented. Comparisons are made with three benchmark methods and the results demonstrate that the proposed method clearly outperforms the other methods in terms of both the output PSNR and SSIM values.

  3. A Sparsity-Based Approach to 3D Binaural Sound Synthesis Using Time-Frequency Array Processing

    Science.gov (United States)

    Cobos, Maximo; Lopez, Jose J.; Spors, Sascha

    2010-12-01

    Localization of sounds in physical space plays a very important role in multiple audio-related disciplines, such as music, telecommunications, and audiovisual productions. Binaural recording is the most commonly used method to provide an immersive sound experience by means of headphone reproduction. However, it requires a very specific recording setup using high-fidelity microphones mounted in a dummy head. In this paper, we present a novel processing framework for binaural sound recording and reproduction that avoids the use of dummy heads and is especially suitable for immersive teleconferencing applications. The method is based on a time-frequency analysis of the spatial properties of the sound picked up by a simple tetrahedral microphone array, assuming source sparseness. The experiments carried out using simulations and a real-time prototype confirm the validity of the proposed approach.

  4. Accreting fluids onto regular black holes via Hamiltonian approach

    Energy Technology Data Exchange (ETDEWEB)

    Jawad, Abdul [COMSATS Institute of Information Technology, Department of Mathematics, Lahore (Pakistan); Shahzad, M.U. [COMSATS Institute of Information Technology, Department of Mathematics, Lahore (Pakistan); University of Central Punjab, CAMS, UCP Business School, Lahore (Pakistan)

    2017-08-15

    We investigate the accretion of test fluids onto regular black holes such as Kehagias-Sfetsos black holes and regular black holes with a Dagum distribution function. We analyze the accretion process when different test fluids fall onto these regular black holes. The accreting fluid is classified through the equation of state according to the features of the regular black holes. The behavior of the fluid flow and the existence of sonic points are checked for these regular black holes. It is noted that the three-velocity depends on the critical points and the equation-of-state parameter on the phase space. (orig.)

  5. Constrained Perturbation Regularization Approach for Signal Estimation Using Random Matrix Theory

    KAUST Repository

    Suliman, Mohamed Abdalla Elhag; Ballal, Tarig; Kammoun, Abla; Al-Naffouri, Tareq Y.

    2016-01-01

    Relying on the randomness of the model matrix, a number of deterministic equivalents from random matrix theory are applied to derive the near-optimum regularizer that minimizes the mean-squared error of the estimator. Simulation results demonstrate that the proposed approach outperforms a set of benchmark regularization methods for various estimated signal characteristics.

  6. Resolution enhancement of lung 4D-CT via group-sparsity

    International Nuclear Information System (INIS)

    Bhavsar, Arnav; Wu, Guorong; Shen, Dinggang; Lian, Jun

    2013-01-01

    Purpose: 4D-CT typically delivers more accurate information about anatomical structures in the lung, over 3D-CT, due to its ability to capture visual information of the lung motion across different respiratory phases. This helps to better determine the dose during radiation therapy for lung cancer. However, a critical concern with 4D-CT that substantially compromises this advantage is the low superior-inferior resolution due to the smaller number of slices acquired in order to control the CT radiation dose. To address this limitation, the authors propose an approach to reconstruct missing intermediate slices, so as to improve the superior-inferior resolution. Methods: In this method the authors exploit the observation that sampling information across respiratory phases in 4D-CT can be complementary due to lung motion. The authors’ approach uses this locally complementary information across phases in a patch-based sparse-representation framework. Moreover, unlike some recent approaches that treat local patches independently, the authors’ approach employs the group-sparsity framework that imposes neighborhood and similarity constraints between patches. This helps in mitigating the trade-off between noise robustness and structure preservation, which is an important consideration in resolution enhancement. The authors discuss the regularizing ability of group-sparsity, which helps in reducing the effect of noise and enables better structural localization and enhancement. Results: The authors perform extensive experiments on the publicly available DIR-Lab Lung 4D-CT dataset [R. Castillo, E. Castillo, R. Guerra, V. Johnson, T. McPhail, A. Garg, and T. Guerrero, “A framework for evaluation of deformable image registration spatial accuracy using large landmark point sets,” Phys. Med. Biol. 54, 1849–1870 (2009)]. First, the authors carry out empirical parametric analysis of some important parameters in their approach. The authors then demonstrate, qualitatively as well as
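
    The group-sparsity constraint mentioned in the abstract is typically enforced through a block (group-wise) soft-thresholding step, i.e., the proximal operator of an ℓ2,1-type penalty. A minimal sketch follows, assuming non-overlapping patch groups; the patch extraction, dictionary, and 4D-CT data handling of the actual method are omitted.

        # Block soft-thresholding: the proximal operator of a group-sparsity
        # (l2,1-type) penalty; whole groups of coefficients are shrunk together.
        import numpy as np

        def prox_group_sparsity(coeffs, groups, tau):
            out = coeffs.copy()
            for g in groups:                          # g indexes one (non-overlapping) group
                norm_g = np.linalg.norm(coeffs[g])
                scale = max(0.0, 1.0 - tau / norm_g) if norm_g > 0 else 0.0
                out[g] = scale * coeffs[g]
            return out

        coeffs = np.array([0.1, 0.2, -0.05, 3.0, -2.5, 0.4])
        groups = [np.array([0, 1, 2]), np.array([3, 4, 5])]
        print(prox_group_sparsity(coeffs, groups, tau=0.5))   # the weak group is zeroed out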

  7. A dynamical system approach to texel identification in regular textures

    NARCIS (Netherlands)

    Grigorescu, S.E.; Petkov, N.; Loncaric, S; Neri, A; Babic, H

    2003-01-01

    We propose a texture analysis method based on Rényi’s entropies. The method aims at identifying texels in regular textures by searching for the smallest window through which the minimum number of different visual patterns is observed when moving the window over a given texture. The experimental

  8. Condition Number Regularized Covariance Estimation.

    Science.gov (United States)

    Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala

    2013-06-01

    Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called "large p small n" setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required.
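
    A simplified surrogate for a condition-number-constrained covariance estimate can be obtained by clipping the eigenvalues of the sample covariance so that the condition number does not exceed a chosen bound. The sketch below uses a heuristic clipping floor; the paper derives the optimal truncation level from the likelihood, which is not reproduced here.

        # Simplified condition-number-constrained covariance estimate: eigenvalues of the
        # sample covariance are clipped so that lambda_max / lambda_min <= kappa_max.
        # (The paper chooses the clipping level by maximum likelihood; here the largest
        # eigenvalue is kept and the floor is set heuristically.)
        import numpy as np

        def cond_regularized_cov(X, kappa_max=100.0):
            S = np.cov(X, rowvar=False)
            w, V = np.linalg.eigh(S)
            floor = w.max() / kappa_max
            w_clipped = np.clip(w, floor, None)
            return (V * w_clipped) @ V.T

        rng = np.random.default_rng(1)
        X = rng.standard_normal((20, 50))           # "large p, small n": 20 samples, 50 variables
        S_hat = cond_regularized_cov(X, kappa_max=50.0)
        print(np.linalg.cond(S_hat))                # bounded by ~50, hence invertible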

  9. Condition Number Regularized Covariance Estimation*

    Science.gov (United States)

    Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala

    2012-01-01

    Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called “large p small n” setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required. PMID:23730197

  10. Supplementary Appendix for: Constrained Perturbation Regularization Approach for Signal Estimation Using Random Matrix Theory

    KAUST Repository

    Suliman, Mohamed Abdalla Elhag; Ballal, Tarig; Kammoun, Abla; Alnaffouri, Tareq Y.

    2016-01-01

    In this supplementary appendix we provide proofs and additional simulation results that complement the paper (constrained perturbation regularization approach for signal estimation using random matrix theory).

  11. Local sparsity enhanced compressed sensing magnetic resonance imaging in uniform discrete curvelet domain

    International Nuclear Information System (INIS)

    Yang, Bingxin; Yuan, Min; Ma, Yide; Zhang, Jiuwen; Zhan, Kun

    2015-01-01

    Compressed sensing (CS) has been well applied to speed up imaging by exploring image sparsity over predefined basis functions or a learnt dictionary. Firstly, the sparse representation is generally obtained in a single transform domain by using wavelet-like methods, which cannot produce optimal sparsity when sparsity, data adaptivity, and computational complexity are all taken into account. Secondly, most state-of-the-art reconstruction models seldom consider composite regularization upon the various structural features of images and transform coefficient sub-bands. Therefore, these two points lead to high sampling rates for reconstructing high-quality images. In this paper, an efficient composite sparsity structure is proposed. It learns an adaptive dictionary from lowpass uniform discrete curvelet transform sub-band coefficient patches. Consistent with the sparsity structure, a novel composite regularization reconstruction model is developed to improve reconstruction results from highly undersampled k-space data. It is established via minimizing the total variation regularization of the spatial image and the lowpass sub-band coefficients, the ℓ1 sparse regularization of the transform sub-band coefficients, and constraining the k-space measurement fidelity. A new augmented Lagrangian method is then introduced to optimize the reconstruction model. It updates the representation coefficients of the lowpass sub-band coefficients over the dictionary, the transform sub-band coefficients, and the k-space measurements upon the ideas of the constrained split augmented Lagrangian shrinkage algorithm. Experimental results on in vivo data show that the proposed method obtains high-quality reconstructed images. The reconstructed images exhibit the least aliasing artifacts and reconstruction error among current CS MRI methods. The proposed sparsity structure can fit and provide hierarchical sparsity for magnetic resonance images simultaneously, bridging the gap between predefined sparse representation methods and explicit dictionary. The new augmented

  12. Learning Sparse Visual Representations with Leaky Capped Norm Regularizers

    OpenAIRE

    Wangni, Jianqiao; Lin, Dahua

    2017-01-01

    Sparsity inducing regularization is an important part of learning over-complete visual representations. Despite the popularity of $\ell_1$ regularization, in this paper, we investigate the usage of non-convex regularizations in this problem. Our contribution consists of three parts. First, we propose the leaky capped norm regularization (LCNR), which allows model weights below a certain threshold to be regularized more strongly as opposed to those above, therefore imposes strong sparsity and...
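
    The abstract only states that weights below a threshold are regularized more strongly than those above. One hedged, piecewise-linear reading of such a "leaky capped" penalty is sketched below; the parameter names alpha, beta, and theta are illustrative and not the paper's notation.

        # One possible piecewise-linear reading of the "leaky capped norm": weights with
        # magnitude below theta get the full l1 slope alpha, those above get a weaker
        # ("leaky") slope beta < alpha.  alpha, beta, theta are illustrative names only.
        import numpy as np

        def leaky_capped_penalty(w, theta=0.1, alpha=1.0, beta=0.05):
            a = np.abs(w)
            below = np.minimum(a, theta) * alpha        # strong penalty up to the cap
            above = np.maximum(a - theta, 0.0) * beta   # weak (leaky) penalty past the cap
            return np.sum(below + above)

        w = np.array([0.01, -0.05, 0.8, -1.2])
        print(leaky_capped_penalty(w))   # small weights dominate the penalty, pushing them to zero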

  13. A regularized approach for geodesic-based semisupervised multimanifold learning.

    Science.gov (United States)

    Fan, Mingyu; Zhang, Xiaoqin; Lin, Zhouchen; Zhang, Zhongfei; Bao, Hujun

    2014-05-01

    Geodesic distance, as an essential measurement for data dissimilarity, has been successfully used in manifold learning. However, most geodesic distance-based manifold learning algorithms have two limitations when applied to classification: 1) class information is rarely used in computing the geodesic distances between data points on manifolds and 2) little attention has been paid to building an explicit dimension reduction mapping for extracting the discriminative information hidden in the geodesic distances. In this paper, we regard geodesic distance as a kind of kernel, which maps data from a linearly inseparable space to a linearly separable distance space. In doing this, a new semisupervised manifold learning algorithm, namely the regularized geodesic feature learning algorithm, is proposed. The method consists of three techniques: a semisupervised graph construction method, replacement of original data points with feature vectors which are built from geodesic distances, and a new semisupervised dimension reduction method for feature vectors. Experiments on the MNIST and USPS handwritten digit data sets, the MIT CBCL face versus nonface data set, and an intelligent traffic data set show the effectiveness of the proposed algorithm.

  14. A soft double regularization approach to parametric blind image deconvolution.

    Science.gov (United States)

    Chen, Li; Yap, Kim-Hui

    2005-05-01

    This paper proposes a blind image deconvolution scheme based on soft integration of parametric blur structures. Conventional blind image deconvolution methods encounter a difficult dilemma of either imposing stringent and inflexible preconditions on the problem formulation or experiencing poor restoration results due to lack of information. This paper attempts to address this issue by assessing the relevance of parametric blur information, and incorporating the knowledge into the parametric double regularization (PDR) scheme. The PDR method assumes that the actual blur satisfies up to a certain degree of parametric structure, as there are many well-known parametric blurs in practical applications. Further, it can be tailored flexibly to include other blur types if some prior parametric knowledge of the blur is available. A manifold soft parametric modeling technique is proposed to generate the blur manifolds, and estimate the fuzzy blur structure. The PDR scheme involves the development of the meaningful cost function, the estimation of blur support and structure, and the optimization of the cost function. Experimental results show that it is effective in restoring degraded images under different environments.

  15. Detecting regularities in soccer dynamics: A T-pattern approach

    Directory of Open Access Journals (Sweden)

    Valentino Zurloni

    2014-01-01

    Full Text Available Game dynamics in professional football matches is a complex phenomenon that has not been satisfactorily addressed by the traditional approaches used to quantify team sports. The aim of this study is to detect such dynamics through temporal pattern analysis. Specifically, it seeks to reveal the hidden but stable structures underlying the interactive situations that determine attacking actions in football. The methodological approach is based on an observational design, supported by digital recordings and computerized analysis. Data were analyzed with the Theme 6 beta software, which detects the temporal and sequential structure of data series, revealing patterns that occur repeatedly, whether regularly or irregularly, within an observation period. Theme detected many temporal patterns (T-patterns) in the analyzed football matches. Notable differences were found between won and lost matches: the number of distinct T-patterns detected was higher for lost matches and lower for won matches, whereas the number of coded events was similar. The Theme software and T-patterns extend the research possibilities of frequency-based performance analysis, making this methodology an effective research tool and a procedural support for sport analysis. Our results indicate that further research is needed on possible connections between the detection of these temporal structures and human observations of football performance. Such an approach would support both team members and coaches, allowing a better understanding of game dynamics and providing information that traditional methods do not offer.

  16. Regularization of quantum gravity in the matrix model approach

    International Nuclear Information System (INIS)

    Ueda, Haruhiko

    1991-02-01

    We study the divergence problem of the partition function in the matrix model approach to two-dimensional quantum gravity. We propose a new model V(φ) = (1/2) Tr φ^2 + (g_4/N) Tr φ^4 + (g'/N^4) (Tr φ^4)^2 and show that in the sphere case it has no divergence problem and that the critical exponent is that of pure gravity. (author)

  17. Constrained Perturbation Regularization Approach for Signal Estimation Using Random Matrix Theory

    KAUST Repository

    Suliman, Mohamed Abdalla Elhag

    2016-10-06

    In this work, we propose a new regularization approach for linear least-squares problems with random matrices. In the proposed constrained perturbation regularization approach, an artificial perturbation matrix with a bounded norm is forced into the system model matrix. This perturbation is introduced to improve the singular-value structure of the model matrix and, hence, the solution of the estimation problem. Relying on the randomness of the model matrix, a number of deterministic equivalents from random matrix theory are applied to derive the near-optimum regularizer that minimizes the mean-squared error of the estimator. Simulation results demonstrate that the proposed approach outperforms a set of benchmark regularization methods for various estimated signal characteristics. In addition, simulations show that our approach is robust in the presence of model uncertainty.

  18. An investigation of p-normed sparsity evaluation for SS-LBP-based edge extraction

    Energy Technology Data Exchange (ETDEWEB)

    Chen-yi, Zhao [School of Physics, Northeast Normal University, Changchun (China); Jia-ning, Sun, E-mail: sunjn118@nenu.edu.cn [School of Mathematics and Statistics, Northeast Normal University, Changchun (China); Shuang, Qiao, E-mail: qiaos810@nenu.edu.cn [School of Physics, Northeast Normal University, Changchun (China)

    2017-02-01

    SS-LBP is a very efficient tool for fast edge extraction in digital radiography. In this paper, we introduce a p-normed sparsity evaluation strategy to improve and generalize the existing SS-LBP framework. To illustrate the feasibility of the proposed approach, several experimental results are presented. Comparisons show that 1-normed sparsity evaluation is more effective and robust in practical applications.

  19. Novel Harmonic Regularization Approach for Variable Selection in Cox’s Proportional Hazards Model

    Directory of Open Access Journals (Sweden)

    Ge-Jin Chu

    2014-01-01

    Full Text Available Variable selection is an important issue in regression, and a number of variable selection methods have been proposed involving nonconvex penalty functions. In this paper, we investigate a novel harmonic regularization method, which can approximate nonconvex Lq (1/2 < q < 1) regularizations, to select key risk factors in the Cox's proportional hazards model using microarray gene expression data. The harmonic regularization method can be efficiently solved using our proposed direct path seeking approach, which can produce solutions that closely approximate those for the convex loss function and the nonconvex regularization. Simulation results based on artificial datasets and four real microarray gene expression datasets, such as the diffuse large B-cell lymphoma (DLBCL), lung cancer, and AML datasets, show that the harmonic regularization method can be more accurate for variable selection than existing Lasso series methods.

  20. A regularized matrix factorization approach to induce structured sparse-low-rank solutions in the EEG inverse problem

    DEFF Research Database (Denmark)

    Montoya-Martinez, Jair; Artes-Rodriguez, Antonio; Pontil, Massimiliano

    2014-01-01

    We consider the estimation of the Brain Electrical Sources (BES) matrix from noisy electroencephalographic (EEG) measurements, commonly named the EEG inverse problem. We propose a new method to induce neurophysiologically meaningful solutions, which takes into account the smoothness, structured sparsity, and low rank of the BES matrix. The method is based on the factorization of the BES matrix as a product of a sparse coding matrix and a dense latent source matrix. The structured sparse-low-rank structure is enforced by minimizing a regularized functional that includes the ℓ21-norm of the coding matrix and the squared Frobenius norm of the latent source matrix. We develop an alternating optimization algorithm to solve the resulting nonsmooth-nonconvex minimization problem. We analyze the convergence of the optimization procedure, and we compare, under different synthetic scenarios...
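
    The key nonsmooth ingredient of the regularized functional is the ℓ21-norm of the coding matrix, whose proximal operator is row-wise soft-thresholding (so entire candidate sources can be switched off). A minimal sketch of that operator is given below; the alternating updates of both factors and the EEG leadfield model are omitted.

        # Proximal operator of the l2,1 norm on the coding matrix: each row is shrunk
        # as a block, so whole rows (candidate sources) can be switched off.
        import numpy as np

        def prox_l21(A, tau):
            row_norms = np.linalg.norm(A, axis=1, keepdims=True)
            scale = np.maximum(0.0, 1.0 - tau / np.maximum(row_norms, 1e-12))
            return scale * A

        A = np.array([[0.1, -0.2], [2.0, 1.5], [0.05, 0.0]])
        print(prox_l21(A, tau=0.5))   # first and last rows vanish, the second row survives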

  1. Noise annoyance caused by continuous descent approaches compared to regular descent procedures

    NARCIS (Netherlands)

    White, K.; Arntzen, M.; Walker, F.; Waiyaki, F.M.; Meeter, M.; Bronkhorst, A.W.

    2017-01-01

    During Continuous Descent Approaches (CDAs) aircraft glide towards the runway resulting in reduced noise and fuel usage. Here, we investigated whether such landings cause less noise annoyance than a regular stepwise approach. Both landing types were compared in a controlled laboratory setting with a

  2. A Regularized Approach for Solving Magnetic Differential Equations and a Revised Iterative Equilibrium Algorithm

    International Nuclear Information System (INIS)

    Hudson, S.R.

    2010-01-01

    A method for approximately solving magnetic differential equations is described. The approach is to include a small diffusion term to the equation, which regularizes the linear operator to be inverted. The extra term allows a 'source-correction' term to be defined, which is generally required in order to satisfy the solvability conditions. The approach is described in the context of computing the pressure and parallel currents in the iterative approach for computing magnetohydrodynamic equilibria.

  3. Sparsity enabled cluster reduced-order models for control

    Science.gov (United States)

    Kaiser, Eurika; Morzyński, Marek; Daviller, Guillaume; Kutz, J. Nathan; Brunton, Bingni W.; Brunton, Steven L.

    2018-01-01

    Characterizing and controlling nonlinear, multi-scale phenomena are central goals in science and engineering. Cluster-based reduced-order modeling (CROM) was introduced to exploit the underlying low-dimensional dynamics of complex systems. CROM builds a data-driven discretization of the Perron-Frobenius operator, resulting in a probabilistic model for ensembles of trajectories. A key advantage of CROM is that it embeds nonlinear dynamics in a linear framework, which enables the application of standard linear techniques to the nonlinear system. CROM is typically computed on high-dimensional data; however, access to and computations on this full-state data limit the online implementation of CROM for prediction and control. Here, we address this key challenge by identifying a small subset of critical measurements to learn an efficient CROM, referred to as sparsity-enabled CROM. In particular, we leverage compressive measurements to faithfully embed the cluster geometry and preserve the probabilistic dynamics. Further, we show how to identify fewer optimized sensor locations tailored to a specific problem that outperform random measurements. Both of these sparsity-enabled sensing strategies significantly reduce the burden of data acquisition and processing for low-latency in-time estimation and control. We illustrate this unsupervised learning approach on three different high-dimensional nonlinear dynamical systems from fluids with increasing complexity, with one application in flow control. Sparsity-enabled CROM is a critical facilitator for real-time implementation on high-dimensional systems where full-state information may be inaccessible.

  4. Refocusing criterion via sparsity measurements in digital holography.

    Science.gov (United States)

    Memmolo, Pasquale; Paturzo, Melania; Javidi, Bahram; Netti, Paolo A; Ferraro, Pietro

    2014-08-15

    Several automatic approaches have been proposed in the past to compute the refocus distance in digital holography (DH). However, most of them are based on a maximization or minimization of a suitable amplitude image contrast measure, regarded as a function of the reconstruction distance parameter. Here we show that, by using a sparsity measure as a refocusing criterion in the holographic reconstruction, it is possible to recover the focus plane and, at the same time, establish the degree of sparsity of digital holograms, when samples of the Fresnel diffraction propagation integral are used as a sparse signal representation. We employ a sparsity measure known as Gini's index, thus showing for the first time, to the best of our knowledge, its application in DH as an effective refocusing criterion. Demonstration is provided for different holographic configurations (i.e., lens and lensless apparatus) and for completely different object preparations (i.e., a thin pure-phase microscopic object such as an in vitro cell, and macroscopic puppets).
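
    The Gini index used here as a refocusing criterion can be computed directly from the magnitudes of a reconstruction, for example with the Hurley-Rickard formulation sketched below. The Fresnel propagation to each candidate distance is omitted; in the application one would evaluate the index on the reconstruction amplitude over a range of distances and keep the maximizer.

        # Gini index of a nonnegative signal (Hurley-Rickard sparsity measure): ~0 for a
        # flat signal, close to 1 for a very sparse one.  In the refocusing application it
        # would be evaluated on |reconstruction| at each candidate distance (propagation
        # step omitted here) and the maximizing distance taken as the focus plane.
        import numpy as np

        def gini_index(x):
            c = np.sort(np.abs(np.ravel(x)))        # ascending magnitudes
            n = c.size
            l1 = c.sum()
            if l1 == 0:
                return 0.0
            k = np.arange(1, n + 1)
            return 1.0 - 2.0 * np.sum((c / l1) * (n - k + 0.5) / n)

        print(gini_index(np.ones(100)))               # ~0 : not sparse
        print(gini_index(np.r_[np.zeros(99), 1.0]))   # ~0.99 : very sparse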

  5. Kronecker-Basis-Representation Based Tensor Sparsity and Its Applications to Tensor Recovery.

    Science.gov (United States)

    Xie, Qi; Zhao, Qian; Meng, Deyu; Xu, Zongben

    2017-08-02

    It is well known that the sparsity/low-rank of a vector/matrix can be rationally measured by the number of nonzero entries ($l_0$ norm)/the number of nonzero singular values (rank), respectively. However, data from real applications are often generated by the interaction of multiple factors, which obviously cannot be sufficiently represented by a vector/matrix, while a high-order tensor is expected to provide a more faithful representation to deliver the intrinsic structure underlying such data ensembles. Unlike the vector/matrix case, constructing a rational high-order sparsity measure for a tensor is a relatively harder task. To this aim, in this paper we propose a measure for tensor sparsity, called the Kronecker-basis-representation based tensor sparsity measure (KBR briefly), which encodes both sparsity insights delivered by the Tucker and CANDECOMP/PARAFAC (CP) low-rank decompositions for a general tensor. Then we study the KBR regularization minimization (KBRM) problem, and design an effective ADMM algorithm for solving it, where each involved parameter can be updated with closed-form equations. Such an efficient solver makes it possible to extend KBR to various tasks like tensor completion and tensor robust principal component analysis. A series of experiments, including multispectral image (MSI) denoising, MSI completion and background subtraction, substantiate the superiority of the proposed methods beyond state-of-the-art methods.

  6. Study on highly efficient seismic data acquisition and processing methods based on sparsity constraint

    Science.gov (United States)

    Wang, H.; Chen, S.; Tao, C.; Qiu, L.

    2017-12-01

    High-density, high-fold and wide-azimuth seismic data acquisition methods are widely used to cope with increasingly sophisticated exploration targets, but the acquisition period and cost keep growing. We carry out a study of highly efficient seismic data acquisition and processing methods based on sparse representation theory (or compressed sensing theory), and achieve some innovative results. The theoretical principles of highly efficient acquisition and processing are studied first. We first present a sparse representation theory based on the wave equation. Then we study highly efficient seismic sampling methods and present an optimized piecewise-random sampling method based on sparsity prior information. At last, a reconstruction strategy with a sparsity constraint is developed; a two-step recovery approach combining a sparsity-promoting method and the hyperbolic Radon transform is also put forward. These three aspects constitute the enhanced theory of highly efficient seismic data acquisition. The specific implementation strategies of highly efficient acquisition and processing are then studied according to the highly efficient acquisition theory expounded above. Firstly, we propose a highly efficient acquisition network designing method with the help of the optimized piecewise-random sampling method. Secondly, we propose two types of highly efficient seismic data acquisition methods based on (1) single sources and (2) blended (or simultaneous) sources. Thirdly, the reconstruction procedures corresponding to the above two types of highly efficient seismic data acquisition methods are proposed to obtain the seismic data on the regular acquisition network. The impact of blended shooting on the imaging result is also discussed. In the end, we implement numerical tests based on the Marmousi model. The achieved results show: (1) the theoretical framework of highly efficient seismic data acquisition and processing

  7. Material elemental decomposition in dual and multi-energy CT via a sparsity-dictionary approach for proton stopping power ratio calculation.

    Science.gov (United States)

    Shen, Chenyang; Li, Bin; Chen, Liyuan; Yang, Ming; Lou, Yifei; Jia, Xun

    2018-04-01

    Accurate calculation of proton stopping power ratio (SPR) relative to water is crucial to proton therapy treatment planning, since SPR affects prediction of beam range. Current standard practice derives SPR using a single CT scan. Recent studies showed that dual-energy CT (DECT) offers advantages to accurately determine SPR. One method to further improve accuracy is to incorporate prior knowledge on human tissue composition through a dictionary approach. In addition, it is also suggested that using CT images with multiple (more than two) energy channels, i.e., multi-energy CT (MECT), can further improve accuracy. In this paper, we proposed a sparse dictionary-based method to convert CT numbers of DECT or MECT to elemental composition (EC) and relative electron density (rED) for SPR computation. A dictionary was constructed to include materials generated based on human tissues of known compositions. For a voxel with CT numbers of different energy channels, its EC and rED are determined subject to a constraint that the resulting EC is a linear non-negative combination of only a few tissues in the dictionary. We formulated this as a non-convex optimization problem. A novel algorithm was designed to solve the problem. The proposed method has a unified structure to handle both DECT and MECT with different numbers of energy channels. We tested our method in both simulation and experimental studies. Average errors of SPR in experimental studies were 0.70% in DECT, 0.53% in MECT with three energy channels, and 0.45% in MECT with four channels. We also studied the impact of parameter values and established appropriate parameter values for our method. The proposed method can accurately calculate SPR using DECT and MECT. The results suggest that using more energy channels may improve the SPR estimation accuracy. © 2018 American Association of Physicists in Medicine.
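
    A highly simplified stand-in for the dictionary step: express a voxel's multi-energy CT numbers as a non-negative combination of a few reference tissues via plain non-negative least squares. The paper instead solves a non-convex problem with an explicit sparsity constraint; the dictionary values below are made up for illustration, not calibrated tissue data.

        # Simplified surrogate for the dictionary decomposition: CT numbers of a voxel at
        # several energies are expressed as a non-negative combination of a few reference
        # tissues (plain NNLS here; the paper adds a sparsity constraint and a dedicated
        # non-convex solver).  All numbers below are illustrative only.
        import numpy as np
        from scipy.optimize import nnls

        # Columns = reference tissues, rows = energy channels (e.g., DECT -> 2 rows).
        D = np.array([[1.00, 1.05, 1.45],
                      [1.00, 1.08, 1.60]])
        voxel = np.array([1.042, 1.066])                # measured multi-energy CT numbers

        weights, residual = nnls(D, voxel)              # non-negative tissue weights
        weights = weights / max(weights.sum(), 1e-12)   # normalize to fractions
        print(weights, residual)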

  8. Sparsity and spectral properties of dual frames

    DEFF Research Database (Denmark)

    Krahmer, Felix; Kutyniok, Gitta; Lemvig, Jakob

    2013-01-01

    We study sparsity and spectral properties of dual frames of a given finite frame. We show that any finite frame has a dual with no more than $n^2$ non-vanishing entries, where $n$ denotes the ambient dimension, and that for most frames no sparser dual is possible. Moreover, we derive an expression...

  9. Information operator approach and iterative regularization methods for atmospheric remote sensing

    International Nuclear Information System (INIS)

    Doicu, A.; Hilgers, S.; Bargen, A. von; Rozanov, A.; Eichmann, K.-U.; Savigny, C. von; Burrows, J.P.

    2007-01-01

    In this study, we present the main features of the information operator approach for solving linear inverse problems arising in atmospheric remote sensing. This method is superior to the stochastic version of the Tikhonov regularization (or the optimal estimation method) due to its capability to filter out the noise-dominated components of the solution generated by an inappropriate choice of the regularization parameter. We extend this approach to iterative methods for nonlinear ill-posed problems and derive the truncated versions of the Gauss-Newton and Levenberg-Marquardt methods. Although the paper mostly focuses on discussing the mathematical details of the inverse method, retrieval results have been provided, which exemplify the performances of the methods. These results correspond to the NO2 retrieval from SCIAMACHY limb scatter measurements and have been obtained by using the retrieval processors developed at the German Aerospace Center Oberpfaffenhofen and Institute of Environmental Physics of the University of Bremen

  10. Traction cytometry: regularization in the Fourier approach and comparisons with finite element method.

    Science.gov (United States)

    Kulkarni, Ankur H; Ghosh, Prasenjit; Seetharaman, Ashwin; Kondaiah, Paturu; Gundiah, Namrata

    2018-05-09

    Traction forces exerted by adherent cells are quantified using displacements of embedded markers on polyacrylamide substrates due to cell contractility. Fourier Transform Traction Cytometry (FTTC) is widely used to calculate tractions but has inherent limitations due to errors in the displacement fields; these are mitigated through a regularization parameter (γ) in the Reg-FTTC method. An alternate finite element (FE) approach computes tractions on a domain using known boundary conditions. Robust verification and recovery studies are lacking but essential in assessing the accuracy and noise sensitivity of the traction solutions from the different methods. We implemented the L2 regularization method and defined the point of maximum curvature in the traction versus γ plot as the optimal regularization parameter (γ*) in the Reg-FTTC approach. Traction reconstructions using γ* yield accurate values of low and maximum tractions (Tmax) in the presence of up to 5% noise. Reg-FTTC is hence a clear improvement over the FTTC method but is inadequate to reconstruct low stresses such as those at nascent focal adhesions. FE, implemented using a node-by-node comparison, showed an intermediate reconstruction compared to Reg-FTTC. We performed experiments using mouse embryonic fibroblasts (MEF) and compared results between these approaches. Tractions from FTTC and FE showed differences of ∼92% and 22% as compared to Reg-FTTC. Selection of an optimum value of γ for each cell reduced variability in the computed tractions as compared to using a single value of γ for all the MEF cells in this study.

  11. On a continuation approach in Tikhonov regularization and its application in piecewise-constant parameter identification

    International Nuclear Information System (INIS)

    Melicher, V; Vrábel’, V

    2013-01-01

    We present a new approach to the convexification of the Tikhonov regularization using a continuation method strategy. We embed the original minimization problem into a one-parameter family of minimization problems. Both the penalty term and the minimizer of the Tikhonov functional become dependent on a continuation parameter. In this way we can independently treat two main roles of the regularization term, which are the stabilization of the ill-posed problem and introduction of the a priori knowledge. For zero continuation parameter we solve a relaxed regularization problem, which stabilizes the ill-posed problem in a weaker sense. The problem is recast to the original minimization by the continuation method and so the a priori knowledge is enforced. We apply this approach in the context of topology-to-shape geometry identification, where it allows us to avoid the convergence of gradient-based methods to a local minima. We present illustrative results for magnetic induction tomography which is an example of PDE-constrained inverse problem. (paper)

  12. Sparsity in Linear Predictive Coding of Speech

    DEFF Research Database (Denmark)

    Giacobello, Daniele

    of the effectiveness of their application in audio processing. The second part of the thesis deals with introducing sparsity directly in the linear prediction analysis-by-synthesis (LPAS) speech coding paradigm. We first propose a novel near-optimal method to look for a sparse approximate excitation using a compressed … one with direct applications to coding but also consistent with the speech production model of voiced speech, where the excitation of the all-pole filter can be modeled as an impulse train, i.e., a sparse sequence. Introducing sparsity in the LP framework will also bring to develop the concept … sensing formulation. Furthermore, we define a novel re-estimation procedure to adapt the predictor coefficients to the given sparse excitation, balancing the two representations in the context of speech coding. Finally, the advantages of the compact parametric representation of a segment of speech, given

  13. A Faith-Based and Cultural Approach to Promoting Self-Efficacy and Regular Exercise in Older African American Women

    Science.gov (United States)

    Quinn, Mary Ellen; Guion, W. Kent

    2010-01-01

    The health benefits of regular exercise are well documented, yet there has been limited success in the promotion of regular exercise in older African American women. Based on theoretical and evidence-based findings, the authors recommend a behavioral self-efficacy approach to guide exercise interventions in this high-risk population. Interventions…

  14. Elastic-net regularization approaches for genome-wide association studies of rheumatoid arthritis.

    Science.gov (United States)

    Cho, Seoae; Kim, Haseong; Oh, Sohee; Kim, Kyunga; Park, Taesung

    2009-12-15

    The current trend in genome-wide association studies is to identify regions where the true disease-causing genes may lie by evaluating thousands of single-nucleotide polymorphisms (SNPs) across the whole genome. However, many challenges exist in detecting disease-causing genes among the thousands of SNPs. Examples include multicollinearity and multiple testing issues, especially when a large number of correlated SNPs are simultaneously tested. Multicollinearity can often occur when predictor variables in a multiple regression model are highly correlated, and can cause imprecise estimation of association. In this study, we propose a simple stepwise procedure that identifies disease-causing SNPs simultaneously by employing elastic-net regularization, a variable selection method that allows one to address multicollinearity. At Step 1, the single-marker association analysis was conducted to screen SNPs. At Step 2, the multiple-marker association was scanned based on the elastic-net regularization. The proposed approach was applied to the rheumatoid arthritis (RA) case-control data set of Genetic Analysis Workshop 16. While the selected SNPs at the screening step are located mostly on chromosome 6, the elastic-net approach identified putative RA-related SNPs on other chromosomes in an increased proportion. For some of those putative RA-related SNPs, we identified the interactions with sex, a well known factor affecting RA susceptibility.
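
    A two-step illustration in the spirit of the abstract, using scikit-learn on simulated genotypes: univariate screening of SNPs followed by elastic-net logistic regression on the survivors. Thresholds, penalties, and the data-generating model are illustrative only.

        # Two-step selection sketch: (1) univariate screening of SNPs, (2) elastic-net
        # logistic regression on the survivors.  Data are simulated; parameter values
        # are illustrative, not the study's settings.
        import numpy as np
        from sklearn.feature_selection import chi2
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        n, p = 300, 2000
        X = rng.integers(0, 3, size=(n, p)).astype(float)   # genotypes coded 0/1/2
        beta = np.zeros(p); beta[:5] = 0.8                   # 5 causal SNPs
        y = (X @ beta + rng.normal(size=n) > np.median(X @ beta)).astype(int)

        # Step 1: single-marker screening (keep the 100 smallest chi-square p-values).
        _, pvals = chi2(X, y)
        keep = np.argsort(pvals)[:100]

        # Step 2: elastic-net logistic regression on the screened SNPs.
        enet = LogisticRegression(penalty="elasticnet", solver="saga",
                                  l1_ratio=0.5, C=0.5, max_iter=5000)
        enet.fit(X[:, keep], y)
        selected = keep[np.flatnonzero(enet.coef_.ravel())]
        print("selected SNP indices:", selected[:20])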

  15. Zeroth order regular approximation approach to electric dipole moment interactions of the electron

    Science.gov (United States)

    Gaul, Konstantin; Berger, Robert

    2017-07-01

    A quasi-relativistic two-component approach for an efficient calculation of P,T-odd interactions caused by a permanent electric dipole moment of the electron (eEDM) is presented. The approach uses a (two-component) complex generalized Hartree-Fock and a complex generalized Kohn-Sham scheme within the zeroth order regular approximation. In applications to select heavy-elemental polar diatomic molecular radicals, which are promising candidates for an eEDM experiment, the method is compared to relativistic four-component electron-correlation calculations and confirms values for the effective electric field acting on the unpaired electron for RaF, BaF, YbF, and HgF. The calculations show that purely relativistic effects, involving only the lower component of the Dirac bi-spinor, are well described by treating only the upper component explicitly.

  16. Combined Tensor Fitting and TV Regularization in Diffusion Tensor Imaging Based on a Riemannian Manifold Approach.

    Science.gov (United States)

    Baust, Maximilian; Weinmann, Andreas; Wieczorek, Matthias; Lasser, Tobias; Storath, Martin; Navab, Nassir

    2016-08-01

    In this paper, we consider combined TV denoising and diffusion tensor fitting in DTI using the affine-invariant Riemannian metric on the space of diffusion tensors. Instead of first fitting the diffusion tensors, and then denoising them, we define a suitable TV type energy functional which incorporates the measured DWIs (using an inverse problem setup) and which measures the nearness of neighboring tensors in the manifold. To approach this functional, we propose generalized forward-backward splitting algorithms which combine an explicit and several implicit steps performed on a decomposition of the functional. We validate the performance of the derived algorithms on synthetic and real DTI data. In particular, we work on real 3D data. To our knowledge, the present paper describes the first approach to TV regularization in a combined manifold and inverse problem setup.

  17. Accelerated Simulation of Kinetic Transport Using Variational Principles and Sparsity

    Energy Technology Data Exchange (ETDEWEB)

    Caflisch, Russel [Univ. of California, Los Angeles, CA (United States)

    2017-06-30

    This project is centered on the development and application of techniques of sparsity and compressed sensing for variational principles, PDEs and physics problems, in particular for kinetic transport. This included derivation of sparse modes for elliptic and parabolic problems coming from variational principles. The research results of this project are on methods for sparsity in differential equations and their applications and on application of sparsity ideas to kinetic transport of plasmas.

  18. ENDMEMBER EXTRACTION OF HIGHLY MIXED DATA USING L1 SPARSITY-CONSTRAINED MULTILAYER NONNEGATIVE MATRIX FACTORIZATION

    Directory of Open Access Journals (Sweden)

    H. Fang

    2018-04-01

    Full Text Available Due to the limited spatial resolution of remote hyperspectral sensors, pixels are usually highly mixed in hyperspectral images. Endmember extraction refers to the process of identifying the pure endmember signatures from the mixture, which is an important step towards the utilization of hyperspectral data. Nonnegative matrix factorization (NMF) is a widely used method of endmember extraction due to its effectiveness and convenience. However, most NMF-based methods have single-layer structures, which may have difficulty in effectively learning the structures of highly mixed and complex data. On the other hand, multilayer algorithms have shown great advantages in learning data features and have been widely studied in many fields. In this paper, we present an L1 sparsity-constrained multilayer NMF method for endmember extraction from highly mixed data. Firstly, the multilayer NMF structure was obtained by unfolding NMF into a certain number of layers. In each layer, the abundance matrix was decomposed into the endmember matrix and the abundance matrix of the next layer. Besides, to improve the performance of NMF, we incorporated sparsity constraints into the multilayer NMF model by adding an L1 regularizer on the abundance matrix in each layer. At last, a layer-wise optimization method based on NeNMF was proposed to train the multilayer NMF structure. Experiments were conducted on both synthetic data and real data. The results demonstrate that our proposed algorithm can achieve better results than several state-of-the-art approaches.
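
    A toy structural sketch of multilayer NMF with an L1 term on each layer's abundance matrix, using plain multiplicative updates; the paper's NeNMF-based layer-wise optimizer and hyperspectral data handling are not reproduced, and all sizes and penalties are illustrative.

        # Toy multilayer NMF with an L1 penalty on each layer's abundance matrix, using
        # plain multiplicative updates (a structural sketch only, not the paper's
        # NeNMF-based layer-wise optimizer).
        import numpy as np

        def sparse_nmf(V, r, lam=0.1, iters=200, eps=1e-9, seed=0):
            rng = np.random.default_rng(seed)
            m, n = V.shape
            W = rng.random((m, r)); H = rng.random((r, n))
            for _ in range(iters):
                H *= (W.T @ V) / (W.T @ W @ H + lam + eps)   # L1 term enters the denominator
                W *= (V @ H.T) / (W @ H @ H.T + eps)
            return W, H

        V = np.abs(np.random.default_rng(1).random((50, 80)))   # stand-in for mixed pixels
        W1, H1 = sparse_nmf(V, r=8)      # layer 1: endmembers x abundances
        W2, H2 = sparse_nmf(H1, r=4)     # layer 2: further decomposes the layer-1 abundances
        print(np.linalg.norm(V - W1 @ H1) / np.linalg.norm(V))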

  19. Age, period, and cohort analysis of regular dental care behavior and edentulism: A marginal approach

    Science.gov (United States)

    2011-01-01

    Background To analyze the regular dental care behavior and prevalence of edentulism in adult Danes, reported in sequential cross-sectional oral health surveys, by the application of a marginal approach to consider the possible clustering effect of birth cohorts. Methods Data from four sequential cross-sectional surveys of non-institutionalized Danes conducted from 1975-2005, comprising 4330 respondents aged 15+ years in 9 birth cohorts, were analyzed. The key study variables were seeking dental care on an annual basis (ADC) and edentulism. For the analysis of ADC, survey year, age, gender, socio-economic status (SES) group, denture-wearing, and school dental care (SDC) during childhood were considered. For the analysis of edentulism, only respondents aged 35+ years were included. Survey year, age, gender, SES group, ADC, and SDC during childhood were considered as the independent factors. To take into account the clustering effect of birth cohorts, marginal logistic regressions with an independent correlation structure in generalized estimating equations (GEE) were carried out with PROC GENMOD in SAS software. Results The overall proportion of people seeking ADC increased from 58.8% in 1975 to 86.7% in 2005, while for respondents aged 35 years or older, the overall prevalence of edentulism (35+ years) decreased from 36.4% in 1975 to 5.0% in 2005. Females, respondents in the higher SES group, in more recent survey years, with no denture, and receiving SDC in all grades during childhood were associated with a higher probability of seeking ADC regularly (P < 0.05). Conclusions The success of the dental health policy was demonstrated by a continued increase of regular dental visiting habits and tooth retention in adults, because school dental care was provided to Danes in their childhood. PMID:21410991
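
    A rough Python analogue of the marginal (GEE) logistic regression described above, with an independence working correlation and birth cohort as the clustering variable, using statsmodels instead of SAS PROC GENMOD. The data frame and variable names below are simulated and illustrative.

        # Marginal logistic regression via GEE with an independence working correlation,
        # clustering on birth cohort.  Data and coefficients are simulated for illustration.
        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 1000
        df = pd.DataFrame({
            "cohort": rng.integers(1, 10, n),      # birth cohort (cluster id)
            "age": rng.integers(15, 90, n),
            "female": rng.integers(0, 2, n),
        })
        logit = -2.0 + 0.02 * df["age"] + 0.4 * df["female"]
        df["adc"] = rng.random(n) < 1 / (1 + np.exp(-logit))   # seeks annual dental care

        X = sm.add_constant(df[["age", "female"]])
        model = sm.GEE(df["adc"].astype(float), X, groups=df["cohort"],
                       family=sm.families.Binomial(),
                       cov_struct=sm.cov_struct.Independence())
        print(model.fit().summary())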

  20. Parameter identification in ODE models with oscillatory dynamics: a Fourier regularization approach

    Science.gov (United States)

    Chiara D'Autilia, Maria; Sgura, Ivonne; Bozzini, Benedetto

    2017-12-01

    In this paper we consider a parameter identification problem (PIP) for data oscillating in time that can be described in terms of the dynamics of some ordinary differential equation (ODE) model, resulting in an optimization problem constrained by the ODEs. In problems with this type of data structure, simple application of the direct method of control theory (discretize-then-optimize) yields a least-squares cost function exhibiting multiple 'low' minima. Since in this situation any optimization algorithm is liable to fail in the approximation of a good solution, here we propose a Fourier regularization approach that is able to identify an iso-frequency manifold S of codimension one in the parameter space.

  1. Zeta-function regularization approach to finite temperature effects in Kaluza-Klein space-times

    International Nuclear Information System (INIS)

    Bytsenko, A.A.; Vanzo, L.; Zerbini, S.

    1992-01-01

    In the framework of the heat-kernel approach to zeta-function regularization, in this paper the one-loop effective potential at finite temperature for scalar and spinor fields on Kaluza-Klein space-times of the form M^p x M_c^n, where M^p is p-dimensional Minkowski space-time, is evaluated. In particular, when the compact manifold is M_c^n = H^n/Γ, the Selberg trace formula associated with the discrete torsion-free group Γ of the n-dimensional Lobachevsky space H^n is used. An explicit representation for the thermodynamic potential valid for arbitrary temperature is found. As a result, a complete high-temperature expansion is presented and the roles of zero modes and topological contributions are discussed.

  2. Distribution Agnostic Structured Sparsity Recovery: Algorithms and Applications

    KAUST Repository

    Masood, Mudassir

    2015-05-01

    Compressed sensing has been a very active area of research and several elegant algorithms have been developed for the recovery of sparse signals in the past few years. However, most of these algorithms are either computationally expensive or make some assumptions that are not suitable for all real world problems. Recently, focus has shifted to Bayesian-based approaches that are able to perform sparse signal recovery at much lower complexity while invoking constraints and/or a priori information about the data. While Bayesian approaches have their advantages, these methods must have access to a priori statistics. Usually, these statistics are unknown and are often difficult or even impossible to predict. An effective workaround is to assume a distribution which is typically considered to be Gaussian, as it makes many signal processing problems mathematically tractable. Seemingly attractive, this assumption necessitates the estimation of the associated parameters, which could be hard if not impossible. In the thesis, we focus on this aspect of Bayesian recovery and present a framework to address the challenges mentioned above. The proposed framework allows Bayesian recovery of sparse signals but at the same time is agnostic to the distribution of the unknown sparse signal components. The algorithms based on this framework are agnostic to signal statistics and utilize a priori statistics of additive noise and the sparsity rate of the signal, which are shown to be easily estimated from data if not available. In the thesis, we propose several algorithms based on this framework which utilize the structure present in signals for improved recovery. In addition to the algorithm that considers just the sparsity structure of sparse signals, tools that target additional structure of the sparsity recovery problem are proposed. These include several algorithms for a) block-sparse signal estimation, b) joint reconstruction of several common support sparse signals, and c

  3. A Sparsity-Promoted Decomposition for Compressed Fault Diagnosis of Roller Bearings

    Directory of Open Access Journals (Sweden)

    Huaqing Wang

    2016-09-01

    Full Text Available The traditional approaches for condition monitoring of roller bearings are almost always achieved under Shannon sampling theorem conditions, leading to a big-data problem. The compressed sensing (CS theory provides a new solution to the big-data problem. However, the vibration signals are insufficiently sparse and it is difficult to achieve sparsity using the conventional techniques, which impedes the application of CS theory. Therefore, it is of great significance to promote the sparsity when applying the CS theory to fault diagnosis of roller bearings. To increase the sparsity of vibration signals, a sparsity-promoted method called the tunable Q-factor wavelet transform based on decomposing the analyzed signals into transient impact components and high oscillation components is utilized in this work. The former become sparser than the raw signals with noise eliminated, whereas the latter include noise. Thus, the decomposed transient impact components replace the original signals for analysis. The CS theory is applied to extract the fault features without complete reconstruction, which means that the reconstruction can be completed when the components with interested frequencies are detected and the fault diagnosis can be achieved during the reconstruction procedure. The application cases prove that the CS theory assisted by the tunable Q-factor wavelet transform can successfully extract the fault features from the compressed samples.

  4. A projection-based approach to general-form Tikhonov regularization

    DEFF Research Database (Denmark)

    Kilmer, Misha E.; Hansen, Per Christian; Espanol, Malena I.

    2007-01-01

    We present a projection-based iterative algorithm for computing general-form Tikhonov regularized solutions to the problem min_x ||Ax - b||_2^2 + λ^2 ||Lx||_2^2, where the regularization matrix L is not the identity. Our algorithm is designed for the common case where λ is not known a priori...
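
    For small problems, the general-form Tikhonov solution can be obtained directly by stacking A with λL and solving one least-squares problem; this is a handy reference when checking an iterative, projection-based solver, and is not the paper's algorithm. L below is assumed to be a first-difference operator.

        # Direct (small-scale) solution of min_x ||Ax - b||_2^2 + lambda^2 ||Lx||_2^2 by
        # stacking [A; lambda*L]; useful as a reference solution on small test problems.
        import numpy as np

        def tikhonov_general_form(A, b, L, lam):
            K = np.vstack([A, lam * L])
            rhs = np.concatenate([b, np.zeros(L.shape[0])])
            return np.linalg.lstsq(K, rhs, rcond=None)[0]

        rng = np.random.default_rng(0)
        n = 50
        A = rng.standard_normal((60, n))
        x_true = np.cumsum(rng.standard_normal(n)) * 0.1      # smooth-ish signal
        b = A @ x_true + 0.05 * rng.standard_normal(60)
        L = np.diff(np.eye(n), axis=0)                        # (n-1) x n first-difference operator
        x_hat = tikhonov_general_form(A, b, L, lam=1.0)
        print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))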

  5. A regularized, model-based approach to phase-based conductivity mapping using MRI.

    Science.gov (United States)

    Ropella, Kathleen M; Noll, Douglas C

    2017-11-01

    To develop a novel regularized, model-based approach to phase-based conductivity mapping that uses structural information to improve the accuracy of conductivity maps. The inverse of the three-dimensional Laplacian operator is used to model the relationship between measured phase maps and the object conductivity in a penalized weighted least-squares optimization problem. Spatial masks based on structural information are incorporated into the problem to preserve data near boundaries. The proposed Inverse Laplacian method was compared against a restricted Gaussian filter in simulation, phantom, and human experiments. The Inverse Laplacian method resulted in lower reconstruction bias and error due to noise in simulations than the Gaussian filter. The Inverse Laplacian method also produced conductivity maps closer to the measured values in a phantom and with reduced noise in the human brain, as compared to the Gaussian filter. The Inverse Laplacian method calculates conductivity maps with less noise and more accurate values near boundaries. Improving the accuracy of conductivity maps is integral for advancing the applications of conductivity mapping. Magn Reson Med 78:2011-2021, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  6. Selection of regularization parameter for l1-regularized damage detection

    Science.gov (United States)

    Hou, Rongrong; Xia, Yong; Bao, Yuequan; Zhou, Xiaoqing

    2018-06-01

    The l1 regularization technique has been developed for structural health monitoring and damage detection through employing the sparsity condition of structural damage. The regularization parameter, which controls the trade-off between data fidelity and solution size of the regularization problem, exerts a crucial effect on the solution. However, the l1 regularization problem has no closed-form solution, and the regularization parameter is usually selected by experience. This study proposes two strategies of selecting the regularization parameter for the l1-regularized damage detection problem. The first method utilizes the residual and solution norms of the optimization problem and ensures that they are both small. The other method is based on the discrepancy principle, which requires that the variance of the discrepancy between the calculated and measured responses is close to the variance of the measurement noise. The two methods are applied to a cantilever beam and a three-story frame. A range of the regularization parameter, rather than one single value, can be determined. When the regularization parameter in this range is selected, the damage can be accurately identified even for multiple damage scenarios. This range also indicates the sensitivity degree of the damage identification problem to the regularization parameter.
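
    The discrepancy-principle strategy can be illustrated by scanning a grid of regularization parameters with a generic ℓ1 solver and keeping the value whose residual variance best matches the (assumed known) noise variance. The sketch below uses scikit-learn's Lasso as a stand-in; the sensitivity matrix and damage pattern are simulated.

        # Discrepancy-principle flavour of regularization-parameter selection: scan a grid
        # of lambda values for an l1-regularized problem and keep the one whose residual
        # variance is closest to the (assumed known) measurement-noise variance.
        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(0)
        n_meas, n_elem = 100, 40
        A = rng.standard_normal((n_meas, n_elem))           # sensitivity matrix (illustrative)
        damage = np.zeros(n_elem); damage[[5, 17]] = -0.3    # sparse stiffness reductions
        sigma = 0.05
        y = A @ damage + sigma * rng.standard_normal(n_meas)

        best_lam, best_gap = None, np.inf
        for lam in np.logspace(-3, 0, 30):
            x = Lasso(alpha=lam, fit_intercept=False, max_iter=10000).fit(A, y).coef_
            gap = abs(np.var(y - A @ x) - sigma**2)          # distance to the noise variance
            if gap < best_gap:
                best_lam, best_gap = lam, gap
        print("selected lambda:", best_lam)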

  7. Regular and promotional sales in new product life-cycle: A competitive approach

    OpenAIRE

    Guidolin, Mariangela; Guseo, Renato; Mortarino, Cinzia

    2016-01-01

    In this paper, we consider the application of the Lotka-Volterra model with churn effects, LVch, (Guidolin and Guseo, 2015) to the case of a confectionary product produced in Italy and recently commercialized in a European country. Weekly time series, referring separately to quantities of regular and promotional sales, are available. Their joint inspection highlighted the presence of compensatory dynamics suggesting the study with the LVch to estimate whether competition between regular and p...

  8. A novel approach of ensuring layout regularity correct by construction in advanced technologies

    Science.gov (United States)

    Ahmed, Shafquat Jahan; Vaderiya, Yagnesh; Gupta, Radhika; Parthasarathy, Chittoor; Marin, Jean-Claude; Robert, Frederic

    2017-03-01

    In advanced technology nodes, layout regularity has become a mandatory prerequisite to create robust designs that are less sensitive to variations in the manufacturing process, in order to improve yield and minimize electrical variability. In this paper we describe a method for designing regular full-custom layouts based on design and process co-optimization. The method includes various design rule checks that can be used on-the-fly during leaf-cell layout development. We extract a Layout Regularity Index (LRI) from the layouts based on the jogs, alignments and pitches used in the design for any given metal layer. The Regularity Index of a layout is a direct indicator of manufacturing yield and is used to compare the relative health of different layout blocks in terms of process friendliness. The method has been deployed for the 28 nm and 40 nm technology nodes for Memory IP and is being extended to other IPs (IO, standard-cell). We have quantified the gain of layout regularity with the deployed method on printability and electrical characteristics by process-variation (PV) band simulation analysis and have achieved up to 5 nm reduction in the PV band.

  9. Sparsity-weighted outlier FLOODing (OFLOOD) method: Efficient rare event sampling method using sparsity of distribution.

    Science.gov (United States)

    Harada, Ryuhei; Nakamura, Tomotake; Shigeta, Yasuteru

    2016-03-30

As an extension of the Outlier FLOODing (OFLOOD) method [Harada et al., J. Comput. Chem. 2015, 36, 763], the sparsity of the outliers defined by a hierarchical clustering algorithm, FlexDice, was considered to achieve an efficient conformational search as sparsity-weighted "OFLOOD." In OFLOOD, FlexDice detects areas of sparse distribution as outliers. The outliers are regarded as candidates that have high potential to promote conformational transitions and are employed as initial structures for conformational resampling by restarting molecular dynamics simulations. When detecting outliers, FlexDice assigns each outlier a rank in the hierarchy, which relates to the sparsity of the distribution. In this study, we define lower-rank (first-ranked), medium-rank (second-ranked), and highest-rank (third-ranked) outliers. For instance, the first-ranked outliers are located in conformational space far away from the clusters (highly sparse distribution), whereas the third-ranked outliers lie near the clusters (a moderately sparse distribution). To make the conformational search efficient, resampling is performed from the outliers of a given rank. As demonstrations, the method was applied to several model systems: alanine dipeptide, Met-enkephalin, Trp-cage, T4 lysozyme, and glutamine binding protein. In each demonstration, the present method successfully reproduced transitions among metastable states. In particular, the first-ranked OFLOOD greatly accelerated the exploration of conformational space by expanding its edges. In contrast, the third-ranked OFLOOD intensively reproduced local transitions among neighboring metastable states. For quantitative evaluation of the sampled snapshots, free-energy calculations were performed in combination with umbrella sampling, providing rigorous free-energy landscapes of the biomolecules. © 2015 Wiley Periodicals, Inc.
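    As a rough, hypothetical illustration of the sparsity-ranking idea (a generic stand-in, not the FlexDice algorithm), the sketch below clusters two-dimensional projections of simulated snapshots with standard hierarchical clustering and treats points falling in the smallest clusters as the highest-priority outliers from which resampling would be restarted; all data and thresholds are assumptions.

    ```python
    # Illustrative stand-in for sparsity-ranked outlier detection (not FlexDice):
    # snapshots in a low-dimensional projection are clustered hierarchically and
    # points in small clusters are ranked as outliers / resampling seeds.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    rng = np.random.default_rng(1)
    # Hypothetical 2-D projections (e.g., two collective variables) of MD snapshots.
    dense = rng.normal(loc=[0.0, 0.0], scale=0.1, size=(200, 2))
    sparse = rng.normal(loc=[1.5, 1.0], scale=0.4, size=(15, 2))
    snapshots = np.vstack([dense, sparse])

    Z = linkage(snapshots, method="average")
    labels = fcluster(Z, t=0.5, criterion="distance")

    # Rank clusters by population: the smaller the cluster, the sparser the region.
    sizes = {c: np.sum(labels == c) for c in np.unique(labels)}
    rank = {c: r for r, c in enumerate(sorted(sizes, key=sizes.get), start=1)}

    # First-ranked outliers (smallest clusters) would be used to restart MD runs.
    seed_idx = [i for i, c in enumerate(labels) if rank[c] == 1]
    print("candidate restart snapshots:", seed_idx[:10])
    ```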

  10. Sparsity-based multi-height phase recovery in holographic microscopy

    Science.gov (United States)

    Rivenson, Yair; Wu, Yichen; Wang, Hongda; Zhang, Yibo; Feizi, Alborz; Ozcan, Aydogan

    2016-11-01

    High-resolution imaging of densely connected samples such as pathology slides using digital in-line holographic microscopy requires the acquisition of several holograms, e.g., at >6-8 different sample-to-sensor distances, to achieve robust phase recovery and coherent imaging of specimen. Reducing the number of these holographic measurements would normally result in reconstruction artifacts and loss of image quality, which would be detrimental especially for biomedical and diagnostics-related applications. Inspired by the fact that most natural images are sparse in some domain, here we introduce a sparsity-based phase reconstruction technique implemented in wavelet domain to achieve at least 2-fold reduction in the number of holographic measurements for coherent imaging of densely connected samples with minimal impact on the reconstructed image quality, quantified using a structural similarity index. We demonstrated the success of this approach by imaging Papanicolaou smears and breast cancer tissue slides over a large field-of-view of ~20 mm2 using 2 in-line holograms that are acquired at different sample-to-sensor distances and processed using sparsity-based multi-height phase recovery. This new phase recovery approach that makes use of sparsity can also be extended to other coherent imaging schemes, involving e.g., multiple illumination angles or wavelengths to increase the throughput and speed of coherent imaging.

  11. Sparsity-based multi-height phase recovery in holographic microscopy

    KAUST Repository

    Rivenson, Yair

    2016-11-30

    High-resolution imaging of densely connected samples such as pathology slides using digital in-line holographic microscopy requires the acquisition of several holograms, e.g., at >6–8 different sample-to-sensor distances, to achieve robust phase recovery and coherent imaging of specimen. Reducing the number of these holographic measurements would normally result in reconstruction artifacts and loss of image quality, which would be detrimental especially for biomedical and diagnostics-related applications. Inspired by the fact that most natural images are sparse in some domain, here we introduce a sparsity-based phase reconstruction technique implemented in wavelet domain to achieve at least 2-fold reduction in the number of holographic measurements for coherent imaging of densely connected samples with minimal impact on the reconstructed image quality, quantified using a structural similarity index. We demonstrated the success of this approach by imaging Papanicolaou smears and breast cancer tissue slides over a large field-of-view of ~20 mm2 using 2 in-line holograms that are acquired at different sample-to-sensor distances and processed using sparsity-based multi-height phase recovery. This new phase recovery approach that makes use of sparsity can also be extended to other coherent imaging schemes, involving e.g., multiple illumination angles or wavelengths to increase the throughput and speed of coherent imaging.

  12. Regularization methods in Banach spaces

    CERN Document Server

    Schuster, Thomas; Hofmann, Bernd; Kazimierski, Kamil S

    2012-01-01

Regularization methods aimed at finding stable approximate solutions are a necessary tool to tackle inverse and ill-posed problems. Usually the mathematical model of an inverse problem consists of an operator equation of the first kind and often the associated forward operator acts between Hilbert spaces. However, for numerous problems the reasons for using a Hilbert space setting seem to be based on convention rather than on an appropriate and realistic model choice, so often a Banach space setting would be closer to reality. Furthermore, sparsity constraints using general Lp-norms or the B

  13. Bypassing the Limits of L1 Regularization: Convex Sparse Signal Processing Using Non-Convex Regularization

    Science.gov (United States)

    Parekh, Ankit

Sparsity has become the basis of some important signal processing methods over the last ten years. Many signal processing problems (e.g., denoising, deconvolution, non-linear component analysis) can be expressed as inverse problems. Sparsity is invoked through the formulation of an inverse problem with suitably designed regularization terms. The regularization terms alone encode sparsity into the problem formulation. Often, the ℓ1 norm is used to induce sparsity, so much so that ℓ1 regularization is considered to be 'modern least-squares'. The use of the ℓ1 norm, as a sparsity-inducing regularizer, leads to a convex optimization problem, which has several benefits: the absence of extraneous local minima and a well-developed theory of globally convergent algorithms, even for large-scale problems. Convex regularization via the ℓ1 norm, however, tends to under-estimate the non-zero values of sparse signals. In order to estimate the non-zero values more accurately, non-convex regularization is often favored over convex regularization. However, non-convex regularization generally leads to non-convex optimization, which suffers from numerous issues: convergence may be guaranteed only to a stationary point, problem-specific parameters may be difficult to set, and the solution is sensitive to the initialization of the algorithm. The first part of this thesis is aimed toward combining the benefits of non-convex regularization and convex optimization to estimate sparse signals more effectively. To this end, we propose to use parameterized non-convex regularizers with designated non-convexity and provide a range for the non-convex parameter so as to ensure that the objective function is strictly convex. By ensuring convexity of the objective function (sum of data-fidelity and non-convex regularizer), we can make use of a wide variety of convex optimization algorithms to obtain the unique global minimum reliably. The second part of this thesis proposes a non-linear signal
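    To make the idea of a non-convex penalty inside a convex problem concrete, the sketch below contrasts the soft threshold induced by the ℓ1 norm with the firm threshold induced by a parameterized non-convex (minimax-concave) penalty; for the scalar denoising objective 0.5(y − x)² + P(x; λ, γ), convexity is commonly preserved when the shape parameter γ is at least 1, while large coefficients are no longer shrunk. This is a generic illustration, not the specific regularizers proposed in the thesis, and the parameter values are assumptions.

    ```python
    # Sketch: soft thresholding (l1 prox) vs. firm thresholding (prox of the
    # minimax-concave penalty). The firm rule zeroes small values like soft
    # thresholding but passes large values through without bias.
    import numpy as np

    def soft(y, lam):
        return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

    def firm(y, lam, gam):
        """Firm threshold: prox of the MC penalty with threshold lam and shape gam > 1."""
        return np.where(np.abs(y) <= lam, 0.0,
                        np.where(np.abs(y) <= gam * lam,
                                 gam * (np.abs(y) - lam) / (gam - 1.0) * np.sign(y),
                                 y))

    y = np.linspace(-3, 3, 13)
    lam, gam = 1.0, 2.0
    print("input :", np.round(y, 2))
    print("soft  :", np.round(soft(y, lam), 2))        # large values shrunk by lam
    print("firm  :", np.round(firm(y, lam, gam), 2))   # large values left unchanged
    ```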

  14. Pauli-Villars regularization in nonperturbative Hamiltonian approach on the light front

    Energy Technology Data Exchange (ETDEWEB)

    Malyshev, M. Yu., E-mail: mimalysh@yandex.ru; Paston, S. A.; Prokhvatilov, E. V.; Zubov, R. A.; Franke, V. A. [Saint Petersburg State University, Saint Petersburg (Russian Federation)

    2016-01-22

The advantage of Pauli-Villars regularization in quantum field theory quantized on the light front is explained. Simple examples of scalar λφ⁴ field theory and a Yukawa-type model are used. We also give an example of a nonperturbative calculation in a theory with Pauli-Villars fields, using for that purpose a model of an anharmonic oscillator modified by the inclusion of ghost variables playing a role similar to that of Pauli-Villars fields.

  15. Laplacian manifold regularization method for fluorescence molecular tomography

    Science.gov (United States)

    He, Xuelei; Wang, Xiaodong; Yi, Huangjian; Chen, Yanrong; Zhang, Xu; Yu, Jingjing; He, Xiaowei

    2017-04-01

Sparse regularization methods have been widely used in fluorescence molecular tomography (FMT) for stable three-dimensional reconstruction. Generally, ℓ1-regularization-based methods allow for utilizing the sparsity nature of the target distribution. However, in addition to sparsity, the spatial structure information should be exploited as well. A joint ℓ1 and Laplacian manifold regularization model is proposed to improve the reconstruction performance, and two algorithms (with and without the Barzilai-Borwein strategy) are presented to solve the regularization model. Numerical studies and an in vivo experiment demonstrate that the proposed gradient-projection-resolved Laplacian manifold regularization method for the joint model performs better than a comparative ℓ1 minimization algorithm in terms of both spatial aggregation and location accuracy.
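    A minimal sketch of the joint model's structure (not the authors' gradient-projection algorithm): minimize 0.5‖Ax − b‖² + λ₁‖x‖₁ + λ₂ xᵀLx by proximal gradient, where the graph Laplacian L stands in for the spatial neighborhood structure of the mesh. The forward matrix, Laplacian, and parameters below are illustrative assumptions.

    ```python
    # Minimal sketch of a joint l1 + Laplacian-manifold regularized reconstruction:
    #   minimize 0.5*||A x - b||^2 + lam1*||x||_1 + lam2 * x^T L x
    # solved with proximal gradient (soft thresholding handles the l1 term).
    import numpy as np

    rng = np.random.default_rng(0)
    n = 50
    A = rng.standard_normal((30, n))
    x_true = np.zeros(n); x_true[20:25] = 1.0          # sparse, spatially contiguous target
    b = A @ x_true + 0.01 * rng.standard_normal(30)

    # Graph Laplacian of a 1-D chain as a stand-in for the mesh neighborhood structure.
    L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    L[0, 0] = L[-1, -1] = 1

    lam1, lam2 = 0.01, 0.1
    step = 1.0 / (np.linalg.norm(A, 2) ** 2 + 2 * lam2 * np.linalg.norm(L, 2))

    x = np.zeros(n)
    for _ in range(2000):
        grad = A.T @ (A @ x - b) + 2 * lam2 * (L @ x)   # gradient of the smooth terms
        x = x - step * grad
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam1, 0.0)  # prox of lam1*||.||_1

    print("support recovered:", np.nonzero(np.abs(x) > 0.1)[0])
    ```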

  16. A general approach to regularizing inverse problems with regional data using Slepian wavelets

    Science.gov (United States)

    Michel, Volker; Simons, Frederik J.

    2017-12-01

    Slepian functions are orthogonal function systems that live on subdomains (for example, geographical regions on the Earth’s surface, or bandlimited portions of the entire spectrum). They have been firmly established as a useful tool for the synthesis and analysis of localized (concentrated or confined) signals, and for the modeling and inversion of noise-contaminated data that are only regionally available or only of regional interest. In this paper, we consider a general abstract setup for inverse problems represented by a linear and compact operator between Hilbert spaces with a known singular-value decomposition (svd). In practice, such an svd is often only given for the case of a global expansion of the data (e.g. on the whole sphere) but not for regional data distributions. We show that, in either case, Slepian functions (associated to an arbitrarily prescribed region and the given compact operator) can be determined and applied to construct a regularization for the ill-posed regional inverse problem. Moreover, we describe an algorithm for constructing the Slepian basis via an algebraic eigenvalue problem. The obtained Slepian functions can be used to derive an svd for the combination of the regionalizing projection and the compact operator. As a result, standard regularization techniques relying on a known svd become applicable also to those inverse problems where the data are regionally given only. In particular, wavelet-based multiscale techniques can be used. An example for the latter case is elaborated theoretically and tested on two synthetic numerical examples.
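    As a one-dimensional toy analogue of the Slepian construction (not the paper's abstract operator setting), the sketch below assembles the classical concentration matrix for bandlimited sequences restricted to an index interval and takes its eigenvectors as Slepian-type basis functions; the bandwidth, interval, and 0.5 eigenvalue cutoff are illustrative assumptions.

    ```python
    # Toy 1-D analogue of the Slepian construction: eigenvectors of the
    # concentration operator "restrict to a region, then bandlimit" give an
    # orthogonal basis ordered by how well each function concentrates in the region.
    import numpy as np

    N = 128                      # full index set ("global" domain)
    W = 0.05                     # normalized half-bandwidth (assumption)
    region = np.arange(40, 90)   # the "regional" subdomain of interest

    # Prolate matrix on the region: K[m, n] = sin(2*pi*W*(m-n)) / (pi*(m-n)).
    m, n = np.meshgrid(region, region, indexing="ij")
    d = m - n
    with np.errstate(divide="ignore", invalid="ignore"):
        K = np.where(d == 0, 2 * W, np.sin(2 * np.pi * W * d) / (np.pi * d))

    # Symmetric eigenvalue problem; eigenvalues near 1 mark well-concentrated functions.
    eigval, eigvec = np.linalg.eigh(K)
    order = np.argsort(eigval)[::-1]
    eigval, eigvec = eigval[order], eigvec[:, order]

    n_keep = int(np.sum(eigval > 0.5))   # Shannon-number-like cutoff
    print("well-concentrated Slepian-type functions:", n_keep)
    print("leading concentration eigenvalues:", np.round(eigval[:5], 3))
    ```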

  17. Topology Identification of Coupling Map Lattice under Sparsity Condition

    Directory of Open Access Journals (Sweden)

    Jiangni Yu

    2015-01-01

A coupled map lattice is an efficient mathematical model for studying complex systems. This paper studies the topology identification of a coupled map lattice (CML) under the sparsity condition. We convert the identification problem into the problem of solving underdetermined linear equations, which is addressed with the l1-norm method. The requirements on the data characteristics and the number of sampling times are discussed in detail. We find that high-entropy data with small coupling coefficients are suitable for the identification. When the number of sampling times is more than 2.86 times the sparsity, the identification accuracy reaches an acceptable level, and when it reaches 4 times the sparsity, the accuracy is fairly good.
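    One compact way to realize the "l1-norm method for underdetermined equations" is to pose basis pursuit as a linear program. The sketch below uses an illustrative random system (not CML data) and a number of equations equal to a few times the sparsity, in the spirit of the sampling-time requirement discussed above.

    ```python
    # Basis pursuit as a linear program:  min ||x||_1  s.t.  A x = b,
    # written with x = u - v, u, v >= 0, and solved with scipy's LP solver.
    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)
    n, k = 60, 5                      # unknowns and sparsity
    m = 4 * k                         # number of equations ~ 4x the sparsity
    A = rng.standard_normal((m, n))
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    b = A @ x_true

    c = np.ones(2 * n)                        # minimize sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([A, -A])                 # A u - A v = b
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * n), method="highs")
    x_hat = res.x[:n] - res.x[n:]

    print("recovery error:", np.linalg.norm(x_hat - x_true))
    ```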

  18. Iterative approach to self-adapting and altitude-dependent regularization for atmospheric profile retrievals.

    Science.gov (United States)

    Ridolfi, Marco; Sgheri, Luca

    2011-12-19

In this paper we present the IVS (Iterative Variable Strength) method, an altitude-dependent, self-adapting Tikhonov regularization scheme for atmospheric profile retrievals. The method is based on a similar scheme we proposed in 2009. The new method does not need any specifically tuned minimization routine, hence it is more robust and faster. We test the self-consistency of the method using simulated observations of the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS). We then compare the new method with both our previous scheme and the scalar method currently implemented in the MIPAS on-line processor, using both synthetic and real atmospheric limb measurements. The IVS method shows very good performance.

  19. Exercise and College Students: How Regular Exercise Contributes to Approach Motivation via Self-Efficacy

    Science.gov (United States)

    Weinkauff Duranso, Christine M.

    2017-01-01

    There is evidence that participating in physical exercise reduces stress and the risk of many physical maladies. Exercise is also correlated with higher levels of approach motivation, or a tendency to approach challenge as an opportunity for growth or improvement instead of an opportunity for failure. To date, most research on this relationship…

  20. Comparison of methodological approaches to identify economic activity regularities in transition economy

    Directory of Open Access Journals (Sweden)

    Jitka Poměnková

    2011-01-01

This paper focuses on the evaluation of methodological approaches for analyzing the cyclical structure of economic activity in a transition economy. As a starting point, a time-domain approach is applied, followed by a frequency-domain approach. Both approaches are viewed from methodological as well as application points of view, and their advantages and disadvantages are discussed. Subsequently, a time-frequency domain approach is added and applied to real data, and a recommendation is formulated on the basis of the obtained results. All of the discussed methodological approaches are also considered with respect to their capability to evaluate the behaviour of the business cycle during the global economic crisis before/after 2008. The empirical part of the paper deals with gross domestic product data for the Czech Republic over 1996/Q1–2010/Q2.

  1. Multiple Speech Source Separation Using Inter-Channel Correlation and Relaxed Sparsity

    Directory of Open Access Journals (Sweden)

    Maoshen Jia

    2018-01-01

In this work, a multiple speech source separation method using inter-channel correlation and relaxed sparsity is proposed. A B-format microphone with four spatially located channels is adopted, owing to the size of the microphone array, to preserve the spatial parameter integrity of the original signal. Specifically, we first measure the proportion of overlapped components among multiple sources and find that there exist many overlapped time-frequency (TF) components as the number of sources increases. Then, considering the relaxed sparsity of speech sources, we propose a dynamic threshold-based approach for separating the sparse components, where the threshold is determined by the inter-channel correlation among the recording signals. After conducting a statistical analysis of the number of active sources at each TF instant, a form of relaxed sparsity called the half-K assumption is proposed so that the active source number in a certain TF bin does not exceed half the total number of simultaneously occurring sources. By applying the half-K assumption, the non-sparse components are recovered by regarding the extracted sparse components as a guide, combined with vector decomposition and matrix factorization. Eventually, the final TF coefficients of each source are recovered by the synthesis of sparse and non-sparse components. The proposed method has been evaluated using up to six simultaneous speech sources under both anechoic and reverberant conditions. Both objective and subjective evaluations validated that the perceptual quality of the speech separated by the proposed approach outperforms that of existing blind source separation (BSS) approaches. Moreover, it is robust across different speech signals, and all the separated signals exhibit similar perceptual quality.

  2. EIT Imaging Regularization Based on Spectral Graph Wavelets.

    Science.gov (United States)

    Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Vauhkonen, Marko; Wolf, Gerhard; Mueller-Lisse, Ullrich; Moeller, Knut

    2017-09-01

The objective of electrical impedance tomographic reconstruction is to identify the distribution of tissue conductivity from electrical boundary conditions. This is an ill-posed inverse problem usually solved under the finite-element method framework. In previous studies, standard sparse regularization was used for difference electrical impedance tomography to achieve a sparse solution. However, regarding elementwise sparsity, standard sparse regularization interferes with the smoothness of the conductivity distribution between neighboring elements and is sensitive to noise. As a result, the reconstructed images are spiky and lack smoothness. Such unexpected artifacts are not realistic and may lead to misinterpretation in clinical applications. To eliminate such artifacts, we present a novel sparse regularization method that uses spectral graph wavelet transforms. Single-scale or multiscale graph wavelet transforms are employed to introduce local smoothness on different scales into the reconstructed images. The proposed approach relies on viewing finite-element meshes as undirected graphs and applying wavelet transforms derived from spectral graph theory. Reconstruction results from simulations, a phantom experiment, and patient data suggest that our algorithm is more robust to noise and produces more reliable images.
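    The following sketch illustrates the underlying spectral graph wavelet idea on a small path graph, used here as an illustrative stand-in for a finite-element mesh (this is not the paper's reconstruction code): coefficients at scale s are obtained by applying a band-pass kernel g(s·λ) to the graph Laplacian spectrum.

    ```python
    # Spectral graph wavelet transform on a small graph via Laplacian eigendecomposition:
    # coefficients at scale s are W_s = U g(s*Lambda) U^T f, with a band-pass kernel g.
    import numpy as np

    def path_graph_laplacian(n):
        L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
        L[0, 0] = L[-1, -1] = 1
        return L

    def sgw_coefficients(L, f, scales, g=lambda x: x * np.exp(-x)):
        """Return wavelet coefficients of signal f for each scale (naive O(n^3) version)."""
        lam, U = np.linalg.eigh(L)               # graph Fourier basis
        f_hat = U.T @ f
        return [U @ (g(s * lam) * f_hat) for s in scales]

    n = 40
    L = path_graph_laplacian(n)
    f = np.zeros(n); f[18:22] = 1.0              # a localized "conductivity change"
    scales = [1.0, 5.0, 20.0]
    coeffs = sgw_coefficients(L, f, scales)

    for s, c in zip(scales, coeffs):
        print(f"scale {s:5.1f}: coefficient energy = {np.sum(c**2):.3f}")
    ```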

  3. Convex relaxations of spectral sparsity for robust super-resolution and line spectrum estimation

    Science.gov (United States)

    Chi, Yuejie

    2017-08-01

We consider recovering the amplitudes and locations of spikes in a point source signal from its low-pass spectrum that may suffer from missing data and arbitrary outliers. We first review and provide a unified view of several recently proposed convex relaxations that characterize and capitalize on the spectral sparsity of the point source signal without discretization, under the framework of atomic norms. Next we propose a new algorithm for the case where the spikes are known a priori to be positive, motivated by applications such as neural spike sorting and fluorescence microscopy imaging. Numerical experiments are provided to demonstrate the effectiveness of the proposed approach.

  4. Elliptic boundary value problems with fractional regularity data the first order approach

    CERN Document Server

    Amenta, Alex

    2018-01-01

    In this monograph the authors study the well-posedness of boundary value problems of Dirichlet and Neumann type for elliptic systems on the upper half-space with coefficients independent of the transversal variable and with boundary data in fractional Hardy-Sobolev and Besov spaces. The authors use the so-called "first order approach" which uses minimal assumptions on the coefficients and thus allows for complex coefficients and for systems of equations. This self-contained exposition of the first order approach offers new results with detailed proofs in a clear and accessible way and will become a valuable reference for graduate students and researchers working in partial differential equations and harmonic analysis.

  5. Multiple Feature Fusion Based on Co-Training Approach and Time Regularization for Place Classification in Wearable Video

    Directory of Open Access Journals (Sweden)

    Vladislavs Dovgalecs

    2013-01-01

The analysis of video acquired with a wearable camera is a challenge that the multimedia community is facing with the proliferation of such sensors in various applications. In this paper, we focus on the problem of automatic visual place recognition in a weakly constrained environment, targeting the indexing of video streams by topological place recognition. We propose to combine several machine learning approaches in a time-regularized framework for image-based place recognition indoors. The framework combines the power of multiple visual cues and integrates the temporal continuity information of video. We extend it with a computationally efficient semi-supervised method that leverages unlabeled video sequences for improved indexing performance. The proposed approach was applied to challenging video corpora. Experiments on a public database and a real-world video sequence database show the gain brought by the different stages of the method.

  6. A Regularizer Approach for RBF Networks Under the Concurrent Weight Failure Situation.

    Science.gov (United States)

    Leung, Chi-Sing; Wan, Wai Yan; Feng, Ruibin

    2017-06-01

Many existing results on fault-tolerant algorithms focus on the single-fault-source situation, where a trained network is affected by one kind of weight failure. In fact, a trained network may be affected by multiple kinds of weight failure. This paper first studies how the open weight fault and the multiplicative weight noise degrade the performance of radial basis function (RBF) networks. Afterward, we define the objective function for training fault-tolerant RBF networks. Based on the objective function, we then develop two learning algorithms, one batch mode and one online mode. In addition, the convergence conditions of our online algorithm are investigated. Finally, we develop a formula to estimate the test set error of faulty networks trained with our approach. This formula helps us to optimize some tuning parameters, such as the RBF width.

  7. FBRDLR: Fast blind reconstruction approach with dictionary learning regularization for infrared microscopy spectra

    Science.gov (United States)

    Liu, Tingting; Liu, Hai; Chen, Zengzhao; Chen, Yingying; Wang, Shengming; Liu, Zhi; Zhang, Hao

    2018-05-01

Infrared (IR) spectra are the fingerprints of molecules, and the spectral band locations closely relate to the structure of a molecule. Thus, specimen identification can be performed based on IR spectroscopy. However, spectrally overlapping components prevent the specific identification of hyperfine molecular information of different substances. In this paper, we propose a fast blind reconstruction approach for IR spectra, which is based on sparse and redundant representations over a dictionary. The proposed method recovers the spectrum using a discrete-wavelet-transform dictionary adapted to its content. The experimental results demonstrate that the proposed method is superior, showing better performance when compared with other state-of-the-art methods. The proposed method also removes the instrument-aging issue to a large extent, making the reconstructed IR spectra a more convenient tool for extracting and interpreting the features of an unknown material.

  8. Near-field acoustic imaging based on Laplacian sparsity

    DEFF Research Database (Denmark)

    Fernandez Grande, Efren; Daudet, Laurent

    2016-01-01

    We present a sound source identification method for near-field acoustic imaging of extended sources. The methodology is based on a wave superposition method (or equivalent source method) that promotes solutions with sparse higher order spatial derivatives. Instead of promoting direct sparsity......, and the validity of the wave extrapolation used for the reconstruction is examined. It is shown that this methodology can overcome conventional limits of spatial sampling, and is therefore valid for wide-band acoustic imaging of extended sources....

  9. A New Pansharpening Method Based on Spatial and Spectral Sparsity Priors.

    Science.gov (United States)

    He, Xiyan; Condat, Laurent; Bioucas-Diaz, Jose; Chanussot, Jocelyn; Xia, Junshi

    2014-06-27

    The development of multisensor systems in recent years has led to great increase in the amount of available remote sensing data. Image fusion techniques aim at inferring high quality images of a given area from degraded versions of the same area obtained by multiple sensors. This paper focuses on pansharpening, which is the inference of a high spatial resolution multispectral image from two degraded versions with complementary spectral and spatial resolution characteristics: a) a low spatial resolution multispectral image; and b) a high spatial resolution panchromatic image. We introduce a new variational model based on spatial and spectral sparsity priors for the fusion. In the spectral domain we encourage low-rank structure, whereas in the spatial domain we promote sparsity on the local differences. Given the fact that both panchromatic and multispectral images are integrations of the underlying continuous spectra using different channel responses, we propose to exploit appropriate regularizations based on both spatial and spectral links between panchromatic and the fused multispectral images. A weighted version of the vector Total Variation (TV) norm of the data matrix is employed to align the spatial information of the fused image with that of the panchromatic image. With regard to spectral information, two different types of regularization are proposed to promote a soft constraint on the linear dependence between the panchromatic and the fused multispectral images. The first one estimates directly the linear coefficients from the observed panchromatic and low resolution multispectral images by Linear Regression (LR) while the second one employs the Principal Component Pursuit (PCP) to obtain a robust recovery of the underlying low-rank structure. We also show that the two regularizers are strongly related. The basic idea of both regularizers is that the fused image should have low-rank and preserve edge locations. We use a variation of the recently proposed

  10. A Regularized Deep Learning Approach for Clinical Risk Prediction of Acute Coronary Syndrome Using Electronic Health Records.

    Science.gov (United States)

    Huang, Zhengxing; Dong, Wei; Duan, Huilong; Liu, Jiquan

    2018-05-01

    Acute coronary syndrome (ACS), as a common and severe cardiovascular disease, is a leading cause of death and the principal cause of serious long-term disability globally. Clinical risk prediction of ACS is important for early intervention and treatment. Existing ACS risk scoring models are based mainly on a small set of hand-picked risk factors and often dichotomize predictive variables to simplify the score calculation. This study develops a regularized stacked denoising autoencoder (SDAE) model to stratify clinical risks of ACS patients from a large volume of electronic health records (EHR). To capture characteristics of patients at similar risk levels, and preserve the discriminating information across different risk levels, two constraints are added on SDAE to make the reconstructed feature representations contain more risk information of patients, which contribute to a better clinical risk prediction result. We validate our approach on a real clinical dataset consisting of 3464 ACS patient samples. The performance of our approach for predicting ACS risk remains robust and reaches 0.868 and 0.73 in terms of both AUC and accuracy, respectively. The obtained results show that the proposed approach achieves a competitive performance compared to state-of-the-art models in dealing with the clinical risk prediction problem. In addition, our approach can extract informative risk factors of ACS via a reconstructive learning strategy. Some of these extracted risk factors are not only consistent with existing medical domain knowledge, but also contain suggestive hypotheses that could be validated by further investigations in the medical domain.

  11. 3D Inversion of Magnetic Data through Wavelet based Regularization Method

    Directory of Open Access Journals (Sweden)

    Maysam Abedi

    2015-06-01

This study deals with the 3D recovery of a magnetic susceptibility model by incorporating sparsity-based constraints in the inversion algorithm. For this purpose, the area under prospect was divided into a large number of rectangular prisms in a mesh with unknown susceptibilities. A Tikhonov cost function with two sparsity functions was used to recover the smooth parts as well as the sharp boundaries of the model parameters. A pre-selected basis, namely wavelets, can recover the regions of smooth behaviour of the susceptibility distribution, while the Haar or finite-difference (FD) domains yield a solution with rough boundaries. Therefore, a regularizer function that can benefit from the advantages of both the wavelet and Haar/FD operators in representing the 3D magnetic susceptibility distribution was chosen as a candidate for modeling magnetic anomalies. The optimum wavelet and the parameter β, which controls the weight of the two sparsifying operators, were also considered. The algorithm assumes that there is no remanent magnetization and that the observed magnetic data represent only the induced magnetization effect. The proposed approach is applied to noise-corrupted synthetic data in order to demonstrate its suitability for 3D inversion of magnetic data. After obtaining satisfactory results, a case study involving ground-based measurements of the magnetic anomaly over a porphyry-Cu deposit (the Now Chun deposit) located in the Kerman province of Iran was presented and inverted in 3D. The low susceptibility in the constructed model coincides with the known location of copper ore mineralization.

  12. A novel framework to alleviate the sparsity problem in context-aware recommender systems

    Science.gov (United States)

    Yu, Penghua; Lin, Lanfen; Wang, Jing

    2017-04-01

    Recommender systems have become indispensable for services in the era of big data. To improve accuracy and satisfaction, context-aware recommender systems (CARSs) attempt to incorporate contextual information into recommendations. Typically, valid and influential contexts are determined in advance by domain experts or feature selection approaches. Most studies have focused on utilizing the unitary context due to the differences between various contexts. Meanwhile, multi-dimensional contexts will aggravate the sparsity problem, which means that the user preference matrix would become extremely sparse. Consequently, there are not enough or even no preferences in most multi-dimensional conditions. In this paper, we propose a novel framework to alleviate the sparsity issue for CARSs, especially when multi-dimensional contextual variables are adopted. Motivated by the intuition that the overall preferences tend to show similarities among specific groups of users and conditions, we first explore to construct one contextual profile for each contextual condition. In order to further identify those user and context subgroups automatically and simultaneously, we apply a co-clustering algorithm. Furthermore, we expand user preferences in a given contextual condition with the identified user and context clusters. Finally, we perform recommendations based on expanded preferences. Extensive experiments demonstrate the effectiveness of the proposed framework.

  13. Multi-energy CT based on a prior rank, intensity and sparsity model (PRISM)

    International Nuclear Information System (INIS)

    Gao, Hao; Osher, Stanley; Yu, Hengyong; Wang, Ge

    2011-01-01

We propose a compressive sensing approach for multi-energy computed tomography (CT), namely the prior rank, intensity and sparsity model (PRISM). To further compress the multi-energy image for allowing the reconstruction with fewer CT data and less radiation dose, the PRISM models a multi-energy image as the superposition of a low-rank matrix and a sparse matrix (with row dimension in space and column dimension in energy), where the low-rank matrix corresponds to the stationary background over energy that has a low matrix rank, and the sparse matrix represents the rest of distinct spectral features that are often sparse. Distinct from previous methods, the PRISM utilizes the generalized rank, e.g., the matrix rank of tight-frame transform of a multi-energy image, which offers a way to characterize the multi-level and multi-filtered image coherence across the energy spectrum. Besides, the energy-dependent intensity information can be incorporated into the PRISM in terms of the spectral curves for base materials, with which the restoration of the multi-energy image becomes the reconstruction of the energy-independent material composition matrix. In other words, the PRISM utilizes prior knowledge on the generalized rank and sparsity of a multi-energy image, and intensity/spectral characteristics of base materials. Furthermore, we develop an accurate and fast split Bregman method for the PRISM and demonstrate the superior performance of the PRISM relative to several competing methods in simulations.
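    The low-rank plus sparse split at the core of the PRISM can be illustrated with a generic decomposition heuristic, shown below; it alternates singular-value thresholding for the low-rank part with soft thresholding for the sparse part, and is a simplified stand-in, not the paper's split Bregman solver or its tight-frame and spectral extensions. Sizes and thresholds are illustrative.

    ```python
    # Generic low-rank + sparse decomposition heuristic, M ~ L + S, illustrating the
    # split used in PRISM-like models (simplified; not the split Bregman solver).
    import numpy as np

    def svt(X, tau):
        """Singular value thresholding: prox of tau*||.||_* (nuclear norm)."""
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

    def soft(X, tau):
        return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

    rng = np.random.default_rng(0)
    n_pix, n_energy = 200, 8
    # "Stationary background" shared across energies (rank 1) + sparse spectral features.
    background = np.outer(rng.random(n_pix), np.ones(n_energy))
    features = np.zeros((n_pix, n_energy))
    features[rng.choice(n_pix, 10, replace=False), 2] = 1.0
    M = background + features

    L = np.zeros_like(M); S = np.zeros_like(M)
    for _ in range(50):
        L = svt(M - S, tau=0.5)      # low-rank update
        S = soft(M - L, tau=0.1)     # sparse update

    print("rank of L:", np.linalg.matrix_rank(L, tol=1e-3))
    print("nonzeros in S:", int(np.sum(np.abs(S) > 1e-3)))
    ```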

  14. l0 Sparsity for Image Denoising with Local and Global Priors

    Directory of Open Access Journals (Sweden)

    Xiaoni Gao

    2015-01-01

We propose an l0 sparsity-based approach to remove additive white Gaussian noise from a given image. To achieve this goal, we combine a local prior and a global prior to recover the noise-free values of pixels. The local prior depends on the neighborhood relationships within a search window to help maintain edges and smoothness. The global prior is generated from a hierarchical l0 sparse representation to help eliminate redundant information and preserve global consistency. In addition, to make the correlations between pixels more meaningful, we adopt Principal Component Analysis to measure the similarities, which both reduces the computational complexity and improves accuracy. Experiments on the benchmark image set show that the proposed approach achieves performance superior to state-of-the-art approaches, in both accuracy and perception, in removing zero-mean additive white Gaussian noise.

  15. Sparsity-Based Pixel Super Resolution for Lens-Free Digital In-line Holography.

    Science.gov (United States)

    Song, Jun; Leon Swisher, Christine; Im, Hyungsoon; Jeong, Sangmoo; Pathania, Divya; Iwamoto, Yoshiko; Pivovarov, Misha; Weissleder, Ralph; Lee, Hakho

    2016-04-21

    Lens-free digital in-line holography (LDIH) is a promising technology for portable, wide field-of-view imaging. Its resolution, however, is limited by the inherent pixel size of an imaging device. Here we present a new computational approach to achieve sub-pixel resolution for LDIH. The developed method is a sparsity-based reconstruction with the capability to handle the non-linear nature of LDIH. We systematically characterized the algorithm through simulation and LDIH imaging studies. The method achieved the spatial resolution down to one-third of the pixel size, while requiring only single-frame imaging without any hardware modifications. This new approach can be used as a general framework to enhance the resolution in nonlinear holographic systems.

  16. Virtual Seismic Observation (VSO) with Sparsity-Promotion Inversion

    Science.gov (United States)

    Tiezhao, B.; Ning, J.; Jianwei, M.

    2017-12-01

Large station intervals lead to low-resolution images and sometimes prevent imaging of the regions of interest. Sparsity-promotion inversion, a useful method for recovering missing data in industrial field acquisition, can be borrowed to interpolate seismic data at non-sampled sites, forming Virtual Seismic Observations (VSOs). Traditional sparsity-promotion inversion suffers when there are large time differences between adjacent sites, which is our main concern; we use a shift method to mitigate this. The interpolation procedure is as follows: we first employ a low-pass filter to obtain long-wavelength waveform data and shift the waveforms of the same wave in different seismograms to nearly the same arrival time. Then we use wavelet-transform-based sparsity-promotion inversion to interpolate the waveform data at non-sampled sites, filling a phase in each missing trace. Finally, we shift the waveforms back to their original arrival times. We call our method FSIS (Filtering, Shift, Interpolation, Shift) interpolation. In this way, we can insert different virtually observed seismic phases into non-sampled sites and obtain dense seismic observation data. To test the method, we randomly hide the real data at a site and use the remaining data to interpolate the observation at that site, using either direct interpolation or the FSIS method. Compared with directly interpolated data, data interpolated with FSIS preserve amplitudes better. The results also show that the arrival times and waveforms of the VSOs express the real data well, which convinces us that our method for forming VSOs is applicable. In this way, we can provide the data needed by advanced seismic techniques such as RTM to illuminate shallow structures.

  17. A New Reverberator Based on Variable Sparsity Convolution

    DEFF Research Database (Denmark)

    Holm-Rasmussen, Bo; Lehtonen, Heidi-Maria; Välimäki, Vesa

    2013-01-01

FIR filter coefficients are selected from a velvet noise sequence, which consists of ones, minus ones, and zeros only. In this application, it is sufficient perceptually to use very sparse velvet noise sequences having only about 0.1 to 0.2% non-zero elements, with increasing sparsity along the impulse response. The algorithm yields a parametric approximation of the late part of the impulse response, which is more than 100 times more efficient computationally than the direct convolution. The computational load of the proposed algorithm is comparable to that of FFT-based partitioned convolution...
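    A minimal sketch of the velvet-noise idea behind such reverberators (not the paper's variable-sparsity algorithm): generate a ternary sequence with at most one ±1 impulse per grid interval, widen the grid along the response so the tail becomes sparser, apply an exponential decay, and convolve the result with the input. The sample rate, densities, and decay time are assumptions chosen to land near the 0.1 to 0.2% density mentioned above.

    ```python
    # Sketch: a decaying velvet-noise sequence (values in {-1, 0, +1}) whose impulse
    # density decreases along the response, used as a cheap late-reverb FIR filter.
    import numpy as np
    from scipy.signal import fftconvolve

    def velvet_noise(length, fs, density_start=90.0, density_end=30.0, rt60=1.5):
        """One +/-1 impulse per grid interval; the grid widens so the tail gets sparser."""
        rng = np.random.default_rng(0)
        seq = np.zeros(length)
        t = 0
        while t < length:
            frac = t / length
            density = density_start + frac * (density_end - density_start)  # pulses/s
            grid = max(1, int(fs / density))
            pos = t + rng.integers(0, grid)
            if pos < length:
                decay = 10.0 ** (-3.0 * pos / (rt60 * fs))    # -60 dB after rt60 seconds
                seq[pos] = rng.choice([-1.0, 1.0]) * decay
            t += grid
        return seq

    fs = 44100
    vn = velvet_noise(length=int(1.5 * fs), fs=fs)
    x = np.random.default_rng(1).standard_normal(fs)           # 1 s of test input
    y = fftconvolve(x, vn)                                      # sparse-FIR "reverb"
    print("nonzero taps: %.2f%%" % (100.0 * np.mean(vn != 0)))
    ```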

  18. A wavelet-based regularized reconstruction algorithm for SENSE parallel MRI with applications to neuroimaging

    International Nuclear Information System (INIS)

    Chaari, L.; Pesquet, J.Ch.; Chaari, L.; Ciuciu, Ph.; Benazza-Benyahia, A.

    2011-01-01

To reduce scanning time and/or improve spatial/temporal resolution in some Magnetic Resonance Imaging (MRI) applications, parallel MRI acquisition techniques using multiple coils have emerged since the early 1990s as powerful imaging methods that allow a faster acquisition process. In these techniques, the full-FOV image has to be reconstructed from the acquired undersampled k-space data. To this end, several reconstruction techniques have been proposed, such as the widely used Sensitivity Encoding (SENSE) method. However, the reconstructed image generally presents artifacts when perturbations occur in both the measured data and the estimated coil sensitivity profiles. In this paper, we aim at achieving accurate image reconstruction under degraded experimental conditions (low magnetic field and high reduction factor), in which neither the SENSE method nor Tikhonov regularization in the image domain gives convincing results. To this end, we present a novel method for SENSE-based reconstruction which proceeds with regularization in the complex wavelet domain by promoting sparsity. The proposed approach relies on a fast algorithm that enables the minimization of regularized non-differentiable criteria including more general penalties than a classical l1 term. To further enhance the reconstructed image quality, local convex constraints are added to the regularization process. In vivo human brain experiments carried out on Gradient-Echo (GRE) anatomical and Echo Planar Imaging (EPI) functional MRI data at 1.5 T indicate that our algorithm provides reconstructed images with reduced artifacts for high reduction factors.

  19. Adaptive regularization

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Rasmussen, Carl Edward; Svarer, C.

    1994-01-01

    Regularization, e.g., in the form of weight decay, is important for training and optimization of neural network architectures. In this work the authors provide a tool based on asymptotic sampling theory, for iterative estimation of weight decay parameters. The basic idea is to do a gradient desce...

  20. Improved k-t PCA Algorithm Using Artificial Sparsity in Dynamic MRI.

    Science.gov (United States)

    Wang, Yiran; Chen, Zhifeng; Wang, Jing; Yuan, Lixia; Xia, Ling; Liu, Feng

    2017-01-01

The k-t principal component analysis (k-t PCA) is an effective approach for high-spatiotemporal-resolution dynamic magnetic resonance (MR) imaging. However, it suffers from larger residual aliasing artifacts and noise amplification as the reduction factor increases. To further enhance the performance of this technique, we propose a new method called sparse k-t PCA that combines the k-t PCA algorithm with an artificial sparsity constraint. It is a self-calibrated procedure based on the traditional k-t PCA method that further eliminates the reconstruction error derived from complex subtraction of the sampled k-t space from the original reconstructed k-t space. The proposed method is tested through both simulations and in vivo datasets with different reduction factors. Compared to the standard k-t PCA algorithm, the sparse k-t PCA can improve the normalized root-mean-square error performance and the accuracy of temporal resolution. It is thus useful for rapid dynamic MR imaging.

  1. Smoothly Clipped Absolute Deviation (SCAD) regularization for compressed sensing MRI Using an augmented Lagrangian scheme

    NARCIS (Netherlands)

    Mehranian, Abolfazl; Rad, Hamidreza Saligheh; Rahmim, Arman; Ay, Mohammad Reza; Zaidi, Habib

    2013-01-01

    Purpose: Compressed sensing (CS) provides a promising framework for MR image reconstruction from highly undersampled data, thus reducing data acquisition time. In this context, sparsity-promoting regularization techniques exploit the prior knowledge that MR images are sparse or compressible in a
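    The appeal of SCAD is its thresholding rule, which acts like soft thresholding on small coefficients but leaves large ones untouched. Below is a sketch of the standard SCAD thresholding operator for a denoising/orthonormal setting with the customary a = 3.7; it illustrates the penalty itself, not the paper's augmented Lagrangian reconstruction scheme.

    ```python
    # SCAD thresholding operator (Fan & Li): soft-thresholds small values, blends in
    # between, and leaves values above a*lam unchanged (no bias on large coefficients).
    import numpy as np

    def scad_threshold(z, lam, a=3.7):
        z = np.asarray(z, dtype=float)
        absz = np.abs(z)
        return np.where(
            absz <= 2 * lam,
            np.sign(z) * np.maximum(absz - lam, 0.0),                 # soft region
            np.where(
                absz <= a * lam,
                ((a - 1) * z - np.sign(z) * a * lam) / (a - 2),       # transition region
                z,                                                    # identity region
            ),
        )

    z = np.linspace(-6, 6, 13)
    lam = 1.0
    print("input:", np.round(z, 2))
    print("SCAD :", np.round(scad_threshold(z, lam), 2))
    ```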

  2. A Variational Approach to the Denoising of Images Based on Different Variants of the TV-Regularization

    International Nuclear Information System (INIS)

    Bildhauer, Michael; Fuchs, Martin

    2012-01-01

    We discuss several variants of the TV-regularization model used in image recovery. The proposed alternatives are either of nearly linear growth or even of linear growth, but with some weak ellipticity properties. The main feature of the paper is the investigation of the analytic properties of the corresponding solutions.
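    As a concrete instance of a TV variant with nearly linear growth, the sketch below denoises a one-dimensional signal with a Huber-smoothed total variation penalty using plain gradient descent; this is only an illustrative choice of model and solver (the Huber parameter, weight, and step size are assumptions), not the analysis carried out in the paper.

    ```python
    # 1-D denoising with a Huber-smoothed TV penalty (nearly linear growth):
    #   minimize 0.5*||u - f||^2 + lam * sum_i huber(u[i+1] - u[i])
    # solved with plain gradient descent.
    import numpy as np

    def huber_grad(t, eps):
        """Derivative of the Huber function: quadratic near 0, linear growth beyond eps."""
        return np.clip(t / eps, -1.0, 1.0)

    rng = np.random.default_rng(0)
    n = 200
    clean = np.concatenate([np.zeros(n // 2), np.ones(n // 2)])   # piecewise-constant signal
    f = clean + 0.1 * rng.standard_normal(n)

    lam, eps = 0.5, 0.05
    step = 1.0 / (1.0 + lam * 4.0 / eps)         # conservative step, below 1/Lipschitz
    u = f.copy()
    for _ in range(2000):
        d = np.diff(u)                           # forward differences
        g_tv = np.zeros(n)
        g_tv[:-1] -= huber_grad(d, eps)          # d(huber)/du[i]
        g_tv[1:] += huber_grad(d, eps)           # d(huber)/du[i+1]
        u = u - step * ((u - f) + lam * g_tv)

    print("noisy RMSE   :", np.sqrt(np.mean((f - clean) ** 2)))
    print("denoised RMSE:", np.sqrt(np.mean((u - clean) ** 2)))
    ```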

  3. Sparse regularization for force identification using dictionaries

    Science.gov (United States)

    Qiao, Baijie; Zhang, Xingwu; Wang, Chenxi; Zhang, Hang; Chen, Xuefeng

    2016-04-01

The classical function expansion method based on minimizing the l2-norm of the response residual employs various basis functions to represent the unknown force. Its difficulty lies in determining the optimum number of basis functions. Considering the sparsity of force in the time domain or in another basis space, we develop a general sparse regularization method based on minimizing the l1-norm of the coefficient vector of the basis functions. The number of basis functions is adaptively determined by minimizing the number of nonzero components in the coefficient vector during the sparse regularization process. First, according to the profile of the unknown force, the dictionary composed of basis functions is determined. Second, a sparsity convex optimization model for force identification is constructed. Third, given the transfer function and the operational response, sparse reconstruction by separable approximation (SpaRSA) is developed to solve the sparse regularization problem of force identification. Finally, experiments including identification of impact and harmonic forces are conducted on a cantilever thin plate structure to illustrate the effectiveness and applicability of SpaRSA. Besides the Dirac dictionary, three other sparse dictionaries, including Db6 wavelets, Sym4 wavelets and cubic B-spline functions, can also accurately identify both single and double impact forces from highly noisy responses in a sparse representation frame. The discrete cosine functions can also successfully reconstruct the harmonic forces, including the sinusoidal, square and triangular forces. Conversely, the traditional Tikhonov regularization method with the L-curve criterion fails to identify both the impact and harmonic forces in these cases.

  4. Time domain localization technique with sparsity constraint for imaging acoustic sources

    Science.gov (United States)

    Padois, Thomas; Doutres, Olivier; Sgard, Franck; Berry, Alain

    2017-09-01

This paper addresses a source localization technique in the time domain for broadband acoustic sources. The objective is to accurately and quickly detect the position and amplitude of noise sources in workplaces in order to propose adequate noise control options and prevent workers' hearing loss or safety risks. First, the generalized cross-correlation associated with a spherical microphone array is used to generate an initial noise source map. Then a linear inverse problem is defined to improve this initial map. Commonly, the linear inverse problem is solved with an l2-regularization. In this study, two sparsity constraints are used to solve the inverse problem: the orthogonal matching pursuit and the truncated Newton interior-point method. Synthetic data are used to highlight the performance of the technique. High-resolution imaging is achieved for various acoustic source configurations. Moreover, the amplitudes of the acoustic sources are correctly estimated. A comparison of computation times shows that the technique is compatible with quasi-real-time generation of noise source maps. Finally, the technique is tested with real data.
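    Of the two sparsity constraints mentioned, orthogonal matching pursuit is the easier one to sketch. The generic implementation below (with an illustrative random dictionary rather than the acoustic propagation model) greedily selects the dictionary column most correlated with the residual and re-fits all selected amplitudes by least squares.

    ```python
    # Generic orthogonal matching pursuit: greedily pick the dictionary column most
    # correlated with the residual, then re-fit all selected amplitudes by least squares.
    import numpy as np

    def omp(A, b, k):
        residual = b.copy()
        support = []
        x = np.zeros(A.shape[1])
        for _ in range(k):
            j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated atom
            if j not in support:
                support.append(j)
            coeffs, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
            x = np.zeros(A.shape[1]); x[support] = coeffs
            residual = b - A @ x
        return x

    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 120))
    A /= np.linalg.norm(A, axis=0)                       # unit-norm columns
    x_true = np.zeros(120); x_true[[7, 55, 90]] = [1.0, -0.5, 2.0]   # three "sources"
    b = A @ x_true + 0.01 * rng.standard_normal(40)

    x_hat = omp(A, b, k=3)
    print("estimated support:", np.nonzero(x_hat)[0])
    print("estimated amplitudes:", np.round(x_hat[np.nonzero(x_hat)], 2))
    ```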

  5. How does structured sparsity work in abnormal event detection?

    DEFF Research Database (Denmark)

    Ren, Huamin; Pan, Hong; Olsen, Søren Ingvor

the training, which is due to the fact that abnormal videos are limited or even unavailable in advance in most video surveillance applications. As a result, there could be only one label in the training data, which hampers supervised learning; 2) Even though there are multiple types of normal behaviors, how many normal patterns lie in the whole surveillance data is still unknown. This is because there is a huge amount of video surveillance data and only a small proportion is used in algorithm learning; consequently, the normal patterns in the training data could be incomplete. As a result, any sparse structure learned from the training data could have a high bias and ruin the precision of abnormal event detection. Therefore, in this paper we propose an algorithm to solve the abnormality detection problem by sparse representation, in which local structured sparsity is preserved in the coefficients. To better...

  6. Denoising of Microscopy Images: A Review of the State-of-the-Art, and a New Sparsity-Based Method.

    Science.gov (United States)

    Meiniel, William; Olivo-Marin, Jean-Christophe; Angelini, Elsa D

    2018-08-01

    This paper reviews the state-of-the-art in denoising methods for biological microscopy images and introduces a new and original sparsity-based algorithm. The proposed method combines total variation (TV) spatial regularization, enhancement of low-frequency information, and aggregation of sparse estimators and is able to handle simple and complex types of noise (Gaussian, Poisson, and mixed), without any a priori model and with a single set of parameter values. An extended comparison is also presented, that evaluates the denoising performance of the thirteen (including ours) state-of-the-art denoising methods specifically designed to handle the different types of noises found in bioimaging. Quantitative and qualitative results on synthetic and real images show that the proposed method outperforms the other ones on the majority of the tested scenarios.

  7. On structure-exploiting trust-region regularized nonlinear least squares algorithms for neural-network learning.

    Science.gov (United States)

    Mizutani, Eiji; Demmel, James W

    2003-01-01

    This paper briefly introduces our numerical linear algebra approaches for solving structured nonlinear least squares problems arising from 'multiple-output' neural-network (NN) models. Our algorithms feature trust-region regularization, and exploit sparsity of either the 'block-angular' residual Jacobian matrix or the 'block-arrow' Gauss-Newton Hessian (or Fisher information matrix in statistical sense) depending on problem scale so as to render a large class of NN-learning algorithms 'efficient' in both memory and operation costs. Using a relatively large real-world nonlinear regression application, we shall explain algorithmic strengths and weaknesses, analyzing simulation results obtained by both direct and iterative trust-region algorithms with two distinct NN models: 'multilayer perceptrons' (MLP) and 'complementary mixtures of MLP-experts' (or neuro-fuzzy modular networks).

  8. On the MSE Performance and Optimization of Regularized Problems

    KAUST Repository

    Alrashdi, Ayed

    2016-11-01

The amount of data that has been measured, transmitted/received, and stored in recent years has dramatically increased. So, today, we are in the world of big data. Fortunately, in many applications, we can take advantage of possible structures and patterns in the data to overcome the curse of dimensionality. The most well-known structures include sparsity, low-rankness, and block sparsity. This covers a wide range of applications such as machine learning, medical imaging, signal processing, social networks and computer vision. This has also led to a specific interest in recovering signals from noisy compressed measurements (the Compressed Sensing (CS) problem). Such problems are generally ill-posed unless the signal is structured. The structure can be captured by a regularizer function. This gives rise to a potential interest in regularized inverse problems, where the process of reconstructing the structured signal can be modeled as a regularized problem. This thesis particularly focuses on finding the optimal regularization parameter for such problems, such as ridge regression, LASSO, square-root LASSO and low-rank Generalized LASSO. Our goal is to optimally tune the regularizer to minimize the mean-squared error (MSE) of the solution when the noise variance or structure parameters are unknown. The analysis is based on the framework of the Convex Gaussian Min-max Theorem (CGMT) that has been used recently to precisely predict performance errors.

  9. Multi-view Multi-sparsity Kernel Reconstruction for Multi-class Image Classification

    KAUST Repository

    Zhu, Xiaofeng; Xie, Qing; Zhu, Yonghua; Liu, Xingyi; Zhang, Shichao

    2015-01-01

    This paper addresses the problem of multi-class image classification by proposing a novel multi-view multi-sparsity kernel reconstruction (MMKR for short) model. Given images (including test images and training images) representing with multiple

  10. A visibility-based approach using regularization for imaging-spectroscopy in solar X-ray astronomy

    Energy Technology Data Exchange (ETDEWEB)

    Prato, M; Massone, A M; Piana, M [CNR - INFM LAMIA, Via Dodecaneso 33 1-16146 Genova (Italy); Emslie, A G [Department of Physics, Oklahoma State University, Stillwater, OK 74078 (United States); Hurford, G J [Space Sciences Laboratory, University of California at Berkeley, 8 Gauss Way, Berkeley, CA 94720-7450 (United States); Kontar, E P [Department of Physics and Astronomy, The University, Glasgow G12 8QQ, Scotland (United Kingdom); Schwartz, R A [CUA - Catholic University and LSSP at NASA Goddard Space Flight Center, code 671.1 Greenbelt, MD 20771 (United States)], E-mail: massone@ge.infm.it

    2008-11-01

The Reuven Ramaty High-Energy Solar Spectroscopic Imager (RHESSI) is a nine-collimator satellite detecting X-rays and γ-rays emitted by the Sun during flares. As the spacecraft rotates, imaging information is encoded as rapid time variations of the detected flux. We recently proposed a method for the construction of electron flux maps at different electron energies from sets of count visibilities (i.e., direct, calibrated measurements of specific Fourier components of the source spatial structure) measured by RHESSI. The method requires the application of regularized inversion for the synthesis of electron visibility spectra and of imaging techniques for the reconstruction of two-dimensional electron flux maps. The method, already tested on real events registered by RHESSI, is validated in this paper by means of simulated realistic data.

  11. A Regular k-Shrinkage Thresholding Operator for the Removal of Mixed Gaussian-Impulse Noise

    Directory of Open Access Journals (Sweden)

    Han Pan

    2017-01-01

The removal of mixed Gaussian-impulse noise plays an important role in many areas, such as remote sensing. However, traditional methods may fail to adaptively promote the degree of sparsity after decomposing the data into a low-rank component and a sparse component. In this paper, a new problem formulation with a regular spectral k-support norm and a regular k-support l1 norm is proposed. A unified framework is developed to capture the intrinsic sparsity structure of the two components. To address the resulting problem, an efficient minimization scheme within the framework of accelerated proximal gradient is proposed. This scheme is realized by alternating the regular k-shrinkage thresholding operator. Experimental comparison with other state-of-the-art methods demonstrates the efficacy of the proposed method.

  12. Brain source localization: A new method based on MUltiple SIgnal Classification algorithm and spatial sparsity of the field signal for electroencephalogram measurements

    Science.gov (United States)

    Vergallo, P.; Lay-Ekuakille, A.

    2013-08-01

Brain activity can be recorded by means of EEG (electroencephalogram) electrodes placed on the scalp of the patient. The EEG reflects the activity of groups of neurons located in the head, and the fundamental problem in neurophysiology is the identification of the sources responsible for brain activity, especially if a seizure occurs, in which case it is important to identify them. The studies conducted to formalize the relationship between the electromagnetic activity in the head and the recording of the generated external field make it possible to characterize patterns of brain activity. The inverse problem, in which the underlying sources must be determined given the field sampled at different electrodes, is more difficult because it may not have a unique solution, or because the search for the solution is hampered by a low spatial resolution that may not allow activities involving sources close to each other to be distinguished. Thus, sources of interest may be obscured or not detected, and well-known source localization methods such as MUSIC (MUltiple SIgnal Classification) can fail. Many advanced source localization techniques achieve a better resolution by exploiting sparsity: if the number of sources is small, the neural power versus location is sparse. In this work a solution based on the spatial sparsity of the field signal is presented and analyzed to improve the MUSIC method. For this purpose, it is necessary to set a priori information on the sparsity of the signal. The problem is formulated and solved using a regularization method such as Tikhonov regularization, which calculates a solution that is the best compromise between two cost functions to be minimized, one related to the fit to the data and the other to the maintenance of the sparsity of the signal. First, the method is tested on simulated EEG signals obtained by solving the forward problem. With respect to the model considered for the head and brain sources, the result obtained allows to

  13. Exploiting Data Sparsity for Large-Scale Matrix Computations

    KAUST Repository

    Akbudak, Kadir; Ltaief, Hatem; Mikhalev, Aleksandr; Charara, Ali; Keyes, David E.

    2018-01-01

    Exploiting data sparsity in dense matrices is an algorithmic bridge between architectures that are increasingly memory-austere on a per-core basis and extreme-scale applications. The Hierarchical matrix Computations on Manycore Architectures (HiCMA) library tackles this challenging problem by achieving significant reductions in time to solution and memory footprint, while preserving a specified accuracy requirement of the application. HiCMA provides a high-performance implementation on distributed-memory systems of one of the most widely used matrix factorizations in large-scale scientific applications, i.e., the Cholesky factorization. It employs the tile low-rank data format to compress the dense data-sparse off-diagonal tiles of the matrix. It then decomposes the matrix computations into interdependent tasks and relies on the dynamic runtime system StarPU for asynchronous out-of-order scheduling, while allowing high user productivity. Performance comparisons and memory footprints on matrix dimensions up to eleven million show a performance gain and memory saving of more than an order of magnitude for both metrics on thousands of cores, against state-of-the-art open-source and vendor-optimized numerical libraries. This represents an important milestone in enabling large-scale matrix computations toward solving big data problems in geospatial statistics for climate/weather forecasting applications.
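
    The compression step behind the tile low-rank format can be sketched with a truncated SVD of a single off-diagonal tile; the function below is a simplification for illustration, not HiCMA's actual kernel. It replaces a dense tile T by factors U and V with T ≈ U @ V, keeping only the singular values above a user-chosen tolerance.

        import numpy as np

        def compress_tile(T, tol):
            """Return low-rank factors U, V such that T is approximated by U @ V."""
            U, s, Vt = np.linalg.svd(T, full_matrices=False)
            rank = max(1, int(np.sum(s > tol)))       # singular values above the accuracy threshold
            return U[:, :rank] * s[:rank], Vt[:rank, :]

        # toy example: a numerically low-rank 256 x 256 tile
        rng = np.random.default_rng(1)
        T = rng.standard_normal((256, 4)) @ rng.standard_normal((4, 256))
        U_k, V_k = compress_tile(T, tol=1e-8)
        # storage drops from 256*256 entries to roughly 2*256*rank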

  14. Exploiting Data Sparsity for Large-Scale Matrix Computations

    KAUST Repository

    Akbudak, Kadir

    2018-02-24

    Exploiting data sparsity in dense matrices is an algorithmic bridge between architectures that are increasingly memory-austere on a per-core basis and extreme-scale applications. The Hierarchical matrix Computations on Manycore Architectures (HiCMA) library tackles this challenging problem by achieving significant reductions in time to solution and memory footprint, while preserving a specified accuracy requirement of the application. HiCMA provides a high-performance implementation on distributed-memory systems of one of the most widely used matrix factorizations in large-scale scientific applications, i.e., the Cholesky factorization. It employs the tile low-rank data format to compress the dense data-sparse off-diagonal tiles of the matrix. It then decomposes the matrix computations into interdependent tasks and relies on the dynamic runtime system StarPU for asynchronous out-of-order scheduling, while allowing high user productivity. Performance comparisons and memory footprints on matrix dimensions up to eleven million show a performance gain and memory saving of more than an order of magnitude for both metrics on thousands of cores, against state-of-the-art open-source and vendor-optimized numerical libraries. This represents an important milestone in enabling large-scale matrix computations toward solving big data problems in geospatial statistics for climate/weather forecasting applications.

  15. Calibrationless Parallel Magnetic Resonance Imaging: A Joint Sparsity Model

    Directory of Open Access Journals (Sweden)

    Angshul Majumdar

    2013-12-01

    State-of-the-art parallel MRI techniques either explicitly or implicitly require certain parameters to be estimated, e.g., the sensitivity map for SENSE and SMASH, and interpolation weights for GRAPPA and SPIRiT. Thus all these techniques are sensitive to the calibration (parameter estimation) stage. In this work, we have proposed a parallel MRI technique that does not require any calibration but yields reconstruction results that are on par with (or even better than) state-of-the-art methods in parallel MRI. Our proposed method requires solving non-convex analysis- and synthesis-prior joint-sparsity problems; this work also derives the algorithms for solving them. Experimental validation was carried out on two datasets: an eight-channel brain scan and an eight-channel Shepp-Logan phantom. Two sampling methods were used: variable-density random sampling and non-Cartesian radial sampling. An acceleration factor of 4 was used for the brain data and a factor of 6 for the phantom. The reconstruction results were quantitatively evaluated based on the normalised mean squared error between the reconstructed image and the original. The qualitative evaluation was based on the actual reconstructed images. We compared our work with four state-of-the-art parallel imaging techniques: two calibrated methods (CS SENSE and l1SPIRiT) and two calibration-free techniques (Distributed CS and SAKE). Our method yields better reconstruction results than all of them.

  16. UNFOLDED REGULAR AND SEMI-REGULAR POLYHEDRA

    Directory of Open Access Journals (Sweden)

    IONIŢĂ Elena

    2015-06-01

    This paper presents the unfolding of regular and semi-regular polyhedra. Regular polyhedra are convex polyhedra whose faces are regular and equal polygons, with the same number of sides, and whose polyhedral angles are also regular and equal. Semi-regular polyhedra are convex polyhedra whose faces are regular polygons of several types and whose solid angles are equal and of the same type. A net of a polyhedron is a collection of edges in the plane which are the unfolded edges of the solid. The Platonic and Archimedean polyhedra are modeled and unfolded using the 3ds Max program. This paper is intended as an example of descriptive geometry applications.

  17. Sparsity-Based Super Resolution for SEM Images.

    Science.gov (United States)

    Tsiper, Shahar; Dicker, Or; Kaizerman, Idan; Zohar, Zeev; Segev, Mordechai; Eldar, Yonina C

    2017-09-13

    The scanning electron microscope (SEM) is an electron microscope that produces an image of a sample by scanning it with a focused beam of electrons. The electrons interact with the atoms in the sample, which emit secondary electrons that contain information about the surface topography and composition. The sample is scanned by the electron beam point by point, until an image of the surface is formed. Since its invention in 1942, the capabilities of SEMs have become paramount in the discovery and understanding of the nanometer world, and today they are used extensively in both research and industry. In principle, SEMs can achieve resolution better than one nanometer. However, for many applications, working at subnanometer resolution implies an exceedingly large number of scanning points. For exactly this reason, the SEM diagnostics of microelectronic chips is performed either at high resolution (HR) over a small area or at low resolution (LR) while capturing a larger portion of the chip. Here, we employ sparse coding and dictionary learning to algorithmically enhance low-resolution SEM images of microelectronic chips, up to the level of the HR images acquired by slow SEM scans, while considerably reducing the noise. Our methodology consists of two steps: an offline stage of learning a joint dictionary from a sequence of LR and HR images of the same region in the chip, followed by a fast online super-resolution step where the resolution of a new LR image is enhanced. We provide several examples with typical chips used in the microelectronics industry, as well as a statistical study on arbitrary images with characteristic structural features. Conceptually, our method works well when the images have similar characteristics, as microelectronics chips do. This work demonstrates that employing sparsity concepts can greatly improve the performance of SEM, thereby considerably increasing the scanning throughput without compromising on analysis quality and resolution.
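
    The two-step methodology can be illustrated with a minimal patch-level sketch: given jointly learned low-resolution and high-resolution dictionaries D_lr and D_hr (assumed available here; the offline learning stage is omitted), each LR patch is sparsely coded over D_lr with a greedy pursuit and the same code is applied to D_hr to synthesize the HR patch. All names, sizes, and the random dictionaries below are illustrative assumptions, not the authors' implementation.

        import numpy as np

        def omp(D, y, k):
            """Greedy orthogonal matching pursuit: code y over dictionary D with at most k atoms."""
            residual, support = y.copy(), []
            coef = np.zeros(D.shape[1])
            for _ in range(k):
                support.append(int(np.argmax(np.abs(D.T @ residual))))
                c, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
                residual = y - D[:, support] @ c
            coef[support] = c
            return coef

        def super_resolve_patch(y_lr, D_lr, D_hr, k=3):
            """Code the LR patch over D_lr, then synthesize the HR patch from D_hr with the same code."""
            return D_hr @ omp(D_lr, y_lr, k)

        # toy example with random (untrained) dictionaries of 64 atoms
        rng = np.random.default_rng(0)
        D_lr, D_hr = rng.standard_normal((16, 64)), rng.standard_normal((64, 64))
        patch_hr = super_resolve_patch(rng.standard_normal(16), D_lr, D_hr)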

  18. Exploiting sparsity of interconnections in spatio-temporal wind speed forecasting using Wavelet Transform

    International Nuclear Information System (INIS)

    Tascikaraoglu, Akin; Sanandaji, Borhan M.; Poolla, Kameshwar; Varaiya, Pravin

    2016-01-01

    Highlights: • We propose a spatio-temporal approach for wind speed forecasting. • The method is based on a combination of Wavelet decomposition and structured-sparse recovery. • Our analyses confirm that low-dimensional structures govern the interactions between stations. • Our method particularly shows improvements for profiles with high ramps. • We examine our approach on real data and illustrate its superiority over a set of benchmark models. - Abstract: Integration of renewable energy resources into the power grid is essential in achieving the envisioned sustainable energy future. Stochasticity and intermittency characteristics of renewable energies, however, present challenges for integrating these resources into the existing grid on a large scale. Reliable renewable energy integration is facilitated by accurate wind forecasts. In this paper, we propose a novel wind speed forecasting method which first utilizes Wavelet Transform (WT) for decomposition of the wind speed data into more stationary components and then uses a spatio-temporal model on each sub-series for incorporating both temporal and spatial information. The proposed spatio-temporal forecasting approach on each sub-series is based on the assumption that there usually exists an intrinsic low-dimensional structure between time series data in a collection of meteorological stations. Our approach is inspired by Compressive Sensing (CS) and structured-sparse recovery algorithms. Based on detailed case studies, we show that the proposed approach, which exploits the sparsity of correlations between a large set of meteorological stations and decomposes the time series for higher-accuracy forecasts, considerably improves the short-term forecasts compared to the temporal and spatio-temporal benchmark methods.

  19. Spatial sparsity based indoor localization in wireless sensor network for assistive healthcare.

    Science.gov (United States)

    Pourhomayoun, Mohammad; Jin, Zhanpeng; Fowler, Mark

    2012-01-01

    Indoor localization is one of the key topics in the area of wireless networks, with increasing applications in assistive healthcare, where tracking the position and actions of the patient or the elderly is required for medical observation or accident prevention. Most of the common indoor localization methods are based on estimating one or more location-dependent signal parameters like TOA, AOA or RSS. However, difficulties and challenges caused by the complex scenarios within a closed space, such as the well-known multipath effect, significantly limit the applicability of those existing approaches in an indoor assistive environment. In this paper, we develop a new one-stage localization method based on the spatial sparsity of the x-y plane. In this method, we directly estimate the location of the emitter without going through the intermediate stage of TOA or signal strength estimation. We evaluate the performance of the proposed method using Monte Carlo simulation. The results show that the proposed method is (i) very accurate even with a small number of sensors and (ii) very effective in addressing multipath issues.

  20. Sparc: a sparsity-based consensus algorithm for long erroneous sequencing reads

    Directory of Open Access Journals (Sweden)

    Chengxi Ye

    2016-06-01

    Motivation. The third generation sequencing (3GS) technology generates long sequences of thousands of bases. However, its current error rates are estimated in the range of 15–40%, significantly higher than those of the prevalent next generation sequencing (NGS) technologies (less than 1%). Fundamental bioinformatics tasks such as de novo genome assembly and variant calling require high-quality sequences that need to be extracted from these long but erroneous 3GS sequences. Results. We describe a versatile and efficient linear-complexity consensus algorithm, Sparc, to facilitate de novo genome assembly. Sparc builds a sparse k-mer graph using a collection of sequences from a targeted genomic region. The heaviest path, which approximates the most likely genome sequence, is searched for in a sparsity-induced reweighted graph and output as the consensus sequence. Sparc supports using NGS and 3GS data together, which leads to significant improvements in both cost efficiency and computational efficiency. Experiments with Sparc show that our algorithm can efficiently provide high-quality consensus sequences using both PacBio and Oxford Nanopore sequencing technologies. With only 30× PacBio data, Sparc can reach a consensus with error rate <0.5%. With the more challenging Oxford Nanopore data, Sparc can also achieve a similar error rate when combined with NGS data. Compared with the existing approaches, Sparc calculates the consensus with higher accuracy, and uses approximately 80% less memory and time. Availability. The source code is available for download at https://github.com/yechengxi/Sparc.

  1. An Adaptive Joint Sparsity Recovery for Compressive Sensing Based EEG System

    Directory of Open Access Journals (Sweden)

    Hamza Djelouat

    2017-01-01

    The last decade has witnessed tremendous efforts to shape Internet of Things (IoT) platforms to be well suited for healthcare applications. These platforms are comprised of a network of wireless sensors to monitor several physical and physiological quantities. For instance, long-term monitoring of brain activities using wearable electroencephalogram (EEG) sensors is widely exploited in the clinical diagnosis of epileptic seizures and sleeping disorders. However, the deployment of such platforms is challenged by high power consumption and system complexity. Energy efficiency can be achieved by exploring efficient compression techniques such as compressive sensing (CS). CS is an emerging theory that enables a compressed acquisition using well-designed sensing matrices. Moreover, system complexity can be optimized by using hardware-friendly structured sensing matrices. This paper quantifies the performance of CS-based multichannel EEG monitoring. In addition, the paper exploits the joint sparsity of multichannel EEG using the subspace pursuit (SP) algorithm as well as a designed sparsifying basis in order to improve the reconstruction quality. Furthermore, the paper proposes a modification to the SP algorithm based on an adaptive selection approach to further improve the performance in terms of reconstruction quality, execution time, and the robustness of the recovery process.
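
    A minimal sketch of the subspace pursuit recovery step referred to above, written for a single channel with known sparsity level K and sensing matrix A; the adaptive-selection modification and the joint treatment of multiple EEG channels described in the paper are not reproduced here, and all names and parameters are assumptions for illustration.

        import numpy as np

        def subspace_pursuit(A, y, K, max_iter=20):
            """Recover a K-sparse x from y ~ A @ x (simplified Dai-Milenkovic subspace pursuit)."""
            def least_squares(cols):
                coef, *_ = np.linalg.lstsq(A[:, cols], y, rcond=None)
                return coef

            support = np.argsort(np.abs(A.T @ y))[-K:]          # initial support: K largest correlations
            coef = least_squares(support)
            resid = y - A[:, support] @ coef
            for _ in range(max_iter):
                cand = np.union1d(support, np.argsort(np.abs(A.T @ resid))[-K:])
                coef_c = least_squares(cand)
                new_support = cand[np.argsort(np.abs(coef_c))[-K:]]   # prune back to K entries
                new_coef = least_squares(new_support)
                new_resid = y - A[:, new_support] @ new_coef
                if np.linalg.norm(new_resid) >= np.linalg.norm(resid):
                    break                                        # stop when the residual no longer shrinks
                support, coef, resid = new_support, new_coef, new_resid
            x = np.zeros(A.shape[1])
            x[support] = coef
            return x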

  2. Global mapping of stratigraphy of an old-master painting using sparsity-based terahertz reflectometry.

    Science.gov (United States)

    Dong, Junliang; Locquet, Alexandre; Melis, Marcello; Citrin, D S

    2017-11-08

    The process by which art paintings are produced typically involves the successive application of preparatory and paint layers to a canvas or other support; however, there is an absence of nondestructive modalities to provide a global mapping of the stratigraphy, information that is crucial for evaluation of authenticity and attribution, for insights into historical or artist-specific techniques, as well as for conservation. We demonstrate that sparsity-based terahertz reflectometry can be applied to extract a detailed 3D mapping of the layer structure of the 17th century easel painting Madonna in Preghiera by the workshop of Giovanni Battista Salvi da Sassoferrato, in which the structure of the canvas support, the ground, imprimatura, underpainting, pictorial, and varnish layers are identified quantitatively. In addition, a hitherto unidentified restoration of the varnish has been found. Our approach unlocks the full promise of terahertz reflectometry to provide a global and detailed account of an easel painting's stratigraphy by exploiting sparse deconvolution; without it, terahertz reflectometry has in the past provided only a meager tool for the characterization of paintings with paint-layer thicknesses smaller than 50 μm. The proposed modality can also be employed across a broad range of applications in nondestructive testing and biomedical imaging.
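
    A minimal sketch of sparse deconvolution under a simple convolutional model: the measured terahertz waveform y is assumed to be the known incident pulse h convolved with a sparse train of layer reflections x, and an ℓ1-penalized least-squares problem is solved with ISTA. This is a generic stand-in rather than the authors' solver; the circular convolution matrix, the parameters, and the toy data are illustrative assumptions.

        import numpy as np

        def ista_deconvolve(y, h, lam=0.05, n_iter=500):
            """Sparse deconvolution: min_x 0.5*||H x - y||^2 + lam*||x||_1 via ISTA."""
            n = len(y)
            # columns of H are circular shifts of the (zero-padded) pulse, so H @ x convolves x with h
            H = np.array([np.roll(np.pad(h, (0, n - len(h))), k) for k in range(n)]).T
            L = np.linalg.norm(H, 2) ** 2                      # Lipschitz constant of the gradient
            x = np.zeros(n)
            for _ in range(n_iter):
                z = x - H.T @ (H @ x - y) / L                  # gradient step on the data-fit term
                x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft-thresholding step
            return x

        # toy example: two reflections (two layer interfaces), data generated with the same circular model
        n = 200
        pulse = np.exp(-0.5 * ((np.arange(15) - 7.0) / 2.0) ** 2)
        H_toy = np.array([np.roll(np.pad(pulse, (0, n - len(pulse))), k) for k in range(n)]).T
        x_true = np.zeros(n); x_true[[60, 68]] = [1.0, -0.6]
        x_hat = ista_deconvolve(H_toy @ x_true, pulse, lam=0.1)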

  3. Universal Regularizers For Robust Sparse Coding and Modeling

    OpenAIRE

    Ramirez, Ignacio; Sapiro, Guillermo

    2010-01-01

    Sparse data models, where data is assumed to be well represented as a linear combination of a few elements from a dictionary, have gained considerable attention in recent years, and their use has led to state-of-the-art results in many signal and image processing tasks. It is now well understood that the choice of the sparsity regularization term is critical in the success of such models. Based on a codelength minimization interpretation of sparse coding, and using tools from universal coding...

  4. Leveraging EAP-Sparsity for Compressed Sensing of MS-HARDI in (k, q)-Space.

    Science.gov (United States)

    Sun, Jiaqi; Sakhaee, Elham; Entezari, Alireza; Vemuri, Baba C

    2015-01-01

    Compressed Sensing (CS) for the acceleration of MR scans has been widely investigated in the past decade. Lately, considerable progress has been made in achieving similar speed-ups in acquiring multi-shell high angular resolution diffusion imaging (MS-HARDI) scans. Existing approaches in this context were primarily concerned with sparse reconstruction of the diffusion MR signal S(q) in the q-space. More recently, methods have been developed to apply the compressed sensing framework to the 6-dimensional joint (k, q)-space, thereby exploiting the redundancy in this 6D space. To guarantee accurate reconstruction from partial MS-HARDI data, the key ingredients of compressed sensing that need to be brought together are: (1) the function to be reconstructed needs to have a sparse representation, (2) the data for reconstruction ought to be acquired in the dual domain (i.e., incoherent sensing), and (3) the reconstruction process involves a (convex) optimization. In this paper, we present a novel approach that uses partial Fourier sensing in the 6D space of (k, q) for the reconstruction of P(x, r). The distinct feature of our approach is a sparsity model that leverages surfacelets in conjunction with total variation for the joint sparse representation of P(x, r). Thus, our method stands to benefit from the practical guarantees for accurate reconstruction from partial (k, q)-space data. Further, we demonstrate significant savings in acquisition time over diffusion spectral imaging (DSI), which is commonly used as the benchmark for comparisons in reported literature. To demonstrate the benefits of this approach, we present several synthetic and real data examples.

  5. A non-statistical regularization approach and a tensor product decomposition method applied to complex flow data

    Science.gov (United States)

    von Larcher, Thomas; Blome, Therese; Klein, Rupert; Schneider, Reinhold; Wolf, Sebastian; Huber, Benjamin

    2016-04-01

    Handling high-dimensional data sets, as they occur, e.g., in turbulent flows or in certain types of multiscale behaviour in the geosciences, is one of the big challenges in numerical analysis and scientific computing. A suitable solution is to represent those large data sets in an appropriate compact form. In this context, tensor product decomposition methods currently emerge as an important tool. One reason is that these methods often enable one to attack high-dimensional problems successfully; another is that they allow for very compact representations of large data sets. We follow the novel Tensor-Train (TT) decomposition method to support the development of improved understanding of the multiscale behavior and the development of compact storage schemes for solutions of such problems. One long-term goal of the project is the construction of a self-consistent closure for Large Eddy Simulations (LES) of turbulent flows that explicitly exploits the tensor product approach's capability of capturing self-similar structures. Secondly, we focus on a mixed deterministic-stochastic subgrid scale modelling strategy currently under development for application in Finite Volume Large Eddy Simulation (LES) codes. Advanced methods of time series analysis for the data-based construction of stochastic models with inherently non-stationary statistical properties, and concepts of information theory based on a modified Akaike information criterion and on the Bayesian information criterion for the model discrimination, are used to construct surrogate models for the non-resolved flux fluctuations. Vector-valued auto-regressive models with external influences form the basis for the modelling approach [1], [2], [4]. Here, we present the reconstruction capabilities of the two modeling approaches tested against 3D turbulent channel flow data computed by direct numerical simulation (DNS) for an incompressible, isothermal fluid at Reynolds number Reτ = 590 (computed by [3]). References [1] I

  6. HYBRID APPROACHES TO THE FORMALISATION OF EXPERT KNOWLEDGE CONCERNING TEMPORAL REGULARITIES IN THE TIME SERIES GROUP OF A SYSTEM MONITORING DATABASE

    Directory of Open Access Journals (Sweden)

    E. S. Staricov

    2016-01-01

    Objectives. The presented research problem concerns regularities in the data of an unspecified time series, based on an approach in which expert knowledge is formalised and integrated into a decision-making mechanism. Method. A context-free grammar, consisting of a modification of a universal temporal grammar, is used to describe regularities. Using the rules of the developed grammar, an expert can describe patterns in the group of time series. A multi-dimensional matrix pattern of the behaviour of a group of time series is used in a real-time decision-making regime in the expert system, implementing a universal approach to describing the dynamics of these changes. The multidimensional matrix pattern is specifically intended for decision-making in an expert system; the modified temporal grammar is used to identify patterns in the data. Results. It is proposed to use the temporal relations of the series and to fix observation values in the time interval as "From-To", "Before", "After", "Simultaneously" and "Duration". A syntactically oriented converter of descriptions is developed. A schema for the creation and application of matrix patterns in expert systems is drawn up. Conclusion. The advantage of implementing the proposed hybrid approaches consists in a reduction of the time taken to identify temporal patterns and in the automated construction of the matrix pattern of the decision-making system from expert descriptions verified using live data derived from relationships in the monitoring data.

  7. Coordinate-invariant regularization

    International Nuclear Information System (INIS)

    Halpern, M.B.

    1987-01-01

    A general phase-space framework for coordinate-invariant regularization is given. The development is geometric, with all regularization contained in regularized DeWitt Superstructures on field deformations. Parallel development of invariant coordinate-space regularization is obtained by regularized functional integration of the momenta. As representative examples of the general formulation, the regularized general non-linear sigma model and regularized quantum gravity are discussed. copyright 1987 Academic Press, Inc

  8. Exploiting Data Sparsity In Covariance Matrix Computations on Heterogeneous Systems

    KAUST Repository

    Charara, Ali M.

    2018-05-24

    Covariance matrices are ubiquitous in computational sciences, typically describing the correlation of elements of large multivariate spatial data sets. For example, covariance matrices are employed in climate/weather modeling for the maximum likelihood estimation to improve prediction, as well as in computational ground-based astronomy to enhance the observed image quality by filtering out noise produced by the adaptive optics instruments and atmospheric turbulence. The structure of these covariance matrices is dense, symmetric, positive-definite, and often data-sparse, therefore, hierarchically of low-rank. This thesis investigates the performance limit of dense matrix computations (e.g., Cholesky factorization) on covariance matrix problems as the number of unknowns grows, and in the context of the aforementioned applications. We employ recursive formulations of some of the basic linear algebra subroutines (BLAS) to accelerate the covariance matrix computation further, while reducing data traffic across the memory subsystems layers. However, dealing with large data sets (i.e., covariance matrices of billions in size) can rapidly become prohibitive in memory footprint and algorithmic complexity. Most importantly, this thesis investigates the tile low-rank data format (TLR), a new compressed data structure and layout, which is valuable in exploiting data sparsity by approximating the operator. The TLR compressed data structure allows approximating the original problem up to user-defined numerical accuracy. This comes at the expense of dealing with tasks with much lower arithmetic intensities than traditional dense computations. In fact, this thesis consolidates the two trends of dense and data-sparse linear algebra for HPC. Not only does the thesis leverage recursive formulations for dense Cholesky-based matrix algorithms, but it also implements a novel TLR-Cholesky factorization using batched linear algebra operations to increase hardware occupancy and

  9. Regularizing portfolio optimization

    International Nuclear Information System (INIS)

    Still, Susanne; Kondor, Imre

    2010-01-01

    The optimization of large portfolios displays an inherent instability due to estimation error. This poses a fundamental problem, because solutions that are not stable under sample fluctuations may look optimal for a given sample, but are, in effect, very far from optimal with respect to the average risk. In this paper, we approach the problem from the point of view of statistical learning theory. The occurrence of the instability is intimately related to over-fitting, which can be avoided using known regularization methods. We show how regularized portfolio optimization with the expected shortfall as a risk measure is related to support vector regression. The budget constraint dictates a modification. We present the resulting optimization problem and discuss the solution. The L2 norm of the weight vector is used as a regularizer, which corresponds to a diversification 'pressure'. This means that diversification, besides counteracting downward fluctuations in some assets by upward fluctuations in others, is also crucial because it improves the stability of the solution. The approach we provide here allows for the simultaneous treatment of optimization and diversification in one framework that enables the investor to trade off between the two, depending on the size of the available dataset.

  10. Regularizing portfolio optimization

    Science.gov (United States)

    Still, Susanne; Kondor, Imre

    2010-07-01

    The optimization of large portfolios displays an inherent instability due to estimation error. This poses a fundamental problem, because solutions that are not stable under sample fluctuations may look optimal for a given sample, but are, in effect, very far from optimal with respect to the average risk. In this paper, we approach the problem from the point of view of statistical learning theory. The occurrence of the instability is intimately related to over-fitting, which can be avoided using known regularization methods. We show how regularized portfolio optimization with the expected shortfall as a risk measure is related to support vector regression. The budget constraint dictates a modification. We present the resulting optimization problem and discuss the solution. The L2 norm of the weight vector is used as a regularizer, which corresponds to a diversification 'pressure'. This means that diversification, besides counteracting downward fluctuations in some assets by upward fluctuations in others, is also crucial because it improves the stability of the solution. The approach we provide here allows for the simultaneous treatment of optimization and diversification in one framework that enables the investor to trade off between the two, depending on the size of the available dataset.
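
    A minimal sketch of the regularized optimization described above, assuming historical scenario returns and the Rockafellar-Uryasev linearization of expected shortfall with an added squared L2 penalty on the weights. cvxpy is an assumed dependency, and the confidence level, penalty weight, and simulated data are illustrative choices rather than the paper's setup, which develops the connection to support vector regression analytically.

        import numpy as np
        import cvxpy as cp

        def regularized_es_portfolio(returns, beta=0.95, lam=0.1):
            """Minimize expected shortfall at level beta plus lam*||w||_2^2, under a budget constraint."""
            S, n = returns.shape                    # S return scenarios, n assets
            w = cp.Variable(n)                      # portfolio weights
            t = cp.Variable()                       # auxiliary (VaR-like) variable
            losses = -returns @ w                   # per-scenario portfolio loss
            es = t + cp.sum(cp.pos(losses - t)) / ((1 - beta) * S)
            problem = cp.Problem(cp.Minimize(es + lam * cp.sum_squares(w)),
                                 [cp.sum(w) == 1])  # budget constraint
            problem.solve()
            return w.value

        # toy example: 250 days of simulated returns for 5 assets
        rng = np.random.default_rng(0)
        w_opt = regularized_es_portfolio(rng.normal(0.0005, 0.01, size=(250, 5)))

    Increasing lam strengthens the diversification pressure discussed above, at the cost of a slightly worse in-sample risk figure.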

  11. Empirical average-case relation between undersampling and sparsity in X-ray CT

    DEFF Research Database (Denmark)

    Jørgensen, Jakob Sauer; Sidky, Emil Y.; Hansen, Per Christian

    2015-01-01

    In X-ray computed tomography (CT) it is generally acknowledged that reconstruction methods exploiting image sparsity allow reconstruction from a significantly reduced number of projections. The use of such reconstruction methods is inspired by recent progress in compressed sensing (CS). However, the CS framework provides neither guarantees of accurate CT reconstruction, nor any relation between sparsity and a sufficient number of measurements for recovery, i.e., perfect reconstruction from noise-free data. We consider reconstruction through 1-norm minimization, as proposed in CS, from data obtained using a standard CT fan-beam sampling pattern. In empirical simulation studies we establish quantitatively a relation between the image sparsity and the sufficient number of measurements for recovery within image classes motivated by tomographic applications. We show empirically that the specific...

  12. Near-field acoustic holography using sparse regularization and compressive sampling principles.

    Science.gov (United States)

    Chardon, Gilles; Daudet, Laurent; Peillot, Antoine; Ollivier, François; Bertin, Nancy; Gribonval, Rémi

    2012-09-01

    Regularization of the inverse problem is a complex issue when using near-field acoustic holography (NAH) techniques to identify the vibrating sources. This paper shows that, for convex homogeneous plates with arbitrary boundary conditions, alternative regularization schemes can be developed based on the sparsity of the normal velocity of the plate in a well-designed basis, i.e., the possibility to approximate it as a weighted sum of few elementary basis functions. In particular, these techniques can handle discontinuities of the velocity field at the boundaries, which can be problematic with standard techniques. This comes at the cost of a higher computational complexity to solve the associated optimization problem, though it remains easily tractable with out-of-the-box software. Furthermore, this sparsity framework allows us to take advantage of the concept of compressive sampling; under some conditions on the sampling process (here, the design of a random array, which can be numerically and experimentally validated), it is possible to reconstruct the sparse signals with significantly fewer measurements (i.e., microphones) than classically required. After introducing the different concepts, this paper presents numerical and experimental results of NAH with two plate geometries, and compares the advantages and limitations of these sparsity-based techniques with respect to standard Tikhonov regularization.

  13. Sparsity-Based Representation for Classification Algorithms and Comparison Results for Transient Acoustic Signals

    Science.gov (United States)

    2016-05-01

    In this report, we propose a ... to share common sparsity patterns. Yuan and Yan [7] investigated a multitask model for visual classification, which also assumes that multiple

  14. Spatiotemporal Fusion of Remote Sensing Images with Structural Sparsity and Semi-Coupled Dictionary Learning

    Directory of Open Access Journals (Sweden)

    Jingbo Wei

    2016-12-01

    Fusion of remote sensing images with different spatial and temporal resolutions is highly needed by diverse earth observation applications. A small number of spatiotemporal fusion methods using sparse representation appear to be more promising than traditional linear mixture methods in reflecting abruptly changing terrestrial content. However, one of the main difficulties is that the results of sparse representation have reduced expressional accuracy; this is due in part to insufficient prior knowledge. For remote sensing images, the cluster and joint structural sparsity of the sparse coefficients could be employed as a priori knowledge. In this paper, a new optimization model is constructed with semi-coupled dictionary learning and structural sparsity to predict the unknown high-resolution image from known images. Specifically, the intra-block correlation and cluster-structured sparsity are considered for single-channel reconstruction, and the inter-band similarity of joint-structured sparsity is considered for multichannel reconstruction; both are implemented with block sparse Bayesian learning. The detailed optimization steps are given iteratively. In the experimental procedure, the red, green, and near-infrared bands of the Landsat-7 and Moderate Resolution Imaging Spectrometer (MODIS) satellites are fused, and root mean square errors are used to check the prediction accuracy. It can be concluded from the experiments that the proposed methods can produce higher-quality results than state-of-the-art methods.

  15. Sparsity- and continuity-promoting seismic image recovery with curvelet frames

    NARCIS (Netherlands)

    Herrmann, Felix J.; Moghaddam, Peyman; Stolk, C.C.

    2008-01-01

    A nonlinear singularity-preserving solution to seismic image recovery with sparseness and continuity constraints is proposed. We observe that curvelets, as a directional frame expansion, lead to sparsity of seismic images and exhibit invariance under the normal operator of the linearized imaging

  16. 3D reconstruction for partial data electrical impedance tomography using a sparsity prior

    DEFF Research Database (Denmark)

    Garde, Henrik; Knudsen, Kim

    2015-01-01

    of the conductivity is used to improve reconstructions for the partial data problem with Cauchy data measured only on a subset of the boundary. A sparsity prior is enforced using the ℓ1 norm in the penalty term of a Tikhonov functional, and spatial prior information is incorporated by applying a spatially distributed...

  17. The Linearized Bregman Method for Frugal Full-waveform Inversion with Compressive Sensing and Sparsity-promoting

    Science.gov (United States)

    Chai, Xintao; Tang, Genyang; Peng, Ronghua; Liu, Shaoyong

    2018-03-01

    Full-waveform inversion (FWI) reconstructs the subsurface properties from acquired seismic data via minimization of the misfit between observed and simulated data. However, FWI suffers from considerable computational costs resulting from the numerical solution of the wave equation for each source at each iteration. To reduce the computational burden, constructing supershots by combining several sources (aka source encoding) allows mitigation of the number of simulations at each iteration, but it gives rise to crosstalk artifacts because of interference between the individual sources of the supershot. A modified Gauss-Newton FWI (MGNFWI) approach showed that as long as the difference between the initial and true models permits a sparse representation, the ℓ1-norm constrained model updates suppress subsampling-related artifacts. However, the spectral-projected gradient ℓ1 (SPGℓ1) algorithm employed by MGNFWI is rather complicated, which makes its implementation difficult. To facilitate realistic applications, we adapt a linearized Bregman (LB) method to sparsity-promoting FWI (SPFWI) because of the efficiency and simplicity of LB in the framework of ℓ1-norm constrained optimization problems and compressive sensing. Numerical experiments performed with the BP Salt model, the Marmousi model and the BG Compass model verify the following points. The FWI result obtained with LB solving the ℓ1-norm sparsity-promoting problem for the model update outperforms that generated by solving the ℓ2-norm problem in terms of crosstalk elimination and high-fidelity results. The simpler LB method performs comparably and even superiorly to the complicated SPGℓ1 method in terms of computational efficiency and model quality, making the LB method a viable alternative for realistic implementations of SPFWI.
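
    A minimal sketch of the linearized Bregman iteration for the generic problem min ||x||_1 subject to Ax = b, which is the building block referred to above; the wave-equation modelling, source encoding, and FWI-specific model updates are omitted, and the step size, shrinkage parameter, and toy data are assumptions made here for illustration.

        import numpy as np

        def soft(v, mu):
            return np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)

        def linearized_bregman(A, b, mu=1.0, n_iter=500):
            """Linearized Bregman iteration for min ||x||_1 s.t. A x = b (sketch)."""
            delta = 1.0 / np.linalg.norm(A, 2) ** 2   # conservative step size
            v = np.zeros(A.shape[1])
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                v = v + A.T @ (b - A @ x)             # residual feedback on the dual variable
                x = delta * soft(v, mu)               # shrinkage keeps the iterate sparse
            return x

        # toy example: recover a 5-sparse vector from 40 random measurements
        rng = np.random.default_rng(0)
        A = rng.standard_normal((40, 100))
        x_true = np.zeros(100)
        x_true[rng.choice(100, 5, replace=False)] = rng.standard_normal(5)
        x_hat = linearized_bregman(A, A @ x_true, mu=2.0)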

  18. Application of Turchin's method of statistical regularization

    Science.gov (United States)

    Zelenyi, Mikhail; Poliakova, Mariia; Nozik, Alexander; Khudyakov, Alexey

    2018-04-01

    During analysis of experimental data, one usually needs to restore a signal after it has been convoluted with some kind of apparatus function. According to Hadamard's definition this problem is ill-posed and requires regularization to provide sensible results. In this article we describe an implementation of Turchin's method of statistical regularization, based on the Bayesian approach to the regularization strategy.

  19. A two-way regularization method for MEG source reconstruction

    KAUST Repository

    Tian, Tian Siva; Huang, Jianhua Z.; Shen, Haipeng; Li, Zhimin

    2012-01-01

    The MEG inverse problem refers to the reconstruction of the neural activity of the brain from magnetoencephalography (MEG) measurements. We propose a two-way regularization (TWR) method to solve the MEG inverse problem under the assumptions that only a small number of locations in space are responsible for the measured signals (focality), and each source time course is smooth in time (smoothness). The focality and smoothness of the reconstructed signals are ensured respectively by imposing a sparsity-inducing penalty and a roughness penalty in the data fitting criterion. A two-stage algorithm is developed for fast computation, where a raw estimate of the source time course is obtained in the first stage and then refined in the second stage by the two-way regularization. The proposed method is shown to be effective on both synthetic and real-world examples. © Institute of Mathematical Statistics, 2012.

  20. A two-way regularization method for MEG source reconstruction

    KAUST Repository

    Tian, Tian Siva

    2012-09-01

    The MEG inverse problem refers to the reconstruction of the neural activity of the brain from magnetoencephalography (MEG) measurements. We propose a two-way regularization (TWR) method to solve the MEG inverse problem under the assumptions that only a small number of locations in space are responsible for the measured signals (focality), and each source time course is smooth in time (smoothness). The focality and smoothness of the reconstructed signals are ensured respectively by imposing a sparsity-inducing penalty and a roughness penalty in the data fitting criterion. A two-stage algorithm is developed for fast computation, where a raw estimate of the source time course is obtained in the first stage and then refined in the second stage by the two-way regularization. The proposed method is shown to be effective on both synthetic and real-world examples. © Institute of Mathematical Statistics, 2012.

  1. Detecting Activation in fMRI Data: An Approach Based on Sparse Representation of BOLD Signal

    Directory of Open Access Journals (Sweden)

    Blanca Guillen

    2018-01-01

    This paper proposes a simple yet effective approach for detecting activated voxels in fMRI data by exploiting the inherent sparsity property of the BOLD signal in the temporal and spatial domains. In the time domain, the approach combines the General Linear Model (GLM) with a Least Absolute Deviation (LAD) based regression method regularized by the ℓ0 pseudo-norm to promote sparsity in the parameter vector of the model. In the spatial domain, detection of activated regions is based on thresholding the spatial map of estimated parameters associated with a particular stimulus. The threshold is calculated by exploiting the sparseness of the BOLD signal in the spatial domain, assuming a Laplacian distribution model. The proposed approach is validated using synthetic and real fMRI data. For synthetic data, results show that the proposed approach is able to detect most activated voxels without any false activation. For real data, the method is evaluated through comparison with the SPM software. Results indicate that this approach can effectively find activated regions that are similar to those found by SPM, but using a much simpler approach. This study may lead to the development of robust spatial approaches to further simplifying the complexity of classical schemes.

  2. The Air Traffic Controller Work-Shift Scheduling Problem in Spain from a Multiobjective Perspective: A Metaheuristic and Regular Expression-Based Approach

    Directory of Open Access Journals (Sweden)

    Faustino Tello

    2018-01-01

    We address an air traffic control operator (ATCo) work-shift scheduling problem. We consider a multiple-objective perspective where the number of ATCos is fixed in advance and a set of ATCo labor conditions have to be satisfied. The objectives deal with the ATCo work and rest periods and positions, the structure of the solution, the number of control center changes, and the distribution of the ATCo workloads. We propose a three-phase problem-solving methodology. In the first phase, a heuristic is used to derive infeasible initial solutions on the basis of templates. Then, in the second phase, multiple independent runs of the simulated annealing metaheuristic are conducted, aimed at reaching feasible solutions. Finally, multiple independent simulated annealing runs are again conducted from the initial feasible solutions to optimize the objective functions. To do this, we transform the multiple-objective optimization problem into a single-objective one by using the rank-order centroid function. In the search processes in phases 2 and 3, we use regular expressions to check the ATCo labor conditions in the visited solutions, which provides high testing speed. The proposed approach is illustrated using a real example, and the optimal solution reached outperforms an existing template-based reference solution.
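
    A minimal sketch of the regular-expression idea, assuming each work shift is encoded as a string with one character per time slot ('W' for working, 'R' for resting); the single condition checked below (no more than eight consecutive working slots) is a hypothetical stand-in for the actual Spanish ATCo labor conditions, but it illustrates why pattern matching over an encoded solution is fast to test.

        import re

        # hypothetical encoding: one character per time slot, 'W' = working, 'R' = resting
        TOO_LONG_WORK = re.compile(r"W{9,}")   # nine or more consecutive working slots

        def violates_max_work(shift_string):
            """Return True if the shift contains a run of more than eight consecutive work slots."""
            return TOO_LONG_WORK.search(shift_string) is not None

        print(violates_max_work("WWWWWWWWRRWWWWWWWWRRRRWWWW"))   # False: every run is at most eight
        print(violates_max_work("W" * 10 + "RR"))                # True: ten consecutive work slots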

  3. A formal power series expansion-regularization approach for Lévy stable distributions: the symmetric case with α = 2/M (M positive integer)

    Science.gov (United States)

    Crisanto-Neto, J. C.; da Luz, M. G. E.; Raposo, E. P.; Viswanathan, G. M.

    2016-09-01

    In practice, the Lévy α-stable distribution is usually expressed in terms of the Fourier integral of its characteristic function. Indeed, known closed-form expressions are relatively scarce given the huge parameter space: 0 < α ≤ 2 (Lévy index), −1 ≤ β ≤ 1 (skewness), σ > 0 (scale), and −∞ < μ < ∞ (shift). Hence, systematic efforts have been made towards the development of proper methods for analytically solving the mentioned integral. As a further contribution in this direction, here we propose a new way to tackle the problem. We consider an approach in which one first solves the Fourier integral through a formal (thus not necessarily convergent) series representation. Then, one uses (if necessary) a pertinent sum-regularization procedure on the resulting divergent series, so as to obtain an exact formula for the distribution, which is amenable to direct numerical calculations. As a concrete study, we address the centered, symmetric, unshifted and unscaled distribution (β = 0, μ = 0, σ = 1), with α = α_M = 2/M, M = 1, 2, 3, .... Conceivably, the present protocol could be applied to other sets of parameter values.
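
    For reference, in the centered, symmetric, unscaled case treated here (β = 0, μ = 0, σ = 1) the Fourier integral referred to above reduces to the standard cosine-transform form; the display below is the textbook expression, not the authors' series-regularized formula.

        p_{\alpha}(x) \;=\; \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-|q|^{\alpha}}\, e^{-\mathrm{i} q x}\, \mathrm{d}q
        \;=\; \frac{1}{\pi}\int_{0}^{\infty} e^{-q^{\alpha}} \cos(q x)\, \mathrm{d}q .

    The contribution of the paper is to evaluate this integral for α = 2/M through a formal series whose divergent pieces are then resummed.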

  4. Selective Laser Melting: a regular unit cell approach for the manufacture of porous, titanium, bone in-growth constructs, suitable for orthopedic applications.

    Science.gov (United States)

    Mullen, Lewis; Stamp, Robin C; Brooks, Wesley K; Jones, Eric; Sutcliffe, Christopher J

    2009-05-01

    In this study, a novel porous titanium structure for the purpose of bone in-growth has been designed, manufactured and evaluated. The structure was produced by Selective Laser Melting (SLM), a rapid manufacturing process capable of producing highly intricate, functionally graded parts. The technique described utilizes an approach based on a defined regular unit cell to design and produce structures with a large range of both physical and mechanical properties. These properties can be tailored to suit specific requirements; in particular, functionally graded structures with bone in-growth surfaces exhibiting properties comparable to those of human bone have been manufactured. The structures were manufactured and characterized by unit cell size, strand diameter, porosity, and compression strength. They exhibited a porosity-dependent (10-95%) compression strength (0.5-350 MPa) comparable to the typical naturally occurring range. It is also demonstrated that optimized structures have been produced that possess ideal qualities for bone in-growth applications and that these structures can be applied in the production of orthopedic devices. (c) 2008 Wiley Periodicals, Inc.

  5. An analysis of electrical impedance tomography with applications to Tikhonov regularization

    KAUST Repository

    Jin, Bangti

    2012-01-16

    This paper analyzes the continuum model/complete electrode model in the electrical impedance tomography inverse problem of determining the conductivity parameter from boundary measurements. The continuity and differentiability of the forward operator with respect to the conductivity parameter in L p-norms are proved. These analytical results are applied to several popular regularization formulations, which incorporate a priori information of smoothness/sparsity on the inhomogeneity through Tikhonov regularization, for both linearized and nonlinear models. Some important properties, e.g., existence, stability, consistency and convergence rates, are established. This provides some theoretical justifications of their practical usage. © EDP Sciences, SMAI, 2012.

  6. An analysis of electrical impedance tomography with applications to Tikhonov regularization

    KAUST Repository

    Jin, Bangti; Maass, Peter

    2012-01-01

    This paper analyzes the continuum model/complete electrode model in the electrical impedance tomography inverse problem of determining the conductivity parameter from boundary measurements. The continuity and differentiability of the forward operator with respect to the conductivity parameter in L p-norms are proved. These analytical results are applied to several popular regularization formulations, which incorporate a priori information of smoothness/sparsity on the inhomogeneity through Tikhonov regularization, for both linearized and nonlinear models. Some important properties, e.g., existence, stability, consistency and convergence rates, are established. This provides some theoretical justifications of their practical usage. © EDP Sciences, SMAI, 2012.

  7. Manifold optimization-based analysis dictionary learning with an ℓ1∕2-norm regularizer.

    Science.gov (United States)

    Li, Zhenni; Ding, Shuxue; Li, Yujie; Yang, Zuyuan; Xie, Shengli; Chen, Wuhui

    2018-02-01

    Recently there has been increasing attention towards analysis dictionary learning. In analysis dictionary learning, it is an open problem to obtain strong sparsity-promoting solutions efficiently while simultaneously avoiding trivial solutions of the dictionary. In this paper, to obtain strong sparsity-promoting solutions, we employ the ℓ1/2 norm as a regularizer. Recent studies of ℓ1/2-norm regularization theory in compressive sensing show that its solutions can be sparser than those obtained with the ℓ1 norm. We transform a complex nonconvex optimization into a number of one-dimensional minimization problems, so that closed-form solutions can be obtained efficiently. To avoid trivial solutions, we apply manifold optimization to update the dictionary directly on the manifold satisfying the orthonormality constraint, so that the dictionary avoids trivial solutions while simultaneously capturing its intrinsic properties. Experiments with synthetic and real-world data verify that the proposed algorithm for analysis dictionary learning can not only obtain strong sparsity-promoting solutions efficiently, but also learn a more accurate dictionary in terms of dictionary recovery and image processing than state-of-the-art algorithms. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. TH-EF-BRB-05: 4pi Non-Coplanar IMRT Beam Angle Selection by Convex Optimization with Group Sparsity Penalty

    International Nuclear Information System (INIS)

    O’Connor, D; Nguyen, D; Voronenko, Y; Yin, W; Sheng, K

    2016-01-01

    Purpose: Integrated beam orientation and fluence map optimization is expected to be the foundation of robust automated planning but existing heuristic methods do not promise global optimality. We aim to develop a new method for beam angle selection in 4π non-coplanar IMRT systems based on solving (globally) a single convex optimization problem, and to demonstrate the effectiveness of the method by comparison with a state of the art column generation method for 4π beam angle selection. Methods: The beam angle selection problem is formulated as a large scale convex fluence map optimization problem with an additional group sparsity term that encourages most candidate beams to be inactive. The optimization problem is solved using an accelerated first-order method, the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA). The beam angle selection and fluence map optimization algorithm is used to create non-coplanar 4π treatment plans for several cases (including head and neck, lung, and prostate cases) and the resulting treatment plans are compared with 4π treatment plans created using the column generation algorithm. Results: In our experiments the treatment plans created using the group sparsity method meet or exceed the dosimetric quality of plans created using the column generation algorithm, which was shown superior to clinical plans. Moreover, the group sparsity approach converges in about 3 minutes in these cases, as compared with runtimes of a few hours for the column generation method. Conclusion: This work demonstrates the first non-greedy approach to non-coplanar beam angle selection, based on convex optimization, for 4π IMRT systems. The method given here improves both treatment plan quality and runtime as compared with a state of the art column generation algorithm. When the group sparsity term is set to zero, we obtain an excellent method for fluence map optimization, useful when beam angles have already been selected. NIH R43CA183390, NIH R01CA
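
    A minimal sketch of the group-sparsity building block described above: the proximal operator of a sum of group-ℓ2 norms performs block soft-thresholding, zeroing out whole candidate-beam blocks of the fluence vector, and a FISTA loop applies it after each gradient step. The quadratic data-fit term, the group layout, and all parameters below are placeholders; the dose-calculation matrices, clinical objectives, and nonnegativity handling of the actual planning problem are omitted.

        import numpy as np

        def prox_group_l2(x, groups, lam):
            """Block soft-thresholding: shrink each group toward zero, zeroing weak groups entirely."""
            out = x.copy()
            for g in groups:                       # g = index array of one candidate beam's fluence entries
                norm = np.linalg.norm(x[g])
                out[g] = 0.0 if norm <= lam else (1.0 - lam / norm) * x[g]
            return out

        def fista_group_sparse(A, b, groups, lam, n_iter=200):
            """Minimize 0.5*||A x - b||^2 + lam * sum_g ||x_g||_2 with FISTA (sketch)."""
            L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth term
            x = z = np.zeros(A.shape[1])
            t = 1.0
            for _ in range(n_iter):
                x_new = prox_group_l2(z - A.T @ (A @ z - b) / L, groups, lam / L)
                t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
                z = x_new + (t - 1.0) / t_new * (x_new - x)
                x, t = x_new, t_new
            return x

        # toy example: 30 candidate beams with 10 beamlets each; most beams end up exactly zero
        rng = np.random.default_rng(0)
        groups = np.array_split(np.arange(300), 30)
        A = rng.standard_normal((120, 300))
        x_hat = fista_group_sparse(A, rng.standard_normal(120), groups, lam=5.0)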

  9. Modularity and Sparsity: Evolution of Neural Net Controllers in Physically Embodied Robots

    Directory of Open Access Journals (Sweden)

    Nicholas Livingston

    2016-12-01

    While modularity is thought to be central for the evolution of complexity and evolvability, it remains unclear how systems bootstrap themselves into modularity from random or fully integrated starting conditions. Clune et al. (2013) suggested that a positive correlation between sparsity and modularity is the prime cause of this transition. We sought to test the generality of this modularity-sparsity hypothesis by testing it for the first time in physically embodied robots. A population of ten Tadros (autonomous, surface-swimming robots propelled by a flapping tail) was used. Individuals varied only in the structure of their neural net control, a 2 x 6 x 2 network with recurrence in the hidden layer. Each of the 60 possible connections was coded in the genome, and could achieve one of three states: -1, 0, 1. Inputs were two light-dependent resistors and outputs were two motor control variables to the flapping tail, one for the frequency of the flapping and the other for the turning offset. Each Tadro was tested separately in a circular tank lit by a single overhead light source. Fitness was the amount of light gathered by a vertically oriented sensor that was disconnected from the controller net. Reproduction was asexual, with the top performer cloned and then all individuals entered into a roulette wheel selection process, with genomes mutated to create the offspring. The starting population of networks was randomly generated. Over ten generations, the population's mean fitness increased two-fold. This evolution occurred in spite of an unintentional integer overflow problem in recurrent nodes in the hidden layer that caused outputs to oscillate. Our investigation of the oscillatory behavior showed that the mutual information of inputs and outputs was sufficient for the reactive behaviors observed. While we had predicted that both modularity and sparsity would follow the same trend as fitness, neither did so. Instead, selection gradients

  10. Distance-regular graphs

    NARCIS (Netherlands)

    van Dam, Edwin R.; Koolen, Jack H.; Tanaka, Hajime

    2016-01-01

    This is a survey of distance-regular graphs. We present an introduction to distance-regular graphs for the reader who is unfamiliar with the subject, and then give an overview of some developments in the area of distance-regular graphs since the monograph 'BCN'[Brouwer, A.E., Cohen, A.M., Neumaier,

  11. LL-regular grammars

    NARCIS (Netherlands)

    Nijholt, Antinus

    1980-01-01

    Culik II and Cohen introduced the class of LR-regular grammars, an extension of the LR(k) grammars. In this paper we consider an analogous extension of the LL(k) grammars called the LL-regular grammars. The relation of this class of grammars to other classes of grammars will be shown. Any LL-regular

  12. Multi-Index Monte Carlo (MIMC) When sparsity meets sampling

    KAUST Repository

    Tempone, Raul

    2015-01-01

    This talk focuses on our newest method: Multi-Index Monte Carlo (MIMC). The MIMC method uses a stochastic combination technique to solve the given approximation problem, generalizing the notion of standard MLMC levels into a set of multi-indices that should be properly chosen to exploit the available regularity. Indeed, instead of using first-order differences as in standard MLMC, MIMC uses high-order differences to reduce the variance of the hierarchical differences dramatically. This in turn gives a new, improved complexity result that increases the domain of the problem parameters for which the method achieves the optimal convergence rate, O(TOL^-2). Using optimal index sets that we determined, MIMC achieves a better rate: up to logarithmic factors, the computational complexity does not depend on the dimensionality of the underlying problem. We present numerical results related to a three-dimensional PDE with random coefficients to substantiate some of the derived computational complexity rates. Finally, using the Lindeberg-Feller theorem, we also show the asymptotic normality of the statistical error in the MIMC estimator, and in this way justify our error estimate, which allows prescribing both the required accuracy and confidence in the final result.

  13. Multi-Index Monte Carlo (MIMC) When sparsity meets sampling

    KAUST Repository

    Tempone, Raul

    2015-01-07

    This talk focuses on our newest method: Multi-Index Monte Carlo (MIMC). The MIMC method uses a stochastic combination technique to solve the given approximation problem, generalizing the notion of standard MLMC levels into a set of multi-indices that should be properly chosen to exploit the available regularity. Indeed, instead of using first-order differences as in standard MLMC, MIMC uses high-order differences to reduce the variance of the hierarchical differences dramatically. This in turn gives a new, improved complexity result that increases the domain of the problem parameters for which the method achieves the optimal convergence rate, O(TOL^-2). Using optimal index sets that we determined, MIMC achieves a better rate: up to logarithmic factors, the computational complexity does not depend on the dimensionality of the underlying problem. We present numerical results related to a three-dimensional PDE with random coefficients to substantiate some of the derived computational complexity rates. Finally, using the Lindeberg-Feller theorem, we also show the asymptotic normality of the statistical error in the MIMC estimator, and in this way justify our error estimate, which allows prescribing both the required accuracy and confidence in the final result.
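
    A schematic rendering (with notation assumed here, not taken from the abstract) of the mixed-difference construction behind MIMC: for a multi-index α = (α_1, ..., α_d) of discretization levels and first-order difference operators Δ_i acting on each direction, the estimator sums sample averages of high-order mixed differences over a chosen index set I.

        \Delta_i A_{\boldsymbol{\alpha}} =
        \begin{cases}
        A_{\boldsymbol{\alpha}} - A_{\boldsymbol{\alpha}-\mathbf{e}_i}, & \alpha_i > 0,\\
        A_{\boldsymbol{\alpha}}, & \alpha_i = 0,
        \end{cases}
        \qquad
        \mathrm{E}[A] \;\approx\; \sum_{\boldsymbol{\alpha} \in \mathcal{I}}
        \mathrm{E}\!\left[\Bigl(\prod_{i=1}^{d} \Delta_i\Bigr) A_{\boldsymbol{\alpha}}\right],

    with each expectation on the right replaced by an independent Monte Carlo sample average; the choice of the index set I is what controls the complexity rate quoted above.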

  14. Distribution Agnostic Structured Sparsity Recovery: Algorithms and Applications

    KAUST Repository

    Masood, Mudassir

    2015-01-01

    about the data. While Bayesian approaches have their advantages, these methods must have access to a priori statistics. Usually, these statistics are unknown and are often difficult or even impossible to predict. An effective workaround is to assume a

  15. Multiscale mining of fMRI data with hierarchical structured sparsity

    International Nuclear Information System (INIS)

    Jenatton, R.; Obozinski, G.; Bach, F.; Gramfort, Alexandre; Michel, Vincent; Thirion, Bertrand; Eger, Evelyne

    2012-01-01

    Reverse inference, or 'brain reading', is a recent paradigm for analyzing functional magnetic resonance imaging (fMRI) data, based on pattern recognition and statistical learning. By predicting some cognitive variables related to brain activation maps, this approach aims at decoding brain activity. Reverse inference takes into account the multivariate information between voxels and is currently the only way to assess how precisely some cognitive information is encoded by the activity of neural populations within the whole brain. However, it relies on a prediction function that is plagued by the curse of dimensionality, since there are far more features than samples, i.e., more voxels than fMRI volumes. To address this problem, different methods have been proposed, such as, among others, univariate feature selection, feature agglomeration and regularization techniques. In this paper, we consider a sparse hierarchical structured regularization. Specifically, the penalization we use is constructed from a tree that is obtained by spatially-constrained agglomerative clustering. This approach encodes the spatial structure of the data at different scales into the regularization, which makes the overall prediction procedure more robust to inter-subject variability. The regularization used induces the selection of spatially coherent predictive brain regions simultaneously at different scales. We test our algorithm on real data acquired to study the mental representation of objects, and we show that the proposed algorithm not only delineates meaningful brain regions but also yields better prediction accuracy than reference methods. (authors)
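
    A small sketch of the spatially-constrained agglomerative (Ward) clustering step that produces the tree underlying the structured penalty is given below; it uses scikit-learn's FeatureAgglomeration on synthetic data with assumed grid dimensions and cluster count, and it does not implement the tree-structured sparse regression itself.

```python
import numpy as np
from sklearn.feature_extraction.image import grid_to_graph
from sklearn.cluster import FeatureAgglomeration

rng = np.random.default_rng(0)

# Toy "fMRI" data: 200 volumes, each a 10 x 10 x 10 voxel grid (assumed sizes).
n_samples, nx, ny, nz = 200, 10, 10, 10
X = rng.standard_normal((n_samples, nx * ny * nz))

# Spatial connectivity graph: only neighbouring voxels may be merged,
# which is what makes the resulting Ward tree spatially constrained.
connectivity = grid_to_graph(nx, ny, nz)

# Ward agglomeration of voxels (features); the merge tree (children_) is the
# hierarchy over which a tree-structured sparse penalty could be defined.
agglo = FeatureAgglomeration(n_clusters=50, connectivity=connectivity, linkage="ward")
agglo.fit(X)

print("cluster label of the first voxels:", agglo.labels_[:10])
print("merge tree shape (internal nodes x 2):", agglo.children_.shape)
```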

  16. Adaptive OFDM Waveform Design for Spatio-Temporal-Sparsity Exploited STAP Radar

    Energy Technology Data Exchange (ETDEWEB)

    Sen, Satyabrata [ORNL

    2017-11-01

    In this chapter, we describe a sparsity-based space-time adaptive processing (STAP) algorithm to detect a slowly moving target using an orthogonal frequency division multiplexing (OFDM) radar. The motivation for employing an OFDM signal is that it improves target detectability in the presence of interfering signals by increasing the frequency diversity of the system. However, due to the addition of one extra dimension in terms of frequency, the adaptive degrees-of-freedom in an OFDM-STAP also increase. Therefore, to avoid constructing a fully adaptive OFDM-STAP, we develop a sparsity-based STAP algorithm. We observe that the interference spectrum is inherently sparse in the spatio-temporal domain, as the clutter responses occupy only a diagonal ridge on the spatio-temporal plane and the jammer signals interfere only from a few spatial directions. Hence, we exploit that sparsity to develop an efficient STAP technique that utilizes a considerably smaller number of secondary data samples than other existing STAP techniques and produces nearly optimum STAP performance. In addition to designing the STAP filter, we optimally design the transmit OFDM signals by maximizing the output signal-to-interference-plus-noise ratio (SINR) in order to improve the STAP performance. The computation of the output SINR depends on the estimated value of the interference covariance matrix, which we obtain by applying the sparse recovery algorithm. Therefore, we analytically assess the effects of the synthesized OFDM coefficients on the sparse recovery of the interference covariance matrix by computing the coherence measure of the sparse measurement matrix. Our numerical examples demonstrate the STAP performance achieved by the sparsity-based technique and the adaptive waveform design.

  17. Regular Expression Pocket Reference

    CERN Document Server

    Stubblebine, Tony

    2007-01-01

    This handy little book offers programmers a complete overview of the syntax and semantics of regular expressions that are at the heart of every text-processing application. Ideal as a quick reference, Regular Expression Pocket Reference covers the regular expression APIs for Perl 5.8, Ruby (including some upcoming 1.9 features), Java, PHP, .NET and C#, Python, vi, JavaScript, and the PCRE regular expression libraries. This concise and easy-to-use reference puts a very powerful tool for manipulating text and data right at your fingertips. Composed of a mixture of symbols and text, regular exp
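
    As a quick illustration of the kind of syntax such a reference covers, here is a small Python example using named capture groups; the log line and pattern are made up for illustration.

```python
import re

log_line = "2016-01-01 12:34:56 ERROR disk /dev/sda1 at 97% capacity"

# Named groups capture the timestamp, level, and percentage in one pass.
pattern = re.compile(
    r"(?P<date>\d{4}-\d{2}-\d{2}) (?P<time>\d{2}:\d{2}:\d{2}) "
    r"(?P<level>[A-Z]+) .*? (?P<pct>\d+)% capacity"
)

m = pattern.search(log_line)
if m:
    print(m.group("date"), m.group("level"), m.group("pct"))  # 2016-01-01 ERROR 97
```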

  18. Deep neural network with weight sparsity control and pre-training extracts hierarchical features and enhances classification performance: Evidence from whole-brain resting-state functional connectivity patterns of schizophrenia.

    Science.gov (United States)

    Kim, Junghoe; Calhoun, Vince D; Shim, Eunsoo; Lee, Jong-Hwan

    2016-01-01

    Functional connectivity (FC) patterns obtained from resting-state functional magnetic resonance imaging data are commonly employed to study neuropsychiatric conditions by using pattern classifiers such as the support vector machine (SVM). Meanwhile, a deep neural network (DNN) with multiple hidden layers has shown its ability to systematically extract lower-to-higher level information of image and speech data from lower-to-higher hidden layers, markedly enhancing classification accuracy. The objective of this study was to adopt the DNN for whole-brain resting-state FC pattern classification of schizophrenia (SZ) patients vs. healthy controls (HCs) and identification of aberrant FC patterns associated with SZ. We hypothesized that the lower-to-higher level features learned via the DNN would significantly enhance the classification accuracy, and proposed an adaptive learning algorithm to explicitly control the weight sparsity in each hidden layer via L1-norm regularization. Furthermore, the weights were initialized via stacked autoencoder based pre-training to further improve the classification performance. Classification accuracy was systematically evaluated as a function of (1) the number of hidden layers/nodes, (2) the use of L1-norm regularization, (3) the use of the pre-training, (4) the use of framewise displacement (FD) removal, and (5) the use of anatomical/functional parcellation. Using FC patterns from anatomically parcellated regions without FD removal, an error rate of 14.2% was achieved by employing three hidden layers and 50 hidden nodes with both L1-norm regularization and pre-training, which was substantially lower than the error rate from the SVM (22.3%). Moreover, the trained DNN weights (i.e., the learned features) were found to represent the hierarchical organization of aberrant FC patterns in SZ compared with HC. Specifically, pairs of nodes extracted from the lower hidden layer represented sparse FC patterns implicated in SZ, which was
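
    A minimal sketch of the L1-norm weight-sparsity idea is shown below using PyTorch: an L1 penalty on every hidden layer's weights is added to the classification loss. The network sizes, penalty weight, and synthetic data are assumptions, and the stacked-autoencoder pre-training and the adaptive, layer-wise sparsity control described above are not reproduced.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for FC features: 100 subjects x 300 connectivity features, 2 classes.
X = torch.randn(100, 300)
y = torch.randint(0, 2, (100,))

model = nn.Sequential(
    nn.Linear(300, 50), nn.ReLU(),
    nn.Linear(50, 50), nn.ReLU(),
    nn.Linear(50, 50), nn.ReLU(),
    nn.Linear(50, 2),
)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
l1_lambda = 1e-4  # assumed fixed penalty weight; the paper controls sparsity adaptively

for epoch in range(50):
    optimizer.zero_grad()
    loss = criterion(model(X), y)
    # L1 penalty on the weights of every linear layer promotes weight sparsity.
    l1_penalty = sum(layer.weight.abs().sum()
                     for layer in model if isinstance(layer, nn.Linear))
    (loss + l1_lambda * l1_penalty).backward()
    optimizer.step()
```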

  19. Improved l1-SPIRiT using 3D walsh transform-based sparsity basis.

    Science.gov (United States)

    Feng, Zhen; Liu, Feng; Jiang, Mingfeng; Crozier, Stuart; Guo, He; Wang, Yuxin

    2014-09-01

    l1-SPIRiT is a fast magnetic resonance imaging (MRI) method which combines parallel imaging (PI) with compressed sensing (CS) by performing a joint l1-norm and l2-norm optimization procedure. The original l1-SPIRiT method uses two-dimensional (2D) Wavelet transform to exploit the intra-coil data redundancies and a joint sparsity model to exploit the inter-coil data redundancies. In this work, we propose to stack all the coil images into a three-dimensional (3D) matrix, and then a novel 3D Walsh transform-based sparsity basis is applied to simultaneously reduce the intra-coil and inter-coil data redundancies. Both the 2D Wavelet transform-based and the proposed 3D Walsh transform-based sparsity bases were investigated in the l1-SPIRiT method. The experimental results show that the proposed 3D Walsh transform-based l1-SPIRiT method outperformed the original l1-SPIRiT in terms of image quality and computational efficiency. Copyright © 2014 Elsevier Inc. All rights reserved.
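
    A sketch of the separable Walsh-Hadamard transform applied along all three axes of a stacked coil-image array is given below; it uses the naturally ordered Hadamard matrix from SciPy (rather than a sequency-ordered Walsh basis) and synthetic data with power-of-two dimensions, and it illustrates only the sparsifying transform, not the full l1-SPIRiT reconstruction.

```python
import numpy as np
from scipy.linalg import hadamard

# Stack of 8 coil images, each 32 x 32 (sizes must be powers of two), complex-valued.
rng = np.random.default_rng(0)
coil_stack = rng.standard_normal((32, 32, 8)) + 1j * rng.standard_normal((32, 32, 8))

def walsh3d(x):
    """Separable (naturally ordered) Hadamard transform along all three axes."""
    for axis, n in enumerate(x.shape):
        H = hadamard(n) / np.sqrt(n)  # orthonormal Hadamard matrix
        x = np.moveaxis(np.tensordot(H, np.moveaxis(x, axis, 0), axes=1), 0, axis)
    return x

coeffs = walsh3d(coil_stack)           # sparsifying transform of the 3D stack
recon = walsh3d(coeffs)                # the orthonormal Hadamard matrix is its own inverse
print(np.allclose(recon, coil_stack))  # True
```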

  20. Sparsity-Based Space-Time Adaptive Processing Using OFDM Radar

    Energy Technology Data Exchange (ETDEWEB)

    Sen, Satyabrata [ORNL

    2012-01-01

    We propose a sparsity-based space-time adaptive processing (STAP) algorithm to detect a slowly-moving target using an orthogonal frequency division multiplexing (OFDM) radar. We observe that the target and interference spectra are inherently sparse in the spatio-temporal domain, and hence we exploit that sparsity to develop an efficient STAP technique. In addition, the use of an OFDM signal increases the frequency diversity of our system, as different scattering centers of a target resonate at different frequencies, and thus improves the target detectability. First, we formulate a realistic sparse-measurement model for an OFDM radar considering both the clutter and jammer as the interfering sources. Then, we show that the optimal STAP-filter weight-vector is equal to the generalized eigenvector corresponding to the minimum generalized eigenvalue of the interference and target covariance matrices. To estimate the target and interference covariance matrices, we apply a residual sparse-recovery technique that enables us to incorporate the partially known support of the sparse vector. Our numerical results demonstrate that the sparsity-based STAP algorithm, with a considerably smaller number of secondary data samples, produces performance equivalent to that of other existing STAP techniques.
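
    The weight rule quoted above (the generalized eigenvector for the minimum generalized eigenvalue of the interference and target covariance matrices) can be illustrated directly; in the sketch below both covariance matrices are random toy Hermitian positive-definite matrices rather than estimates from OFDM measurements.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n = 16  # assumed number of space-time degrees of freedom

def random_covariance(n):
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return A @ A.conj().T + n * np.eye(n)  # Hermitian positive definite

R_interference = random_covariance(n)  # clutter + jammer + noise covariance (toy)
R_target = random_covariance(n)        # target covariance (toy)

# Solve R_interference w = lambda R_target w; eigh returns eigenvalues in
# ascending order, so the first column is the minimum-eigenvalue eigenvector.
eigvals, eigvecs = eigh(R_interference, R_target)
w = eigvecs[:, 0]                      # STAP filter weight vector

ratio = np.real(w.conj() @ R_target @ w) / np.real(w.conj() @ R_interference @ w)
print(f"output target-to-interference ratio for w: {ratio:.3f}")
```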

  1. Joint seismic data denoising and interpolation with double-sparsity dictionary learning

    Science.gov (United States)

    Zhu, Lingchen; Liu, Entao; McClellan, James H.

    2017-08-01

    Seismic data quality is vital to geophysical applications, so that methods of data recovery, including denoising and interpolation, are common initial steps in the seismic data processing flow. We present a method to perform simultaneous interpolation and denoising, which is based on double-sparsity dictionary learning. This extends previous work that was for denoising only. The original double-sparsity dictionary learning algorithm is modified to track the traces with missing data by defining a masking operator that is integrated into the sparse representation of the dictionary. A weighted low-rank approximation algorithm is adopted to handle the dictionary updating as a sparse recovery optimization problem constrained by the masking operator. Compared to traditional sparse transforms with fixed dictionaries that lack the ability to adapt to complex data structures, the double-sparsity dictionary learning method learns the signal adaptively from selected patches of the corrupted seismic data, while preserving compact forward and inverse transform operators. Numerical experiments on synthetic seismic data indicate that this new method preserves more subtle features in the data set without introducing pseudo-Gibbs artifacts when compared to other directional multi-scale transform methods such as curvelets.
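
    A toy sketch of the masking idea is given below: sparse coefficients are estimated from the observed samples of a patch only, and the dictionary synthesis then fills in the missing samples. It uses a fixed DCT dictionary and orthogonal matching pursuit from scikit-learn in place of the learned double-sparsity dictionary; patch length, sparsity level, and mask density are assumptions.

```python
import numpy as np
from scipy.fft import idct
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)

n = 64                                      # patch length (assumed)
D = idct(np.eye(n), norm="ortho", axis=0)   # fixed DCT dictionary (stand-in for a learned one)

# Ground-truth sparse patch and a mask simulating missing traces/samples.
coef_true = np.zeros(n); coef_true[[3, 10, 25]] = [2.0, -1.5, 1.0]
x_true = D @ coef_true
mask = rng.random(n) > 0.3                  # roughly 70% of samples observed
x_obs = x_true.copy(); x_obs[~mask] = 0.0

# Sparse coding restricted to the observed rows of the dictionary (the masking operator).
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=5, fit_intercept=False)
omp.fit(D[mask], x_obs[mask])
x_rec = D @ omp.coef_                       # dictionary synthesis fills in missing samples

print("relative reconstruction error:",
      np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```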

  2. Regularization modeling for large-eddy simulation

    NARCIS (Netherlands)

    Geurts, Bernardus J.; Holm, D.D.

    2003-01-01

    A new modeling approach for large-eddy simulation (LES) is obtained by combining a "regularization principle" with an explicit filter and its inversion. This regularization approach allows a systematic derivation of the implied subgrid model, which resolves the closure problem. The central role of

  3. Regularized spherical polar fourier diffusion MRI with optimal dictionary learning.

    Science.gov (United States)

    Cheng, Jian; Jiang, Tianzi; Deriche, Rachid; Shen, Dinggang; Yap, Pew-Thian

    2013-01-01

    Compressed Sensing (CS) takes advantage of signal sparsity or compressibility and allows superb signal reconstruction from relatively few measurements. Based on CS theory, a suitable dictionary for sparse representation of the signal is required. In diffusion MRI (dMRI), CS methods proposed for reconstruction of diffusion-weighted signal and the Ensemble Average Propagator (EAP) utilize two kinds of Dictionary Learning (DL) methods: 1) Discrete Representation DL (DR-DL), and 2) Continuous Representation DL (CR-DL). DR-DL is susceptible to numerical inaccuracy owing to interpolation and regridding errors in a discretized q-space. In this paper, we propose a novel CR-DL approach, called Dictionary Learning - Spherical Polar Fourier Imaging (DL-SPFI) for effective compressed-sensing reconstruction of the q-space diffusion-weighted signal and the EAP. In DL-SPFI, a dictionary that sparsifies the signal is learned from the space of continuous Gaussian diffusion signals. The learned dictionary is then adaptively applied to different voxels using a weighted LASSO framework for robust signal reconstruction. Compared with the state-of-the-art CR-DL and DR-DL methods proposed by Merlet et al. and Bilgic et al., respectively, our work offers the following advantages. First, the learned dictionary is proved to be optimal for Gaussian diffusion signals. Second, to our knowledge, this is the first work to learn a voxel-adaptive dictionary. The importance of the adaptive dictionary in EAP reconstruction will be demonstrated theoretically and empirically. Third, optimization in DL-SPFI is only performed in a small subspace spanned by the SPF coefficients, as opposed to the q-space approach utilized by Merlet et al. We experimentally evaluated DL-SPFI with respect to L1-norm regularized SPFI (L1-SPFI), which uses the original SPF basis, and the DR-DL method proposed by Bilgic et al. The experimental results on synthetic and real data indicate that the learned dictionary produces

  4. An iterative method for Tikhonov regularization with a general linear regularization operator

    NARCIS (Netherlands)

    Hochstenbach, M.E.; Reichel, L.

    2010-01-01

    Tikhonov regularization is one of the most popular approaches to solve discrete ill-posed problems with error-contaminated data. A regularization operator and a suitable value of a regularization parameter have to be chosen. This paper describes an iterative method, based on Golub-Kahan
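
    The iterative Golub-Kahan-based method itself is not reproduced here, but the underlying Tikhonov problem with a general regularization operator L can be illustrated by a direct stacked least-squares solve on a toy smoothing problem; the forward operator, noise level, and regularization parameter below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100

# Toy ill-posed problem: a Gaussian smoothing (convolution-like) forward operator A.
t = np.arange(n)
A = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 3.0) ** 2)
x_true = np.zeros(n); x_true[30:60] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(n)

# General regularization operator L: first-order finite differences.
L = np.diff(np.eye(n), axis=0)

def tikhonov(A, b, L, lam):
    """Solve min ||A x - b||^2 + lam^2 ||L x||^2 via a stacked least-squares system."""
    K = np.vstack([A, lam * L])
    rhs = np.concatenate([b, np.zeros(L.shape[0])])
    return np.linalg.lstsq(K, rhs, rcond=None)[0]

x_reg = tikhonov(A, b, L, lam=0.5)
print("relative error:", np.linalg.norm(x_reg - x_true) / np.linalg.norm(x_true))
```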

  5. Regularization and error assignment to unfolded distributions

    CERN Document Server

    Zech, Gunter

    2011-01-01

    The commonly used approach to present unfolded data only in graphical form with the diagonal error depending on the regularization strength is unsatisfactory. It does not permit the adjustment of parameters of theories, the exclusion of theories that are admitted by the observed data and does not allow the combination of data from different experiments. We propose fixing the regularization strength by a p-value criterion, indicating the experimental uncertainties independent of the regularization and publishing the unfolded data in addition without regularization. These considerations are illustrated with three different unfolding and smoothing approaches applied to a toy example.

  6. Manifold Regularized Correlation Object Tracking.

    Science.gov (United States)

    Hu, Hongwei; Ma, Bo; Shen, Jianbing; Shao, Ling

    2018-05-01

    In this paper, we propose a manifold regularized correlation tracking method with augmented samples. To make better use of the unlabeled data and the manifold structure of the sample space, a manifold regularization-based correlation filter is introduced, which aims to assign similar labels to neighbor samples. Meanwhile, the regression model is learned by exploiting the block-circulant structure of matrices resulting from the augmented translated samples over multiple base samples cropped from both target and nontarget regions. Thus, the final classifier in our method is trained with positive, negative, and unlabeled base samples, which is a semisupervised learning framework. A block optimization strategy is further introduced to learn a manifold regularization-based correlation filter for efficient online tracking. Experiments on two public tracking data sets demonstrate the superior performance of our tracker compared with the state-of-the-art tracking approaches.

  7. Regularization by External Variables

    DEFF Research Database (Denmark)

    Bossolini, Elena; Edwards, R.; Glendinning, P. A.

    2016-01-01

    Regularization was a big topic at the 2016 CRM Intensive Research Program on Advances in Nonsmooth Dynamics. There are many open questions concerning well known kinds of regularization (e.g., by smoothing or hysteresis). Here, we propose a framework for an alternative and important kind of regularization...

  8. Regular expressions cookbook

    CERN Document Server

    Goyvaerts, Jan

    2009-01-01

    This cookbook provides more than 100 recipes to help you crunch data and manipulate text with regular expressions. Every programmer can find uses for regular expressions, but their power doesn't come worry-free. Even seasoned users often suffer from poor performance, false positives, false negatives, or perplexing bugs. Regular Expressions Cookbook offers step-by-step instructions for some of the most common tasks involving this tool, with recipes for C#, Java, JavaScript, Perl, PHP, Python, Ruby, and VB.NET. With this book, you will: Understand the basics of regular expressions through a

  9. Single image interpolation via adaptive nonlocal sparsity-based modeling.

    Science.gov (United States)

    Romano, Yaniv; Protter, Matan; Elad, Michael

    2014-07-01

    Single image interpolation is a central and extensively studied problem in image processing. A common approach toward the treatment of this problem in recent years is to divide the given image into overlapping patches and process each of them based on a model for natural image patches. Adaptive sparse representation modeling is one such promising image prior, which has been shown to be powerful in filling-in missing pixels in an image. Another force that such algorithms may use is the self-similarity that exists within natural images. Processing groups of related patches together exploits their correspondence, often leading to improved results. In this paper, we propose a novel image interpolation method, which combines these two forces: nonlocal self-similarities and sparse representation modeling. The proposed method is contrasted with competitive and related algorithms, and demonstrated to achieve state-of-the-art results.

  10. Regularized multivariate regression models with skew-t error distributions

    KAUST Repository

    Chen, Lianfu

    2014-06-01

    We consider regularization of the parameters in multivariate linear regression models with the errors having a multivariate skew-t distribution. An iterative penalized likelihood procedure is proposed for constructing sparse estimators of both the regression coefficient and inverse scale matrices simultaneously. The sparsity is introduced through penalizing the negative log-likelihood by adding L1-penalties on the entries of the two matrices. Taking advantage of the hierarchical representation of skew-t distributions, and using the expectation conditional maximization (ECM) algorithm, we reduce the problem to penalized normal likelihood and develop a procedure to minimize the ensuing objective function. Using a simulation study the performance of the method is assessed, and the methodology is illustrated using a real data set with a 24-dimensional response vector. © 2014 Elsevier B.V.

  11. Common and Innovative Visuals: A sparsity modeling framework for video.

    Science.gov (United States)

    Abdolhosseini Moghadam, Abdolreza; Kumar, Mrityunjay; Radha, Hayder

    2014-05-02

    Efficient video representation models are critical for many video analysis and processing tasks. In this paper, we present a framework based on the concept of finding the sparsest solution to model video frames. To model the spatio-temporal information, frames from one scene are decomposed into two components: (i) a common frame, which describes the visual information common to all the frames in the scene/segment, and (ii) a set of innovative frames, which depicts the dynamic behaviour of the scene. The proposed approach exploits and builds on recent results in the field of compressed sensing to jointly estimate the common frame and the innovative frames for each video segment. We refer to the proposed modeling framework by CIV (Common and Innovative Visuals). We show how the proposed model can be utilized to find scene change boundaries and extend CIV to videos from multiple scenes. Furthermore, the proposed model is robust to noise and can be used for various video processing applications without relying on motion estimation and detection or image segmentation. Results for object tracking, video editing (object removal, inpainting) and scene change detection are presented to demonstrate the efficiency and the performance of the proposed model.

  12. Regularities of Multifractal Measures

    Indian Academy of Sciences (India)

    First, we prove the decomposition theorem for the regularities of multifractal Hausdorff measure and packing measure in R^d. This decomposition theorem enables us to split a set into regular and irregular parts, so that we can analyze each separately, and recombine them without affecting density properties. Next, we ...

  13. Stochastic analytic regularization

    International Nuclear Information System (INIS)

    Alfaro, J.

    1984-07-01

    Stochastic regularization is reexamined, pointing out a restriction on its use due to a new type of divergence which is not present in the unregulated theory. Furthermore, we introduce a new form of stochastic regularization which permits the use of a minimal subtraction scheme to define the renormalized Green functions. (author)

  14. Improved adaptive genetic algorithm with sparsity constraint applied to thermal neutron CT reconstruction of two-phase flow

    Science.gov (United States)

    Yan, Mingfei; Hu, Huasi; Otake, Yoshie; Taketani, Atsushi; Wakabayashi, Yasuo; Yanagimachi, Shinzo; Wang, Sheng; Pan, Ziheng; Hu, Guang

    2018-05-01

    Thermal neutron computer tomography (CT) is a useful tool for visualizing two-phase flow due to its high imaging contrast and strong penetrability of neutrons for tube walls constructed with metallic material. A novel approach for two-phase flow CT reconstruction based on an improved adaptive genetic algorithm with sparsity constraint (IAGA-SC) is proposed in this paper. In the algorithm, the neighborhood mutation operator is used to ensure the continuity of the reconstructed object. The adaptive crossover probability P_c and mutation probability P_m are improved to help the adaptive genetic algorithm (AGA) achieve the global optimum. The reconstructed results for projection data, obtained from Monte Carlo simulation, indicate that the comprehensive performance of the IAGA-SC algorithm exceeds the adaptive steepest descent-projection onto convex sets (ASD-POCS) algorithm in restoring typical and complex flow regimes. It especially shows great advantages in restoring the simply connected flow regimes and the shape of the object. In addition, the CT experiment for two-phase flow phantoms was conducted on the accelerator-driven neutron source to verify the performance of the developed IAGA-SC algorithm.

  15. Entropy Viscosity and L1-based Approximations of PDEs: Exploiting Sparsity

    Science.gov (United States)

    2015-10-23

    AFRL-AFOSR-VA-TR-2015-0337. Entropy Viscosity and L1-based Approximations of PDEs: Exploiting Sparsity. Jean-Luc Guermond, Texas A&M University. Final report dated 09-05-2015, covering 01-07-2012 to 30-06-2015. ...conservation equations can be stabilized by using the so-called entropy viscosity method, and we proposed to investigate this new technique. We

  16. Analysis of Coherent Phonon Signals by Sparsity-promoting Dynamic Mode Decomposition

    Science.gov (United States)

    Murata, Shin; Aihara, Shingo; Tokuda, Satoru; Iwamitsu, Kazunori; Mizoguchi, Kohji; Akai, Ichiro; Okada, Masato

    2018-05-01

    We propose a method to decompose normal modes in a coherent phonon (CP) signal by sparsity-promoting dynamic mode decomposition. While the CP signals can be modeled as the sum of a finite number of damped oscillators, conventional methods such as the Fourier transform adopt continuous bases in a frequency domain. Thus, frequency uncertainty arises and it is difficult to estimate the initial phase. Moreover, measurement artifacts are imposed on the CP signal and deform the Fourier spectrum. In contrast, the proposed method can separate the signal from the artifact precisely and can successfully estimate physical properties of the normal modes.
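
    A minimal exact-DMD sketch (without the sparsity-promoting penalty) is shown below: a synthetic two-mode damped-oscillation signal is delay-embedded and the DMD eigenvalues recover the frequencies and damping rates. The signal parameters, embedding depth, and rank truncation are assumptions.

```python
import numpy as np

# Synthetic coherent-phonon-like signal: two damped cosines (assumed parameters).
dt = 0.01
t = np.arange(0, 10, dt)
signal = (np.exp(-0.3 * t) * np.cos(2 * np.pi * 2.0 * t)
          + 0.5 * np.exp(-0.1 * t) * np.cos(2 * np.pi * 5.0 * t))

# Delay-embed the scalar signal into a Hankel-style matrix so DMD can be applied.
d = 40
X = np.column_stack([signal[i:i + len(signal) - d] for i in range(d)]).T

X1, X2 = X[:, :-1], X[:, 1:]
U, s, Vh = np.linalg.svd(X1, full_matrices=False)
r = 4                                   # rank = 2 modes x 2 (complex-conjugate pairs)
U, s, Vh = U[:, :r], s[:r], Vh[:r]
Atilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1.0 / s)
eigvals = np.linalg.eigvals(Atilde)

cont = np.log(eigvals) / dt             # continuous-time eigenvalues
freqs = np.abs(cont.imag) / (2 * np.pi)
damping = -cont.real
print("frequencies (Hz):", np.round(np.sort(freqs), 2))
print("damping rates:   ", np.round(damping, 2))
```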

  17. Multi-view Multi-sparsity Kernel Reconstruction for Multi-class Image Classification

    KAUST Repository

    Zhu, Xiaofeng

    2015-05-28

    This paper addresses the problem of multi-class image classification by proposing a novel multi-view multi-sparsity kernel reconstruction (MMKR for short) model. Given images (including test images and training images) represented by multiple visual features, the MMKR first maps them into a high-dimensional space, e.g., a reproducing kernel Hilbert space (RKHS), where test images are then linearly reconstructed by some representative training images, rather than all of them. Furthermore, a classification rule is proposed to classify test images. Experimental results on real datasets show the effectiveness of the proposed MMKR in comparison with state-of-the-art algorithms.

  18. Sparse structure regularized ranking

    KAUST Repository

    Wang, Jim Jing-Yan; Sun, Yijun; Gao, Xin

    2014-01-01

    Learning ranking scores is critical for the multimedia database retrieval problem. In this paper, we propose a novel ranking score learning algorithm by exploring the sparse structure and using it to regularize ranking scores. To explore the sparse

  19. Regular expression containment

    DEFF Research Database (Denmark)

    Henglein, Fritz; Nielsen, Lasse

    2011-01-01

    We present a new sound and complete axiomatization of regular expression containment. It consists of the conventional axiomatization of concatenation, alternation, empty set and (the singleton set containing) the empty string as an idempotent semiring, the fixed-point rule E* = 1 + E × E* for Kleene-star, and a general coinduction rule as the only additional rule. Our axiomatization gives rise to a natural computational interpretation of regular expressions as simple types that represent parse trees, and of containment proofs as coercions. This gives the axiomatization a Curry-Howard-style constructive interpretation: Containment proofs do not only certify a language-theoretic containment, but, under our computational interpretation, constructively transform a membership proof of a string in one regular expression into a membership proof of the same string in another regular expression. We...

  20. Supersymmetric dimensional regularization

    International Nuclear Information System (INIS)

    Siegel, W.; Townsend, P.K.; van Nieuwenhuizen, P.

    1980-01-01

    There is a simple modification of dimensional regularization which preserves supersymmetry: dimensional reduction to real D < 4, followed by analytic continuation to complex D. In terms of component fields, this means fixing the ranges of all indices on the fields (and therefore the numbers of Fermi and Bose components). For superfields, it means continuing in the dimensionality of x-space while fixing the dimensionality of theta-space. This regularization procedure allows the simple manipulation of spinor derivatives in supergraph calculations. The resulting rules are: (1) First do all algebra exactly as in D = 4; (2) Then do the momentum integrals as in ordinary dimensional regularization. This regularization procedure needs extra rules before one can say that it is consistent. Such extra rules needed for superconformal anomalies are discussed. Problems associated with renormalizability and higher order loops are also discussed

  1. Regularized maximum correntropy machine

    KAUST Repository

    Wang, Jim Jing-Yan; Wang, Yunji; Jing, Bing-Yi; Gao, Xin

    2015-01-01

    In this paper we investigate the use of the regularized correntropy framework for learning classifiers from noisy labels. The class label predictors learned by minimizing traditional loss functions are sensitive to the noisy and outlying labels of training samples, because the traditional loss functions are applied equally to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with traditional loss functions.

  2. Regularized maximum correntropy machine

    KAUST Repository

    Wang, Jim Jing-Yan

    2015-02-12

    In this paper we investigate the use of the regularized correntropy framework for learning classifiers from noisy labels. The class label predictors learned by minimizing traditional loss functions are sensitive to the noisy and outlying labels of training samples, because the traditional loss functions are applied equally to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with traditional loss functions.

  3. Sparse regularization for EIT reconstruction incorporating structural information derived from medical imaging.

    Science.gov (United States)

    Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Mueller-Lisse, Ullrich; Moeller, Knut

    2016-06-01

    Electrical impedance tomography (EIT) reconstructs the conductivity distribution of a domain using electrical data on its boundary. This is an ill-posed inverse problem usually solved on a finite element mesh. For this article, a special regularization method incorporating structural information of the targeted domain is proposed and evaluated. Structural information was obtained either from computed tomography images or from preliminary EIT reconstructions by a modified k-means clustering. The proposed regularization method integrates this structural information into the reconstruction as a soft constraint preferring sparsity at the group level. A first evaluation with Monte Carlo simulations indicated that the proposed solver is more robust to noise and the resulting images show fewer artifacts. This finding is supported by real data analysis. The structure-based regularization has the potential to balance structural a priori information with data driven reconstruction. It is robust to noise, reduces artifacts and produces images that reflect anatomy and are thus easier to interpret for physicians.

  4. Manifold regularized multitask feature learning for multimodality disease classification.

    Science.gov (United States)

    Jie, Biao; Zhang, Daoqiang; Cheng, Bo; Shen, Dinggang

    2015-02-01

    Multimodality based methods have shown great advantages in classification of Alzheimer's disease (AD) and its prodromal stage, that is, mild cognitive impairment (MCI). Recently, multitask feature selection methods are typically used for joint selection of common features across multiple modalities. However, one disadvantage of existing multimodality based methods is that they ignore the useful data distribution information in each modality, which is essential for subsequent classification. Accordingly, in this paper we propose a manifold regularized multitask feature learning method to preserve both the intrinsic relatedness among multiple modalities of data and the data distribution information in each modality. Specifically, we denote the feature learning on each modality as a single task, and use a group-sparsity regularizer to capture the intrinsic relatedness among multiple tasks (i.e., modalities) and jointly select the common features from multiple tasks. Furthermore, we introduce a new manifold-based Laplacian regularizer to preserve the data distribution information from each task. Finally, we use the multikernel support vector machine method to fuse multimodality data for eventual classification. Furthermore, we also extend our method to the semisupervised setting, where only partial data are labeled. We evaluate our method using the baseline magnetic resonance imaging (MRI), fluorodeoxyglucose positron emission tomography (FDG-PET), and cerebrospinal fluid (CSF) data of subjects from the AD neuroimaging initiative database. The experimental results demonstrate that our proposed method can not only achieve improved classification performance, but also help to discover the disease-related brain regions useful for disease diagnosis. © 2014 Wiley Periodicals, Inc.

  5. COLLAGE-BASED INVERSE PROBLEMS FOR IFSM WITH ENTROPY MAXIMIZATION AND SPARSITY CONSTRAINTS

    Directory of Open Access Journals (Sweden)

    Herb Kunze

    2013-11-01

    We consider the inverse problem associated with IFSM: Given a target function f, find an IFSM such that its invariant fixed point is sufficiently close to f in the Lp distance. In this paper, we extend the collage-based method developed by Forte and Vrscay (1995) along two different directions. We first search for a set of mappings that not only minimizes the collage error but also maximizes the entropy of the dynamical system. We then include an extra term in the minimization process which takes into account the sparsity of the set of mappings. In this new formulation, the minimization of collage error is treated as a multi-criteria problem: we consider three different and conflicting criteria, i.e., collage error, entropy and sparsity. To solve this multi-criteria program we proceed by scalarization and reduce the model to a single-criterion program by combining all objective functions with different trade-off weights. The results of some numerical computations are presented. Numerical studies indicate that a maximum entropy principle exists for this approximation problem, i.e., that the suboptimal solutions produced by collage coding can be improved at least slightly by adding a maximum entropy criterion.

  6. Sparsity-promoting orthogonal dictionary updating for image reconstruction from highly undersampled magnetic resonance data

    International Nuclear Information System (INIS)

    Huang, Jinhong; Guo, Li; Feng, Qianjin; Chen, Wufan; Feng, Yanqiu

    2015-01-01

    Image reconstruction from undersampled k-space data accelerates magnetic resonance imaging (MRI) by exploiting image sparseness in certain transform domains. Employing image patch representation over a learned dictionary has the advantage of being adaptive to local image structures and thus can better sparsify images than using fixed transforms (e.g. wavelets and total variations). Dictionary learning methods have recently been introduced to MRI reconstruction, and these methods demonstrate significantly reduced reconstruction errors compared to sparse MRI reconstruction using fixed transforms. However, the synthesis sparse coding problem in dictionary learning is NP-hard and computationally expensive. In this paper, we present a novel sparsity-promoting orthogonal dictionary updating method for efficient image reconstruction from highly undersampled MRI data. The orthogonality imposed on the learned dictionary enables the minimization problem in the reconstruction to be solved by an efficient optimization algorithm which alternately updates representation coefficients, orthogonal dictionary, and missing k-space data. Moreover, both sparsity level and sparse representation contribution using updated dictionaries gradually increase during iterations to recover more details, assuming the progressively improved quality of the dictionary. Simulation and real data experimental results both demonstrate that the proposed method is approximately 10 to 100 times faster than the K-SVD-based dictionary learning MRI method and simultaneously improves reconstruction accuracy. (paper)
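
    The reason the orthogonality constraint makes the alternating updates cheap can be illustrated compactly: sparse coding under an orthogonal dictionary reduces to thresholding of D^T X, and the dictionary update becomes an orthogonal Procrustes problem solved by a single SVD. The sketch below shows only these two steps on synthetic patches (dimensions, sparsity level, and iteration count assumed); it omits the k-space data update of the full reconstruction.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 64, 500, 6          # patch dimension, number of patches, nonzeros per patch

# Synthetic training patches generated from a hidden orthogonal dictionary.
D_true, _ = np.linalg.qr(rng.standard_normal((n, n)))
codes = np.zeros((n, m))
for j in range(m):
    codes[rng.choice(n, k, replace=False), j] = rng.standard_normal(k)
X = D_true @ codes + 0.01 * rng.standard_normal((n, m))

def hard_threshold(C, k):
    """Keep the k largest-magnitude coefficients in each column."""
    out = np.zeros_like(C)
    idx = np.argsort(-np.abs(C), axis=0)[:k]
    np.put_along_axis(out, idx, np.take_along_axis(C, idx, axis=0), axis=0)
    return out

D = np.linalg.qr(rng.standard_normal((n, n)))[0]   # random orthogonal initialization
for _ in range(20):
    C = hard_threshold(D.T @ X, k)                 # sparse coding: trivial when D is orthogonal
    U, _, Vt = np.linalg.svd(X @ C.T)              # dictionary update: orthogonal Procrustes
    D = U @ Vt

print("residual ||X - DC||_F / ||X||_F:",
      np.linalg.norm(X - D @ hard_threshold(D.T @ X, k)) / np.linalg.norm(X))
```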

  7. OFDM Radar Space-Time Adaptive Processing by Exploiting Spatio-Temporal Sparsity

    Energy Technology Data Exchange (ETDEWEB)

    Sen, Satyabrata [ORNL

    2013-01-01

    We propose a sparsity-based space-time adaptive processing (STAP) algorithm to detect a slowly-moving target using an orthogonal frequency division multiplexing (OFDM) radar. We observe that the target and interference spectra are inherently sparse in the spatio-temporal domain. Hence, we exploit that sparsity to develop an efficient STAP technique that utilizes a considerably smaller number of secondary data samples and produces performance equivalent to that of other existing STAP techniques. In addition, the use of an OFDM signal increases the frequency diversity of our system, as different scattering centers of a target resonate at different frequencies, and thus improves the target detectability. First, we formulate a realistic sparse-measurement model for an OFDM radar considering both the clutter and jammer as the interfering sources. Then, we apply a residual sparse-recovery technique based on the LASSO estimator to estimate the target and interference covariance matrices, and subsequently compute the optimal STAP-filter weights. Our numerical results demonstrate a comparative performance analysis of the proposed sparse-STAP algorithm with four other existing STAP methods. Furthermore, we discover that the OFDM-STAP filter-weights are adaptable to the frequency-variabilities of the target and interference responses, in addition to the spatio-temporal variabilities. Hence, by better utilizing the frequency variabilities, we propose an adaptive OFDM-waveform design technique, and consequently gain a significant STAP-performance improvement.

  8. Sparsity guided empirical wavelet transform for fault diagnosis of rolling element bearings

    Science.gov (United States)

    Wang, Dong; Zhao, Yang; Yi, Cai; Tsui, Kwok-Leung; Lin, Jianhui

    2018-02-01

    Rolling element bearings are widely used in various industrial machines, such as electric motors, generators, pumps, gearboxes, railway axles, turbines, and helicopter transmissions. Fault diagnosis of rolling element bearings is beneficial to preventing any unexpected accident and reducing economic loss. In the past years, many bearing fault detection methods have been developed. Recently, a new adaptive signal processing method called empirical wavelet transform attracts much attention from readers and engineers and its applications to bearing fault diagnosis have been reported. The main problem of empirical wavelet transform is that the Fourier segments required in empirical wavelet transform are strongly dependent on the local maxima of the amplitudes of the Fourier spectrum of a signal, which implies that the Fourier segments are not always reliable and effective if the Fourier spectrum of the signal is complicated and overwhelmed by heavy noise and other strong vibration components. In this paper, sparsity guided empirical wavelet transform is proposed to automatically establish the Fourier segments required in empirical wavelet transform for fault diagnosis of rolling element bearings. Industrial bearing fault signals caused by single and multiple railway axle bearing defects are used to verify the effectiveness of the proposed sparsity guided empirical wavelet transform. Results show that the proposed method can automatically discover the Fourier segments required in empirical wavelet transform and reveal single and multiple railway axle bearing defects. Besides, some comparisons with three popular signal processing methods including ensemble empirical mode decomposition, the fast kurtogram and the fast spectral correlation are conducted to highlight the superiority of the proposed method.

  9. Sparsity-promoting orthogonal dictionary updating for image reconstruction from highly undersampled magnetic resonance data.

    Science.gov (United States)

    Huang, Jinhong; Guo, Li; Feng, Qianjin; Chen, Wufan; Feng, Yanqiu

    2015-07-21

    Image reconstruction from undersampled k-space data accelerates magnetic resonance imaging (MRI) by exploiting image sparseness in certain transform domains. Employing image patch representation over a learned dictionary has the advantage of being adaptive to local image structures and thus can better sparsify images than using fixed transforms (e.g. wavelets and total variations). Dictionary learning methods have recently been introduced to MRI reconstruction, and these methods demonstrate significantly reduced reconstruction errors compared to sparse MRI reconstruction using fixed transforms. However, the synthesis sparse coding problem in dictionary learning is NP-hard and computationally expensive. In this paper, we present a novel sparsity-promoting orthogonal dictionary updating method for efficient image reconstruction from highly undersampled MRI data. The orthogonality imposed on the learned dictionary enables the minimization problem in the reconstruction to be solved by an efficient optimization algorithm which alternately updates representation coefficients, orthogonal dictionary, and missing k-space data. Moreover, both sparsity level and sparse representation contribution using updated dictionaries gradually increase during iterations to recover more details, assuming the progressively improved quality of the dictionary. Simulation and real data experimental results both demonstrate that the proposed method is approximately 10 to 100 times faster than the K-SVD-based dictionary learning MRI method and simultaneously improves reconstruction accuracy.

  10. Maximum mutual information regularized classification

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-09-07

    In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.

  11. Maximum mutual information regularized classification

    KAUST Repository

    Wang, Jim Jing-Yan; Wang, Yi; Zhao, Shiguang; Gao, Xin

    2014-01-01

    In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.

  12. Manifold Regularized Reinforcement Learning.

    Science.gov (United States)

    Li, Hongliang; Liu, Derong; Wang, Ding

    2018-04-01

    This paper introduces a novel manifold regularized reinforcement learning scheme for continuous Markov decision processes. Smooth feature representations for value function approximation can be automatically learned using the unsupervised manifold regularization method. The learned features are data-driven, and can be adapted to the geometry of the state space. Furthermore, the scheme provides a direct basis representation extension for novel samples during policy learning and control. The performance of the proposed scheme is evaluated on two benchmark control tasks, i.e., the inverted pendulum and the energy storage problem. Simulation results illustrate the concepts of the proposed scheme and show that it can obtain excellent performance.

  13. Electrical Resistance Tomography for Visualization of Moving Objects Using a Spatiotemporal Total Variation Regularization Algorithm

    Directory of Open Access Journals (Sweden)

    Bo Chen

    2018-05-01

    Electrical resistance tomography (ERT) has been considered as a data collection and image reconstruction method in many multi-phase flow application areas due to its advantages of high speed, low cost and being non-invasive. In order to improve the quality of the reconstructed images, the Total Variation algorithm has attracted considerable attention due to its ability to handle large, piecewise and discontinuous conductivity distributions. In industrial processing tomography (IPT), techniques such as ERT have been used to extract important flow measurement information. For a moving object inside a pipe, a velocity profile can be calculated from the cross correlation between signals generated from ERT sensors. Many previous studies have used two sets of 2D ERT measurements based on pixel-pixel cross correlation, which requires two ERT systems. In this paper, a method for carrying out flow velocity measurement using a single ERT system is proposed. A novel spatiotemporal total variation regularization approach is utilised to exploit sparsity both in space and time in 4D, and a voxel-voxel cross correlation method is adopted for measurement of the flow profile. Results show that the velocity profile can be calculated with a single ERT system and that the volume fraction and movement can be monitored using the proposed method. Both semi-dynamic experimental and static simulation studies verify the suitability of the proposed method. For the in-plane velocity profile, a 3D image based on temporal 2D images produces a velocity profile with less than 1% error, and a 4D image for 3D velocity profiling shows an error of 4%.
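
    A minimal sketch of the cross-correlation step used for velocity profiling is given below: the transit time between two measurement locations is taken as the lag maximizing the cross-correlation of their time signals. The sampling rate, plane spacing, and signals are synthetic assumptions; the 4D spatiotemporal total variation reconstruction itself is not implemented.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1000.0           # sampling rate in Hz (assumed)
plane_spacing = 0.05  # axial distance between the two measurement planes in m (assumed)

# Synthetic conductivity-like signals: the downstream voxel sees a delayed, noisy copy.
n = 2000
true_delay = 0.02     # seconds
upstream = rng.standard_normal(n)
shift = int(true_delay * fs)
downstream = np.concatenate([np.zeros(shift), upstream[:-shift]]) + 0.2 * rng.standard_normal(n)

# Cross-correlate the two signals and find the lag of the maximum.
corr = np.correlate(downstream - downstream.mean(), upstream - upstream.mean(), mode="full")
lags = np.arange(-n + 1, n)
best_lag = lags[np.argmax(corr)]

velocity = plane_spacing / (best_lag / fs)
print(f"estimated transit time: {best_lag / fs:.3f} s, velocity: {velocity:.2f} m/s")
```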

  14. Symptoms of cybersex addiction can be linked to both approaching and avoiding pornographic stimuli: Results from an analogue sample of regular cybersex users

    Directory of Open Access Journals (Sweden)

    Jan eSnagowski

    2015-05-01

    The phenomenology, classification, and diagnostic criteria of cybersex addiction remain controversial. Some approaches point towards similarities to substance dependencies for which approach/avoidance tendencies are crucial mechanisms. Several researchers have argued that within an addiction-related decision situation, individuals might either show tendencies to approach or avoid addiction-related stimuli. In the current study 123 heterosexual males completed an Approach-Avoidance-Task (AAT; Rinck & Becker, 2007) modified with pornographic pictures. During the AAT participants either had to push pornographic stimuli away or pull them towards themselves with a joystick. Sensitivity towards sexual excitation, problematic sexual behavior, and tendencies towards cybersex addiction were assessed with questionnaires. Results showed that individuals with tendencies towards cybersex addiction tended to either approach or avoid pornographic stimuli. Additionally, moderated regression analyses revealed that individuals with high sexual excitation and problematic sexual behavior who showed high approach/avoidance tendencies, reported higher symptoms of cybersex addiction. Analogous to substance dependencies, results suggest that both approach and avoidance tendencies might play a role in cybersex addiction. Moreover, an interaction with sensitivity towards sexual excitation and problematic sexual behavior could have an accumulating effect on the severity of subjective complaints in everyday life due to cybersex use. The findings provide further empirical evidence for similarities between cybersex addiction and substance dependencies. Such similarities could be retraced to a comparable neural processing of cybersex- and drug-related cues.

  15. Symptoms of cybersex addiction can be linked to both approaching and avoiding pornographic stimuli: results from an analog sample of regular cybersex users.

    Science.gov (United States)

    Snagowski, Jan; Brand, Matthias

    2015-01-01

    There is no consensus regarding the phenomenology, classification, and diagnostic criteria of cybersex addiction. Some approaches point toward similarities to substance dependencies for which approach/avoidance tendencies are crucial mechanisms. Several researchers have argued that within an addiction-related decision situation, individuals might either show tendencies to approach or avoid addiction-related stimuli. In the current study 123 heterosexual males completed an Approach-Avoidance-Task (AAT; Rinck and Becker, 2007) modified with pornographic pictures. During the AAT participants either had to push pornographic stimuli away or pull them toward themselves with a joystick. Sensitivity toward sexual excitation, problematic sexual behavior, and tendencies toward cybersex addiction were assessed with questionnaires. Results showed that individuals with tendencies toward cybersex addiction tended to either approach or avoid pornographic stimuli. Additionally, moderated regression analyses revealed that individuals with high sexual excitation and problematic sexual behavior who showed high approach/avoidance tendencies, reported higher symptoms of cybersex addiction. Analogous to substance dependencies, results suggest that both approach and avoidance tendencies might play a role in cybersex addiction. Moreover, an interaction with sensitivity toward sexual excitation and problematic sexual behavior could have an accumulating effect on the severity of subjective complaints in everyday life due to cybersex use. The findings provide further empirical evidence for similarities between cybersex addiction and substance dependencies. Such similarities could be retraced to a comparable neural processing of cybersex- and drug-related cues.

  16. Comparison of Gini index and Tamura coefficient for holographic autofocusing based on the edge sparsity of the complex optical wavefront

    KAUST Repository

    Tamamitsu, Miu; Zhang, Yibo; Wang, Hongda; Wu, Yichen; Ozcan, Aydogan

    2017-01-01

    of sparsity metrics used in SoG, specifically, the Gini index (GI) and the Tamura coefficient (TC), for holographic autofocusing on dense/connected or sparse samples. We provide a theoretical analysis predicting that for uniformly distributed image data, TC
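
    Although the record above is truncated, the two sparsity metrics it compares have widely used forms; the sketch below follows the common definitions (the Hurley-Rickard Gini index of a coefficient vector and the Tamura coefficient as the square root of the standard-deviation-to-mean ratio), which are assumptions here rather than details taken from the paper.

```python
import numpy as np

def gini_index(c):
    """Gini index of a coefficient vector (Hurley-Rickard form); values near 1 mean very sparse."""
    a = np.sort(np.abs(np.ravel(c)))
    N = a.size
    k = np.arange(1, N + 1)
    return 1.0 - 2.0 * np.sum((a / a.sum()) * (N - k + 0.5) / N)

def tamura_coefficient(img):
    """Tamura coefficient: sqrt(std / mean) of a non-negative image or magnitude map."""
    v = np.abs(np.ravel(img))
    return np.sqrt(v.std() / v.mean())

rng = np.random.default_rng(0)
sparse_signal = np.zeros(1024); sparse_signal[rng.choice(1024, 10)] = rng.standard_normal(10)
dense_signal = rng.standard_normal(1024)

print("GI sparse vs dense:", gini_index(sparse_signal), gini_index(dense_signal))
print("TC sparse vs dense:", tamura_coefficient(sparse_signal), tamura_coefficient(dense_signal))
```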

  17. SMOS images restoration from L1A data: A sparsity-based variational approach

    OpenAIRE

    Preciozzi, J.; Musé, Pablo; Almansa, A.; Durand, Sylvain; Khazaal, Ali; Rougé, B.

    2014-01-01

    Data degradation by radio frequency interferences (RFI) is one of the major challenges that SMOS and other interferometric radiometer missions have to face. Although a great number of the illegal emitters have been turned off since the mission was launched, not all of the sources have been completely removed. Moreover, previously acquired data are already corrupted by these RFI. Thus, the recovery of brightness temperature from corrupted data by image restoration techniques ...

  18. Landmark matching based retinal image alignment by enforcing sparsity in correspondence matrix.

    Science.gov (United States)

    Zheng, Yuanjie; Daniel, Ebenezer; Hunter, Allan A; Xiao, Rui; Gao, Jianbin; Li, Hongsheng; Maguire, Maureen G; Brainard, David H; Gee, James C

    2014-08-01

    Retinal image alignment is fundamental to many applications in diagnosis of eye diseases. In this paper, we address the problem of landmark matching based retinal image alignment. We propose a novel landmark matching formulation by enforcing sparsity in the correspondence matrix and offer its solutions based on linear programming. The proposed formulation not only enables a joint estimation of the landmark correspondences and a predefined transformation model but also combines the benefits of the softassign strategy (Chui and Rangarajan, 2003) and the combinatorial optimization of linear programming. We also introduce a set of reinforced self-similarity descriptors that better characterize local photometric and geometric properties of the retinal image. Theoretical analysis and experimental results with both fundus color images and angiogram images show the superior performance of our algorithm compared with several state-of-the-art techniques. Copyright © 2013 Elsevier B.V. All rights reserved.
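
    A small linear-programming sketch in the spirit of the correspondence-matrix estimation is given below: a relaxed doubly-stochastic correspondence matrix is found by minimizing a pairwise distance cost with SciPy's linprog. The landmark sets are synthetic, and the paper's full formulation (joint transformation estimation, sparsity-enforcing terms, and the self-similarity descriptors) is not reproduced.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Toy landmark sets: target points are a shuffled, jittered copy of the source points.
n = 5
src = rng.random((n, 2))
perm = rng.permutation(n)
dst = src[perm] + 0.01 * rng.standard_normal((n, 2))

# Matching cost: pairwise distances between source and target landmarks.
cost = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=-1).ravel()

# Relaxed correspondence matrix: rows and columns sum to 1, entries in [0, 1].
A_eq = np.zeros((2 * n, n * n))
for i in range(n):
    A_eq[i, i * n:(i + 1) * n] = 1.0   # each source point is matched exactly once
    A_eq[n + i, i::n] = 1.0            # each target point is matched exactly once
b_eq = np.ones(2 * n)

res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1), method="highs")
C = res.x.reshape(n, n)
print("recovered matches (source -> target):   ", C.argmax(axis=1))
print("ground-truth matches (source -> target):", np.argsort(perm))
```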

  19. Activity of invasive slug Limax maximus in relation to climate conditions based on citizen's observations and novel regularization based statistical approaches.

    Science.gov (United States)

    Morii, Yuta; Ohkubo, Yusaku; Watanabe, Sanae

    2018-05-13

    Citizen science is a powerful tool that can be used to resolve the problems of introduced species. An amateur naturalist and author of this paper, S. Watanabe, recorded the total number of Limax maximus (Limacidae, Pulmonata) individuals along a fixed census route almost every day for two years on Hokkaido Island, Japan. L. maximus is an invasive slug considered a pest species of horticultural and agricultural crops. We investigated how weather conditions were correlated with the intensity of slug activity using recently developed statistical analyses, Bayesian regularization regression with comparisons among Laplace, Horseshoe and Horseshoe+ priors, for the first time in ecology. The slug counts were compared with meteorological data from 5:00 in the morning on the day of observation (OT- and OD-models) and the day before observation (DBOD-models). The OT- and OD-models were more supported than the DBOD-models based on the WAIC scores, and the meteorological predictors selected in the OT-, OD- and DBOD-models were different. The probability of slug appearance was increased on mornings with higher than 20-year-average humidity (%) and lower than average wind velocity (m/s) and precipitation (mm) values in the OT-models. OD-models showed a pattern similar to OT-models in the probability of slug appearance, but also suggested other meteorological predictors for slug activities, for example a positive effect of solar radiation (MJ). Five meteorological predictors, mean and highest temperature (°C), wind velocity (m/s), precipitation amount (mm) and atmospheric pressure (hPa), were selected as the effective factors for the counts in the DBOD-models. Therefore, the DBOD-models will be valuable for the prediction of slug activity in the future, much like a weather forecast. Copyright © 2018 Elsevier B.V. All rights reserved.

  20. Diagrammatic methods in phase-space regularization

    International Nuclear Information System (INIS)

    Bern, Z.; Halpern, M.B.; California Univ., Berkeley

    1987-11-01

    Using the scalar prototype and gauge theory as the simplest possible examples, diagrammatic methods are developed for the recently proposed phase-space form of continuum regularization. A number of one-loop and all-order applications are given, including general diagrammatic discussions of the nogrowth theorem and the uniqueness of the phase-space stochastic calculus. The approach also generates an alternate derivation of the equivalence of the large-β phase-space regularization to the more conventional coordinate-space regularization. (orig.)

  1. Diverse Regular Employees and Non-regular Employment (Japanese)

    OpenAIRE

    MORISHIMA Motohiro

    2011-01-01

    Currently there are high expectations for the introduction of policies related to diverse regular employees. These policies are a response to the problem of disparities between regular and non-regular employees (part-time, temporary, contract and other non-regular employees) and will make it more likely that workers can balance work and their private lives while companies benefit from the advantages of regular employment. In this paper, I look at two issues that underlie this discussion. The ...

  2. Sparse structure regularized ranking

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-04-17

    Learning ranking scores is critical for the multimedia database retrieval problem. In this paper, we propose a novel ranking score learning algorithm by exploring the sparse structure and using it to regularize ranking scores. To explore the sparse structure, we assume that each multimedia object can be represented as a sparse linear combination of all other objects; the combination coefficients are regarded as a similarity measure between objects and used to regularize their ranking scores. Moreover, we propose to learn the sparse combination coefficients and the ranking scores simultaneously. A unified objective function is constructed with regard to both the combination coefficients and the ranking scores, and is optimized by an iterative algorithm. Experiments on two multimedia database retrieval data sets demonstrate the significant improvements of the proposed algorithm over state-of-the-art ranking score learning algorithms.
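
    The two ingredients described above can be sketched as follows, assuming scikit-learn is available; the Lasso step, the symmetrization of the coefficients and the closed-form score update are illustrative simplifications of the paper's joint iterative optimization, not its exact algorithm.

        import numpy as np
        from sklearn.linear_model import Lasso

        def sparse_similarity(X, alpha=0.05):
            """Code each object (row of X) as a sparse combination of all other objects."""
            n = X.shape[0]
            W = np.zeros((n, n))
            for i in range(n):
                others = np.delete(np.arange(n), i)
                lasso = Lasso(alpha=alpha, max_iter=5000)
                lasso.fit(X[others].T, X[i])           # columns = the other objects
                W[i, others] = np.abs(lasso.coef_)     # coefficients used as similarities
            return 0.5 * (W + W.T)                     # symmetrize

        def regularized_scores(W, y, lam=1.0):
            """Ranking scores that fit the query relevance y while varying smoothly
            over the sparse-coding graph W (closed-form graph-ridge solution)."""
            L = np.diag(W.sum(axis=1)) - W             # graph Laplacian
            return np.linalg.solve(np.eye(len(y)) + lam * L, y)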

  3. 'Regular' and 'emergency' repair

    International Nuclear Information System (INIS)

    Luchnik, N.V.

    1975-01-01

    Experiments on the combined action of radiation and a DNA inhibitor using Crepis roots and on split-dose irradiation of human lymphocytes lead to the conclusion that there are two types of repair. The 'regular' repair takes place twice in each mitotic cycle and ensures the maintenance of genetic stability. The 'emergency' repair is induced at all stages of the mitotic cycle by high levels of injury. (author)

  4. Regularization of divergent integrals

    OpenAIRE

    Felder, Giovanni; Kazhdan, David

    2016-01-01

    We study the Hadamard finite part of divergent integrals of differential forms with singularities on submanifolds. We give formulae for the dependence of the finite part on the choice of regularization and express them in terms of a suitable local residue map. The cases where the submanifold is a complex hypersurface in a complex manifold and where it is a boundary component of a manifold with boundary, arising in string perturbation theory, are treated in more detail.

  5. Online Manifold Regularization by Dual Ascending Procedure

    OpenAIRE

    Sun, Boliang; Li, Guohui; Jia, Li; Zhang, Hui

    2013-01-01

    We propose a novel online manifold regularization framework based on the notion of duality in constrained optimization. The Fenchel conjugate of hinge functions is a key to transfer manifold regularization from offline to online in this paper. Our algorithms are derived by gradient ascent in the dual function. For practical purpose, we propose two buffering strategies and two sparse approximations to reduce the computational complexity. Detailed experiments verify the utility of our approache...

  6. Regular Single Valued Neutrosophic Hypergraphs

    Directory of Open Access Journals (Sweden)

    Muhammad Aslam Malik

    2016-12-01

    In this paper, we define the regular and totally regular single valued neutrosophic hypergraphs, and discuss the order and size along with properties of regular and totally regular single valued neutrosophic hypergraphs. We also extend work on completeness of single valued neutrosophic hypergraphs.

  7. The geometry of continuum regularization

    International Nuclear Information System (INIS)

    Halpern, M.B.

    1987-03-01

    This lecture is primarily an introduction to coordinate-invariant regularization, a recent advance in the continuum regularization program. In this context, the program is seen as fundamentally geometric, with all regularization contained in regularized DeWitt superstructures on field deformations

  8. Constraining surface emissions of air pollutants using inverse modelling: method intercomparison and a new two-step two-scale regularization approach

    Energy Technology Data Exchange (ETDEWEB)

    Saide, Pablo (CGRER, Center for Global and Regional Environmental Research, Univ. of Iowa, Iowa City, IA (United States)), e-mail: pablo-saide@uiowa.edu; Bocquet, Marc (Universite Paris-Est, CEREA Joint Laboratory Ecole des Ponts ParisTech and EDF RandD, Champs-sur-Marne (France); INRIA, Paris Rocquencourt Research Center (France)); Osses, Axel (Departamento de Ingeniera Matematica, Universidad de Chile, Santiago (Chile); Centro de Modelamiento Matematico, UMI 2807/Universidad de Chile-CNRS, Santiago (Chile)); Gallardo, Laura (Centro de Modelamiento Matematico, UMI 2807/Universidad de Chile-CNRS, Santiago (Chile); Departamento de Geofisica, Universidad de Chile, Santiago (Chile))

    2011-07-15

    When constraining surface emissions of air pollutants using inverse modelling one often encounters spurious corrections to the inventory at places where emissions and observations are colocated, referred to here as the colocalization problem. Several approaches have been used to deal with this problem: coarsening the spatial resolution of emissions; adding spatial correlations to the covariance matrices; adding constraints on the spatial derivatives into the functional being minimized; and multiplying the emission error covariance matrix by weighting factors. Intercomparison of methods for a carbon monoxide inversion over a city shows that even though all methods diminish the colocalization problem and produce similar general patterns, detailed information can greatly change according to the method used ranging from smooth, isotropic and short range modifications to not so smooth, non-isotropic and long range modifications. Poisson (non-Gaussian) and Gaussian assumptions both show these patterns, but for the Poisson case the emissions are naturally restricted to be positive and changes are given by means of multiplicative correction factors, producing results closer to the true nature of emission errors. Finally, we propose and test a new two-step, two-scale, fully Bayesian approach that deals with the colocalization problem and can be implemented for any prior density distribution

  9. Annotation of Regular Polysemy

    DEFF Research Database (Denmark)

    Martinez Alonso, Hector

    Regular polysemy has received a lot of attention from the theory of lexical semantics and from computational linguistics. However, there is no consensus on how to represent the sense of underspecified examples at the token level, namely when annotating or disambiguating senses of metonymic words...... and metonymic. We have conducted an analysis in English, Danish and Spanish. Later on, we have tried to replicate the human judgments by means of unsupervised and semi-supervised sense prediction. The automatic sense-prediction systems have been unable to find empiric evidence for the underspecified sense, even...

  10. Regularity of Minimal Surfaces

    CERN Document Server

    Dierkes, Ulrich; Tromba, Anthony J; Kuster, Albrecht

    2010-01-01

    "Regularity of Minimal Surfaces" begins with a survey of minimal surfaces with free boundaries. Following this, the basic results concerning the boundary behaviour of minimal surfaces and H-surfaces with fixed or free boundaries are studied. In particular, the asymptotic expansions at interior and boundary branch points are derived, leading to general Gauss-Bonnet formulas. Furthermore, gradient estimates and asymptotic expansions for minimal surfaces with only piecewise smooth boundaries are obtained. One of the main features of free boundary value problems for minimal surfaces is t

  11. Regularities of radiation heredity

    International Nuclear Information System (INIS)

    Skakov, M.K.; Melikhov, V.D.

    2001-01-01

    The regularities of radiation heredity in metals and alloys are analyzed. It is concluded that irradiation induces thermodynamically irreversible changes in the structure of materials. Possible ways of transmitting radiation effects through high-temperature transformations in the materials are proposed. The phenomenon of radiation heredity may be put to practical use to control the structure of liquid metal and, correspondingly, the structure of the ingot via preliminary radiation treatment of the charge. Concentration microheterogeneities in the material defect structure induced by preliminary irradiation represent the genetic factor of radiation heredity. (in Russian)

  12. Geometric regularizations and dual conifold transitions

    International Nuclear Information System (INIS)

    Landsteiner, Karl; Lazaroiu, Calin I.

    2003-01-01

    We consider a geometric regularization for the class of conifold transitions relating D-brane systems on noncompact Calabi-Yau spaces to certain flux backgrounds. This regularization respects the SL(2,Z) invariance of the flux superpotential, and allows for computation of the relevant periods through the method of Picard-Fuchs equations. The regularized geometry is a noncompact Calabi-Yau which can be viewed as a monodromic fibration, with the nontrivial monodromy being induced by the regulator. It reduces to the original, non-monodromic background when the regulator is removed. Using this regularization, we discuss the simple case of the local conifold, and show how the relevant field-theoretic information can be extracted in this approach. (author)

  13. A robust holographic autofocusing criterion based on edge sparsity: comparison of Gini index and Tamura coefficient for holographic autofocusing based on the edge sparsity of the complex optical wavefront

    Science.gov (United States)

    Tamamitsu, Miu; Zhang, Yibo; Wang, Hongda; Wu, Yichen; Ozcan, Aydogan

    2018-02-01

    The Sparsity of the Gradient (SoG) is a robust autofocusing criterion for holography, where the gradient modulus of the complex refocused hologram is calculated, on which a sparsity metric is applied. Here, we compare two different choices of sparsity metrics used in SoG, specifically, the Gini index (GI) and the Tamura coefficient (TC), for holographic autofocusing on dense/connected or sparse samples. We provide a theoretical analysis predicting that for uniformly distributed image data, TC and GI exhibit similar behavior, while for naturally sparse images containing few high-valued signal entries and many low-valued noisy background pixels, TC is more sensitive to distribution changes in the signal and more resistant to background noise. These predictions are also confirmed by experimental results using SoG-based holographic autofocusing on dense and connected samples (such as stained breast tissue sections) as well as highly sparse samples (such as isolated Giardia lamblia cysts). Through these experiments, we found that ToG and GoG (TC and GI computed on the gradient modulus, respectively) offer almost identical autofocusing performance on dense and connected samples, whereas for naturally sparse samples, GoG should be calculated on a relatively small region of interest (ROI) closely surrounding the object, while ToG offers more flexibility in choosing a larger ROI containing more background pixels.
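
    For reference, both sparsity metrics can be computed from the gradient modulus of a refocused hologram as sketched below; this assumes the usual definitions (Tamura coefficient as the square root of the standard-deviation-to-mean ratio, Gini index of the sorted magnitudes) and is not the authors' code. The axial search over refocusing distances is omitted.

        import numpy as np

        def gradient_modulus(img):
            gy, gx = np.gradient(img.astype(float))
            return np.hypot(gx, gy)

        def tamura_coefficient(c):
            c = np.abs(c).ravel()
            return np.sqrt(c.std() / c.mean())

        def gini_index(c):
            c = np.sort(np.abs(c).ravel())             # ascending order
            n = c.size
            k = np.arange(1, n + 1)
            return 1.0 - 2.0 * np.sum((c / c.sum()) * (n - k + 0.5) / n)

        # SoG focus scores for a refocused hologram amplitude `img`:
        # ToG = tamura_coefficient(gradient_modulus(img))
        # GoG = gini_index(gradient_modulus(img))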

  14. ℓ2,1 Norm and Hessian Regularized Non-Negative Matrix Factorization with Discriminability for Data Representation

    Directory of Open Access Journals (Sweden)

    Peng Luo

    2017-09-01

    Matrix factorization based methods have been widely used in data representation. Among them, Non-negative Matrix Factorization (NMF) is a promising technique owing to its psychological and physiological interpretation of spontaneously occurring data. On the one hand, although traditional Laplacian regularization can enhance the performance of NMF, it still suffers from weak extrapolating ability. On the other hand, standard NMF disregards the discriminative information hidden in the data and cannot guarantee the sparsity of the factor matrices. In this paper, a novel algorithm called ℓ2,1 norm and Hessian Regularized Non-negative Matrix Factorization with Discriminability (ℓ2,1-HNMFD) is developed to overcome the aforementioned problems. In ℓ2,1-HNMFD, Hessian regularization is introduced in the framework of NMF to capture the intrinsic manifold structure of the data. ℓ2,1 norm constraints and approximate orthogonality constraints are added to ensure the group sparsity of the encoding matrix and to characterize the discriminative information of the data simultaneously. An efficient optimization scheme is developed to solve the objective function. Our experimental results on five benchmark data sets demonstrate that ℓ2,1-HNMFD can learn better data representations and provide better clustering results.

  15. Correlation-constrained and sparsity-controlled vector autoregressive model for spatio-temporal wind power forecasting

    DEFF Research Database (Denmark)

    Zhao, Yongning; Ye, Lin; Pinson, Pierre

    2018-01-01

    The ever-increasing number of wind farms has brought both challenges and opportunities in the development of wind power forecasting techniques to take advantage of interdependencies between tens or hundreds of spatially distributed wind farms, e.g., over a region. In this paper, a Sparsity-Controlled Vector Autoregressive (SC-VAR) model is proposed ... to control the sparsity of the coefficient matrices in a direct manner. However, this original SC-VAR is difficult to implement due to its complicated constraints and the lack of guidelines for setting its parameters. To reduce the complexity of this MINLP and to make it possible to incorporate prior expert knowledge to benefit model building ...

  16. Comparison of Gini index and Tamura coefficient for holographic autofocusing based on the edge sparsity of the complex optical wavefront

    KAUST Repository

    Tamamitsu, Miu

    2017-08-27

    The Sparsity of the Gradient (SoG) is a robust autofocusing criterion for holography, where the gradient modulus of the complex refocused hologram is calculated, on which a sparsity metric is applied. Here, we compare two different choices of sparsity metrics used in SoG, specifically, the Gini index (GI) and the Tamura coefficient (TC), for holographic autofocusing on dense/connected or sparse samples. We provide a theoretical analysis predicting that for uniformly distributed image data, TC and GI exhibit similar behavior, while for naturally sparse images containing few high-valued signal entries and many low-valued noisy background pixels, TC is more sensitive to distribution changes in the signal and more resistant to background noise. These predictions are also confirmed by experimental results using SoG-based holographic autofocusing on dense and connected samples (such as stained breast tissue sections) as well as highly sparse samples (such as isolated Giardia lamblia cysts). Through these experiments, we found that ToG and GoG (TC and GI computed on the gradient modulus, respectively) offer almost identical autofocusing performance on dense and connected samples, whereas for naturally sparse samples, GoG should be calculated on a relatively small region of interest (ROI) closely surrounding the object, while ToG offers more flexibility in choosing a larger ROI containing more background pixels.

  17. Effective field theory dimensional regularization

    International Nuclear Information System (INIS)

    Lehmann, Dirk; Prezeau, Gary

    2002-01-01

    A Lorentz-covariant regularization scheme for effective field theories with an arbitrary number of propagating heavy and light particles is given. This regularization scheme leaves the low-energy analytic structure of Greens functions intact and preserves all the symmetries of the underlying Lagrangian. The power divergences of regularized loop integrals are controlled by the low-energy kinematic variables. Simple diagrammatic rules are derived for the regularization of arbitrary one-loop graphs and the generalization to higher loops is discussed

  18. Effective field theory dimensional regularization

    Science.gov (United States)

    Lehmann, Dirk; Prézeau, Gary

    2002-01-01

    A Lorentz-covariant regularization scheme for effective field theories with an arbitrary number of propagating heavy and light particles is given. This regularization scheme leaves the low-energy analytic structure of Greens functions intact and preserves all the symmetries of the underlying Lagrangian. The power divergences of regularized loop integrals are controlled by the low-energy kinematic variables. Simple diagrammatic rules are derived for the regularization of arbitrary one-loop graphs and the generalization to higher loops is discussed.

  19. Spectrum Sensing and Primary User Localization in Cognitive Radio Networks via Sparsity

    Directory of Open Access Journals (Sweden)

    Lanchao Liu

    2016-01-01

    The theory of compressive sensing (CS) has recently been employed to detect available spectrum resources in cognitive radio (CR) networks. Capitalizing on spectrum underutilization and the spatial sparsity of primary user (PU) locations, CS enables the identification of unused spectrum bands and PU locations at a low sampling rate. Although CS has been studied in the cooperative spectrum sensing mechanism, in which CR nodes work collaboratively to accomplish the spectrum sensing and PU localization task, many important issues remain unsettled. Does the designed compressive spectrum sensing mechanism satisfy the Restricted Isometry Property, which guarantees a successful recovery of the original sparse signal? Can the spectrum sensing results help the localization of PUs? What are the characteristics of localization errors? To answer these questions, we justify the applicability of CS theory to the compressive spectrum sensing framework in this paper, and propose a design for PU localization utilizing the spectrum usage information. The localization error is analyzed via the Cramér-Rao lower bound, which can be exploited to improve the localization performance. Detailed analysis and simulations are presented to support the claims and demonstrate the efficacy and efficiency of the proposed mechanism.

  20. Gear fault diagnosis based on the structured sparsity time-frequency analysis

    Science.gov (United States)

    Sun, Ruobin; Yang, Zhibo; Chen, Xuefeng; Tian, Shaohua; Xie, Yong

    2018-03-01

    Over the last decade, sparse representation has become a powerful paradigm in mechanical fault diagnosis due to its excellent capability and the high flexibility for complex signal description. The structured sparsity time-frequency analysis (SSTFA) is a novel signal processing method, which utilizes mixed-norm priors on time-frequency coefficients to obtain a fine match for the structure of signals. In order to extract the transient feature from gear vibration signals, a gear fault diagnosis method based on SSTFA is proposed in this work. The steady modulation components and impulsive components of the defective gear vibration signals can be extracted simultaneously by choosing different time-frequency neighborhood and generalized thresholding operators. Besides, the time-frequency distribution with high resolution is obtained by piling different components in the same diagram. The diagnostic conclusion can be made according to the envelope spectrum of the impulsive components or by the periodicity of impulses. The effectiveness of the method is verified by numerical simulations, and the vibration signals registered from a gearbox fault simulator and a wind turbine. To validate the efficiency of the presented methodology, comparisons are made among some state-of-the-art vibration separation methods and the traditional time-frequency analysis methods. The comparisons show that the proposed method possesses advantages in separating feature signals under strong noise and accounting for the inner time-frequency structure of the gear vibration signals.

  1. 75 FR 76006 - Regular Meeting

    Science.gov (United States)

    2010-12-07

    ... FARM CREDIT SYSTEM INSURANCE CORPORATION Regular Meeting AGENCY: Farm Credit System Insurance Corporation Board. ACTION: Regular meeting. SUMMARY: Notice is hereby given of the regular meeting of the Farm Credit System Insurance Corporation Board (Board). Date and Time: The meeting of the Board will be held...

  2. Ensemble manifold regularization.

    Science.gov (United States)

    Geng, Bo; Tao, Dacheng; Xu, Chao; Yang, Linjun; Hua, Xian-Sheng

    2012-06-01

    We propose an automatic approximation of the intrinsic manifold for general semi-supervised learning (SSL) problems. Unfortunately, it is not trivial to define an optimization function to obtain optimal hyperparameters. Usually, cross validation is applied, but it does not necessarily scale up. Other problems derive from the suboptimality incurred by discrete grid search and the overfitting. Therefore, we develop an ensemble manifold regularization (EMR) framework to approximate the intrinsic manifold by combining several initial guesses. Algorithmically, we designed EMR carefully so it 1) learns both the composite manifold and the semi-supervised learner jointly, 2) is fully automatic for learning the intrinsic manifold hyperparameters implicitly, 3) is conditionally optimal for intrinsic manifold approximation under a mild and reasonable assumption, and 4) is scalable for a large number of candidate manifold hyperparameters, from both time and space perspectives. Furthermore, we prove the convergence property of EMR to the deterministic matrix at rate root-n. Extensive experiments over both synthetic and real data sets demonstrate the effectiveness of the proposed framework.

  3. Regularization and Complexity Control in Feed-forward Networks

    OpenAIRE

    Bishop, C. M.

    1995-01-01

    In this paper we consider four alternative approaches to complexity control in feed-forward networks based respectively on architecture selection, regularization, early stopping, and training with noise. We show that there are close similarities between these approaches and we argue that, for most practical applications, the technique of regularization should be the method of choice.

  4. MRI-based liver iron content determination at 3 T in regularly transfused patients by signal intensity ratio using an alternative analysis approach based on R{sub 2}* theory

    Energy Technology Data Exchange (ETDEWEB)

    Wunderlich, A.P. [Universitaetsklinikum Ulm (Germany). Section for Experimental Radiology; Cario, H. [Universitaetsklinik Ulm (Germany). Children' s Hospital; Bommer, M. [Alb-Fils-Clinics, Goeppingen (Germany). Hematology and Oncology Dept.; Beer, M.; Schmidt, S.A. [Universitaetsklinikum Ulm (Germany). Dept. for Diagnostic and Interventional Radiology; Juchems, M.S. [Klinikum Konstanz (Germany). Diagnostic and Interventional Radiology

    2016-09-15

    To evaluate the feasibility of assessing liver iron content (LIC) in regularly transfused patients by MR imaging at 3 T based on the signal intensity ratio (SIR), an innovative data analysis approach was developed. 47 consecutive examinations of regularly transfused patients were included; in all cases, high LIC levels were expected. Patients were scanned with MRI at 3 T with multi-echo gradient echo (GRE) sequences at four different flip angles between 20° and 90°, with echo times (TE) ranging from 0.9 to 9.8 ms. Spin-echo protocols were acquired to determine the LIC with a reference MRI method working at 1.5 T. The 3 T GRE data were analyzed using the liver-to-muscle SIR. Since the method known for 1.5 T was not expected to be applicable to 3 T data, the theoretical dependence of the SIR on the LIC was derived from the equation describing R2* signal decay. The obtained SIR values were correlated with the reference LIC to obtain a relation for calculating LIC from SIR quantities. LIC values and their uncertainties were determined from the GRE data and correlated with the LIC reference values. For two LIC thresholds, the diagnostic accuracy was determined. LIC was reliably determined from the SIR in our patient cohort, even for large LIC values. The median of the LIC uncertainties was 10%, and the diagnostic accuracy was 0.92 and 0.91, respectively. Determination of even high LIC, resulting in small SIR values, is feasible at 3 T using an appropriate SIR analysis.
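
    The SIR analysis rests on mono-exponential R2* decay of the gradient-echo signal: with muscle essentially iron-free, log(SIR) falls roughly linearly with TE at a rate set by the difference between liver and muscle R2*. A minimal sketch of such a fit is given below; the calibration coefficients mapping R2* to LIC are placeholders and would have to be established against a reference method, as in the study.

        import numpy as np

        def effective_r2star_from_sir(te_ms, sir):
            """Fit log(SIR) = log(A) - TE * dR2star over the multi-echo data;
            dR2star approximates liver R2* minus muscle R2*, in 1/s."""
            te_s = np.asarray(te_ms, dtype=float) / 1000.0
            slope, _intercept = np.polyfit(te_s, np.log(np.asarray(sir, dtype=float)), 1)
            return -slope

        def lic_from_r2star(r2star, a=0.03, b=0.2):
            """Hypothetical linear calibration LIC = a * R2* + b; the coefficients
            here are placeholders, not the values derived in the study."""
            return a * r2star + b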

  5. Infrared dim moving target tracking via sparsity-based discriminative classifier and convolutional network

    Science.gov (United States)

    Qian, Kun; Zhou, Huixin; Wang, Bingjian; Song, Shangzhen; Zhao, Dong

    2017-11-01

    Infrared dim and small target tracking is a highly challenging task. The main challenge for target tracking is to account for the appearance change of an object which is submerged in a cluttered background. An efficient appearance model that exploits both the global template and a local representation over infrared image sequences is constructed for dim moving target tracking. A Sparsity-based Discriminative Classifier (SDC) and a Convolutional Network-based Generative Model (CNGM) are combined with a prior model. In the SDC model, a sparse representation-based algorithm is adopted to calculate the confidence value that assigns more weight to target templates than to negative background templates. In the CNGM model, simple cell feature maps are obtained by calculating the convolution between target templates and fixed filters, which are extracted from the target region in the first frame. These maps measure similarities between each filter and local intensity patterns across the target template, thereby encoding its local structural information. Then all the maps form a representation preserving the inner geometric layout of a candidate template. Furthermore, the fixed target template set is processed via an efficient prior model. The same operation is applied to candidate templates in the CNGM model. The online update scheme not only accounts for appearance variations but also alleviates the migration problem. Finally, collaborative confidence values of particles are utilized to generate the particles' importance weights. Experiments on various infrared sequences have validated the tracking capability of the presented algorithm. Experimental results show that this algorithm runs in real time and provides higher accuracy than state-of-the-art algorithms.

  6. Improving Weak Lensing Mass Map Reconstructions using Gaussian and Sparsity Priors: Application to DES SV

    Science.gov (United States)

    Jeffrey, N.; Abdalla, F. B.; Lahav, O.; Lanusse, F.; Starck, J.-L.; Leonard, A.; Kirk, D.; Chang, C.; Baxter, E.; Kacprzak, T.; Seitz, S.; Vikram, V.; Whiteway, L.; Abbott, T. M. C.; Allam, S.; Avila, S.; Bertin, E.; Brooks, D.; Rosell, A. Carnero; Kind, M. Carrasco; Carretero, J.; Castander, F. J.; Crocce, M.; Cunha, C. E.; D'Andrea, C. B.; da Costa, L. N.; Davis, C.; De Vicente, J.; Desai, S.; Doel, P.; Eifler, T. F.; Evrard, A. E.; Flaugher, B.; Fosalba, P.; Frieman, J.; García-Bellido, J.; Gerdes, D. W.; Gruen, D.; Gruendl, R. A.; Gschwend, J.; Gutierrez, G.; Hartley, W. G.; Honscheid, K.; Hoyle, B.; James, D. J.; Jarvis, M.; Kuehn, K.; Lima, M.; Lin, H.; March, M.; Melchior, P.; Menanteau, F.; Miquel, R.; Plazas, A. A.; Reil, K.; Roodman, A.; Sanchez, E.; Scarpine, V.; Schubnell, M.; Sevilla-Noarbe, I.; Smith, M.; Soares-Santos, M.; Sobreira, F.; Suchyta, E.; Swanson, M. E. C.; Tarle, G.; Thomas, D.; Walker, A. R.

    2018-05-01

    Mapping the underlying density field, including non-visible dark matter, using weak gravitational lensing measurements is now a standard tool in cosmology. Due to its importance to the science results of current and upcoming surveys, the quality of the convergence reconstruction methods should be well understood. We compare three methods: Kaiser-Squires (KS), Wiener filter, and GLIMPSE. KS is a direct inversion, not accounting for survey masks or noise. The Wiener filter is well-motivated for Gaussian density fields in a Bayesian framework. GLIMPSE uses sparsity, aiming to reconstruct non-linearities in the density field. We compare these methods with several tests using public Dark Energy Survey (DES) Science Verification (SV) data and realistic DES simulations. The Wiener filter and GLIMPSE offer substantial improvements over smoothed KS with a range of metrics. Both the Wiener filter and GLIMPSE convergence reconstructions show a 12% improvement in Pearson correlation with the underlying truth from simulations. To compare the mapping methods' abilities to find mass peaks, we measure the difference between peak counts from simulated ΛCDM shear catalogues and catalogues with no mass fluctuations (a standard data vector when inferring cosmology from peak statistics); the maximum signal-to-noise of these peak statistics is increased by a factor of 3.5 for the Wiener filter and 9 for GLIMPSE. With simulations we measure the reconstruction of the harmonic phases; the phase residuals' concentration is improved 17% by GLIMPSE and 18% by the Wiener filter. The correlation between reconstructions from data and foreground redMaPPer clusters is increased 18% by the Wiener filter and 32% by GLIMPSE.
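
    Of the three methods, the Kaiser-Squires step is simple enough to sketch: on the flat sky it is a direct Fourier-space rescaling of the shear field. The snippet below assumes periodic, evenly gridded shear maps and one common sign convention for the shear components; it is an illustration, not the pipeline used in the paper.

        import numpy as np

        def kaiser_squires(gamma1, gamma2):
            """Flat-sky Kaiser-Squires inversion from shear (gamma1, gamma2) to convergence kappa."""
            ny, nx = gamma1.shape
            l1 = np.fft.fftfreq(nx)[np.newaxis, :]
            l2 = np.fft.fftfreq(ny)[:, np.newaxis]
            l_sq = l1**2 + l2**2
            l_sq[0, 0] = 1.0                                   # avoid division by zero at l = 0
            g_hat = np.fft.fft2(gamma1 + 1j * gamma2)
            kernel = (l1**2 - l2**2 - 2j * l1 * l2) / l_sq     # conjugate KS kernel
            kappa_hat = kernel * g_hat
            kappa_hat[0, 0] = 0.0                              # mean convergence is unconstrained
            return np.real(np.fft.ifft2(kappa_hat))            # E-mode convergence map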

  7. Improving Weak Lensing Mass Map Reconstructions using Gaussian and Sparsity Priors: Application to DES SV

    Energy Technology Data Exchange (ETDEWEB)

    Jeffrey, N.; et al.

    2018-01-26

    Mapping the underlying density field, including non-visible dark matter, using weak gravitational lensing measurements is now a standard tool in cosmology. Due to its importance to the science results of current and upcoming surveys, the quality of the convergence reconstruction methods should be well understood. We compare three different mass map reconstruction methods: Kaiser-Squires (KS), Wiener filter, and GLIMPSE. KS is a direct inversion method, taking no account of survey masks or noise. The Wiener filter is well motivated for Gaussian density fields in a Bayesian framework. The GLIMPSE method uses sparsity, with the aim of reconstructing non-linearities in the density field. We compare these methods with a series of tests on the public Dark Energy Survey (DES) Science Verification (SV) data and on realistic DES simulations. The Wiener filter and GLIMPSE methods offer substantial improvement on the standard smoothed KS with a range of metrics. For both the Wiener filter and GLIMPSE convergence reconstructions we present a 12% improvement in Pearson correlation with the underlying truth from simulations. To compare the mapping methods' abilities to find mass peaks, we measure the difference between peak counts from simulated ΛCDM shear catalogues and catalogues with no mass fluctuations. This is a standard data vector when inferring cosmology from peak statistics. The maximum signal-to-noise value of these peak statistic data vectors was increased by a factor of 3.5 for the Wiener filter and by a factor of 9 using GLIMPSE. With simulations we measure the reconstruction of the harmonic phases, showing that the concentration of the phase residuals is improved 17% by GLIMPSE and 18% by the Wiener filter. We show that the correlation between the reconstructions from data and the foreground redMaPPer clusters is increased 18% by the Wiener filter and 32% by GLIMPSE. [Abridged]

  8. Limited angle CT reconstruction by simultaneous spatial and Radon domain regularization based on TV and data-driven tight frame

    Science.gov (United States)

    Zhang, Wenkun; Zhang, Hanming; Wang, Linyuan; Cai, Ailong; Li, Lei; Yan, Bin

    2018-02-01

    Limited angle computed tomography (CT) reconstruction is widely performed in medical diagnosis and industrial testing because of the size of objects, engine/armor inspection requirements, and limited scan flexibility. Limited angle reconstruction necessitates optimization-based methods that utilize additional sparse priors. However, most conventional methods solely exploit sparsity priors in the spatial domain. When the CT projection suffers from serious data deficiency or various noises, obtaining reconstructed images of acceptable quality becomes difficult and challenging. To solve this problem, this paper develops an adaptive reconstruction method for the limited angle CT problem. The proposed method simultaneously uses spatial- and Radon-domain regularization models based on total variation (TV) and a data-driven tight frame. The data-driven tight frame, derived from a wavelet transformation, aims at exploiting sparsity priors of the sinogram in the Radon domain. Unlike existing works that utilize a pre-constructed sparse transformation, the framelets of the data-driven regularization model can be adaptively learned from the latest projection data during iterative reconstruction to provide optimal sparse approximations for the given sinogram. At the same time, an effective alternating direction method is designed to solve the simultaneous spatial and Radon domain regularization model; see the model sketched below. The experiments for both simulation and real data demonstrate that the proposed algorithm shows better performance in artifact suppression and detail preservation than algorithms solely using a regularization model of the spatial domain. Quantitative evaluations of the results also indicate that the proposed algorithm, with its learning strategy, performs better than dual-domain algorithms that do not learn the regularization model.
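
    Written out, a simultaneous spatial- and Radon-domain model of the kind described above takes roughly the following form; the exact weighting and constraints used by the authors may differ, and W denotes the tight frame learned from the current sinogram estimate:

        \min_{u,\,W}\ \tfrac{1}{2}\,\|Au - p\|_2^2 \;+\; \lambda_1\,\mathrm{TV}(u) \;+\; \lambda_2\,\|W(Au)\|_1
        \qquad \text{s.t.}\quad W^{\mathsf{T}}W = I,

    where u is the image, A the limited-angle projection operator, p the measured sinogram, TV(u) the spatial total-variation term, and the l1 term enforces sparsity of the reprojected sinogram under the learned frame W; the coupled problem is then split and solved by alternating direction updates.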

  9. Adaptive Regularization of Neural Classifiers

    DEFF Research Database (Denmark)

    Andersen, Lars Nonboe; Larsen, Jan; Hansen, Lars Kai

    1997-01-01

    We present a regularization scheme which iteratively adapts the regularization parameters by minimizing the validation error. It is suggested to use the adaptive regularization scheme in conjunction with optimal brain damage pruning to optimize the architecture and to avoid overfitting. Furthermore, we propose an improved neural classification architecture eliminating an inherent redundancy in the widely used SoftMax classification network. Numerical results demonstrate the viability of the method...

  10. Fractional Regularization Term for Variational Image Registration

    Directory of Open Access Journals (Sweden)

    Rafael Verdú-Monedero

    2009-01-01

    Image registration is a widely used task of image analysis with applications in many fields. Its classical formulation and current improvements are given in the spatial domain. In this paper a regularization term based on fractional order derivatives is formulated. This term is defined and implemented in the frequency domain by translating the energy functional into the frequency domain and obtaining the Euler-Lagrange equations which minimize it. The new regularization term leads to a simple formulation and design, and is applicable to higher dimensions by using the corresponding multidimensional Fourier transform. The proposed regularization term allows for a gradual transition from a diffusion registration to a curvature registration, which is better suited to some applications and is not possible in the spatial domain. Results with actual 3D images show the validity of this approach.

  11. Online Manifold Regularization by Dual Ascending Procedure

    Directory of Open Access Journals (Sweden)

    Boliang Sun

    2013-01-01

    We propose a novel online manifold regularization framework based on the notion of duality in constrained optimization. The Fenchel conjugate of hinge functions is a key to transferring manifold regularization from offline to online in this paper. Our algorithms are derived by gradient ascent in the dual function. For practical purposes, we propose two buffering strategies and two sparse approximations to reduce the computational complexity. Detailed experiments verify the utility of our approaches. An important conclusion is that our online MR algorithms can handle settings where the target hypothesis is not fixed but drifts with the sequence of examples. We also recap and draw connections to earlier works. This paper paves a way to the design and analysis of online manifold regularization algorithms.

  12. Multimodal manifold-regularized transfer learning for MCI conversion prediction.

    Science.gov (United States)

    Cheng, Bo; Liu, Mingxia; Suk, Heung-Il; Shen, Dinggang; Zhang, Daoqiang

    2015-12-01

    As the early stage of Alzheimer's disease (AD), mild cognitive impairment (MCI) has a high chance of converting to AD. Effective prediction of such conversion from MCI to AD is of great importance for early diagnosis of AD and for evaluating AD risk pre-symptomatically. Unlike most previous methods that used only samples from a target domain to train a classifier, in this paper we propose a novel multimodal manifold-regularized transfer learning (M2TL) method that jointly utilizes samples from another domain (e.g., AD vs. normal controls (NC)) as well as unlabeled samples to boost the performance of MCI conversion prediction. Specifically, the proposed M2TL method includes two key components. The first is a kernel-based maximum mean discrepancy criterion, which helps eliminate the potential negative effect induced by the distributional difference between the auxiliary domain (i.e., AD and NC) and the target domain (i.e., MCI converters (MCI-C) and MCI non-converters (MCI-NC)). The second is a semi-supervised multimodal manifold-regularized least squares classification method, where the target-domain samples, the auxiliary-domain samples, and the unlabeled samples can be jointly used for training our classifier. Furthermore, with the integration of a group sparsity constraint into our objective function, the proposed M2TL is able to select informative samples to build a robust classifier. Experimental results on the Alzheimer's Disease Neuroimaging Initiative (ADNI) database validate the effectiveness of the proposed method, which significantly improves the classification accuracy to 80.1% for MCI conversion prediction and outperforms state-of-the-art methods.
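
    The first component, the kernel-based maximum mean discrepancy (MMD), has a simple empirical estimate; the sketch below uses an RBF kernel and the biased estimator for illustration only (the bandwidth gamma and the choice of estimator are assumptions, not the paper's settings).

        import numpy as np

        def rbf_kernel(X, Y, gamma=1.0):
            d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
            return np.exp(-gamma * d2)

        def mmd2(X_aux, X_tgt, gamma=1.0):
            """Squared MMD between auxiliary-domain samples (e.g., AD vs. NC features)
            and target-domain samples (MCI-C vs. MCI-NC features)."""
            kxx = rbf_kernel(X_aux, X_aux, gamma)
            kyy = rbf_kernel(X_tgt, X_tgt, gamma)
            kxy = rbf_kernel(X_aux, X_tgt, gamma)
            return kxx.mean() + kyy.mean() - 2.0 * kxy.mean()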

  13. PRIFIRA: General regularization using prior-conditioning for fast radio interferometric imaging

    Science.gov (United States)

    Naghibzadeh, Shahrzad; van der Veen, Alle-Jan

    2018-06-01

    Image formation in radio astronomy is a large-scale inverse problem that is inherently ill-posed. We present a general algorithmic framework based on a Bayesian-inspired regularized maximum likelihood formulation of the radio astronomical imaging problem with a focus on diffuse emission recovery from limited noisy correlation data. The algorithm is dubbed PRIor-conditioned Fast Iterative Radio Astronomy (PRIFIRA) and is based on a direct embodiment of the regularization operator into the system by right preconditioning. The resulting system is then solved using an iterative method based on projections onto Krylov subspaces. We motivate the use of a beamformed image (which includes the classical "dirty image") as an efficient prior-conditioner. Iterative reweighting schemes generalize the algorithmic framework and can account for different regularization operators that encourage sparsity of the solution. The performance of the proposed method is evaluated based on simulated one- and two-dimensional array arrangements as well as actual data from the core stations of the Low Frequency Array radio telescope antenna configuration, and compared to state-of-the-art imaging techniques. We show the generality of the proposed method in terms of regularization schemes while maintaining a competitive reconstruction quality with the current reconstruction techniques. Furthermore, we show that exploiting Krylov subspace methods together with the proper noise-based stopping criteria results in a great improvement in imaging efficiency.

  14. Stable and efficient Q-compensated least-squares migration with compressive sensing, sparsity-promoting, and preconditioning

    Science.gov (United States)

    Chai, Xintao; Wang, Shangxu; Tang, Genyang; Meng, Xiangcui

    2017-10-01

    The anelastic effects of subsurface media decrease the amplitude and distort the phase of the propagating wave. These effects, also referred to as the earth's Q filtering effects, diminish seismic resolution. Ignoring anelastic effects during the seismic imaging process generates an image with reduced amplitude and incorrect positioning of reflectors, especially for highly absorptive media. Numerical instability and expensive computational cost are major concerns when compensating for anelastic effects during migration. We propose a stable and efficient Q-compensated imaging methodology with compressive sensing, sparsity promotion, and preconditioning. Stability is achieved by using the Born operator for forward modeling and the adjoint operator for back propagating the residual wavefields; constructing attenuation-compensated operators by reversing the sign of the attenuation operator is avoided, so the proposed method is always stable. To reduce the computational cost, which is proportional to the number of wave equations to be solved (and thereby the number of frequencies, source experiments, and iterations), we first subsample over both frequencies and source experiments. We mitigate the artifacts caused by the dimensionality reduction by promoting sparsity of the imaging solutions. We further employ depth- and Q-preconditioning operators to accelerate the convergence rate of the iterative migration. We adopt a relatively simple linearized Bregman method to solve the sparsity-promoting imaging problem. Singular value decomposition analysis of the forward operator reveals that attenuation increases the condition number of the migration operator, making the imaging problem more ill-conditioned. The visco-acoustic imaging problem converges more slowly than the acoustic case; the stronger the attenuation, the slower the convergence rate. The preconditioning strategy evidently decreases the condition number of the migration operator, which makes the imaging problem less ill-conditioned and ...

  15. Backtracking-Based Iterative Regularization Method for Image Compressive Sensing Recovery

    Directory of Open Access Journals (Sweden)

    Lingjun Liu

    2017-01-01

    This paper presents a variant of the iterative shrinkage-thresholding (IST) algorithm, called backtracking-based adaptive IST (BAIST), for image compressive sensing (CS) reconstruction. With increasing iterations, IST usually yields a smoothed solution and runs into prematurity. To add back more details, the BAIST method backtracks to the previous noisy image using L2 norm minimization, i.e., minimizing the Euclidean distance between the current solution and the previous ones. Through this modification, the BAIST method achieves superior performance while maintaining the low complexity of IST-type methods. In addition, BAIST uses a nonlocal regularization with an adaptive regularizer to automatically detect the sparsity level of an image. Experimental results show that our algorithm outperforms the original IST method and several excellent CS techniques.
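
    For context, the baseline IST iteration that BAIST modifies is a gradient step followed by soft thresholding. The sketch below is that baseline only, in its simplest dense-matrix form; BAIST's backtracking step and adaptive nonlocal regularizer are not reproduced here.

        import numpy as np

        def soft_threshold(x, t):
            return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

        def ist(A, y, lam, n_iter=200):
            """Plain IST for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
            L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the data-fit gradient
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                grad = A.T @ (A @ x - y)
                x = soft_threshold(x - grad / L, lam / L)
            return x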

  16. 75 FR 53966 - Regular Meeting

    Science.gov (United States)

    2010-09-02

    ... FARM CREDIT SYSTEM INSURANCE CORPORATION Regular Meeting AGENCY: Farm Credit System Insurance Corporation Board. SUMMARY: Notice is hereby given of the regular meeting of the Farm Credit System Insurance Corporation Board (Board). DATE AND TIME: The meeting of the Board will be held at the offices of the Farm...

  17. Online co-regularized algorithms

    NARCIS (Netherlands)

    Ruijter, T. de; Tsivtsivadze, E.; Heskes, T.

    2012-01-01

    We propose an online co-regularized learning algorithm for classification and regression tasks. We demonstrate that by sequentially co-regularizing prediction functions on unlabeled data points, our algorithm provides improved performance in comparison to supervised methods on several UCI benchmarks

  18. Regularized Regression and Density Estimation based on Optimal Transport

    KAUST Repository

    Burger, M.

    2012-03-11

    The aim of this paper is to investigate a novel nonparametric approach for estimating and smoothing density functions as well as probability densities from discrete samples based on a variational regularization method with the Wasserstein metric as a data fidelity. The approach allows a unified treatment of discrete and continuous probability measures and is hence attractive for various tasks. In particular, the variational model for special regularization functionals yields a natural method for estimating densities and for preserving edges in the case of total variation regularization. In order to compute solutions of the variational problems, a regularized optimal transport problem needs to be solved, for which we discuss several formulations and provide a detailed analysis. Moreover, we compute special self-similar solutions for standard regularization functionals and we discuss several computational approaches and results. © 2012 The Author(s).

  19. Continuum-regularized quantum gravity

    International Nuclear Information System (INIS)

    Chan Huesum; Halpern, M.B.

    1987-01-01

    The recent continuum regularization of d-dimensional Euclidean gravity is generalized to arbitrary power-law measure and studied in some detail as a representative example of coordinate-invariant regularization. The weak-coupling expansion of the theory illustrates a generic geometrization of regularized Schwinger-Dyson rules, generalizing previous rules in flat space and flat superspace. The rules are applied in a non-trivial explicit check of Einstein invariance at one loop: the cosmological counterterm is computed and its contribution is included in a verification that the graviton mass is zero. (orig.)

  20. New regular black hole solutions

    International Nuclear Information System (INIS)

    Lemos, Jose P. S.; Zanchin, Vilson T.

    2011-01-01

    In the present work we consider general relativity coupled to Maxwell's electromagnetism and charged matter. Under the assumption of spherical symmetry, there is a particular class of solutions that correspond to regular charged black holes whose interior region is de Sitter, the exterior region is Reissner-Nordstroem and there is a charged thin-layer in-between the two. The main physical and geometrical properties of such charged regular black holes are analyzed.

  1. Regular variation on measure chains

    Czech Academy of Sciences Publication Activity Database

    Řehák, Pavel; Vitovec, J.

    2010-01-01

    Roč. 72, č. 1 (2010), s. 439-448 ISSN 0362-546X R&D Projects: GA AV ČR KJB100190701 Institutional research plan: CEZ:AV0Z10190503 Keywords : regularly varying function * regularly varying sequence * measure chain * time scale * embedding theorem * representation theorem * second order dynamic equation * asymptotic properties Subject RIV: BA - General Mathematics Impact factor: 1.279, year: 2010 http://www.sciencedirect.com/science/article/pii/S0362546X09008475

  2. Manifold Regularized Correlation Object Tracking

    OpenAIRE

    Hu, Hongwei; Ma, Bo; Shen, Jianbing; Shao, Ling

    2017-01-01

    In this paper, we propose a manifold regularized correlation tracking method with augmented samples. To make better use of the unlabeled data and the manifold structure of the sample space, a manifold regularization-based correlation filter is introduced, which aims to assign similar labels to neighbor samples. Meanwhile, the regression model is learned by exploiting the block-circulant structure of matrices resulting from the augmented translated samples over multiple base samples cropped fr...

  3. On geodesics in low regularity

    Science.gov (United States)

    Sämann, Clemens; Steinbauer, Roland

    2018-02-01

    We consider geodesics in both Riemannian and Lorentzian manifolds with metrics of low regularity. We discuss existence of extremal curves for continuous metrics and present several old and new examples that highlight their subtle interrelation with solutions of the geodesic equations. Then we turn to the initial value problem for geodesics for locally Lipschitz continuous metrics and generalize recent results on existence, regularity and uniqueness of solutions in the sense of Filippov.

  4. Total-variation regularization with bound constraints

    International Nuclear Information System (INIS)

    Chartrand, Rick; Wohlberg, Brendt

    2009-01-01

    We present a new algorithm for bound-constrained total-variation (TV) regularization that in comparison with its predecessors is simple, fast, and flexible. We use a splitting approach to decouple TV minimization from enforcing the constraints. Consequently, existing TV solvers can be employed with minimal alteration. This also makes the approach straightforward to generalize to any situation where TV can be applied. We consider deblurring of images with Gaussian or salt-and-pepper noise, as well as Abel inversion of radiographs with Poisson noise. We incorporate previous iterative reweighting algorithms to solve the TV portion.

  5. Supporting primary school teachers in differentiating in the regular classroom

    NARCIS (Netherlands)

    Eysink, Tessa H.S.; Hulsbeek, Manon; Gijlers, Hannie

    Many primary school teachers experience difficulties in effectively differentiating in the regular classroom. This study investigated the effect of the STIP-approach on teachers' differentiation activities and self-efficacy, and children's learning outcomes and instructional value. Teachers using

  6. Geometric continuum regularization of quantum field theory

    International Nuclear Information System (INIS)

    Halpern, M.B.

    1989-01-01

    An overview of the continuum regularization program is given. The program is traced from its roots in stochastic quantization, with emphasis on the examples of regularized gauge theory, the regularized general nonlinear sigma model and regularized quantum gravity. In its coordinate-invariant form, the regularization is seen as entirely geometric: only the supermetric on field deformations is regularized, and the prescription provides universal nonperturbative invariant continuum regularization across all quantum field theory. 54 refs

  7. Sparse Adaptive Iteratively-Weighted Thresholding Algorithm (SAITA) for Lp-Regularization Using the Multiple Sub-Dictionary Representation

    Directory of Open Access Journals (Sweden)

    Yunyi Li

    2017-12-01

    Both L1/2 and L2/3 are typical non-convex regularizations of Lp (0 < p < 1), which can be employed to obtain a sparser solution than L1 regularization. Recently, the multiple-state sparse transformation strategy has been developed to exploit the sparsity in L1 regularization for sparse signal recovery, in combination with iterative reweighted algorithms. To further exploit the sparse structure of signals and images, this paper adopts multiple-dictionary sparse transform strategies for the two typical cases p ∈ {1/2, 2/3} based on an iterative Lp thresholding algorithm, and then proposes a sparse adaptive iteratively-weighted Lp thresholding algorithm (SAITA). Moreover, a simple yet effective regularization parameter is proposed to weight each sub-dictionary-based Lp regularizer. Simulation results show that the proposed SAITA not only performs better than the corresponding L1 algorithms but can also obtain better recovery performance and achieve faster convergence than the conventional single-dictionary sparse transform-based Lp case. Moreover, we conduct some applications to sparse image recovery and obtain good results in comparison with related work.
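
    One common way to handle the non-convex Lp penalty, and the one suggested by the reference to iterative reweighting, is to solve a sequence of weighted L1 steps. The sketch below is such a generic single-dictionary surrogate with an assumed smoothing constant eps; it is not the SAITA algorithm itself, which additionally adapts the weights across multiple sub-dictionaries.

        import numpy as np

        def soft_threshold(x, t):
            return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

        def reweighted_lp(A, y, lam, p=0.5, n_iter=50, eps=1e-3):
            """Iteratively reweighted surrogate for min_x 0.5*||Ax - y||^2 + lam*||x||_p^p:
            each pass takes a gradient step and soft-thresholds with per-coefficient
            thresholds proportional to p / (|x| + eps)^(1 - p)."""
            L = np.linalg.norm(A, 2) ** 2
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                w = p / (np.abs(x) + eps) ** (1.0 - p)
                grad = A.T @ (A @ x - y)
                x = soft_threshold(x - grad / L, lam * w / L)
            return x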

  8. Extreme values, regular variation and point processes

    CERN Document Server

    Resnick, Sidney I

    1987-01-01

    Extreme Values, Regular Variation and Point Processes is a readable and efficient account of the fundamental mathematical and stochastic process techniques needed to study the behavior of extreme values of phenomena based on independent and identically distributed random variables and vectors. It presents a coherent treatment of the distributional and sample path fundamental properties of extremes and records. It emphasizes the core primacy of three topics necessary for understanding extremes: the analytical theory of regularly varying functions; the probabilistic theory of point processes and random measures; and the link to asymptotic distribution approximations provided by the theory of weak convergence of probability measures in metric spaces. The book is self-contained and requires an introductory measure-theoretic course in probability as a prerequisite. Almost all sections have an extensive list of exercises which extend developments in the text, offer alternate approaches, test mastery and provide for enj...

  9. Metric regularity and subdifferential calculus

    International Nuclear Information System (INIS)

    Ioffe, A D

    2000-01-01

    The theory of metric regularity is an extension of two classical results: the Lyusternik tangent space theorem and the Graves surjection theorem. Developments in non-smooth analysis in the 1980s and 1990s paved the way for a number of far-reaching extensions of these results. It was also well understood that the phenomena behind the results are of metric origin, not connected with any linear structure. At the same time it became clear that some basic hypotheses of the subdifferential calculus are closely connected with the metric regularity of certain set-valued maps. The survey is devoted to the metric theory of metric regularity and its connection with subdifferential calculus in Banach spaces

  10. Dimensional regularization in configuration space

    International Nuclear Information System (INIS)

    Bollini, C.G.; Giambiagi, J.J.

    1995-09-01

    Dimensional regularization is introduced in configuration space by Fourier transforming in D dimensions the perturbative momentum space Green functions. For this transformation, Bochner's theorem is used; no extra parameters, such as those of Feynman or Bogoliubov-Shirkov, are needed for convolutions. The regularized causal functions in x-space have ν-dependent moderated singularities at the origin. They can be multiplied together and Fourier transformed (Bochner) without divergence problems. The usual ultraviolet divergences appear as poles of the resultant functions of ν. Several examples are discussed. (author). 9 refs

  11. Regular algebra and finite machines

    CERN Document Server

    Conway, John Horton

    2012-01-01

    World-famous mathematician John H. Conway based this classic text on a 1966 course he taught at Cambridge University. Geared toward graduate students of mathematics, it will also prove a valuable guide to researchers and professional mathematicians.His topics cover Moore's theory of experiments, Kleene's theory of regular events and expressions, Kleene algebras, the differential calculus of events, factors and the factor matrix, and the theory of operators. Additional subjects include event classes and operator classes, some regulator algebras, context-free languages, communicative regular alg

  12. Matrix regularization of 4-manifolds

    OpenAIRE

    Trzetrzelewski, M.

    2012-01-01

    We consider products of two 2-manifolds such as S^2 x S^2, embedded in Euclidean space and show that the corresponding 4-volume preserving diffeomorphism algebra can be approximated by a tensor product SU(N)xSU(N) i.e. functions on a manifold are approximated by the Kronecker product of two SU(N) matrices. A regularization of the 4-sphere is also performed by constructing N^2 x N^2 matrix representations of the 4-algebra (and as a byproduct of the 3-algebra which makes the regularization of S...

  13. Atomic orbital-based SOS-MP2 with tensor hypercontraction. I. GPU-based tensor construction and exploiting sparsity.

    Science.gov (United States)

    Song, Chenchen; Martínez, Todd J

    2016-05-07

    We present a tensor hypercontracted (THC) scaled opposite spin second order Møller-Plesset perturbation theory (SOS-MP2) method. By using THC, we reduce the formal scaling of SOS-MP2 with respect to molecular size from quartic to cubic. We achieve further efficiency by exploiting sparsity in the atomic orbitals and using graphical processing units (GPUs) to accelerate integral construction and matrix multiplication. The practical scaling of GPU-accelerated atomic orbital-based THC-SOS-MP2 calculations is found to be N^2.6 for reference data sets of water clusters and alanine polypeptides containing up to 1600 basis functions. The errors in correlation energy with respect to density-fitting-SOS-MP2 are less than 0.5 kcal/mol for all systems tested (up to 162 atoms).

  14. Atomic orbital-based SOS-MP2 with tensor hypercontraction. I. GPU-based tensor construction and exploiting sparsity

    Energy Technology Data Exchange (ETDEWEB)

    Song, Chenchen; Martínez, Todd J. [Department of Chemistry and the PULSE Institute, Stanford University, Stanford, California 94305 (United States); SLAC National Accelerator Laboratory, Menlo Park, California 94025 (United States)

    2016-05-07

    We present a tensor hypercontracted (THC) scaled opposite spin second order Møller-Plesset perturbation theory (SOS-MP2) method. By using THC, we reduce the formal scaling of SOS-MP2 with respect to molecular size from quartic to cubic. We achieve further efficiency by exploiting sparsity in the atomic orbitals and using graphical processing units (GPUs) to accelerate integral construction and matrix multiplication. The practical scaling of GPU-accelerated atomic orbital-based THC-SOS-MP2 calculations is found to be N^2.6 for reference data sets of water clusters and alanine polypeptides containing up to 1600 basis functions. The errors in correlation energy with respect to density-fitting-SOS-MP2 are less than 0.5 kcal/mol for all systems tested (up to 162 atoms).

  15. Effective Alternating Direction Optimization Methods for Sparsity-Constrained Blind Image Deblurring

    Directory of Open Access Journals (Sweden)

    Naixue Xiong

    2017-01-01

    Full Text Available Single-image blind deblurring for imaging sensors in the Internet of Things (IoT) is a challenging ill-conditioned inverse problem, which requires regularization techniques to stabilize the image restoration process. The purpose is to recover the underlying blur kernel and latent sharp image from only one blurred image. Under many degraded imaging conditions, the blur kernel could be considered not only spatially sparse, but also piecewise smooth with the support of a continuous curve. By taking advantage of the hybrid sparse properties of the blur kernel, a hybrid regularization method is proposed in this paper to robustly and accurately estimate the blur kernel. The effectiveness of the proposed blur kernel estimation method is enhanced by incorporating both the L1-norm of kernel intensity and the squared L2-norm of the intensity derivative. Once the accurate estimation of the blur kernel is obtained, the original blind deblurring can be simplified to the direct deconvolution of blurred images. To guarantee robust non-blind deconvolution, a variational image restoration model is presented based on the L1-norm data-fidelity term and the second-order total generalized variation (TGV) regularizer. All non-smooth optimization problems related to blur kernel estimation and non-blind deconvolution are effectively handled by using alternating direction method of multipliers (ADMM)-based numerical methods. Comprehensive experiments on both synthetic and realistic datasets have been implemented to compare the proposed method with several state-of-the-art methods. The experimental comparisons have illustrated the satisfactory imaging performance of the proposed method in terms of quantitative and qualitative evaluations.
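
    The hybrid prior described above couples an L1 penalty on the kernel intensities with a squared L2 penalty on the intensity derivative. As a rough illustration only (not the authors' ADMM solver), the numpy sketch below evaluates such a composite regularizer on a toy kernel; the weights lam1 and lam2 are hypothetical.

      import numpy as np

      def hybrid_kernel_regularizer(k, lam1=1e-3, lam2=1e-2):
          """Composite prior on a 2-D blur kernel k:
          lam1 * ||k||_1 promotes spatial sparsity, and
          lam2 * ||grad k||_2^2 promotes piecewise smoothness along the kernel support."""
          l1_term = lam1 * np.abs(k).sum()
          # forward differences in both directions as a simple discrete gradient
          gx = np.diff(k, axis=0)
          gy = np.diff(k, axis=1)
          l2_term = lam2 * (np.sum(gx ** 2) + np.sum(gy ** 2))
          return l1_term + l2_term

      # toy usage: a small motion-blur-like kernel
      kernel = np.zeros((9, 9))
      kernel[4, 2:7] = 0.2
      print(hybrid_kernel_regularizer(kernel))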

  16. Learning regularization parameters for general-form Tikhonov

    International Nuclear Information System (INIS)

    Chung, Julianne; Español, Malena I

    2017-01-01

    Computing regularization parameters for general-form Tikhonov regularization can be an expensive and difficult task, especially if multiple parameters or many solutions need to be computed in real time. In this work, we assume training data is available and describe an efficient learning approach for computing regularization parameters that can be used for a large set of problems. We consider an empirical Bayes risk minimization framework for finding regularization parameters that minimize average errors for the training data. We first extend methods from Chung et al (2011 SIAM J. Sci. Comput. 33 3132–52) to the general-form Tikhonov problem. Then we develop a learning approach for multi-parameter Tikhonov problems, for the case where all involved matrices are simultaneously diagonalizable. For problems where this is not the case, we describe an approach to compute near-optimal regularization parameters by using operator approximations for the original problem. Finally, we propose a new class of regularizing filters, where solutions correspond to multi-parameter Tikhonov solutions, that requires less data than previously proposed optimal error filters, avoids the generalized SVD, and allows flexibility and novelty in the choice of regularization matrices. Numerical results for 1D and 2D examples using different norms on the errors show the effectiveness of our methods. (paper)
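
    The core idea, learning a regularization parameter from training pairs by minimizing the average reconstruction error, can be sketched with a crude grid search. This is only a stand-in for the empirical Bayes risk minimization and filter framework of the paper; the operators A and L below are toy examples.

      import numpy as np

      def tikhonov_solve(A, b, L, lam):
          """General-form Tikhonov: argmin ||Ax - b||^2 + lam^2 ||Lx||^2."""
          return np.linalg.solve(A.T @ A + lam ** 2 * (L.T @ L), A.T @ b)

      def learn_lambda(A, L, training_pairs, grid):
          """Pick the parameter with the smallest average error on training data
          (a crude stand-in for the empirical Bayes risk minimization in the paper)."""
          def avg_err(lam):
              return np.mean([np.linalg.norm(tikhonov_solve(A, b, L, lam) - x_true)
                              for x_true, b in training_pairs])
          return min(grid, key=avg_err)

      # toy problem: smoothing operator A, first-difference regularizer L
      rng = np.random.default_rng(0)
      n = 50
      A = np.exp(-0.5 * ((np.arange(n)[:, None] - np.arange(n)[None, :]) / 3.0) ** 2)
      L = np.diff(np.eye(n), axis=0)
      pairs = []
      for _ in range(5):
          x = np.cumsum(rng.standard_normal(n))       # smooth-ish ground truth
          b = A @ x + 0.05 * rng.standard_normal(n)   # noisy data
          pairs.append((x, b))
      print("learned lambda:", learn_lambda(A, L, pairs, np.logspace(-4, 1, 30)))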

  17. Regularization of Nonmonotone Variational Inequalities

    International Nuclear Information System (INIS)

    Konnov, Igor V.; Ali, M.S.S.; Mazurkevich, E.O.

    2006-01-01

    In this paper we extend the Tikhonov-Browder regularization scheme from monotone to rather a general class of nonmonotone multivalued variational inequalities. We show that their convergence conditions hold for some classes of perfectly and nonperfectly competitive economic equilibrium problems

  18. Lattice regularized chiral perturbation theory

    International Nuclear Information System (INIS)

    Borasoy, Bugra; Lewis, Randy; Ouimet, Pierre-Philippe A.

    2004-01-01

    Chiral perturbation theory can be defined and regularized on a spacetime lattice. A few motivations are discussed here, and an explicit lattice Lagrangian is reviewed. A particular aspect of the connection between lattice chiral perturbation theory and lattice QCD is explored through a study of the Wess-Zumino-Witten term

  19. 76 FR 3629 - Regular Meeting

    Science.gov (United States)

    2011-01-20

    ... Meeting SUMMARY: Notice is hereby given of the regular meeting of the Farm Credit System Insurance Corporation Board (Board). Date and Time: The meeting of the Board will be held at the offices of the Farm... meeting of the Board will be open to the...

  20. Forcing absoluteness and regularity properties

    NARCIS (Netherlands)

    Ikegami, D.

    2010-01-01

    For a large natural class of forcing notions, we prove general equivalence theorems between forcing absoluteness statements, regularity properties, and transcendence properties over L and the core model K. We use our results to answer open questions from set theory of the reals.

  1. Globals of Completely Regular Monoids

    Institute of Scientific and Technical Information of China (English)

    Wu Qian-qian; Gan Ai-ping; Du Xian-kun

    2015-01-01

    An element of a semigroup S is called irreducible if it cannot be expressed as a product of two elements in S both distinct from itself. In this paper we show that the class C of all completely regular monoids with irreducible identity elements satisfies the strong isomorphism property and so it is globally determined.

  2. Fluid queues and regular variation

    NARCIS (Netherlands)

    Boxma, O.J.

    1996-01-01

    This paper considers a fluid queueing system, fed by N independent sources that alternate between silence and activity periods. We assume that the distribution of the activity periods of one or more sources is a regularly varying function of index ζ. We show that its fat tail gives rise to an even

  3. Fluid queues and regular variation

    NARCIS (Netherlands)

    O.J. Boxma (Onno)

    1996-01-01

    This paper considers a fluid queueing system, fed by N independent sources that alternate between silence and activity periods. We assume that the distribution of the activity periods of one or more sources is a regularly varying function of index ζ. We show that its fat tail

  4. Empirical laws, regularity and necessity

    NARCIS (Netherlands)

    Koningsveld, H.

    1973-01-01

    In this book I have tried to develop an analysis of the concept of an empirical law, an analysis that differs in many ways from the alternative analyses found in contemporary literature dealing with the subject.

    I am referring especially to two well-known views, viz. the regularity and

  5. Interval matrices: Regularity generates singularity

    Czech Academy of Sciences Publication Activity Database

    Rohn, Jiří; Shary, S.P.

    2018-01-01

    Roč. 540, 1 March (2018), s. 149-159 ISSN 0024-3795 Institutional support: RVO:67985807 Keywords : interval matrix * regularity * singularity * P-matrix * absolute value equation * diagonally singularizable matrix Subject RIV: BA - General Mathematics Impact factor: 0.973, year: 2016

  6. Regularization in Matrix Relevance Learning

    NARCIS (Netherlands)

    Schneider, Petra; Bunte, Kerstin; Stiekema, Han; Hammer, Barbara; Villmann, Thomas; Biehl, Michael

    In this paper, we present a regularization technique to extend recently proposed matrix learning schemes in learning vector quantization (LVQ). These learning algorithms extend the concept of adaptive distance measures in LVQ to the use of relevance matrices. In general, metric learning can

  7. Solution path for manifold regularized semisupervised classification.

    Science.gov (United States)

    Wang, Gang; Wang, Fei; Chen, Tao; Yeung, Dit-Yan; Lochovsky, Frederick H

    2012-04-01

    Traditional learning algorithms use only labeled data for training. However, labeled examples are often difficult or time consuming to obtain since they require substantial human labeling efforts. On the other hand, unlabeled data are often relatively easy to collect. Semisupervised learning addresses this problem by using large quantities of unlabeled data with labeled data to build better learning algorithms. In this paper, we use the manifold regularization approach to formulate the semisupervised learning problem where a regularization framework which balances a tradeoff between loss and penalty is established. We investigate different implementations of the loss function and identify the methods which have the least computational expense. The regularization hyperparameter, which determines the balance between loss and penalty, is crucial to model selection. Accordingly, we derive an algorithm that can fit the entire path of solutions for every value of the hyperparameter. Its computational complexity after preprocessing is quadratic only in the number of labeled examples rather than the total number of labeled and unlabeled examples.

  8. Regular and conformal regular cores for static and rotating solutions

    Energy Technology Data Exchange (ETDEWEB)

    Azreg-Aïnou, Mustapha

    2014-03-07

    Using a new metric for generating rotating solutions, we derive in a general fashion the solution of an imperfect fluid and that of its conformal homolog. We discuss the conditions that the stress–energy tensors and invariant scalars be regular. On classical physical grounds, it is stressed that conformal fluids used as cores for static or rotating solutions are exempt from any malicious behavior in that they are finite and defined everywhere.

  9. Regular and conformal regular cores for static and rotating solutions

    International Nuclear Information System (INIS)

    Azreg-Aïnou, Mustapha

    2014-01-01

    Using a new metric for generating rotating solutions, we derive in a general fashion the solution of an imperfect fluid and that of its conformal homolog. We discuss the conditions that the stress–energy tensors and invariant scalars be regular. On classical physical grounds, it is stressed that conformal fluids used as cores for static or rotating solutions are exempt from any malicious behavior in that they are finite and defined everywhere.

  10. Regularized forecasting of chaotic dynamical systems

    International Nuclear Information System (INIS)

    Bollt, Erik M.

    2017-01-01

    While local models of dynamical systems have been highly successful in terms of using extensive data sets observing even a chaotic dynamical system to produce useful forecasts, there is a typical problem as follows. Specifically, with the k-nearest neighbors (kNN) method, local observations occur due to recurrences in a chaotic system, and this allows for local models to be built by regression to low dimensional polynomial approximations of the underlying system estimating a Taylor series. This has been a popular approach, particularly in the context of scalar data observations which have been represented by time-delay embedding methods. However such local models can generally allow for spatial discontinuities of forecasts when considered globally, meaning jumps in predictions because the collected near neighbors vary from point to point. The source of these discontinuities is generally that the set of near neighbors varies discontinuously with respect to the position of the sample point, and so therefore does the model built from the near neighbors. It is possible to utilize local information inferred from near neighbors as usual but at the same time to impose a degree of regularity on a global scale. We present here a new global perspective extending the general local modeling concept. In so doing, we proceed to show how this perspective allows us to impose prior presumed regularity into the model, by involving the Tikhonov regularity theory, since this classic perspective of optimization in ill-posed problems naturally balances fitting an objective with some prior assumed form of the result, such as continuity or derivative regularity for example. This all reduces to matrix manipulations which we demonstrate on a simple data set, with the implication that it may find much broader context.

  11. Energy functions for regularization algorithms

    Science.gov (United States)

    Delingette, H.; Hebert, M.; Ikeuchi, K.

    1991-01-01

    Regularization techniques are widely used for inverse problem solving in computer vision such as surface reconstruction, edge detection, or optical flow estimation. Energy functions used for regularization algorithms measure how smooth a curve or surface is, and to render acceptable solutions these energies must verify certain properties such as invariance with Euclidean transformations or invariance with parameterization. The notion of smoothness energy is extended here to the notion of a differential stabilizer, and it is shown that to avoid the systematic underestimation of curvature for planar curve fitting, it is necessary that circles be the curves of maximum smoothness. A set of stabilizers is proposed that meet this condition as well as invariance with rotation and parameterization.

  12. Physical model of dimensional regularization

    Energy Technology Data Exchange (ETDEWEB)

    Schonfeld, Jonathan F.

    2016-12-15

    We explicitly construct fractals of dimension 4-ε on which dimensional regularization approximates scalar-field-only quantum-field theory amplitudes. The construction does not require fractals to be Lorentz-invariant in any sense, and we argue that there probably is no Lorentz-invariant fractal of dimension greater than 2. We derive dimensional regularization's power-law screening first for fractals obtained by removing voids from 3-dimensional Euclidean space. The derivation applies techniques from elementary dielectric theory. Surprisingly, fractal geometry by itself does not guarantee the appropriate power-law behavior; boundary conditions at fractal voids also play an important role. We then extend the derivation to 4-dimensional Minkowski space. We comment on generalization to non-scalar fields, and speculate about implications for quantum gravity. (orig.)

  13. Regularized strings with extrinsic curvature

    International Nuclear Information System (INIS)

    Ambjoern, J.; Durhuus, B.

    1987-07-01

    We analyze models of discretized string theories, where the path integral over world sheet variables is regularized by summing over triangulated surfaces. The inclusion of curvature in the action is a necessity for the scaling of the string tension. We discuss the physical properties of models with extrinsic curvature terms in the action and show that the string tension vanishes at the critical point where the bare extrinsic curvature coupling tends to infinity. Similar results are derived for models with intrinsic curvature. (orig.)

  14. Circuit complexity of regular languages

    Czech Academy of Sciences Publication Activity Database

    Koucký, Michal

    2009-01-01

    Roč. 45, č. 4 (2009), s. 865-879 ISSN 1432-4350 R&D Projects: GA ČR GP201/07/P276; GA MŠk(CZ) 1M0545 Institutional research plan: CEZ:AV0Z10190503 Keywords : regular languages * circuit complexity * upper and lower bounds Subject RIV: BA - General Mathematics Impact factor: 0.726, year: 2009

  15. Quantitative Evaluation of Temporal Regularizers in Compressed Sensing Dynamic Contrast Enhanced MRI of the Breast

    Directory of Open Access Journals (Sweden)

    Dong Wang

    2017-01-01

    Full Text Available Purpose. Dynamic contrast enhanced magnetic resonance imaging (DCE-MRI) is used in cancer imaging to probe tumor vascular properties. Compressed sensing (CS) theory makes it possible to recover MR images from randomly undersampled k-space data using nonlinear recovery schemes. The purpose of this paper is to quantitatively evaluate common temporal sparsity-promoting regularizers for CS DCE-MRI of the breast. Methods. We considered five ubiquitous temporal regularizers on 4.5x retrospectively undersampled Cartesian in vivo breast DCE-MRI data: Fourier transform (FT), Haar wavelet transform (WT), total variation (TV), second-order total generalized variation (TGVα2), and nuclear norm (NN). We measured the signal-to-error ratio (SER) of the reconstructed images, the error in tumor mean, and concordance correlation coefficients (CCCs) of the derived pharmacokinetic parameters Ktrans (volume transfer constant) and ve (extravascular-extracellular volume fraction) across a population of random sampling schemes. Results. NN produced the lowest image error (SER: 29.1), while TV/TGVα2 produced the most accurate Ktrans (CCC: 0.974/0.974) and ve (CCC: 0.916/0.917). WT produced the highest image error (SER: 21.8), while FT produced the least accurate Ktrans (CCC: 0.842) and ve (CCC: 0.799). Conclusion. TV/TGVα2 should be used as temporal constraints for CS DCE-MRI of the breast.
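
    The signal-to-error ratio (SER) used above to rank the temporal regularizers can be computed directly once a fully sampled reference is available. The definition below (in dB, with the reference norm in the numerator) is an assumed form of the metric, not taken from the paper.

      import numpy as np

      def signal_to_error_ratio(reference, reconstruction):
          """SER in dB; assumed here to be 20*log10(||ref|| / ||ref - recon||)."""
          num = np.linalg.norm(reference)
          den = np.linalg.norm(reference - reconstruction)
          return 20.0 * np.log10(num / den)

      rng = np.random.default_rng(1)
      truth = rng.standard_normal((64, 64))
      recon = truth + 0.05 * rng.standard_normal((64, 64))
      print(f"SER = {signal_to_error_ratio(truth, recon):.1f} dB")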

  16. Phase reconstruction from velocity-encoded MRI measurements – A survey of sparsity-promoting variational approaches

    KAUST Repository

    Benning, Martin; Gladden, Lynn; Holland, Daniel; Schö nlieb, Carola-Bibiane; Valkonen, Tuomo

    2014-01-01

    for the reconstruction of phase-encoded magnetic resonance velocity images from sub-sampled k-space data. We are particularly interested in regularisers that correctly treat both smooth and geometric features of the image. These features are common to velocity imaging

  17. MAP-MRF-Based Super-Resolution Reconstruction Approach for Coded Aperture Compressive Temporal Imaging

    Directory of Open Access Journals (Sweden)

    Tinghua Zhang

    2018-02-01

    Full Text Available Coded Aperture Compressive Temporal Imaging (CACTI) can afford low-cost temporal super-resolution (SR), but limits are imposed by noise and compression ratio on reconstruction quality. To utilize inter-frame redundant information from multiple observations and sparsity in multi-transform domains, a robust reconstruction approach based on a maximum a posteriori probability and Markov random field (MAP-MRF) model for CACTI is proposed. The proposed approach adopts a weighted 3D neighbor system (WNS) and the coordinate descent method to perform joint estimation of model parameters, to achieve the robust super-resolution reconstruction. The proposed multi-reconstruction algorithm considers both total variation (TV) and the ℓ2,1 norm in the wavelet domain to address the minimization problem for compressive sensing, and solves it using an accelerated generalized alternating projection algorithm. The weighting coefficient for different regularizations and frames is resolved by the motion characteristics of pixels. The proposed approach can provide high visual quality in the foreground and background of a scene simultaneously and enhance the fidelity of the reconstruction results. Simulation results have verified the efficacy of our new optimization framework and the proposed reconstruction approach.

  18. General inverse problems for regular variation

    DEFF Research Database (Denmark)

    Damek, Ewa; Mikosch, Thomas Valentin; Rosinski, Jan

    2014-01-01

    Regular variation of distributional tails is known to be preserved by various linear transformations of some random structures. An inverse problem for regular variation aims at understanding whether the regular variation of a transformed random object is caused by regular variation of components ...

  19. Constrained least squares regularization in PET

    International Nuclear Information System (INIS)

    Choudhury, K.R.; O'Sullivan, F.O.

    1996-01-01

    Standard reconstruction methods used in tomography produce images with undesirable negative artifacts in background and in areas of high local contrast. While sophisticated statistical reconstruction methods can be devised to correct for these artifacts, their computational implementation is excessive for routine operational use. This work describes a technique for rapid computation of approximate constrained least squares regularization estimates. The unique feature of the approach is that it involves no iterative projection or backprojection steps. This contrasts with the familiar computationally intensive algorithms based on algebraic reconstruction (ART) or expectation-maximization (EM) methods. Experimentation with the new approach for deconvolution and mixture analysis shows that the root mean square error quality of estimators based on the proposed algorithm matches and usually dominates that of more elaborate maximum likelihood, at a fraction of the computational effort

  20. Exploiting sparsity and low-rank structure for the recovery of multi-slice breast MRIs with reduced sampling error.

    Science.gov (United States)

    Yin, X X; Ng, B W-H; Ramamohanarao, K; Baghai-Wadji, A; Abbott, D

    2012-09-01

    It has been shown that, magnetic resonance images (MRIs) with sparsity representation in a transformed domain, e.g. spatial finite-differences (FD), or discrete cosine transform (DCT), can be restored from undersampled k-space via applying current compressive sampling theory. The paper presents a model-based method for the restoration of MRIs. The reduced-order model, in which a full-system-response is projected onto a subspace of lower dimensionality, has been used to accelerate image reconstruction by reducing the size of the involved linear system. In this paper, the singular value threshold (SVT) technique is applied as a denoising scheme to reduce and select the model order of the inverse Fourier transform image, and to restore multi-slice breast MRIs that have been compressively sampled in k-space. The restored MRIs with SVT for denoising show reduced sampling errors compared to the direct MRI restoration methods via spatial FD, or DCT. Compressive sampling is a technique for finding sparse solutions to underdetermined linear systems. The sparsity that is implicit in MRIs is to explore the solution to MRI reconstruction after transformation from significantly undersampled k-space. The challenge, however, is that, since some incoherent artifacts result from the random undersampling, noise-like interference is added to the image with sparse representation. These recovery algorithms in the literature are not capable of fully removing the artifacts. It is necessary to introduce a denoising procedure to improve the quality of image recovery. This paper applies a singular value threshold algorithm to reduce the model order of image basis functions, which allows further improvement of the quality of image reconstruction with removal of noise artifacts. The principle of the denoising scheme is to reconstruct the sparse MRI matrices optimally with a lower rank via selecting smaller number of dominant singular values. The singular value threshold algorithm is performed
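
    The singular value threshold (SVT) denoising step mentioned above reduces the model order by shrinking small singular values. A minimal numpy sketch of one such step is given below; the threshold value and the toy data are illustrative, not the paper's settings.

      import numpy as np

      def singular_value_threshold(X, tau):
          """Soft-threshold the singular values of X, keeping only the dominant ones;
          this is the basic SVT step used as a low-rank denoiser."""
          U, s, Vt = np.linalg.svd(X, full_matrices=False)
          s_thr = np.maximum(s - tau, 0.0)
          return U @ np.diag(s_thr) @ Vt

      rng = np.random.default_rng(2)
      low_rank = rng.standard_normal((64, 5)) @ rng.standard_normal((5, 64))
      noisy = low_rank + 0.3 * rng.standard_normal((64, 64))
      denoised = singular_value_threshold(noisy, tau=5.0)
      print(np.linalg.norm(noisy - low_rank), np.linalg.norm(denoised - low_rank))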

  1. Regularized Statistical Analysis of Anatomy

    DEFF Research Database (Denmark)

    Sjöstrand, Karl

    2007-01-01

    This thesis presents the application and development of regularized methods for the statistical analysis of anatomical structures. Focus is on structure-function relationships in the human brain, such as the connection between early onset of Alzheimer’s disease and shape changes of the corpus...... and mind. Statistics represents a quintessential part of such investigations as they are preluded by a clinical hypothesis that must be verified based on observed data. The massive amounts of image data produced in each examination pose an important and interesting statistical challenge...... efficient algorithms which make the analysis of large data sets feasible, and gives examples of applications....

  2. Academic Training Lecture - Regular Programme

    CERN Multimedia

    PH Department

    2011-01-01

    Regular Lecture Programme 9 May 2011 ACT Lectures on Detectors - Inner Tracking Detectors by Pippa Wells (CERN) 10 May 2011 ACT Lectures on Detectors - Calorimeters (2/5) by Philippe Bloch (CERN) 11 May 2011 ACT Lectures on Detectors - Muon systems (3/5) by Kerstin Hoepfner (RWTH Aachen) 12 May 2011 ACT Lectures on Detectors - Particle Identification and Forward Detectors by Peter Krizan (University of Ljubljana and J. Stefan Institute, Ljubljana, Slovenia) 13 May 2011 ACT Lectures on Detectors - Trigger and Data Acquisition (5/5) by Dr. Brian Petersen (CERN) from 11:00 to 12:00 at CERN ( Bldg. 222-R-001 - Filtration Plant )

  3. SU-G-IeP1-13: Sub-Nyquist Dynamic MRI Via Prior Rank, Intensity and Sparsity Model (PRISM)

    International Nuclear Information System (INIS)

    Jiang, B; Gao, H

    2016-01-01

    Purpose: Accelerated dynamic MRI is important for MRI guided radiotherapy. Inspired by compressive sensing (CS), sub-Nyquist dynamic MRI has been an active research area, i.e., sparse sampling in k-t space for accelerated dynamic MRI. This work is to investigate sub-Nyquist dynamic MRI via a previously developed CS model, namely Prior Rank, Intensity and Sparsity Model (PRISM). Methods: The proposed method utilizes PRISM with rank minimization and incoherent sampling patterns for sub-Nyquist reconstruction. In PRISM, the low-rank background image, which is automatically calculated by rank minimization, is excluded from the L1 minimization step of the CS reconstruction to further sparsify the residual image, thus allowing for higher acceleration rates. Furthermore, the sampling pattern in k-t space is made more incoherent by sampling a different set of k-space points at different temporal frames. Results: Reconstruction results from the L1-sparsity method and the PRISM method with 30% undersampled data and 15% undersampled data are compared to demonstrate the power of PRISM for dynamic MRI. Conclusion: A sub-Nyquist MRI reconstruction method based on PRISM is developed with improved image quality over the L1-sparsity method.

  4. Joint sparsity based heterogeneous data-level fusion for target detection and estimation

    Science.gov (United States)

    Niu, Ruixin; Zulch, Peter; Distasio, Marcello; Blasch, Erik; Shen, Dan; Chen, Genshe

    2017-05-01

    Typical surveillance systems employ decision- or feature-level fusion approaches to integrate heterogeneous sensor data, which are sub-optimal and incur information loss. In this paper, we investigate data-level heterogeneous sensor fusion. Since the sensors monitor the common targets of interest, whose states can be determined by only a few parameters, it is reasonable to assume that the measurement domain has a low intrinsic dimensionality. For heterogeneous sensor data, we develop a joint-sparse data-level fusion (JSDLF) approach based on the emerging joint sparse signal recovery techniques by discretizing the target state space. This approach is applied to fuse signals from multiple distributed radio frequency (RF) signal sensors and a video camera for joint target detection and state estimation. The JSDLF approach is data-driven and requires minimum prior information, since there is no need to know the time-varying RF signal amplitudes, or the image intensity of the targets. It can handle non-linearity in the sensor data due to state space discretization and the use of frequency/pixel selection matrices. Furthermore, for a multi-target case with J targets, the JSDLF approach only requires discretization in a single-target state space, instead of discretization in a J-target state space, as in the case of the generalized likelihood ratio test (GLRT) or the maximum likelihood estimator (MLE). Numerical examples are provided to demonstrate that the proposed JSDLF approach achieves excellent performance with near real-time accurate target position and velocity estimates.

  5. Geostatistical regularization operators for geophysical inverse problems on irregular meshes

    Science.gov (United States)

    Jordi, C.; Doetsch, J.; Günther, T.; Schmelzbach, C.; Robertsson, J. OA

    2018-05-01

    Irregular meshes allow to include complicated subsurface structures into geophysical modelling and inverse problems. The non-uniqueness of these inverse problems requires appropriate regularization that can incorporate a priori information. However, defining regularization operators for irregular discretizations is not trivial. Different schemes for calculating smoothness operators on irregular meshes have been proposed. In contrast to classical regularization constraints that are only defined using the nearest neighbours of a cell, geostatistical operators include a larger neighbourhood around a particular cell. A correlation model defines the extent of the neighbourhood and allows to incorporate information about geological structures. We propose an approach to calculate geostatistical operators for inverse problems on irregular meshes by eigendecomposition of a covariance matrix that contains the a priori geological information. Using our approach, the calculation of the operator matrix becomes tractable for 3-D inverse problems on irregular meshes. We tested the performance of the geostatistical regularization operators and compared them against the results of anisotropic smoothing in inversions of 2-D surface synthetic electrical resistivity tomography (ERT) data as well as in the inversion of a realistic 3-D cross-well synthetic ERT scenario. The inversions of 2-D ERT and seismic traveltime field data with geostatistical regularization provide results that are in good accordance with the expected geology and thus facilitate their interpretation. In particular, for layered structures the geostatistical regularization provides geologically more plausible results compared to the anisotropic smoothness constraints.
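
    As a rough sketch of the construction described above, one can assemble a covariance matrix from cell-centre distances using a correlation model and derive a regularization operator from its eigendecomposition. The exponential correlation model and the choice of C^{-1/2} as the operator below are illustrative assumptions, not the authors' exact recipe.

      import numpy as np

      def geostatistical_operator(centres, corr_length, eps=1e-8):
          """Build a regularization operator from a covariance model on an
          (irregular) set of cell centres."""
          d = np.linalg.norm(centres[:, None, :] - centres[None, :, :], axis=-1)
          C = np.exp(-d / corr_length)                 # a priori covariance model
          w, V = np.linalg.eigh(C)                     # eigendecomposition of C
          w = np.clip(w, eps, None)
          return V @ np.diag(1.0 / np.sqrt(w)) @ V.T   # operator W with W^T W = C^{-1}

      # toy irregular mesh: random 2-D cell centres
      rng = np.random.default_rng(3)
      centres = rng.uniform(0, 10, size=(40, 2))
      W = geostatistical_operator(centres, corr_length=2.0)
      print(W.shape)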

  6. RES: Regularized Stochastic BFGS Algorithm

    Science.gov (United States)

    Mokhtari, Aryan; Ribeiro, Alejandro

    2014-12-01

    RES, a regularized stochastic version of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method, is proposed to solve convex optimization problems with stochastic objectives. The use of stochastic gradient descent algorithms is widespread, but the number of iterations required to approximate optimal arguments can be prohibitive in high dimensional problems. Application of second order methods, on the other hand, is impracticable because computation of objective function Hessian inverses incurs excessive computational cost. BFGS modifies gradient descent by introducing a Hessian approximation matrix computed from finite gradient differences. RES utilizes stochastic gradients in lieu of deterministic gradients for both the determination of descent directions and the approximation of the objective function's curvature. Since stochastic gradients can be computed at manageable computational cost, RES is realizable and retains the convergence rate advantages of its deterministic counterparts. Convergence results show that lower and upper bounds on the Hessian eigenvalues of the sample functions are sufficient to guarantee convergence to optimal arguments. Numerical experiments showcase reductions in convergence time relative to stochastic gradient descent algorithms and non-regularized stochastic versions of BFGS. An application of RES to the implementation of support vector machines is developed.
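
    The following is a heavily simplified sketch in the spirit of RES: stochastic gradients drive both the step and a BFGS-style curvature estimate, and a multiple of the identity keeps the estimate well conditioned. It is not the published update rule; the step size, the regularization constant delta, and the toy least-squares objective are all assumptions.

      import numpy as np

      def res_like_sgd(samples, targets, n_iter=200, step=0.05, delta=1e-2, batch=8, seed=0):
          """Stochastic quasi-Newton loop: mini-batch gradients, a BFGS-style Hessian
          estimate B, and a delta*I term to keep the linear solve well conditioned."""
          rng = np.random.default_rng(seed)
          n, d = samples.shape
          x = np.zeros(d)
          B = np.eye(d)                                   # Hessian approximation

          def stoch_grad(xv, idx):
              A, y = samples[idx], targets[idx]
              return A.T @ (A @ xv - y) / len(idx)

          for _ in range(n_iter):
              idx = rng.choice(n, size=batch, replace=False)
              g = stoch_grad(x, idx)
              # descent step using the regularized curvature estimate
              x_new = x - step * np.linalg.solve(B + delta * np.eye(d), g)
              # secant pair evaluated on the same mini-batch (as in stochastic BFGS)
              s = x_new - x
              yv = stoch_grad(x_new, idx) - g
              if yv @ s > 1e-10:                          # keep the BFGS update well posed
                  Bs = B @ s
                  B = B + np.outer(yv, yv) / (yv @ s) - np.outer(Bs, Bs) / (s @ Bs)
              x = x_new
          return x

      rng = np.random.default_rng(4)
      A = rng.standard_normal((500, 10))
      x_true = rng.standard_normal(10)
      y = A @ x_true + 0.01 * rng.standard_normal(500)
      print(np.linalg.norm(res_like_sgd(A, y) - x_true))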

  7. Regularized Label Relaxation Linear Regression.

    Science.gov (United States)

    Fang, Xiaozhao; Xu, Yong; Li, Xuelong; Lai, Zhihui; Wong, Wai Keung; Fang, Bingwu

    2018-04-01

    Linear regression (LR) and some of its variants have been widely used for classification problems. Most of these methods assume that during the learning phase, the training samples can be exactly transformed into a strict binary label matrix, which has too little freedom to fit the labels adequately. To address this problem, in this paper, we propose a novel regularized label relaxation LR method, which has the following notable characteristics. First, the proposed method relaxes the strict binary label matrix into a slack variable matrix by introducing a nonnegative label relaxation matrix into LR, which provides more freedom to fit the labels and simultaneously enlarges the margins between different classes as much as possible. Second, the proposed method constructs the class compactness graph based on manifold learning and uses it as the regularization item to avoid the problem of overfitting. The class compactness graph is used to ensure that the samples sharing the same labels can be kept close after they are transformed. Two different algorithms, which are, respectively, based on -norm and -norm loss functions are devised. These two algorithms have compact closed-form solutions in each iteration so that they are easily implemented. Extensive experiments show that these two algorithms outperform the state-of-the-art algorithms in terms of the classification accuracy and running time.

  8. Reconstruction of Undersampled Big Dynamic MRI Data Using Non-Convex Low-Rank and Sparsity Constraints

    Directory of Open Access Journals (Sweden)

    Ryan Wen Liu

    2017-03-01

    Full Text Available Dynamic magnetic resonance imaging (MRI) has been extensively utilized for enhancing medical living environment visualization; however, in clinical practice it often suffers from long data acquisition times. Dynamic imaging essentially reconstructs the visual image from raw (k,t)-space measurements, commonly referred to as big data. The purpose of this work is to accelerate big medical data acquisition in dynamic MRI by developing a non-convex minimization framework. In particular, to overcome the inherent speed limitation, both non-convex low-rank and sparsity constraints were combined to accelerate the dynamic imaging. However, the non-convex constraints make the dynamic reconstruction problem difficult to directly solve through the commonly-used numerical methods. To guarantee solution efficiency and stability, a numerical algorithm based on the Alternating Direction Method of Multipliers (ADMM) is proposed to solve the resulting non-convex optimization problem. ADMM decomposes the original complex optimization problem into several simple sub-problems. Each sub-problem has a closed-form solution or could be efficiently solved using existing numerical methods. It has been proven that the quality of images reconstructed from fewer measurements can be significantly improved using non-convex minimization. Numerous experiments have been conducted on two in vivo cardiac datasets to compare the proposed method with several state-of-the-art imaging methods. Experimental results illustrated that the proposed method could guarantee the superior imaging performance in terms of quantitative and visual image quality assessments.

  9. EIT image reconstruction with four dimensional regularization.

    Science.gov (United States)

    Dai, Tao; Soleimani, Manuchehr; Adler, Andy

    2008-09-01

    Electrical impedance tomography (EIT) reconstructs internal impedance images of the body from electrical measurements on body surface. The temporal resolution of EIT data can be very high, although the spatial resolution of the images is relatively low. Most EIT reconstruction algorithms calculate images from data frames independently, although data are actually highly correlated especially in high speed EIT systems. This paper proposes a 4-D EIT image reconstruction for functional EIT. The new approach is developed to directly use prior models of the temporal correlations among images and 3-D spatial correlations among image elements. A fast algorithm is also developed to reconstruct the regularized images. Image reconstruction is posed in terms of an augmented image and measurement vector which are concatenated from a specific number of previous and future frames. The reconstruction is then based on an augmented regularization matrix which reflects the a priori constraints on temporal and 3-D spatial correlations of image elements. A temporal factor reflecting the relative strength of the image correlation is objectively calculated from measurement data. Results show that image reconstruction models which account for inter-element correlations, in both space and time, show improved resolution and noise performance, in comparison to simpler image models.
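
    The augmented formulation described above can be sketched by stacking several frames into one vector and regularizing with a prior that couples spatial and temporal smoothness. The Kronecker-product prior and the toy linearized forward map below are illustrative assumptions; the paper additionally estimates a temporal correlation factor from the measurement data.

      import numpy as np

      def augmented_reconstruction(J, data_frames, R_space, R_time, lam):
          """Stack T frames into one augmented system and regularize with a prior
          that couples spatial smoothness (R_space) and temporal smoothness (R_time)
          via a Kronecker product."""
          T = len(data_frames)
          n_meas, n_elem = J.shape
          J_aug = np.kron(np.eye(T), J)                    # block-diagonal Jacobian
          b_aug = np.concatenate(data_frames)
          R_aug = np.kron(R_time, np.eye(n_elem)) + np.kron(np.eye(T), R_space)
          x_aug = np.linalg.solve(J_aug.T @ J_aug + lam * R_aug, J_aug.T @ b_aug)
          return x_aug.reshape(T, n_elem)

      # toy example: random linearized forward map, 4 frames
      rng = np.random.default_rng(5)
      J = rng.standard_normal((30, 20))
      frames = [J @ rng.standard_normal(20) + 0.01 * rng.standard_normal(30) for _ in range(4)]
      R_space = np.eye(20)
      D = np.diff(np.eye(4), axis=0)
      R_time = D.T @ D                                     # penalize frame-to-frame changes
      imgs = augmented_reconstruction(J, frames, R_space, R_time, lam=0.1)
      print(imgs.shape)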

  10. From inactive to regular jogger

    DEFF Research Database (Denmark)

    Lund-Cramer, Pernille; Brinkmann Løite, Vibeke; Bredahl, Thomas Viskum Gjelstrup

    Title: From inactive to regular jogger - a qualitative study of achieved behavioral change among recreational joggers. Authors: Pernille Lund-Cramer & Vibeke Brinkmann Løite. Purpose: Despite extensive knowledge of barriers to physical activity, most interventions promoting physical activity have proven... A study was conducted using individual semi-structured interviews on how a successful long-term behavior change had been achieved. Ten informants were purposely selected from participants in the DANO-RUN research project (7 men, 3 women, average age 41.5). Interviews were performed on the basis of the Theory of Planned Behavior (TPB) and the Transtheoretical Model (TTM). Coding and analysis of interviews were performed using NVivo 10 software. Results (TPB): During the behavior change process, the intention to jog shifted from a focus on weight loss and improved fitness to both physical health, psychological...

  11. Tessellating the Sphere with Regular Polygons

    Science.gov (United States)

    Soto-Johnson, Hortensia; Bechthold, Dawn

    2004-01-01

    Tessellations in the Euclidean plane and regular polygons that tessellate the sphere are reviewed. The regular polygons that can possibly tessellate the sphere are spherical triangles, squares and pentagons.

  12. On the equivalence of different regularization methods

    International Nuclear Information System (INIS)

    Brzezowski, S.

    1985-01-01

    The R̂-operation preceded by the regularization procedure is discussed. Some arguments are given, according to which the results may depend on the method of regularization, introduced in order to avoid divergences in perturbation calculations. 10 refs. (author)

  13. The uniqueness of the regularization procedure

    International Nuclear Information System (INIS)

    Brzezowski, S.

    1981-01-01

    On the grounds of the BPHZ procedure, the criteria of correct regularization in perturbation calculations of QFT are given, together with the prescription for dividing the regularized formulas into the finite and infinite parts. (author)

  14. Manifold regularized multi-task feature selection for multi-modality classification in Alzheimer's disease.

    Science.gov (United States)

    Jie, Biao; Zhang, Daoqiang; Cheng, Bo; Shen, Dinggang

    2013-01-01

    Accurate diagnosis of Alzheimer's disease (AD), as well as its prodromal stage (i.e., mild cognitive impairment, MCI), is very important for possible delay and early treatment of the disease. Recently, multi-modality methods have been used for fusing information from multiple different and complementary imaging and non-imaging modalities. Although there are a number of existing multi-modality methods, few of them have addressed the problem of joint identification of disease-related brain regions from multi-modality data for classification. In this paper, we proposed a manifold regularized multi-task learning framework to jointly select features from multi-modality data. Specifically, we formulate the multi-modality classification as a multi-task learning framework, where each task focuses on the classification based on each modality. In order to capture the intrinsic relatedness among multiple tasks (i.e., modalities), we adopted a group sparsity regularizer, which ensures only a small number of features to be selected jointly. In addition, we introduced a new manifold based Laplacian regularization term to preserve the geometric distribution of original data from each task, which can lead to the selection of more discriminative features. Furthermore, we extend our method to the semi-supervised setting, which is very important since the acquisition of a large set of labeled data (i.e., diagnosis of disease) is usually expensive and time-consuming, while the collection of unlabeled data is relatively much easier. To validate our method, we have performed extensive evaluations on the baseline Magnetic resonance imaging (MRI) and fluorodeoxyglucose positron emission tomography (FDG-PET) data of Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Our experimental results demonstrate the effectiveness of the proposed method.

  15. The Regularity of Optimal Irrigation Patterns

    Science.gov (United States)

    Morel, Jean-Michel; Santambrogio, Filippo

    2010-02-01

    A branched structure is observable in draining and irrigation systems, in electric power supply systems, and in natural objects like blood vessels, the river basins or the trees. Recent approaches of these networks derive their branched structure from an energy functional whose essential feature is to favor wide routes. Given a flow s in a river, a road, a tube or a wire, the transportation cost per unit length is supposed in these models to be proportional to s^α with 0 < α < 1. Here the irrigated measure is the Lebesgue density on a smooth open set and the irrigating measure is a single source. In that case we prove that all branches of optimal irrigation trees satisfy an elliptic equation and that their curvature is a bounded measure. In consequence all branching points in the network have a tangent cone made of a finite number of segments, and all other points have a tangent. An explicit counterexample disproves these regularity properties for non-Lebesgue irrigated measures.

  16. Regular extensions of some classes of grammars

    NARCIS (Netherlands)

    Nijholt, Antinus

    Culik and Cohen introduced the class of LR-regular grammars, an extension of the LR(k) grammars. In this report we consider the analogous extension of the LL(k) grammars, called the LL-regular grammars. The relations of this class of grammars to other classes of grammars are shown. Every LL-regular

  17. Selecting protein families for environmental features based on manifold regularization.

    Science.gov (United States)

    Jiang, Xingpeng; Xu, Weiwei; Park, E K; Li, Guangrong

    2014-06-01

    Recently, statistics and machine learning methods have been developed to identify functional or taxonomic features associated with environmental features or physiological status. Proteins (or other functional and taxonomic entities) that are important to environmental features can potentially be used as biosensors. A major challenge is how the distribution of protein and gene functions embodies the adaptation of microbial communities across environments and host habitats. In this paper, we propose a novel regularization method for linear regression to address this challenge. The approach is inspired by locally linear embedding (LLE) and we call it a manifold-constrained regularization for linear regression (McRe). The novel regularization procedure also has potential to be used in solving other linear systems. We demonstrate the efficiency and the performance of the approach on both simulated and real data.
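
    A generic way to realize a manifold-style constraint in linear regression is to add a quadratic penalty built from a feature-similarity graph to an ordinary ridge problem. The sketch below is only in the spirit of the described approach; the Laplacian penalty form and the chain-graph example are assumptions, not the McRe construction itself.

      import numpy as np

      def manifold_ridge(X, y, L, lam_ridge, lam_manifold):
          """Linear regression with an extra manifold-style penalty b^T L b on the
          coefficients, where L is a graph Laplacian encoding feature similarity.
          Solves (X^T X + lam_ridge I + lam_manifold L) b = X^T y."""
          n_feat = X.shape[1]
          M = X.T @ X + lam_ridge * np.eye(n_feat) + lam_manifold * L
          return np.linalg.solve(M, X.T @ y)

      rng = np.random.default_rng(10)
      X = rng.standard_normal((80, 30))
      y = X[:, :3].sum(axis=1) + 0.1 * rng.standard_normal(80)
      # placeholder Laplacian: chain graph linking neighbouring features
      A_adj = np.diag(np.ones(29), 1) + np.diag(np.ones(29), -1)
      L = np.diag(A_adj.sum(axis=1)) - A_adj
      print(manifold_ridge(X, y, L, 0.1, 1.0)[:5])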

  18. Dimensional regularization in position space and a forest formula for regularized Epstein-Glaser renormalization

    International Nuclear Information System (INIS)

    Keller, Kai Johannes

    2010-04-01

    The present work contains a consistent formulation of the methods of dimensional regularization (DimReg) and minimal subtraction (MS) in Minkowski position space. The methods are implemented into the framework of perturbative Algebraic Quantum Field Theory (pAQFT). The developed methods are used to solve the Epstein-Glaser recursion for the construction of time-ordered products in all orders of causal perturbation theory. A solution is given in terms of a forest formula in the sense of Zimmermann. A relation to the alternative approach to renormalization theory using Hopf algebras is established. (orig.)

  19. Dimensional regularization in position space and a forest formula for regularized Epstein-Glaser renormalization

    Energy Technology Data Exchange (ETDEWEB)

    Keller, Kai Johannes

    2010-04-15

    The present work contains a consistent formulation of the methods of dimensional regularization (DimReg) and minimal subtraction (MS) in Minkowski position space. The methods are implemented into the framework of perturbative Algebraic Quantum Field Theory (pAQFT). The developed methods are used to solve the Epstein-Glaser recursion for the construction of time-ordered products in all orders of causal perturbation theory. A solution is given in terms of a forest formula in the sense of Zimmermann. A relation to the alternative approach to renormalization theory using Hopf algebras is established. (orig.)

  20. Class of regular bouncing cosmologies

    Science.gov (United States)

    Vasilić, Milovan

    2017-06-01

    In this paper, I construct a class of everywhere regular geometric sigma models that possess bouncing solutions. Precisely, I show that every bouncing metric can be made a solution of such a model. My previous attempt to do so by employing one scalar field has failed due to the appearance of harmful singularities near the bounce. In this work, I use four scalar fields to construct a class of geometric sigma models which are free of singularities. The models within the class are parametrized by their background geometries. I prove that, whatever background is chosen, the dynamics of its small perturbations is classically stable on the whole time axis. Contrary to what one expects from the structure of the initial Lagrangian, the physics of background fluctuations is found to carry two tensor, two vector, and two scalar degrees of freedom. The graviton mass, which naturally appears in these models, is shown to be several orders of magnitude smaller than its experimental bound. I provide three simple examples to demonstrate how this is done in practice. In particular, I show that graviton mass can be made arbitrarily small.

  1. X-ray computed tomography using curvelet sparse regularization.

    Science.gov (United States)

    Wieczorek, Matthias; Frikel, Jürgen; Vogel, Jakob; Eggl, Elena; Kopp, Felix; Noël, Peter B; Pfeiffer, Franz; Demaret, Laurent; Lasser, Tobias

    2015-04-01

    Reconstruction of x-ray computed tomography (CT) data remains a mathematically challenging problem in medical imaging. Complementing the standard analytical reconstruction methods, sparse regularization is growing in importance, as it allows inclusion of prior knowledge. The paper presents a method for sparse regularization based on the curvelet frame for the application to iterative reconstruction in x-ray computed tomography. In this work, the authors present an iterative reconstruction approach based on the alternating direction method of multipliers using curvelet sparse regularization. Evaluation of the method is performed on a specifically crafted numerical phantom dataset to highlight the method's strengths. Additional evaluation is performed on two real datasets from commercial scanners with different noise characteristics, a clinical bone sample acquired in a micro-CT and a human abdomen scanned in a diagnostic CT. The results clearly illustrate that curvelet sparse regularization has characteristic strengths. In particular, it improves the restoration and resolution of highly directional, high contrast features with smooth contrast variations. The authors also compare this approach to the popular technique of total variation and to traditional filtered backprojection. The authors conclude that curvelet sparse regularization is able to improve reconstruction quality by reducing noise while preserving highly directional features.
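
    The essence of sparse regularization in a fixed frame can be sketched with iterative soft thresholding on a generic orthonormal transform standing in for the curvelet frame (which requires a dedicated library); the paper itself uses ADMM. Everything below, from the random forward operator to the stand-in transform Psi, is an illustrative assumption.

      import numpy as np

      def ista_sparse_recon(A, b, Psi, lam, n_iter=200):
          """Iterative soft thresholding for argmin 0.5||Ax - b||^2 + lam ||Psi x||_1,
          with Psi assumed orthonormal."""
          step = 1.0 / np.linalg.norm(A, 2) ** 2
          x = np.zeros(A.shape[1])
          for _ in range(n_iter):
              x = x - step * A.T @ (A @ x - b)             # gradient step on the data term
              c = Psi @ x                                  # move to the sparse domain
              c = np.sign(c) * np.maximum(np.abs(c) - step * lam, 0.0)
              x = Psi.T @ c                                # back to image space
          return x

      rng = np.random.default_rng(6)
      n = 64
      Psi = np.linalg.qr(rng.standard_normal((n, n)))[0]   # stand-in orthonormal transform
      x_true = Psi.T @ (rng.standard_normal(n) * (rng.random(n) < 0.1))
      A = rng.standard_normal((40, n))
      b = A @ x_true + 0.01 * rng.standard_normal(40)
      print(np.linalg.norm(ista_sparse_recon(A, b, Psi, lam=0.05) - x_true))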

  2. Regular and stochastic particle motion in plasma dynamics

    International Nuclear Information System (INIS)

    Kaufman, A.N.

    1979-08-01

    A Hamiltonian formalism is presented for the study of charged-particle trajectories in the self-consistent field of the particles. The intention is to develop a general approach to plasma dynamics. Transformations of phase-space variables are used to separate out the regular, adiabatic motion from the irregular, stochastic trajectories. Several new techniques are included in this presentation

  3. Regularized Adaptive Notch Filters for Acoustic Howling Suppression

    DEFF Research Database (Denmark)

    Gil-Cacho, Pepe; van Waterschoot, Toon; Moonen, Marc

    2009-01-01

    In this paper, a method for the suppression of acoustic howling is developed, based on adaptive notch filters (ANF) with regularization (RANF). The method features three RANFs working in parallel to achieve frequency tracking, howling detection and suppression. The ANF-based approach to howling...

  4. An Adaptive Neural Mechanism for Acoustic Motion Perception with Varying Sparsity.

    Science.gov (United States)

    Shaikh, Danish; Manoonpong, Poramate

    2017-01-01

    Biological motion-sensitive neural circuits are quite adept in perceiving the relative motion of a relevant stimulus. Motion perception is a fundamental ability in neural sensory processing and crucial in target tracking tasks. Tracking a stimulus entails the ability to perceive its motion, i.e., extracting information about its direction and velocity. Here we focus on auditory motion perception of sound stimuli, which is poorly understood as compared to its visual counterpart. In earlier work we have developed a bio-inspired neural learning mechanism for acoustic motion perception. The mechanism extracts directional information via a model of the peripheral auditory system of lizards. The mechanism uses only this directional information obtained via specific motor behaviour to learn the angular velocity of unoccluded sound stimuli in motion. In nature however the stimulus being tracked may be occluded by artefacts in the environment, such as an escaping prey momentarily disappearing behind a cover of trees. This article extends the earlier work by presenting a comparative investigation of auditory motion perception for unoccluded and occluded tonal sound stimuli with a frequency of 2.2 kHz in both simulation and practice. Three instances of each stimulus are employed, differing in their movement velocities: 0.5°/time step, 1.0°/time step and 1.5°/time step. To validate the approach in practice, we implement the proposed neural mechanism on a wheeled mobile robot and evaluate its performance in auditory tracking.

  5. l0 regularization based on a prior image incorporated non-local means for limited-angle X-ray CT reconstruction.

    Science.gov (United States)

    Zhang, Lingli; Zeng, Li; Guo, Yumeng

    2018-03-15

    Restricted by the scanning environment in some CT imaging modalities, the acquired projection data are usually incomplete, which may lead to a limited-angle reconstruction problem. Thus, image quality usually suffers from slope artifacts. The objective of this study is first to investigate the distorted domains of the reconstructed images which encounter the slope artifacts and then to present a new iterative reconstruction method to address the limited-angle X-ray CT reconstruction problem. The framework of the new method exploits the structural similarity between the prior image and the reconstructed image, aiming to compensate for the distorted edges. Specifically, the new method utilizes l0 regularization and wavelet tight framelets to suppress the slope artifacts and pursue the sparsity. The new method includes the following four steps: (1) address the data fidelity using SART; (2) compensate for the slope artifacts due to the missed projection data using the prior image and modified nonlocal means (PNLM); (3) utilize l0 regularization to suppress the slope artifacts and pursue the sparsity of wavelet coefficients of the transformed image by using iterative hard thresholding (l0W); and (4) apply an inverse wavelet transform to reconstruct the image. In summary, this method is referred to as "l0W-PNLM". Numerical implementations showed that the presented l0W-PNLM was superior in suppressing the slope artifacts while preserving the edges of some features, as compared to commercial and other popular investigative algorithms. When the image to be reconstructed is inconsistent with the prior image, the new method can avoid or minimize the distorted edges in the reconstructed images. Quantitative assessments also showed that the new method obtained the highest image quality compared to the existing algorithms. This study demonstrated that the presented l0W-PNLM yielded higher image quality due to a number of unique characteristics, which include that (1) it utilizes
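
    Step (3) above, the l0-style hard thresholding of transform coefficients, can be sketched as follows. An orthonormal random transform stands in for the wavelet tight frame, and the threshold value is arbitrary; this is not the authors' implementation.

      import numpy as np

      def hard_threshold_step(image, W, thr):
          """One l0-style hard thresholding pass: transform, zero out small
          coefficients, transform back (W assumed orthonormal here for simplicity)."""
          coeffs = W @ image.ravel()
          coeffs[np.abs(coeffs) < thr] = 0.0               # hard threshold (l0 surrogate)
          return (W.T @ coeffs).reshape(image.shape)

      rng = np.random.default_rng(7)
      img = rng.standard_normal((16, 16))
      W = np.linalg.qr(rng.standard_normal((256, 256)))[0]
      print(hard_threshold_step(img, W, thr=1.0).shape)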

  6. Electron paramagnetic resonance image reconstruction with total variation and curvelets regularization

    Science.gov (United States)

    Durand, Sylvain; Frapart, Yves-Michel; Kerebel, Maud

    2017-11-01

    Spatial electron paramagnetic resonance imaging (EPRI) is a recent method to localize and characterize free radicals in vivo or in vitro, leading to applications in material and biomedical sciences. To improve the quality of the reconstruction obtained by EPRI, a variational method is proposed to invert the image formation model. It is based on a least-square data-fidelity term and the total variation and Besov seminorm for the regularization term. To fully comprehend the Besov seminorm, an implementation using the curvelet transform and the L1 norm enforcing the sparsity is proposed. It allows our model to reconstruct both images where acquisition information is missing and images with details in textured areas, thus opening possibilities to reduce acquisition times. To implement the minimization problem using the algorithm developed by Chambolle and Pock, a thorough analysis of the direct model is undertaken and the latter is inverted while avoiding the use of filtered backprojection (FBP) and of non-uniform Fourier transform. Numerical experiments are carried out on simulated data, where the proposed model outperforms both visually and quantitatively the classical model using deconvolution and FBP. Improved reconstructions on real data, acquired on an irradiated distal phalanx, were successfully obtained.

  7. EEG/MEG Source Reconstruction with Spatial-Temporal Two-Way Regularized Regression

    KAUST Repository

    Tian, Tian Siva

    2013-07-11

    In this work, we propose a spatial-temporal two-way regularized regression method for reconstructing neural source signals from EEG/MEG time course measurements. The proposed method estimates the dipole locations and amplitudes simultaneously through minimizing a single penalized least squares criterion. The novelty of our methodology is the simultaneous consideration of three desirable properties of the reconstructed source signals, that is, spatial focality, spatial smoothness, and temporal smoothness. The desirable properties are achieved by using three separate penalty functions in the penalized regression framework. Specifically, we impose a roughness penalty in the temporal domain for temporal smoothness, and a sparsity-inducing penalty and a graph Laplacian penalty in the spatial domain for spatial focality and smoothness. We develop a computationally efficient multilevel block coordinate descent algorithm to implement the method. Using a simulation study with several settings of different spatial complexity and two real MEG examples, we show that the proposed method outperforms existing methods that use only a subset of the three penalty functions. © 2013 Springer Science+Business Media New York.
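
    Schematically, a single penalized least-squares criterion of the kind described above can be written as follows, where Y collects the sensor time courses, X is the lead-field matrix, S holds the source amplitude time courses (row s_i for source i), L is a spatial graph Laplacian, D is a temporal difference operator, and lambda_1, lambda_2, lambda_3 are tuning parameters; the notation is illustrative, not the authors' own:

```latex
\min_{S}\; \| Y - X S \|_F^2
  \;+\; \lambda_1 \sum_{i} \| s_i \|_2                               % sparsity-inducing penalty (spatial focality)
  \;+\; \lambda_2 \,\operatorname{tr}\!\left( S^{\top} L S \right)   % graph Laplacian penalty (spatial smoothness)
  \;+\; \lambda_3 \,\| S D^{\top} \|_F^2                             % roughness penalty (temporal smoothness)
```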

  8. "Plug-and-play" edge-preserving regularization

    DEFF Research Database (Denmark)

    Chen, Donghui; Kilmer, Misha E.; Hansen, Per Christian

    2014-01-01

    In many inverse problems it is essential to use regularization methods that preserve edges in the reconstructions, and many reconstruction models have been developed for this task, such as the Total Variation (TV) approach. The associated algorithms are complex and require a good knowledge of large...... cosine transform, hence the term "plug-and-play". We do not attempt to improve on TV reconstructions, but rather provide an easy-to-use approach to computing reconstructions with similar properties....

  9. Sparse reconstruction by means of the standard Tikhonov regularization

    International Nuclear Information System (INIS)

    Lu Shuai; Pereverzev, Sergei V

    2008-01-01

    It is a common belief that the Tikhonov scheme with an L2-penalty fails in sparse reconstruction. We are going to show, however, that this standard regularization can help if the stability, measured in the L1-norm, is properly taken into account in the choice of the regularization parameter. The crucial point is that now a stability bound may depend on the bases with respect to which the solution of the problem is assumed to be sparse. We discuss how such a stability can be estimated numerically and present the results of computational experiments giving evidence of the reliability of our approach.

  10. Manifold Regularized Experimental Design for Active Learning.

    Science.gov (United States)

    Zhang, Lining; Shum, Hubert P H; Shao, Ling

    2016-12-02

    Various machine learning and data mining tasks in classification require abundant data samples to be labeled for training. Conventional active learning methods aim to label the most informative samples in order to reduce the labeling effort of the user. Many previous studies in active learning select one sample after another in a greedy manner. However, this is not very effective because the classification model has to be retrained for each newly labeled sample. Moreover, many popular active learning approaches utilize the most uncertain samples by leveraging the classification hyperplane of the classifier, which is not appropriate since the classification hyperplane is inaccurate when the training data are small-sized. The problem of insufficient training data in real-world systems limits the potential applications of these approaches. This paper presents a novel method of active learning called manifold regularized experimental design (MRED), which can label multiple informative samples at one time for training. In addition, MRED gives an explicit geometric explanation of the selected samples to be labeled by the user. Different from existing active learning methods, our method avoids the intrinsic problems caused by insufficiently labeled samples in real-world applications. Various experiments on synthetic datasets, the Yale face database and the Corel image database have been carried out to show that MRED outperforms existing methods.

  11. Regularities development of entrepreneurial structures in regions

    Directory of Open Access Journals (Sweden)

    Julia Semenovna Pinkovetskaya

    2012-12-01

    Full Text Available We consider regularities and tendencies for three types of entrepreneurial structures: small enterprises, medium enterprises, and individual entrepreneurs. The aim of the research was to confirm the possibility of describing indicators of aggregate entrepreneurial structures with normal distribution functions. The author's methodological approach is presented, together with the results of constructing density distribution functions for the main indicators of various objects: the Russian Federation, regions, and aggregates of entrepreneurial structures specialized in certain forms of economic activity. All the developed functions, as shown by logical and statistical analysis, are of high quality and approximate the original data well. In general, the proposed methodological approach is versatile and can be used in further studies of aggregates of entrepreneurial structures. The results can be applied to a wide range of problems, such as justifying the need for personnel and financial resources at the federal, regional and municipal levels, as well as forming plans and forecasts for the development of entrepreneurship and the improvement of this sector of the economy.

  12. An adaptive regularization parameter choice strategy for multispectral bioluminescence tomography

    Energy Technology Data Exchange (ETDEWEB)

    Feng Jinchao; Qin Chenghu; Jia Kebin; Han Dong; Liu Kai; Zhu Shouping; Yang Xin; Tian Jie [Medical Image Processing Group, Institute of Automation, Chinese Academy of Sciences, P. O. Box 2728, Beijing 100190 (China); College of Electronic Information and Control Engineering, Beijing University of Technology, Beijing 100124 (China); Medical Image Processing Group, Institute of Automation, Chinese Academy of Sciences, P. O. Box 2728, Beijing 100190 (China); Medical Image Processing Group, Institute of Automation, Chinese Academy of Sciences, P. O. Box 2728, Beijing 100190 (China) and School of Life Sciences and Technology, Xidian University, Xi'an 710071 (China)]

    2011-11-15

    Purpose: Bioluminescence tomography (BLT) provides an effective tool for monitoring physiological and pathological activities in vivo. However, the measured data in bioluminescence imaging are corrupted by noise. Therefore, regularization methods are commonly used to find a regularized solution. Nevertheless, for the quality of the reconstructed bioluminescent source obtained by regularization methods, the choice of the regularization parameters is crucial. To date, the selection of regularization parameters remains challenging. With regard to the above problems, the authors proposed a BLT reconstruction algorithm with an adaptive parameter choice rule. Methods: The proposed reconstruction algorithm uses a diffusion equation for modeling the bioluminescent photon transport. The diffusion equation is solved with a finite element method. Computed tomography (CT) images provide anatomical information regarding the geometry of the small animal and its internal organs. To reduce the ill-posedness of BLT, spectral information and the optimal permissible source region are employed. Then, the relationship between the unknown source distribution and multiview and multispectral boundary measurements is established based on the finite element method and the optimal permissible source region. Since the measured data are noisy, the BLT reconstruction is formulated with an l2 data-fidelity term and a general regularization term. When choosing the regularization parameters for BLT, an efficient model function approach is proposed, which does not require knowledge of the noise level. This approach requires only the computation of the residual norm and the regularized-solution norm. With this knowledge, we construct the model function to approximate the objective function, and the regularization parameter is updated iteratively. Results: First, the micro-CT based mouse phantom was used for simulation verification. Simulation experiments were used to illustrate why multispectral data were used

  13. Adaptive regularization of noisy linear inverse problems

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Madsen, Kristoffer Hougaard; Lehn-Schiøler, Tue

    2006-01-01

    In the Bayesian modeling framework there is a close relation between regularization and the prior distribution over parameters. For prior distributions in the exponential family, we show that the optimal hyper-parameter, i.e., the optimal strength of regularization, satisfies a simple relation: the expectation of the regularization function takes the same value under the posterior and the prior distribution. We present three examples: two simulations, and an application in fMRI neuroimaging....

  14. Higher derivative regularization and chiral anomaly

    International Nuclear Information System (INIS)

    Nagahama, Yoshinori.

    1985-02-01

    A higher derivative regularization which automatically leads to the consistent chiral anomaly is analyzed in detail. It explicitly breaks all the local gauge symmetry but preserves global chiral symmetry and leads to the chirally symmetric consistent anomaly. This regularization thus clarifies the physics content contained in the consistent anomaly. We also briefly comment on the application of this higher derivative regularization to massless QED. (author)

  15. Regularity effect in prospective memory during aging

    Directory of Open Access Journals (Sweden)

    Geoffrey Blondelle

    2016-10-01

    Full Text Available Background: Regularity effect can affect performance in prospective memory (PM), but little is known about the cognitive processes linked to this effect. Moreover, its impact with regard to aging remains unknown. To our knowledge, this study is the first to examine the regularity effect in PM in a lifespan perspective, with a sample of young, intermediate, and older adults. Objective and design: Our study examined the regularity effect in PM in three groups of participants: 28 young adults (18–30), 16 intermediate adults (40–55), and 25 older adults (65–80). The task, adapted from the Virtual Week, was designed to manipulate the regularity of the various activities of daily life that were to be recalled (regular repeated activities vs. irregular non-repeated activities). We examined the role of several cognitive functions, including certain dimensions of executive functions (planning, inhibition, shifting), and binding, short-term memory, and retrospective episodic memory, to identify those involved in PM according to regularity and age. Results: A mixed-design ANOVA showed a main effect of task regularity and an interaction between age and regularity: an age-related difference in PM performance was found for irregular activities (older < young), but not for regular activities. All participants recalled more regular activities than irregular ones, with no age effect. It appeared that recalling regular activities only involved planning for both intermediate and older adults, while recalling irregular ones was linked to planning, inhibition, short-term memory, binding, and retrospective episodic memory. Conclusion: Taken together, our data suggest that planning capacities seem to play a major role in remembering to perform intended actions with advancing age. Furthermore, the age-PM-paradox may be attenuated when the experimental design is adapted by implementing a familiar context through the use of activities of daily living. The clinical

  16. Regularity effect in prospective memory during aging

    OpenAIRE

    Blondelle, Geoffrey; Hainselin, Mathieu; Gounden, Yannick; Heurley, Laurent; Voisin, Hélène; Megalakaki, Olga; Bressous, Estelle; Quaglino, Véronique

    2016-01-01

    Background: Regularity effect can affect performance in prospective memory (PM), but little is known on the cognitive processes linked to this effect. Moreover, its impacts with regard to aging remain unknown. To our knowledge, this study is the first to examine regularity effect in PM in a lifespan perspective, with a sample of young, intermediate, and older adults.Objective and design: Our study examined the regularity effect in PM in three groups of participants: 28 young adults (18–30), 1...

  17. Gamma regularization based reconstruction for low dose CT

    International Nuclear Information System (INIS)

    Zhang, Junfeng; Chen, Yang; Hu, Yining; Luo, Limin; Shu, Huazhong; Li, Bicao; Liu, Jin; Coatrieux, Jean-Louis

    2015-01-01

    Reducing the radiation dose in computerized tomography is today a major concern in radiology. Low dose computerized tomography (LDCT) offers a sound way to deal with this problem. However, more severe noise in the reconstructed CT images is observed under low dose scan protocols (e.g. lowered tube current or voltage values). In this paper we propose a Gamma regularization based algorithm for LDCT image reconstruction. This solution is flexible and provides a good balance between the regularizations based on the l0-norm and the l1-norm. We evaluate the proposed approach using the projection data from simulated phantoms and scanned Catphan phantoms. Qualitative and quantitative results show that the Gamma regularization based reconstruction performs better in both edge preservation and noise suppression when compared with other norms. (paper)

  18. Iterative Regularization with Minimum-Residual Methods

    DEFF Research Database (Denmark)

    Jensen, Toke Koldborg; Hansen, Per Christian

    2007-01-01

    We study the regularization properties of iterative minimum-residual methods applied to discrete ill-posed problems. In these methods, the projection onto the underlying Krylov subspace acts as a regularizer, and the emphasis of this work is on the role played by the basis vectors of these Krylov subspaces. We provide a combination of theory and numerical examples, and our analysis confirms the experience that MINRES and MR-II can work as general regularization methods. We also demonstrate theoretically and experimentally that the same is not true, in general, for GMRES and RRGMRES; their success as regularization methods is highly problem dependent.

  19. Iterative regularization with minimum-residual methods

    DEFF Research Database (Denmark)

    Jensen, Toke Koldborg; Hansen, Per Christian

    2006-01-01

    We study the regularization properties of iterative minimum-residual methods applied to discrete ill-posed problems. In these methods, the projection onto the underlying Krylov subspace acts as a regularizer, and the emphasis of this work is on the role played by the basis vectors of these Krylov subspaces. We provide a combination of theory and numerical examples, and our analysis confirms the experience that MINRES and MR-II can work as general regularization methods. We also demonstrate theoretically and experimentally that the same is not true, in general, for GMRES and RRGMRES; their success as regularization methods is highly problem dependent.

  20. SPARK: Sparsity-based analysis of reliable k-hubness and overlapping network structure in brain functional connectivity.

    Science.gov (United States)

    Lee, Kangjoo; Lina, Jean-Marc; Gotman, Jean; Grova, Christophe

    2016-07-01

    Functional hubs are defined as the specific brain regions with dense connections to other regions in a functional brain network. Among them, connector hubs are of great interest, as they are assumed to promote global and hierarchical communication between functionally specialized networks. Damage to connector hubs may have a more crucial effect on the system than does damage to other hubs. Hubs in graph theory are often identified from a correlation matrix, and classified as connector hubs when the hubs are more connected to regions in other networks than within the networks to which they belong. However, the identification of hubs from functional data is more complex than that from structural data, notably because of the inherent problem of multicollinearity between temporal dynamics within a functional network. In this context, we developed and validated a method to reliably identify connectors and the corresponding overlapping network structure from resting-state fMRI. This new method effectively handles the multicollinearity issue, since it does not rely on counting the number of connections from a thresholded correlation matrix. The novelty of the proposed method is that, besides counting the number of networks involved in each voxel, it allows us to identify which networks are actually involved in each voxel, using a data-driven sparse general linear model in order to identify brain regions involved in more than one network. Moreover, we added a bootstrap resampling strategy to assess statistically the reproducibility of our results at the single-subject level. The unified framework is called SPARK, i.e. SParsity-based Analysis of Reliable k-hubness, where k-hubness denotes the number of networks overlapping in each voxel. The accuracy and robustness of SPARK were evaluated using two-dimensional box simulations and realistic simulations that examined detection of artificial hubs generated on real data. Then, test/retest reliability of the method was assessed

  1. Sparsity Invariant CNNs

    OpenAIRE

    Uhrig, Jonas; Schneider, Nick; Schneider, Lukas; Franke, Uwe; Brox, Thomas; Geiger, Andreas

    2017-01-01

    In this paper, we consider convolutional neural networks operating on sparse inputs with an application to depth upsampling from sparse laser scan data. First, we show that traditional convolutional networks perform poorly when applied to sparse data even when the location of missing data is provided to the network. To overcome this problem, we propose a simple yet effective sparse convolution layer which explicitly considers the location of missing data during the convolution operation. We d...
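
    A common way to make a convolution aware of missing inputs, in the spirit described above, is to normalize the filter response by a convolved validity mask; the numpy sketch below illustrates this idea for a single channel and is not the authors' network layer (kernel, image size, and sparsity level are arbitrary assumptions).

```python
# Illustrative single-channel "sparsity-aware" convolution: the filter response is
# normalized by the convolved validity mask, so missing pixels do not dilute the
# output. A sketch of the general idea only, not the authors' exact layer.
import numpy as np
from scipy.signal import convolve2d

def masked_conv2d(x, mask, kernel, eps=1e-8):
    """x: input with zeros at missing locations; mask: 1 where x is observed."""
    num = convolve2d(x * mask, kernel, mode="same")   # weighted sum of observed values
    den = convolve2d(mask, kernel, mode="same")       # total weight of observed values
    out = num / (den + eps)                           # normalize by observed weight
    out_mask = (den > 0).astype(x.dtype)              # defined wherever any input was seen
    return out, out_mask

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    dense = rng.normal(size=(64, 64))
    mask = (rng.random((64, 64)) < 0.05).astype(float)   # ~5% of pixels observed
    kernel = np.ones((5, 5)) / 25.0
    upsampled, valid = masked_conv2d(dense * mask, mask, kernel)
    print(upsampled.shape, valid.mean())
```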

  2. SART-Type Half-Threshold Filtering Approach for CT Reconstruction.

    Science.gov (United States)

    Yu, Hengyong; Wang, Ge

    2014-01-01

    The l1 regularization problem has been widely used to solve sparsity-constrained problems. To enhance the sparsity constraint for better imaging performance, a promising direction is to use the lp norm (0 < p < 1) and solve the lp minimization problem. Very recently, Xu et al. developed an analytic solution for the l1/2 regularization via an iterative thresholding operation, which is also referred to as half-threshold filtering. In this paper, we design a simultaneous algebraic reconstruction technique (SART)-type half-threshold filtering framework to solve the computed tomography (CT) reconstruction problem. In the medical imaging field, the discrete gradient transform (DGT) is widely used to define the sparsity. However, the DGT is noninvertible and it cannot be applied to half-threshold filtering for CT reconstruction. To demonstrate the utility of the proposed SART-type half-threshold filtering framework, an emphasis of this paper is to construct a pseudoinverse transform for the DGT. The proposed algorithms are evaluated with numerical and physical phantom data sets. Our results show that the SART-type half-threshold filtering algorithms have great potential to improve the reconstructed image quality from few and noisy projections. They are complementary to the counterparts of the state-of-the-art soft-threshold filtering and hard-threshold filtering.
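
    The half-thresholding operator referred to above has a closed form; the version below is recalled from the l1/2-regularization literature and is given purely as an illustration (the threshold constant and trigonometric form should be checked against Xu et al. before use). In a SART-type scheme it would be applied to a sparsifying transform of the image after each SART update.

```python
# Sketch of the component-wise half-thresholding operator associated with l1/2
# regularization, as commonly stated in the literature; recalled from memory,
# for illustration only.
import numpy as np

def half_threshold(t, lam):
    """Apply the l1/2 half-thresholding operator element-wise to array t."""
    t = np.asarray(t, dtype=float)
    thresh = (54.0 ** (1.0 / 3.0)) / 4.0 * lam ** (2.0 / 3.0)
    out = np.zeros_like(t)
    keep = np.abs(t) > thresh
    phi = np.arccos(lam / 8.0 * (np.abs(t[keep]) / 3.0) ** -1.5)
    out[keep] = (2.0 / 3.0) * t[keep] * (1.0 + np.cos(2.0 * np.pi / 3.0 - 2.0 * phi / 3.0))
    return out

if __name__ == "__main__":
    coeffs = np.array([-2.0, -0.5, 0.0, 0.3, 1.5, 4.0])
    print(half_threshold(coeffs, lam=1.0))   # small entries are zeroed, large ones shrunk
```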

  3. A regularized stationary mean-field game

    KAUST Repository

    Yang, Xianjin

    2016-01-01

    In the thesis, we discuss the existence and numerical approximations of solutions of a regularized mean-field game with a low-order regularization. In the first part, we prove a priori estimates and use the continuation method to obtain the existence of a solution with a positive density. Finally, we introduce the monotone flow method and solve the system numerically.

  4. A regularized stationary mean-field game

    KAUST Repository

    Yang, Xianjin

    2016-04-19

    In the thesis, we discuss the existence and numerical approximations of solutions of a regularized mean-field game with a low-order regularization. In the first part, we prove a priori estimates and use the continuation method to obtain the existence of a solution with a positive density. Finally, we introduce the monotone flow method and solve the system numerically.

  5. On infinite regular and chiral maps

    OpenAIRE

    Arredondo, John A.; Valdez, Camilo Ramírez y Ferrán

    2015-01-01

    We prove that infinite regular and chiral maps take place on surfaces with at most one end. Moreover, we prove that an infinite regular or chiral map on an orientable surface with genus can only be realized on the Loch Ness monster, that is, the topological surface of infinite genus with one end.

  6. From recreational to regular drug use

    DEFF Research Database (Denmark)

    Järvinen, Margaretha; Ravn, Signe

    2011-01-01

    This article analyses the process of going from recreational use to regular and problematic use of illegal drugs. We present a model containing six career contingencies relevant for young people’s progress from recreational to regular drug use: the closing of social networks, changes in forms...

  7. Automating InDesign with Regular Expressions

    CERN Document Server

    Kahrel, Peter

    2006-01-01

    If you need to make automated changes to InDesign documents beyond what basic search and replace can handle, you need regular expressions, and a bit of scripting to make them work. This Short Cut explains both how to write regular expressions, so you can find and replace the right things, and how to use them in InDesign specifically.

  8. 29 CFR 779.18 - Regular rate.

    Science.gov (United States)

    2010-07-01

    ... employee under subsection (a) or in excess of the employee's normal working hours or regular working hours... Relating to Labor (Continued) WAGE AND HOUR DIVISION, DEPARTMENT OF LABOR STATEMENTS OF GENERAL POLICY OR... not less than one and one-half times their regular rates of pay. Section 7(e) of the Act defines...

  9. Regularization of the Boundary-Saddle-Node Bifurcation

    Directory of Open Access Journals (Sweden)

    Xia Liu

    2018-01-01

    Full Text Available In this paper we treat a particular class of planar Filippov systems which consist of two smooth systems that are separated by a discontinuity boundary. In such systems one vector field undergoes a saddle-node bifurcation while the other vector field is transversal to the boundary. The boundary-saddle-node (BSN bifurcation occurs at a critical value when the saddle-node point is located on the discontinuity boundary. We derive a local topological normal form for the BSN bifurcation and study its local dynamics by applying the classical Filippov’s convex method and a novel regularization approach. In fact, by the regularization approach a given Filippov system is approximated by a piecewise-smooth continuous system. Moreover, the regularization process produces a singular perturbation problem where the original discontinuous set becomes a center manifold. Thus, the regularization enables us to make use of the established theories for continuous systems and slow-fast systems to study the local behavior around the BSN bifurcation.

  10. Direction-of-arrival estimation for co-located multiple-input multiple-output radar using structural sparsity Bayesian learning

    International Nuclear Information System (INIS)

    Wen Fang-Qing; Zhang Gong; Ben De

    2015-01-01

    This paper addresses the direction of arrival (DOA) estimation problem for the co-located multiple-input multiple-output (MIMO) radar with random arrays. The spatially distributed sparsity of the targets in the background makes compressive sensing (CS) desirable for DOA estimation. A spatial CS framework is presented, which links the DOA estimation problem to support recovery from a known over-complete dictionary. A modified statistical model is developed to accurately represent the intra-block correlation of the received signal. A structural sparsity Bayesian learning algorithm is proposed for the sparse recovery problem. The proposed algorithm, which exploits intra-signal correlation, can be applied to scenarios with limited data support and low signal-to-noise ratio (SNR). Furthermore, the proposed algorithm has a lower computational load than the classical Bayesian algorithm. Simulation results show that the proposed algorithm achieves more accurate DOA estimation than the traditional multiple signal classification (MUSIC) algorithm and other CS recovery algorithms. (paper)
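
    The spatial CS framework mentioned above can be illustrated generically: build an over-complete steering dictionary on an angle grid and recover a sparse support from the array snapshot. The sketch below uses a plain orthogonal matching pursuit with synthetic parameters (array size, spacing, angles); it is not the structural sparsity Bayesian learning algorithm of the paper.

```python
# Generic compressive-sensing DOA sketch: over-complete steering dictionary on an
# angle grid plus a simple orthogonal matching pursuit. Illustration only.
import numpy as np

def steering_dict(n_sensors, grid_deg, spacing=0.5):
    angles = np.deg2rad(grid_deg)
    idx = np.arange(n_sensors)[:, None]
    return np.exp(2j * np.pi * spacing * idx * np.sin(angles)[None, :])

def omp(A, y, n_nonzero):
    residual, support = y.copy(), []
    for _ in range(n_nonzero):
        support.append(int(np.argmax(np.abs(A.conj().T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    return support

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    grid = np.arange(-60, 60.5, 0.5)                      # angle grid in degrees
    A = steering_dict(n_sensors=16, grid_deg=grid)
    true_idx = [int(np.argmin(np.abs(grid - a))) for a in (-20.0, 35.0)]
    y = A[:, true_idx] @ np.array([1.0, 0.8])
    y += 0.05 * (rng.normal(size=16) + 1j * rng.normal(size=16))
    support = omp(A, y, n_nonzero=2)
    print("estimated DOAs (deg):", sorted(grid[support]))
```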

  11. Direction-of-arrival estimation for co-located multiple-input multiple-output radar using structural sparsity Bayesian learning

    Science.gov (United States)

    Wen, Fang-Qing; Zhang, Gong; Ben, De

    2015-11-01

    This paper addresses the direction of arrival (DOA) estimation problem for the co-located multiple-input multiple-output (MIMO) radar with random arrays. The spatially distributed sparsity of the targets in the background makes compressive sensing (CS) desirable for DOA estimation. A spatial CS framework is presented, which links the DOA estimation problem to support recovery from a known over-complete dictionary. A modified statistical model is developed to accurately represent the intra-block correlation of the received signal. A structural sparsity Bayesian learning algorithm is proposed for the sparse recovery problem. The proposed algorithm, which exploits intra-signal correlation, can be applied to scenarios with limited data support and low signal-to-noise ratio (SNR). Furthermore, the proposed algorithm has a lower computational load than the classical Bayesian algorithm. Simulation results show that the proposed algorithm achieves more accurate DOA estimation than the traditional multiple signal classification (MUSIC) algorithm and other CS recovery algorithms. Project supported by the National Natural Science Foundation of China (Grant Nos. 61071163, 61271327, and 61471191), the Funding for Outstanding Doctoral Dissertation in Nanjing University of Aeronautics and Astronautics, China (Grant No. BCXJ14-08), the Funding of Innovation Program for Graduate Education of Jiangsu Province, China (Grant No. KYLX 0277), the Fundamental Research Funds for the Central Universities, China (Grant No. 3082015NP2015504), and the Priority Academic Program Development of Jiangsu Higher Education Institutions (PADA), China.

  12. Iterative regularization in intensity-modulated radiation therapy optimization

    International Nuclear Information System (INIS)

    Carlsson, Fredrik; Forsgren, Anders

    2006-01-01

    A common way to solve intensity-modulated radiation therapy (IMRT) optimization problems is to use a beamlet-based approach. The approach is usually employed in a three-step manner: first a beamlet-weight optimization problem is solved, then the fluence profiles are converted into step-and-shoot segments, and finally postoptimization of the segment weights is performed. A drawback of beamlet-based approaches is that beamlet-weight optimization problems are ill-conditioned and have to be regularized in order to produce smooth fluence profiles that are suitable for conversion. The purpose of this paper is twofold: first, to explain the suitability of solving beamlet-based IMRT problems by a BFGS quasi-Newton sequential quadratic programming method with diagonal initial Hessian estimate, and second, to empirically show that beamlet-weight optimization problems should be solved in relatively few iterations when using this optimization method. The explanation of the suitability is based on viewing the optimization method as an iterative regularization method. In iterative regularization, the optimization problem is solved approximately by iterating long enough to obtain a solution close to the optimal one, but terminating before too much noise occurs. Iterative regularization requires an optimization method that initially proceeds in smooth directions and makes rapid initial progress. Solving ten beamlet-based IMRT problems with dose-volume objectives and bounds on the beamlet-weights, we find that the considered optimization method fulfills the requirements for performing iterative regularization. After segment-weight optimization, the treatments obtained using 35 beamlet-weight iterations outperform the treatments obtained using 100 beamlet-weight iterations, both in terms of objective value and of target uniformity. We conclude that iterating too long may in fact deteriorate the quality of the deliverable plan
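
    The early-stopping idea described above can be seen on a generic ill-conditioned least-squares problem: gradient (Landweber) iterations first approach the true solution and later begin to fit noise, so terminating early acts as regularization. The sketch below is a toy illustration with synthetic data; it is neither the IMRT beamlet-weight problem nor the quasi-Newton method used in the paper.

```python
# Toy illustration of iterative regularization by early stopping (semiconvergence):
# Landweber iterations on an ill-conditioned least-squares problem.
import numpy as np

rng = np.random.default_rng(3)
n = 50
U, _ = np.linalg.qr(rng.normal(size=(n, n)))
V, _ = np.linalg.qr(rng.normal(size=(n, n)))
s = np.logspace(0, -6, n)                      # rapidly decaying singular values
A = U @ np.diag(s) @ V.T                       # ill-conditioned forward operator
x_true = V[:, :5] @ rng.normal(size=5)         # true solution lives in the smooth modes
b = A @ x_true + 1e-2 * rng.normal(size=n)     # noisy data

x = np.zeros(n)
step = 1.0 / s[0] ** 2                         # safe Landweber step size
errors = []
for k in range(1, 2001):
    x -= step * A.T @ (A @ x - b)              # gradient step on 0.5*||Ax - b||^2
    errors.append(np.linalg.norm(x - x_true))

best = int(np.argmin(errors)) + 1
print(f"best stopping index {best}: error {errors[best - 1]:.3e}; "
      f"error after 2000 iterations {errors[-1]:.3e}")
```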

  13. Project of moroccan regular framework

    International Nuclear Information System (INIS)

    Amarof, L.

    1988-11-01

    Like the practices adopted by various countries for the regulation of nuclear facilities, and following the IAEA safety guides, a preliminary authorization regime bearing on site selection, construction, commissioning, operation and decommissioning of nuclear facilities is laid down by the draft decree related to nuclear facilities control and authorization. The required conditions and the prescriptions imposed on the applicant for an authorization are stated by this decree. The latter enables the Ministry of Energy and Mines to establish its modes and procedures of application and to make the IAEA safety guides and codes applicable, as there is no national technical regulation. Concerning the adoption of a technical regulation, particularly in the field of site selection, the Moroccan approach is well advised for the prescription of specified technical standards. 3 figs., 6 refs. (F.M.)

  14. Quantum implications of a scale invariant regularization

    Science.gov (United States)

    Ghilencea, D. M.

    2018-04-01

    We study scale invariance at the quantum level in a perturbative approach. For a scale-invariant classical theory, the scalar potential is computed at the three-loop level while keeping this symmetry manifest. Spontaneous scale symmetry breaking is transmitted at the quantum level to the visible sector (of ϕ) by the associated Goldstone mode (dilaton σ), which enables a scale-invariant regularization and whose vacuum expectation value ⟨σ⟩ generates the subtraction scale (μ). While the hidden (σ) and visible (ϕ) sectors are classically decoupled in d = 4 due to an enhanced Poincaré symmetry, they interact through (a series of) evanescent couplings ∝ ε, dictated by the scale invariance of the action in d = 4 - 2ε. At the quantum level, these couplings generate new corrections to the potential, as scale-invariant nonpolynomial effective operators ϕ^(2n+4)/σ^(2n). These are comparable in size to "standard" loop corrections and are important for values of ϕ close to ⟨σ⟩. For n = 1, 2, the beta functions of their coefficients are computed at three loops. In the IR limit, dilaton fluctuations decouple, the effective operators are suppressed by large ⟨σ⟩, and the effective potential becomes that of a renormalizable theory with explicit scale symmetry breaking by the DR scheme (of μ = constant).

  15. Multi-task feature learning by using trace norm regularization

    Directory of Open Access Journals (Sweden)

    Jiangmei Zhang

    2017-11-01

    Full Text Available Multi-task learning can exploit the correlation among multiple related machine learning problems to improve performance. This paper considers applying the multi-task learning method to learn a single task. We propose a new learning approach, which employs the mixture-of-experts model to divide a learning task into several related sub-tasks, and then uses trace norm regularization to extract a common feature representation of these sub-tasks. A nonlinear extension of this approach by using kernels is also provided. Experiments conducted on both simulated and real data sets demonstrate the advantage of the proposed approach.
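
    The computational core of trace norm regularization is the proximal operator of the trace (nuclear) norm, i.e. soft-thresholding of singular values; the sketch below shows this generic building block on a synthetic task-weight matrix and is not the authors' mixture-of-experts algorithm (matrix sizes and the shrinkage level tau are arbitrary).

```python
# Proximal operator of the trace (nuclear) norm: soft-threshold the singular
# values. This is the basic step of proximal-gradient methods for trace-norm-
# regularized multi-task feature learning. Generic sketch only.
import numpy as np

def prox_trace_norm(W, tau):
    """argmin_X 0.5*||X - W||_F^2 + tau*||X||_* via singular value shrinkage."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)           # soft-threshold singular values
    return U @ np.diag(s_shrunk) @ Vt

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    W_low = rng.normal(size=(20, 8)) @ rng.normal(size=(8, 30))   # rank-8 task-weight matrix
    W_noisy = W_low + 0.5 * rng.normal(size=(20, 30))             # full-rank noisy estimate
    W_reg = prox_trace_norm(W_noisy, tau=5.0)
    print("rank noisy / after prox:", np.linalg.matrix_rank(W_noisy),
          np.linalg.matrix_rank(W_reg))
```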

  16. Risk assessment of coumarin using the bench mark dose (BMD) approach: children in Norway which regularly eat oatmeal porridge with cinnamon may exceed the TDI for coumarin with several folds.

    Science.gov (United States)

    Fotland, T Ø; Paulsen, J E; Sanner, T; Alexander, J; Husøy, T

    2012-03-01

    Coumarin is a naturally occurring flavouring substance in cinnamon and many other plants. It is known that coumarin can cause liver toxicity in several species, and it is considered a non-genotoxic carcinogen in rodents. By using the benchmark dose approach we re-assessed coumarin toxicity and established a new TDI for coumarin of 0.07 mg/kg bw/day. Oral intake of coumarin is related to consumption of cinnamon-containing foods and food supplements. Cinnamon is a widely used spice in Norway, and can be used as topping on oatmeal porridge. Based on analyses of coumarin in Norwegian foods, intake calculations for children and adults were conducted, and a risk assessment of coumarin in the Norwegian population was performed. Intake estimates show that small children eating oatmeal porridge sprinkled with cinnamon several times a week could have a coumarin intake of 1.63 mg/kg bw/day and may exceed the TDI several-fold. Adults drinking cinnamon-based tea and consuming cinnamon supplements can also exceed the TDI. The coumarin intake could exceed the TDI by 7- to 20-fold in some intake scenarios. Such large daily exceedances of the TDI, even for a limited time period of 1-2 weeks, cause concern about adverse health effects. Copyright © 2011 Elsevier Ltd. All rights reserved.

  17. Geosocial process and its regularities

    Science.gov (United States)

    Vikulina, Marina; Vikulin, Alexander; Dolgaya, Anna

    2015-04-01

    Natural disasters and social events (wars, revolutions, genocides, epidemics, fires, etc.) accompany each other throughout human civilization, reflecting a close relationship between phenomena that are seemingly of different natures. In order to study this relationship, the authors compiled and analyzed a list of 2,400 natural disasters and social phenomena, weighted by their magnitude, that occurred during the last XXXVI centuries of our history. Statistical analysis was performed separately for each aggregate (natural disasters and social phenomena) and for particular statistically representative types of events; there were 5 + 5 = 10 types. It is shown that the numbers of events in the list follow a logarithmic law: the bigger the event, the less likely it happens. For each type of event and each aggregate, the existence of periodicities with periods of 280 ± 60 years was established. Statistical analysis of the time intervals between adjacent events for both aggregates showed good agreement with the Weibull-Gnedenko distribution with a shape parameter less than 1, which is equivalent to the conclusion that events group at small time intervals. Modeling the statistics of time intervals with a Pareto distribution made it possible to identify an emergent property of all events in the aggregate. This result allowed the authors to conclude that natural disasters and social phenomena interact. The list of events compiled by the authors, and the properties of cyclicity, grouping, and interaction identified from it for the first time, form the basis for modeling an essentially unified geosocial process at a sufficiently high statistical level. Proof of interaction between "lifeless" Nature and Society is fundamental and provides a new approach to forecasting demographic crises that takes into account both natural disasters and social phenomena.

  18. Multiple graph regularized protein domain ranking

    KAUST Repository

    Wang, Jim Jing-Yan

    2012-11-19

    Background: Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods.Results: To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods.Conclusion: The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications. 2012 Wang et al; licensee BioMed Central Ltd.
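
    For background, the graph-regularized ranking step that MultiG-Rank builds on can be written as scores f minimizing ||f - y||^2 + alpha f'Lf, with a combined Laplacian L = sum_k w_k L_k; the sketch below uses fixed illustrative weights w_k and synthetic graphs, whereas MultiG-Rank learns the weights jointly with the scores by alternating minimization.

```python
# Sketch of graph-regularized ranking with a weighted combination of several
# graph Laplacians; weights are fixed here purely for illustration.
import numpy as np

def laplacian(W):
    return np.diag(W.sum(axis=1)) - W

def combined_graph_ranking(adjacencies, weights, y, alpha=0.5):
    L = sum(w * laplacian(A) for w, A in zip(weights, adjacencies))
    return np.linalg.solve(np.eye(len(y)) + alpha * L, y)   # closed-form ranking scores

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    n = 30
    graphs = []
    for _ in range(3):                                       # three candidate graph models
        A = (rng.random((n, n)) < 0.1).astype(float)
        A = np.triu(A, 1)
        graphs.append(A + A.T)                               # symmetric, no self-loops
    y = np.zeros(n)
    y[0] = 1.0                                               # query indicator
    scores = combined_graph_ranking(graphs, weights=[0.5, 0.3, 0.2], y=y)
    print("top-5 ranked items:", np.argsort(-scores)[:5])
```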

  19. Multiple graph regularized protein domain ranking

    KAUST Repository

    Wang, Jim Jing-Yan; Bensmail, Halima; Gao, Xin

    2012-01-01

    Background: Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods.Results: To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods.Conclusion: The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications. 2012 Wang et al; licensee BioMed Central Ltd.

  20. Hierarchical regular small-world networks

    International Nuclear Information System (INIS)

    Boettcher, Stefan; Goncalves, Bruno; Guclu, Hasan

    2008-01-01

    Two new networks are introduced that resemble small-world properties. These networks are recursively constructed but retain a fixed, regular degree. They possess a unique one-dimensional lattice backbone overlaid by a hierarchical sequence of long-distance links, mixing real-space and small-world features. Both networks, one 3-regular and the other 4-regular, lead to distinct behaviors, as revealed by renormalization group studies. The 3-regular network is planar, has a diameter growing as √N with system size N, and leads to super-diffusion with an exact, anomalous exponent d w = 1.306..., but possesses only a trivial fixed point T c = 0 for the Ising ferromagnet. In turn, the 4-regular network is non-planar, has a diameter growing as ∼2 √(log 2 N 2 ) , exhibits 'ballistic' diffusion (d w = 1), and a non-trivial ferromagnetic transition, T c > 0. It suggests that the 3-regular network is still quite 'geometric', while the 4-regular network qualifies as a true small world with mean-field properties. As an engineering application we discuss synchronization of processors on these networks. (fast track communication)

  1. Multiple graph regularized protein domain ranking.

    Science.gov (United States)

    Wang, Jim Jing-Yan; Bensmail, Halima; Gao, Xin

    2012-11-19

    Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications.

  2. Coupling regularizes individual units in noisy populations

    International Nuclear Information System (INIS)

    Ly Cheng; Ermentrout, G. Bard

    2010-01-01

    The regularity of a noisy system can be modulated in various ways. It is well known that coupling in a population can lower the variability of the entire network; the collective activity is more regular. Here, we show that diffusive (reciprocal) coupling of two simple Ornstein-Uhlenbeck (O-U) processes can regularize the individual process, even when it is coupled to a noisier one. In cellular networks, the regularity of individual cells is important when a select few play a significant role. The regularizing effect of coupling surprisingly applies also to general nonlinear noisy oscillators. However, unlike with the O-U process, coupling-induced regularity is robust to different kinds of coupling. With two coupled noisy oscillators, we derive an asymptotic formula, assuming weak noise and coupling, for the variance of the period (i.e., spike times) that accurately captures this effect. Moreover, we find that reciprocal coupling can regularize the individual period of higher-dimensional oscillators such as the Morris-Lecar and Brusselator models, even when coupled to noisier oscillators. Coupling can have a counterintuitive and beneficial effect on noisy systems. These results have implications for the role of connectivity with noisy oscillators and the modulation of variability of individual oscillators.
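
    The O-U statement can be probed numerically: simulate two diffusively coupled Ornstein-Uhlenbeck processes with the quieter unit coupled to a noisier one and compare the stationary variance of the quieter unit with and without coupling. All parameter values below are illustrative choices, not those of the paper.

```python
# Toy Euler-Maruyama simulation of two diffusively coupled O-U processes,
# dx = (-x + k*(y - x))dt + sx*dW1 and dy = (-y + k*(x - y))dt + sy*dW2,
# with y noisier than x (sy > sx). Illustrative parameters only.
import numpy as np

def var_of_x(k, sx=0.5, sy=0.8, dt=1e-3, n_steps=1_000_000, seed=6):
    rng = np.random.default_rng(seed)
    noise = rng.normal(size=(n_steps, 2)) * np.sqrt(dt)
    x = y = 0.0
    xs = np.empty(n_steps)
    for i in range(n_steps):
        x, y = (x + (-x + k * (y - x)) * dt + sx * noise[i, 0],
                y + (-y + k * (x - y)) * dt + sy * noise[i, 1])
        xs[i] = x
    return xs[n_steps // 10:].var()        # discard the initial transient

if __name__ == "__main__":
    print("var(x), uncoupled (k=0):", round(var_of_x(0.0), 4))
    print("var(x), coupled   (k=1):", round(var_of_x(1.0), 4))
```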

  3. Multiple graph regularized protein domain ranking

    Directory of Open Access Journals (Sweden)

    Wang Jim

    2012-11-01

    Full Text Available Abstract Background Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. Results To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. Conclusion The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications.

  4. Iterated Process Analysis over Lattice-Valued Regular Expressions

    DEFF Research Database (Denmark)

    Midtgaard, Jan; Nielson, Flemming; Nielson, Hanne Riis

    2016-01-01

    We present an iterated approach to statically analyze programs of two processes communicating by message passing. Our analysis operates over a domain of lattice-valued regular expressions, and computes increasingly better approximations of each process's communication behavior. Overall the work extends traditional semantics-based program analysis techniques to automatically reason about message passing in a manner that can simultaneously analyze both values of variables as well as message order, message content, and their interdependencies.

  5. Analysis of Logic Programs Using Regular Tree Languages

    DEFF Research Database (Denmark)

    Gallagher, John Patrick

    2012-01-01

    The field of finite tree automata provides fundamental notations and tools for reasoning about sets of terms called regular or recognizable tree languages. We consider two kinds of analysis using regular tree languages, applied to logic programs. The first approach is to try to discover automatically a tree automaton from a logic program, approximating its minimal Herbrand model. In this case the input for the analysis is a program, and the output is a tree automaton. The second approach is to expose or check properties of the program that can be expressed by a given tree automaton. The input to the analysis is a program and a tree automaton, and the output is an abstract model of the program. These two contrasting abstract interpretations can be used in a wide range of analysis and verification problems.

  6. J-regular rings with injectivities

    OpenAIRE

    Shen, Liang

    2010-01-01

    A ring $R$ is called a J-regular ring if R/J(R) is von Neumann regular, where J(R) is the Jacobson radical of R. It is proved that if R is J-regular, then (i) R is right n-injective if and only if every homomorphism from an $n$-generated small right ideal of $R$ to $R_{R}$ can be extended to one from $R_{R}$ to $R_{R}$; (ii) R is right FP-injective if and only if R is right (J, R)-FP-injective. Some known results are improved.

  7. Regularized Partial Least Squares with an Application to NMR Spectroscopy

    OpenAIRE

    Allen, Genevera I.; Peterson, Christine; Vannucci, Marina; Maletic-Savatic, Mirjana

    2012-01-01

    High-dimensional data common in genomics, proteomics, and chemometrics often contains complicated correlation structures. Recently, partial least squares (PLS) and Sparse PLS methods have gained attention in these areas as dimension reduction techniques in the context of supervised data analysis. We introduce a framework for Regularized PLS by solving a relaxation of the SIMPLS optimization problem with penalties on the PLS loadings vectors. Our approach enjoys many advantages including flexi...

  8. (2+1)-dimensional regular black holes with nonlinear electrodynamics sources

    Directory of Open Access Journals (Sweden)

    Yun He

    2017-11-01

    Full Text Available On the basis of two requirements: the avoidance of the curvature singularity and the Maxwell theory as the weak field limit of the nonlinear electrodynamics, we find two restricted conditions on the metric function of a (2+1)-dimensional regular black hole in general relativity coupled with nonlinear electrodynamics sources. By the use of the two conditions, we obtain a general approach to construct (2+1)-dimensional regular black holes. In this manner, we construct four (2+1)-dimensional regular black holes as examples. We also study the thermodynamic properties of the regular black holes and verify the first law of black hole thermodynamics.

  9. Centered Differential Waveform Inversion with Minimum Support Regularization

    KAUST Repository

    Kazei, Vladimir

    2017-05-26

    Time-lapse full-waveform inversion has two major challenges. The first one is the reconstruction of a reference model (the baseline model for most approaches). The second is inversion for the time-lapse changes in the parameters. The common-model approach utilizes the information contained in all available data sets to build a better reference model for time-lapse inversion. Differential (double-difference) waveform inversion allows one to reduce the artifacts introduced into estimates of time-lapse parameter changes by imperfect inversion for the baseline-reference model. We propose centered differential waveform inversion (CDWI), which combines these two approaches in order to benefit from both of their features. We apply minimum-support regularization, commonly used with electromagnetic methods of geophysical exploration. We test the CDWI method on a synthetic dataset with random noise and show that, with minimum-support regularization, it provides better resolution of velocity changes than total variation and Tikhonov regularizations in time-lapse full-waveform inversion.

  10. Generalized regular genus for manifolds with boundary

    Directory of Open Access Journals (Sweden)

    Paola Cristofori

    2003-05-01

    Full Text Available We introduce a generalization of the regular genus, a combinatorial invariant of PL manifolds ([10]), which is proved to be strictly related, in dimension three, to generalized Heegaard splittings defined in [12].

  11. Poisson image reconstruction with Hessian Schatten-norm regularization.

    Science.gov (United States)

    Lefkimmiatis, Stamatios; Unser, Michael

    2013-11-01

    Poisson inverse problems arise in many modern imaging applications, including biomedical and astronomical ones. The main challenge is to obtain an estimate of the underlying image from a set of measurements degraded by a linear operator and further corrupted by Poisson noise. In this paper, we propose an efficient framework for Poisson image reconstruction, under a regularization approach, which depends on matrix-valued regularization operators. In particular, the employed regularizers involve the Hessian as the regularization operator and Schatten matrix norms as the potential functions. For the solution of the problem, we propose two optimization algorithms that are specifically tailored to the Poisson nature of the noise. These algorithms are based on an augmented-Lagrangian formulation of the problem and correspond to two variants of the alternating direction method of multipliers. Further, we derive a link that relates the proximal map of an lp norm with the proximal map of a Schatten matrix norm of order p. This link plays a key role in the development of one of the proposed algorithms. Finally, we provide experimental results on natural and biological images for the task of Poisson image deblurring and demonstrate the practical relevance and effectiveness of the proposed framework.

  12. Surface-based prostate registration with biomechanical regularization

    Science.gov (United States)

    van de Ven, Wendy J. M.; Hu, Yipeng; Barentsz, Jelle O.; Karssemeijer, Nico; Barratt, Dean; Huisman, Henkjan J.

    2013-03-01

    Adding MR-derived information to standard transrectal ultrasound (TRUS) images for guiding prostate biopsy is of substantial clinical interest. A tumor visible on MR images can be projected on ultrasound by using MRUS registration. A common approach is to use surface-based registration. We hypothesize that biomechanical modeling will better control deformation inside the prostate than a regular surface-based registration method. We developed a novel method by extending a surface-based registration with finite element (FE) simulation to better predict internal deformation of the prostate. For each of six patients, a tetrahedral mesh was constructed from the manual prostate segmentation. Next, the internal prostate deformation was simulated using the derived radial surface displacement as boundary condition. The deformation field within the gland was calculated using the predicted FE node displacements and thin-plate spline interpolation. We tested our method on MR guided MR biopsy imaging data, as landmarks can easily be identified on MR images. For evaluation of the registration accuracy we used 45 anatomical landmarks located in all regions of the prostate. Our results show that the median target registration error of a surface-based registration with biomechanical regularization is 1.88 mm, which is significantly different from 2.61 mm without biomechanical regularization. We can conclude that biomechanical FE modeling has the potential to improve the accuracy of multimodal prostate registration when comparing it to regular surface-based registration.
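
    The interpolation step described above (FE node displacements densified to a volumetric deformation field with a thin-plate spline) can be sketched as follows; the node coordinates, displacements and evaluation grid are synthetic stand-ins, and scipy's RBFInterpolator is used in place of whatever implementation the authors employed.

```python
# Generic sketch: interpolate predicted FE node displacements to a dense
# deformation field with a thin-plate spline. All data here are synthetic.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(7)
nodes = rng.uniform(0.0, 50.0, size=(200, 3))       # FE mesh node positions (mm)
node_disp = rng.normal(0.0, 1.0, size=(200, 3))     # predicted node displacements (mm)

tps = RBFInterpolator(nodes, node_disp, kernel="thin_plate_spline")

# Evaluate the deformation field on a coarse regular grid inside the volume.
grid_axes = [np.linspace(5, 45, 9)] * 3
grid = np.stack(np.meshgrid(*grid_axes, indexing="ij"), axis=-1).reshape(-1, 3)
dense_disp = tps(grid)                              # (9*9*9, 3) displacement vectors
print(dense_disp.shape)
```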

  13. Fast and compact regular expression matching

    DEFF Research Database (Denmark)

    Bille, Philip; Farach-Colton, Martin

    2008-01-01

    We study 4 problems in string matching, namely, regular expression matching, approximate regular expression matching, string edit distance, and subsequence indexing, on a standard word RAM model of computation that allows logarithmic-sized words to be manipulated in constant time. We show how...... to improve the space and/or remove a dependency on the alphabet size for each problem using either an improved tabulation technique of an existing algorithm or by combining known algorithms in a new way....

  14. Regular-fat dairy and human health

    DEFF Research Database (Denmark)

    Astrup, Arne; Bradley, Beth H Rice; Brenna, J Thomas

    2016-01-01

    In recent history, some dietary recommendations have treated dairy fat as an unnecessary source of calories and saturated fat in the human diet. These assumptions, however, have recently been brought into question by current research on regular fat dairy products and human health. In an effort to......, cheese and yogurt, can be important components of an overall healthy dietary pattern. Systematic examination of the effects of dietary patterns that include regular-fat milk, cheese and yogurt on human health is warranted....

  15. Deterministic automata for extended regular expressions

    Directory of Open Access Journals (Sweden)

    Syzdykov Mirzakhmet

    2017-12-01

    Full Text Available In this work we present algorithms to produce a deterministic finite automaton (DFA) for extended operators in regular expressions such as intersection, subtraction and complement. A method of "overriding" the source NFA (an NFA not defined with subset construction rules) is used. Past work described only the algorithm for the AND-operator (intersection of regular languages); in this paper the construction for the MINUS-operator (and complement) is shown.
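
    For contrast with the NFA-based construction described above, the textbook route to a DFA for the intersection of two regular languages is the product construction; a minimal sketch follows (this is the standard construction, not the paper's overriding method, and the example languages are arbitrary).

```python
# Standard product construction: the intersection of two regular languages is
# accepted by a DFA whose states are pairs of states of the input DFAs.
def intersect_dfa(dfa1, dfa2, alphabet):
    """Each DFA is (start, accepting_set, delta) with delta[(state, symbol)] -> state."""
    start1, acc1, d1 = dfa1
    start2, acc2, d2 = dfa2
    start = (start1, start2)
    delta, accepting, todo, seen = {}, set(), [start], {start}
    while todo:
        q1, q2 = todo.pop()
        if q1 in acc1 and q2 in acc2:
            accepting.add((q1, q2))
        for a in alphabet:
            nxt = (d1[(q1, a)], d2[(q2, a)])
            delta[((q1, q2), a)] = nxt
            if nxt not in seen:
                seen.add(nxt)
                todo.append(nxt)
    return start, accepting, delta

if __name__ == "__main__":
    # L1: even number of 'a's; L2: strings ending in 'b'.
    d1 = (0, {0}, {(0, "a"): 1, (0, "b"): 0, (1, "a"): 0, (1, "b"): 1})
    d2 = (0, {1}, {(0, "a"): 0, (0, "b"): 1, (1, "a"): 0, (1, "b"): 1})
    start, acc, delta = intersect_dfa(d1, d2, alphabet="ab")

    def accepts(word):
        q = start
        for ch in word:
            q = delta[(q, ch)]
        return q in acc

    print(accepts("aab"), accepts("ab"))   # True, False
```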

  16. Regularities of intermediate adsorption complex relaxation

    International Nuclear Information System (INIS)

    Manukova, L.A.

    1982-01-01

    Experimental data characterizing the regularities of intermediate adsorption complex relaxation in the polycrystalline Mo-N_2 system at 77 K are given. The molecular beam method has been used in the investigation. Analytical expressions are obtained for the regularities of change, during relaxation, of the full and specific rates of transition from the intermediate state into the ''non-reversible'' state, of desorption into the gas phase, and of accumulation of particles in the intermediate state.

  17. Less is more: regularization perspectives on large scale machine learning

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Deep learning based techniques provide a possible solution at the expense of theoretical guidance and, especially, of computational requirements. It is then a key challenge for large scale machine learning to devise approaches guaranteed to be accurate and yet computationally efficient. In this talk, we will consider a regularization perspective on machine learning, appealing to classical ideas in linear algebra and inverse problems to dramatically scale up nonparametric methods such as kernel methods, often dismissed because of prohibitive costs. Our analysis derives optimal theoretical guarantees while providing experimental results on par with or outperforming state-of-the-art approaches.
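
    A small sketch of one standard way to scale up kernel methods in the spirit described above: random Fourier features approximating an RBF kernel, followed by ridge (Tikhonov-regularized) regression on the explicit features. This illustrates the theme, not the talk's specific algorithm; all sizes and parameters are illustrative.

      # Random Fourier features + ridge regression: O(n D^2) instead of O(n^3) kernel ridge.
      import numpy as np

      rng = np.random.default_rng(0)
      n, d, D, sigma, lam = 2000, 5, 300, 1.0, 1e-2
      X = rng.normal(size=(n, d))
      y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=n)

      # z(x) = sqrt(2/D) * cos(W x + b) approximates the RBF kernel with bandwidth sigma.
      W = rng.normal(scale=1.0 / sigma, size=(d, D))
      b = rng.uniform(0.0, 2.0 * np.pi, size=D)
      Z = np.sqrt(2.0 / D) * np.cos(X @ W + b)

      # Ridge regression in the random feature space.
      w = np.linalg.solve(Z.T @ Z + lam * np.eye(D), Z.T @ y)

      X_test = rng.normal(size=(5, d))
      Z_test = np.sqrt(2.0 / D) * np.cos(X_test @ W + b)
      print(Z_test @ w)   # predictions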

  18. Explaining how honors students position themselves when collaborating with regular students

    NARCIS (Netherlands)

    Volker, Judith; Kamans, Elanor; Tiesinga, Lammert; Wolfensberger, Marca

    2015-01-01

    Paper presentation at the EARLI Conference 2015, Limassol, Cyprus, 28 August. In this line of research we take a social psychological approach to understanding how honors students position themselves when collaborating with regular students. More specifically, we explore whether stereotypes

  19. A biological network-based regularized artificial neural network model for robust phenotype prediction from gene expression data.

    Science.gov (United States)

    Kang, Tianyu; Ding, Wei; Zhang, Luoyan; Ziemek, Daniel; Zarringhalam, Kourosh

    2017-12-19

    Stratification of patient subpopulations that respond favorably to treatment or experience an adverse reaction is an essential step toward development of new personalized therapies and diagnostics. It is currently feasible to generate omic-scale biological measurements for all patients in a study, providing an opportunity for machine learning models to identify molecular markers for disease diagnosis and progression. However, the high variability of genetic background in human populations hampers the reproducibility of omic-scale markers. In this paper, we develop a biological network-based regularized artificial neural network model for prediction of phenotype from transcriptomic measurements in clinical trials. To improve model sparsity and the overall reproducibility of the model, we incorporate regularization for simultaneous shrinkage of gene sets based on active upstream regulatory mechanisms into the model. We benchmark our method against various regression, support vector machine and artificial neural network models and demonstrate the ability of our method to predict clinical outcomes using clinical trial data on acute rejection in kidney transplantation and response to Infliximab in ulcerative colitis. We show that integration of prior biological knowledge into the classifier, as developed in this paper, significantly improves the robustness and generalizability of predictions to independent datasets. We provide Java code for our algorithm along with a parsed version of the STRING DB database. In summary, we present a method for prediction of clinical phenotypes using baseline genome-wide expression data that makes use of prior biological knowledge on gene-regulatory interactions in order to increase robustness and reproducibility of omic-scale markers. The integrated group-wise regularization method increases the interpretability of biological signatures and gives stable performance estimates across independent test sets.
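
    A minimal sketch of the kind of group-wise shrinkage penalty described above, assuming hypothetical gene groupings and a tiny first weight layer; it is not the authors' architecture or code, only an illustration of simultaneous shrinkage of gene sets.

      # Group-lasso style penalty: one L2 norm per regulator-defined gene set,
      # so whole gene groups are shrunk together in the first layer of the network.
      import numpy as np

      rng = np.random.default_rng(1)
      n_genes, n_hidden = 30, 8
      W1 = rng.normal(size=(n_genes, n_hidden))            # first-layer weights (genes -> hidden units)

      # Hypothetical gene sets, each driven by one upstream regulator.
      groups = [list(range(0, 10)), list(range(10, 18)), list(range(18, 30))]

      def group_penalty(W, groups, lam=0.1):
          """lam * sum over groups of the Frobenius norm of that group's weight rows."""
          return lam * sum(np.linalg.norm(W[g, :]) for g in groups)

      data_loss = 0.42                                      # placeholder for the data-fit term
      total_loss = data_loss + group_penalty(W1, groups)
      print(total_loss)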

  20. Manifold Regularized Multi-Task Feature Selection for Multi-Modality Classification in Alzheimer’s Disease

    Science.gov (United States)

    Jie, Biao; Cheng, Bo

    2014-01-01

    Accurate diagnosis of Alzheimer’s disease (AD), as well as its prodromal stage (i.e., mild cognitive impairment, MCI), is very important for possible delay and early treatment of the disease. Recently, multi-modality methods have been used for fusing information from multiple different and complementary imaging and non-imaging modalities. Although there are a number of existing multi-modality methods, few of them have addressed the problem of joint identification of disease-related brain regions from multi-modality data for classification. In this paper, we propose a manifold regularized multi-task learning framework to jointly select features from multi-modality data. Specifically, we formulate the multi-modality classification as a multi-task learning framework, where each task focuses on the classification based on each modality. In order to capture the intrinsic relatedness among multiple tasks (i.e., modalities), we adopt a group sparsity regularizer, which ensures that only a small number of features are selected jointly. In addition, we introduce a new manifold-based Laplacian regularization term to preserve the geometric distribution of the original data from each task, which can lead to the selection of more discriminative features. Furthermore, we extend our method to the semi-supervised setting, which is very important since the acquisition of a large set of labeled data (i.e., diagnosis of disease) is usually expensive and time-consuming, while the collection of unlabeled data is relatively much easier. To validate our method, we have performed extensive evaluations on the baseline magnetic resonance imaging (MRI) and fluorodeoxyglucose positron emission tomography (FDG-PET) data of the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database. Our experimental results demonstrate the effectiveness of the proposed method. PMID:24505676
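
    A small numpy sketch of the two regularizers named above: an l2,1 group-sparsity norm over the multi-task weight matrix and a manifold smoothness term built from a k-NN graph Laplacian. Data, sizes and parameters are illustrative stand-ins, not ADNI data.

      # Compute the l2,1 norm and the Laplacian smoothness term tr(F' L F) explicitly.
      import numpy as np

      rng = np.random.default_rng(2)
      n, d, tasks, k = 60, 20, 2, 5
      X = rng.normal(size=(n, d))                # samples (e.g., features from one modality)
      W = rng.normal(size=(d, tasks))            # joint weight matrix, one column per task

      # l2,1 norm: sum over features of the l2 norm across tasks -> row-wise joint sparsity.
      l21 = np.sum(np.linalg.norm(W, axis=1))

      # Symmetric k-NN adjacency and unnormalized Laplacian L = D - A.
      dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
      A = np.zeros((n, n))
      for i in range(n):
          for j in np.argsort(dist[i])[1:k + 1]:
              A[i, j] = A[j, i] = 1.0
      L = np.diag(A.sum(axis=1)) - A

      F = X @ W                                   # predicted scores for each task
      manifold_term = np.trace(F.T @ L @ F)       # small when similar samples get similar scores
      print(l21, manifold_term)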

  1. Regular cell design approach considering lithography-induced process variations

    OpenAIRE

    Gómez Fernández, Sergio

    2014-01-01

    The deployment delays for EUVL force IC design to continue using 193 nm wavelength lithography with innovative and costly techniques in order to faithfully print sub-wavelength features and combat lithography-induced process variations. The effect of the lithography gap in current and upcoming technologies is to cause severe distortions, due to optical diffraction, in the printed patterns, thus degrading manufacturing yield. Therefore, a paradigm shift in layout design is mandatory towards ...

  2. Producing design objects from regular polyhedra: A Practical approach

    OpenAIRE

    Polimeni, Beniamino

    2017-01-01

    In the last few years, digital modeling techniques have played a major role in Architecture and design, influencing, at the same time, the creative process and the way the objects are fabricated. This revolution has produced a new fertile generation of architects and designers focused on the expanding possibilities of material and formal production reinforcing the idea of architecture as an interaction between art and artisanship. This innovative perspective inspires this paper, w...

  3. Robust regularized least-squares beamforming approach to signal estimation

    KAUST Repository

    Suliman, Mohamed Abdalla Elhag; Ballal, Tarig; Al-Naffouri, Tareq Y.

    2017-01-01

    In this paper, we address the problem of robust adaptive beamforming of signals received by a linear array. The challenge associated with the beamforming problem is twofold. Firstly, the process requires the inversion of the usually ill

  4. Improvements in GRACE Gravity Fields Using Regularization

    Science.gov (United States)

    Save, H.; Bettadpur, S.; Tapley, B. D.

    2008-12-01

    The unconstrained global gravity field models derived from GRACE are susceptible to systematic errors that show up as broad "stripes" aligned in a North-South direction on the global maps of mass flux. These errors are believed to be a consequence of both systematic and random errors in the data that are amplified by the nature of the gravity field inverse problem. These errors impede scientific exploitation of the GRACE data products, and limit the realizable spatial resolution of the GRACE global gravity fields in certain regions. We use regularization techniques to reduce these "stripe" errors in the gravity field products. The regularization criteria are designed such that there is no attenuation of the signal and that the solutions fit the observations as well as an unconstrained solution. We have used a computationally inexpensive method, normally referred to as "L-ribbon", to find the regularization parameter. This paper discusses the characteristics and statistics of a 5-year time-series of regularized gravity field solutions. The solutions show markedly reduced stripes, are of uniformly good quality over time, and leave little or no systematic observation residuals, which is a frequent consequence of signal suppression from regularization. Up to degree 14, the signal in regularized solution shows correlation greater than 0.8 with the un-regularized CSR Release-04 solutions. Signals from large-amplitude and small-spatial extent events - such as the Great Sumatra Andaman Earthquake of 2004 - are visible in the global solutions without using special post-facto error reduction techniques employed previously in the literature. Hydrological signals as small as 5 cm water-layer equivalent in the small river basins, like Indus and Nile for example, are clearly evident, in contrast to noisy estimates from RL04. The residual variability over the oceans relative to a seasonal fit is small except at higher latitudes, and is evident without the need for de-striping or

  5. Do the majority of South Africans regularly consult traditional healers?

    Directory of Open Access Journals (Sweden)

    Gabriel Louw

    2016-12-01

    Full Text Available Background The statutory recognition of traditional healers as healthcare practitioners in South Africa in terms of the Traditional Health Practitioners Act 22 of 2007 is based on various assumptions, opinions and generalizations. One of the prominent views is that the majority of South Africans regularly consult traditional healers. It has even been alleged that this number can be as high as 80 per cent of the South African population. For medical doctors and other health practitioners registered with the Health Professions Council of South Africa (HPCSA), this new statutory status of traditional health practitioners means the required presence of not only a healthcare competitor that can overstock the healthcare market with service lending, medical claims and healthcare costs, but also a competitor prone to malpractice. Aims The study aimed to determine if the majority of South Africans regularly consult traditional healers. Methods This is an exploratory and descriptive study following the modern historical approach of investigation and literature review. The emphasis is on using current documentation, like articles, books and newspapers, as primary sources to determine if the majority of South Africans regularly consult traditional healers. The findings are offered in narrative form. Results It is clear that there are no trustworthy statistics on the percentage of South Africans using traditional healers. A scientific survey is needed to determine the extent to which traditional healers are consulted. This will only be possible after the Traditional Health Practitioners Act No 22 has been fully enacted and traditional health practitioners have become fully active in the healthcare sector. Conclusion In poorer, rural areas no more than 11.2 per cent of the South African population regularly consult traditional healers, while the figure for the total population seems to be no more than 1.4 per cent. The argument that the majority of South

  6. A Hybrid Supervised/Unsupervised Machine Learning Approach to Solar Flare Prediction

    Science.gov (United States)

    Benvenuto, Federico; Piana, Michele; Campi, Cristina; Massone, Anna Maria

    2018-01-01

    This paper introduces a novel method for flare forecasting, combining prediction accuracy with the ability to identify the most relevant predictive variables. This result is obtained by means of a two-step approach: first, a supervised regularization method for regression, namely, LASSO is applied, where a sparsity-enhancing penalty term allows the identification of the significance with which each data feature contributes to the prediction; then, an unsupervised fuzzy clustering technique for classification, namely, Fuzzy C-Means, is applied, where the regression outcome is partitioned through the minimization of a cost function and without focusing on the optimization of a specific skill score. This approach is therefore hybrid, since it combines supervised and unsupervised learning; realizes classification in an automatic, skill-score-independent way; and provides effective prediction performances even in the case of imbalanced data sets. Its prediction power is verified against NOAA Space Weather Prediction Center data, using as a test set, data in the range between 1996 August and 2010 December and as training set, data in the range between 1988 December and 1996 June. To validate the method, we computed several skill scores typically utilized in flare prediction and compared the values provided by the hybrid approach with the ones provided by several standard (non-hybrid) machine learning methods. The results showed that the hybrid approach performs classification better than all other supervised methods and with an effectiveness comparable to the one of clustering methods; but, in addition, it provides a reliable ranking of the weights with which the data properties contribute to the forecast.
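
    A hedged two-step sketch mirroring the hybrid pipeline on synthetic data: LASSO regression with a sparsity-enhancing penalty ranks the predictive features, then a small fuzzy c-means routine partitions the regression outcome into two classes. Data, thresholds and parameters are illustrative, not the NOAA data or skill-score setup of the paper.

      # Step 1: supervised LASSO regression; Step 2: unsupervised fuzzy c-means on its output.
      import numpy as np
      from sklearn.linear_model import Lasso

      rng = np.random.default_rng(3)
      n, d = 300, 12
      X = rng.normal(size=(n, d))
      y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.2 * rng.normal(size=n)   # two informative features

      lasso = Lasso(alpha=0.1).fit(X, y)
      scores = lasso.predict(X)
      print("nonzero coefficients:", np.flatnonzero(lasso.coef_))

      def fuzzy_c_means(x, c=2, m=2.0, n_iter=100):
          """Plain fuzzy c-means on 1-D data; returns memberships (n, c) and centers (c,)."""
          u = rng.dirichlet(np.ones(c), size=len(x))
          for _ in range(n_iter):
              centers = (u ** m).T @ x / (u ** m).sum(axis=0)
              dist = np.abs(x[:, None] - centers[None, :]) + 1e-12
              u = 1.0 / (dist ** (2 / (m - 1)) * np.sum(dist ** (-2 / (m - 1)), axis=1, keepdims=True))
          return u, centers

      memberships, centers = fuzzy_c_means(scores)
      labels = memberships.argmax(axis=1)          # crisp event / non-event assignment
      print(centers, np.bincount(labels))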

  7. Regular Expression Matching and Operational Semantics

    Directory of Open Access Journals (Sweden)

    Asiri Rathnayake

    2011-08-01

    Full Text Available Many programming languages and tools, ranging from grep to the Java String library, contain regular expression matchers. Rather than first translating a regular expression into a deterministic finite automaton, such implementations typically match the regular expression on the fly. Thus they can be seen as virtual machines interpreting the regular expression much as if it were a program with some non-deterministic constructs such as the Kleene star. We formalize this implementation technique for regular expression matching using operational semantics. Specifically, we derive a series of abstract machines, moving from the abstract definition of matching to increasingly realistic machines. First a continuation is added to the operational semantics to describe what remains to be matched after the current expression. Next, we represent the expression as a data structure using pointers, which enables redundant searches to be eliminated via testing for pointer equality. From there, we arrive both at Thompson's lockstep construction and a machine that performs some operations in parallel, suitable for implementation on a large number of cores, such as a GPU. We formalize the parallel machine using process algebra and report some preliminary experiments with an implementation on a graphics processor using CUDA.
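
    A minimal sketch of the on-the-fly ("lockstep") matching idea discussed above: instead of building a DFA, keep the set of live NFA states after each input symbol, following epsilon transitions as you go. The NFA below is hand-built for the expression a(b|c)* and is only an illustration of the technique, not the paper's abstract machines.

      # Lockstep simulation of a small hand-built NFA with epsilon transitions.
      EPS = ''
      NFA = {                         # state -> symbol -> set of successor states
          0: {'a': {1}},
          1: {EPS: {2, 4}},
          2: {'b': {3}, 'c': {3}},
          3: {EPS: {2, 4}},
          4: {},                      # accepting state
      }
      START, ACCEPT = 0, {4}

      def eps_closure(states):
          stack, closure = list(states), set(states)
          while stack:
              s = stack.pop()
              for t in NFA.get(s, {}).get(EPS, set()):
                  if t not in closure:
                      closure.add(t)
                      stack.append(t)
          return closure

      def match(word):
          current = eps_closure({START})
          for ch in word:
              current = eps_closure({t for s in current for t in NFA.get(s, {}).get(ch, set())})
              if not current:         # no live states: fail early
                  return False
          return bool(current & ACCEPT)

      print(match("abcb"), match("ab"), match("ba"))   # True True False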

  8. Regularities, Natural Patterns and Laws of Nature

    Directory of Open Access Journals (Sweden)

    Stathis Psillos

    2014-02-01

    Full Text Available  The goal of this paper is to sketch an empiricist metaphysics of laws of nature. The key idea is that there are regularities without regularity-enforcers. Differently put, there are natural laws without law-makers of a distinct metaphysical kind. This sketch will rely on the concept of a natural pattern and more significantly on the existence of a network of natural patterns in nature. The relation between a regularity and a pattern will be analysed in terms of mereology.  Here is the road map. In section 2, I will briefly discuss the relation between empiricism and metaphysics, aiming to show that an empiricist metaphysics is possible. In section 3, I will offer arguments against stronger metaphysical views of laws. Then, in section 4 I will motivate nomic objectivism. In section 5, I will address the question ‘what is a regularity?’ and will develop a novel answer to it, based on the notion of a natural pattern. In section 6, I will raise the question: ‘what is a law of nature?’, the answer to which will be: a law of nature is a regularity that is characterised by the unity of a natural pattern.

  9. Determining the flexibility of regular and chaotic attractors

    International Nuclear Information System (INIS)

    Marhl, Marko; Perc, Matjaz

    2006-01-01

    We present an overview of measures that are appropriate for determining the flexibility of regular and chaotic attractors. In particular, we focus on those system properties that constitute its responses to external perturbations. We deploy a systematic approach, first introducing the simplest measure given by the local divergence of the system along the attractor, and then develop more rigorous mathematical tools for estimating the flexibility of the system's dynamics. The presented measures are tested on the regular Brusselator and chaotic Hindmarsh-Rose model of an excitable neuron with equal success, thus indicating the overall effectiveness and wide applicability range of the proposed theory. Since responses of dynamical systems to external signals are crucial in several scientific disciplines, and especially in natural sciences, we discuss several important aspects and biological implications of obtained results
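
    A hedged sketch of the simplest measure mentioned above, the local divergence along the attractor, estimated for the Brusselator with a Benettin-style two-trajectory method (track a nearby perturbed trajectory and renormalize the separation). Parameters A, B and all numerical settings are illustrative, not the paper's.

      # Estimate the average divergence rate (largest Lyapunov exponent) along the attractor.
      import numpy as np
      from scipy.integrate import solve_ivp

      A, B = 1.0, 3.0
      def brusselator(t, s):
          x, y = s
          return [A + x * x * y - (B + 1.0) * x, B * x - x * x * y]

      def step(s, dt):
          return solve_ivp(brusselator, (0.0, dt), s, rtol=1e-8, atol=1e-10).y[:, -1]

      dt, d0, n_steps = 0.05, 1e-8, 4000
      s = step(np.array([1.0, 1.0]), 50.0)          # discard transient, land on the attractor
      s_pert = s + np.array([d0, 0.0])
      log_sum = 0.0
      for _ in range(n_steps):
          s, s_pert = step(s, dt), step(s_pert, dt)
          d = np.linalg.norm(s_pert - s)
          log_sum += np.log(d / d0)
          s_pert = s + (s_pert - s) * (d0 / d)      # renormalize the separation
      print("largest Lyapunov exponent estimate:", log_sum / (n_steps * dt))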

  10. Regular perturbation theory for two-electron atoms

    International Nuclear Information System (INIS)

    Feranchuk, I.D.; Triguk, V.V.

    2011-01-01

    Regular perturbation theory (RPT) for the ground and excited states of two-electron atoms or ions is developed. It is shown for the first time that summation of the matrix elements of the electron-electron interaction operator over all intermediate states can be calculated in closed form by means of the two-particle Coulomb Green's function constructed in the Letter. It is shown that the second-order approximation of RPT includes the main part of the correlation energy for both the ground and excited states. This approach can also be useful for the description of two-electron atoms in external fields. -- Highlights: → We develop regular perturbation theory for two-electron atoms or ions. → We calculate the sum of the matrix elements over all intermediate states. → We construct the two-particle Coulomb Green's function.

  11. Regularization of the double period method for experimental data processing

    Science.gov (United States)

    Belov, A. A.; Kalitkin, N. N.

    2017-11-01

    In physical and technical applications, an important task is to process experimental curves measured with large errors. Such problems are solved by applying regularization methods, in which success depends on the mathematician's intuition. We propose an approximation based on the double period method developed for smooth nonperiodic functions. Tikhonov's stabilizer with a squared second derivative is used for regularization. As a result, the spurious oscillations are suppressed and the shape of an experimental curve is accurately represented. This approach offers a universal strategy for solving a broad class of problems. The method is illustrated by approximating cross sections of nuclear reactions important for controlled thermonuclear fusion. Tables recommended as reference data are obtained. These results are used to calculate the reaction rates, which are approximated in a way convenient for gasdynamic codes. These approximations are superior to previously known formulas in the covered temperature range and accuracy.
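
    A minimal sketch of the kind of regularization described above: fit a smooth curve to noisy samples by penalizing the squared second derivative (a Tikhonov stabilizer), i.e. minimize ||x - y||^2 + lam ||D2 x||^2. The data and lam are illustrative; the double period construction itself is not reproduced.

      # Second-derivative Tikhonov smoother solved via its normal equations.
      import numpy as np

      rng = np.random.default_rng(4)
      n, lam = 200, 50.0
      t = np.linspace(0.0, 1.0, n)
      y = np.exp(-3 * t) * np.sin(8 * t) + 0.05 * rng.normal(size=n)   # noisy "experimental curve"

      # Second-difference operator D2 of shape (n-2, n).
      D2 = np.zeros((n - 2, n))
      for i in range(n - 2):
          D2[i, i:i + 3] = [1.0, -2.0, 1.0]

      # Minimizer of ||x - y||^2 + lam ||D2 x||^2.
      x_smooth = np.linalg.solve(np.eye(n) + lam * D2.T @ D2, y)
      print(np.linalg.norm(x_smooth - y))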

  12. Point-splitting regularization of composite operators and anomalies

    International Nuclear Information System (INIS)

    Novotny, J.; Schnabl, M.

    2000-01-01

    The point-splitting regularization technique for composite operators is discussed in connection with anomaly calculation. We present a pedagogical and self-contained review of the topic with an emphasis on the technical details. We also develop simple algebraic tools to handle the path ordered exponential insertions used within the covariant and non-covariant version of the point-splitting method. The method is then applied to the calculation of the chiral, vector, trace, translation and Lorentz anomalies within diverse versions of the point-splitting regularization and a connection between the results is described. As an alternative to the standard approach we use the idea of deformed point-split transformation and corresponding Ward-Takahashi identities rather than an application of the equation of motion, which seems to reduce the complexity of the calculations. (orig.)

  13. Partial Regularity for Holonomic Minimisers of Quasiconvex Functionals

    Science.gov (United States)

    Hopper, Christopher P.

    2016-10-01

    We prove partial regularity for local minimisers of certain strictly quasiconvex integral functionals, over a class of Sobolev mappings into a compact Riemannian manifold, to which such mappings are said to be holonomically constrained. Our approach uses the lifting of Sobolev mappings to the universal covering space, the connectedness of the covering space, an application of Ekeland's variational principle and a certain tangential A-harmonic approximation lemma obtained directly via a Lipschitz approximation argument. This allows regularity to be established directly on the level of the gradient. Several applications to variational problems in condensed matter physics with broken symmetries are also discussed, in particular those concerning the superfluidity of liquid helium-3 and nematic liquid crystals.

  14. Facial Affect Recognition Using Regularized Discriminant Analysis-Based Algorithms

    Directory of Open Access Journals (Sweden)

    Cheng-Yuan Shih

    2010-01-01

    Full Text Available This paper presents a novel and effective method for facial expression recognition including happiness, disgust, fear, anger, sadness, surprise, and neutral state. The proposed method utilizes a regularized discriminant analysis-based boosting algorithm (RDAB) with effective Gabor features to recognize the facial expressions. An entropy criterion is applied to select the effective Gabor features, which are a subset of informative and nonredundant Gabor features. The proposed RDAB algorithm uses RDA as a learner in the boosting algorithm. The RDA combines strengths of linear discriminant analysis (LDA) and quadratic discriminant analysis (QDA). It solves the small sample size and ill-posed problems suffered by QDA and LDA through a regularization technique. Additionally, this study uses the particle swarm optimization (PSO) algorithm to estimate optimal parameters in RDA. Experimental results demonstrate that our approach can accurately and robustly recognize facial expressions.
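
    A hedged sketch of the core RDA idea itself (blending class covariances toward the pooled covariance and then toward a scaled identity); the boosting, Gabor features and PSO components of the paper are not reproduced, and the data and (lam, gam) values are synthetic.

      # Regularized discriminant analysis: covariance blending between QDA and LDA plus shrinkage.
      import numpy as np

      rng = np.random.default_rng(5)
      d = 10
      X0 = rng.normal(loc=0.0, size=(40, d))
      X1 = rng.normal(loc=0.7, size=(40, d))

      def rda_covariances(Xs, lam=0.5, gam=0.1):
          pooled = sum((len(X) - 1) * np.cov(X, rowvar=False) for X in Xs) / sum(len(X) - 1 for X in Xs)
          covs = []
          for X in Xs:
              S = (1 - lam) * np.cov(X, rowvar=False) + lam * pooled   # blend QDA -> LDA
              S = (1 - gam) * S + gam * (np.trace(S) / d) * np.eye(d)  # shrink toward scaled identity
              covs.append(S)
          return covs

      def rda_predict(x, Xs, covs):
          scores = []
          for X, S in zip(Xs, covs):
              diff = x - X.mean(axis=0)
              scores.append(-diff @ np.linalg.solve(S, diff) - np.linalg.slogdet(S)[1])
          return int(np.argmax(scores))

      covs = rda_covariances([X0, X1])
      print(rda_predict(rng.normal(loc=0.7, size=d), [X0, X1], covs))   # likely class 1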

  15. Condom use and intimacy among Tajik male migrants and their regular female partners in Moscow.

    Science.gov (United States)

    Zabrocki, Christopher; Polutnik, Chloe; Jonbekov, Jonbek; Shoakova, Farzona; Bahromov, Mahbat; Weine, Stevan

    2015-01-01

    This study examined condom use and intimacy among Tajik male migrants and their regular female partners in Moscow, Russia. This study included a survey of 400 Tajik male labour migrants and longitudinal ethnographic interviews with 30 of the surveyed male migrants and 30 of their regular female partners. Of the surveyed male migrants, 351 (88%) reported having a regular female partner in Moscow. Findings demonstrated that the migrants' and regular partners' intentions to use condoms diminished with increased intimacy, yet each party perceived intimacy differently. Migrants' intimacy with regular partners was determined by their familiarity and the perceived sexual cleanliness of their partner. Migrants believed that Muslim women were cleaner than Orthodox Christian women and reported using condoms more frequently with Orthodox Christian regular partners. Regular partners reported determining intimacy based on the perceived commitment of the male migrant. When perceived commitment faced a crisis, intimacy declined and regular partners renegotiated condom use. The association between intimacy and condom use suggests that HIV-prevention programmes should aim to help male migrants and female regular partners to dissociate their approaches to condom use from their perceptions of intimacy.

  16. Local regularity analysis of strata heterogeneities from sonic logs

    Directory of Open Access Journals (Sweden)

    S. Gaci

    2010-09-01

    Full Text Available Borehole logs provide geological information about the rocks crossed by the wells. Several properties of rocks can be interpreted in terms of lithology, type and quantity of the fluid filling the pores and fractures.

    Here, the logs are assumed to be nonhomogeneous Brownian motions (nhBms), which are generalized fractional Brownian motions (fBms) indexed by depth-dependent Hurst parameters H(z). Three techniques, the local wavelet approach (LWA), the average-local wavelet approach (ALWA), and the Peltier Algorithm (PA), are suggested to estimate the Hurst functions (or the regularity profiles) from the logs.

    First, two synthetic sonic logs with different parameters, shaped by the successive random additions (SRA) algorithm, are used to demonstrate the potential of the proposed methods. The obtained Hurst functions are close to the theoretical Hurst functions. Moreover, the transitions between the modeled layers are marked by discontinuities in the Hurst values. It is also shown that PA leads to the best Hurst value estimations.

    Second, we investigate the multifractional property of sonic log data recorded at two scientific deep boreholes: the pilot hole VB and the ultra-deep main hole HB, drilled for the German Continental Deep Drilling Program (KTB). All the regularity profiles independently obtained for the logs show a clear correlation with lithology, and from each regularity profile we derive a similar segmentation in terms of lithological units. The lithological discontinuities (strata bounds and fault contacts) are located at the local extrema of the Hurst functions. Moreover, the regularity profiles are compared with the KTB estimated porosity logs, showing a significant relation between the local extrema of the Hurst functions and the fluid-filled fractures. The Hurst function may then constitute a tool to characterize underground heterogeneities.
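
    A hedged sketch of a simple sliding-window Hurst estimate based on the scaling of increment variances, Var[X(z+s) - X(z)] ~ s^(2H), applied to a synthetic log; it is not the LWA, ALWA or PA estimators of the paper, and the window size and lags are illustrative.

      # Local Hurst profile from the log-log slope of increment variances in a sliding window.
      import numpy as np

      rng = np.random.default_rng(6)
      signal = np.cumsum(rng.normal(size=3000))       # synthetic "log" (ordinary Brownian motion, H = 0.5)

      def local_hurst(x, window=400, lags=(1, 2, 4, 8)):
          half = window // 2
          H = np.full(len(x), np.nan)
          for i in range(half, len(x) - half):
              seg = x[i - half:i + half]
              v = [np.var(seg[s:] - seg[:-s]) for s in lags]
              slope, _ = np.polyfit(np.log(lags), np.log(v), 1)
              H[i] = 0.5 * slope                      # slope = 2H for an fBm-like segment
          return H

      H = local_hurst(signal)
      print(np.nanmean(H))                            # should be close to 0.5 here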

  17. Regular non-twisting S-branes

    International Nuclear Information System (INIS)

    Obregon, Octavio; Quevedo, Hernando; Ryan, Michael P.

    2004-01-01

    We construct a family of time- and angular-dependent, regular S-brane solutions which corresponds to a simple analytical continuation of the Zipoy-Voorhees 4-dimensional vacuum spacetime. The solutions are asymptotically flat and turn out to be free of singularities without requiring a twist in space. They can be considered as the simplest non-singular generalization of the singular S0-brane solution. We analyze the properties of a representative of this family of solutions and show that it resembles to some extent the asymptotic properties of the regular Kerr S-brane. The R-symmetry corresponds, however, to the general Lorentzian symmetry. Several generalizations of this regular solution are derived which include a charged S-brane and an additional dilatonic field. (author)

  18. Regular transport dynamics produce chaotic travel times.

    Science.gov (United States)

    Villalobos, Jorge; Muñoz, Víctor; Rogan, José; Zarama, Roberto; Johnson, Neil F; Toledo, Benjamín; Valdivia, Juan Alejandro

    2014-06-01

    In the hope of making passenger travel times shorter and more reliable, many cities are introducing dedicated bus lanes (e.g., Bogota, London, Miami). Here we show that chaotic travel times are actually a natural consequence of individual bus function, and hence of public transport systems more generally, i.e., chaotic dynamics emerge even when the route is empty and straight, stops and lights are equidistant and regular, and loading times are negligible. More generally, our findings provide a novel example of chaotic dynamics emerging from a single object following Newton's laws of motion in a regularized one-dimensional system.

  19. Regularity of difference equations on Banach spaces

    CERN Document Server

    Agarwal, Ravi P; Lizama, Carlos

    2014-01-01

    This work introduces readers to the topic of maximal regularity for difference equations. The authors systematically present the method of maximal regularity, outlining basic linear difference equations along with relevant results. They address recent advances in the field, as well as basic semigroup and cosine operator theories in the discrete setting. The authors also identify some open problems that readers may wish to take up for further research. This book is intended for graduate students and researchers in the area of difference equations, particularly those with advance knowledge of and interest in functional analysis.

  20. PET regularization by envelope guided conjugate gradients

    International Nuclear Information System (INIS)

    Kaufman, L.; Neumaier, A.

    1996-01-01

    The authors propose a new way to iteratively solve large scale ill-posed problems, and in particular the image reconstruction problem in positron emission tomography, by exploiting the relation between Tikhonov regularization and multiobjective optimization to iteratively obtain approximations to the Tikhonov L-curve and its corner. Monitoring the change of the approximate L-curves allows us to adjust the regularization parameter adaptively during a preconditioned conjugate gradient iteration, so that the desired solution can be reconstructed with a small number of iterations
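
    A minimal sketch of the numerical building block underlying the approach: conjugate gradients applied to the Tikhonov normal equations (A'A + mu I) x = A'b for one fixed regularization parameter mu. The envelope-guided, L-curve-monitored adaptation of mu described in the paper is not reproduced, and the system below is a synthetic stand-in for a PET model.

      # Matrix-free CG on the Tikhonov normal equations.
      import numpy as np

      rng = np.random.default_rng(7)
      m, n, mu = 300, 120, 1e-2
      A = rng.normal(size=(m, n))                      # stand-in for the system matrix
      x_true = np.maximum(rng.normal(size=n), 0.0)
      b = A @ x_true + 0.1 * rng.normal(size=m)

      def cg_tikhonov(A, b, mu, n_iter=200, tol=1e-10):
          x = np.zeros(A.shape[1])
          apply_op = lambda v: A.T @ (A @ v) + mu * v  # normal-equations operator
          r = A.T @ b - apply_op(x)
          p, rs = r.copy(), r @ r
          for _ in range(n_iter):
              Ap = apply_op(p)
              alpha = rs / (p @ Ap)
              x += alpha * p
              r -= alpha * Ap
              rs_new = r @ r
              if np.sqrt(rs_new) < tol:
                  break
              p = r + (rs_new / rs) * p
              rs = rs_new
          return x

      x_rec = cg_tikhonov(A, b, mu)
      print(np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))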

  1. Matrix regularization of embedded 4-manifolds

    International Nuclear Information System (INIS)

    Trzetrzelewski, Maciej

    2012-01-01

    We consider products of two 2-manifolds such as S^2 × S^2, embedded in Euclidean space, and show that the corresponding 4-volume preserving diffeomorphism algebra can be approximated by a tensor product SU(N) ⊗ SU(N), i.e. functions on a manifold are approximated by the Kronecker product of two SU(N) matrices. A regularization of the 4-sphere is also performed by constructing N^2 × N^2 matrix representations of the 4-algebra (and, as a byproduct, of the 3-algebra, which makes the regularization of S^3 also possible).

  2. Chord length distributions between hard disks and spheres in regular, semi-regular, and quasi-random structures

    International Nuclear Information System (INIS)

    Olson, Gordon L.

    2008-01-01

    In binary stochastic media in two- and three-dimensions consisting of randomly placed impenetrable disks or spheres, the chord lengths in the background material between disks and spheres closely follow exponential distributions if the disks and spheres occupy less than 10% of the medium. This work demonstrates that for regular spatial structures of disks and spheres, the tails of the chord length distributions (CLDs) follow power laws rather than exponentials. In dilute media, when the disks and spheres are widely spaced, the slope of the power law seems to be independent of the details of the structure. When approaching a close-packed arrangement, the exact placement of the spheres can make a significant difference. When regular structures are perturbed by small random displacements, the CLDs become power laws with steeper slopes. An example CLD from a quasi-random distribution of spheres in clusters shows a modified exponential distribution

  3. Chord length distributions between hard disks and spheres in regular, semi-regular, and quasi-random structures

    Energy Technology Data Exchange (ETDEWEB)

    Olson, Gordon L. [Computer and Computational Sciences Division (CCS-2), Los Alamos National Laboratory, 5 Foxglove Circle, Madison, WI 53717 (United States)], E-mail: olson99@tds.net

    2008-11-15

    In binary stochastic media in two- and three-dimensions consisting of randomly placed impenetrable disks or spheres, the chord lengths in the background material between disks and spheres closely follow exponential distributions if the disks and spheres occupy less than 10% of the medium. This work demonstrates that for regular spatial structures of disks and spheres, the tails of the chord length distributions (CLDs) follow power laws rather than exponentials. In dilute media, when the disks and spheres are widely spaced, the slope of the power law seems to be independent of the details of the structure. When approaching a close-packed arrangement, the exact placement of the spheres can make a significant difference. When regular structures are perturbed by small random displacements, the CLDs become power laws with steeper slopes. An example CLD from a quasi-random distribution of spheres in clusters shows a modified exponential distribution.

  4. Relaxation Methods for Strictly Convex Regularizations of Piecewise Linear Programs

    International Nuclear Information System (INIS)

    Kiwiel, K. C.

    1998-01-01

    We give an algorithm for minimizing the sum of a strictly convex function and a convex piecewise linear function. It extends several dual coordinate ascent methods for large-scale linearly constrained problems that occur in entropy maximization, quadratic programming, and network flows. In particular, it may solve exact penalty versions of such (possibly inconsistent) problems, and subproblems of bundle methods for nondifferentiable optimization. It is simple, can exploit sparsity, and in certain cases is highly parallelizable. Its global convergence is established in the recent framework of B-functions (generalized Bregman functions)

  5. On a correspondence between regular and non-regular operator monotone functions

    DEFF Research Database (Denmark)

    Gibilisco, P.; Hansen, Frank; Isola, T.

    2009-01-01

    We prove the existence of a bijection between the regular and the non-regular operator monotone functions satisfying a certain functional equation. As an application we give a new proof of the operator monotonicity of certain functions related to the Wigner-Yanase-Dyson skew information....

  6. On the theory of drainage area for regular and non-regular points

    Science.gov (United States)

    Bonetti, S.; Bragg, A. D.; Porporato, A.

    2018-03-01

    The drainage area is an important, non-local property of a landscape, which controls surface and subsurface hydrological fluxes. Its role in numerous ecohydrological and geomorphological applications has given rise to several numerical methods for its computation. However, its theoretical analysis has lagged behind. Only recently, an analytical definition for the specific catchment area was proposed (Gallant & Hutchinson. 2011 Water Resour. Res. 47, W05535. (doi:10.1029/2009WR008540)), with the derivation of a differential equation whose validity is limited to regular points of the watershed. Here, we show that such a differential equation can be derived from a continuity equation (Chen et al. 2014 Geomorphology 219, 68-86. (doi:10.1016/j.geomorph.2014.04.037)) and extend the theory to critical and singular points both by applying Gauss's theorem and by means of a dynamical systems approach to define basins of attraction of local surface minima. Simple analytical examples as well as applications to more complex topographic surfaces are examined. The theoretical description of topographic features and properties, such as the drainage area, channel lines and watershed divides, can be broadly adopted to develop and test the numerical algorithms currently used in digital terrain analysis for the computation of the drainage area, as well as for the theoretical analysis of landscape evolution and stability.

  7. Regularity and irreversibility of weekly travel behavior

    NARCIS (Netherlands)

    Kitamura, R.; van der Hoorn, A.I.J.M.

    1987-01-01

    Dynamic characteristics of travel behavior are analyzed in this paper using weekly travel diaries from two waves of panel surveys conducted six months apart. An analysis of activity engagement indicates the presence of significant regularity in weekly activity participation between the two waves.

  8. Regular and context-free nominal traces

    DEFF Research Database (Denmark)

    Degano, Pierpaolo; Ferrari, Gian-Luigi; Mezzetti, Gianluca

    2017-01-01

    Two kinds of automata are presented, for recognising new classes of regular and context-free nominal languages. We compare their expressive power with analogous proposals in the literature, showing that they express novel classes of languages. Although many properties of classical languages hold ...

  9. Faster 2-regular information-set decoding

    NARCIS (Netherlands)

    Bernstein, D.J.; Lange, T.; Peters, C.P.; Schwabe, P.; Chee, Y.M.

    2011-01-01

    Fix positive integers B and w. Let C be a linear code over F_2 of length Bw. The 2-regular-decoding problem is to find a nonzero codeword consisting of w length-B blocks, each of which has Hamming weight 0 or 2. This problem appears in attacks on the FSB (fast syndrome-based) hash function and

  10. Complexity in union-free regular languages

    Czech Academy of Sciences Publication Activity Database

    Jirásková, G.; Masopust, Tomáš

    2011-01-01

    Roč. 22, č. 7 (2011), s. 1639-1653 ISSN 0129-0541 Institutional research plan: CEZ:AV0Z10190503 Keywords : Union-free regular language * one-cycle-free-path automaton * descriptional complexity Subject RIV: BA - General Mathematics Impact factor: 0.379, year: 2011 http://www.worldscinet.com/ijfcs/22/2207/S0129054111008933.html

  11. Regular Gleason Measures and Generalized Effect Algebras

    Science.gov (United States)

    Dvurečenskij, Anatolij; Janda, Jiří

    2015-12-01

    We study measures, finitely additive measures, regular measures, and σ-additive measures that can attain even infinite values on the quantum logic of a Hilbert space. We show when particular classes of non-negative measures can be studied in the frame of generalized effect algebras.

  12. Regularization of finite temperature string theories

    International Nuclear Information System (INIS)

    Leblanc, Y.; Knecht, M.; Wallet, J.C.

    1990-01-01

    The tachyonic divergences occurring in the free energy of various string theories at finite temperature are eliminated through the use of regularization schemes and analytic continuations. For closed strings, we obtain finite expressions which, however, develop an imaginary part above the Hagedorn temperature, whereas open string theories are still plagued with dilatonic divergences. (orig.)

  13. A Sim(2) invariant dimensional regularization

    Directory of Open Access Journals (Sweden)

    J. Alfaro

    2017-09-01

    Full Text Available We introduce a Sim(2) invariant dimensional regularization of loop integrals. Then we can compute the one loop quantum corrections to the photon self energy, electron self energy and vertex in the Electrodynamics sector of the Very Special Relativity Standard Model (VSRSM).

  14. Continuum regularized Yang-Mills theory

    International Nuclear Information System (INIS)

    Sadun, L.A.

    1987-01-01

    Using the machinery of stochastic quantization, Z. Bern, M. B. Halpern, C. Taubes and I recently proposed a continuum regularization technique for quantum field theory. This regularization may be implemented by applying a regulator to either the (d + 1)-dimensional Parisi-Wu Langevin equation or, equivalently, to the d-dimensional second order Schwinger-Dyson (SD) equations. This technique is non-perturbative, respects all gauge and Lorentz symmetries, and is consistent with a ghost-free gauge fixing (Zwanziger's). This thesis is a detailed study of this regulator, and of regularized Yang-Mills theory, using both perturbative and non-perturbative techniques. The perturbative analysis comes first. The mechanism of stochastic quantization is reviewed, and a perturbative expansion based on second-order SD equations is developed. A diagrammatic method (SD diagrams) for evaluating terms of this expansion is developed. We apply the continuum regulator to a scalar field theory. Using SD diagrams, we show that all Green functions can be rendered finite to all orders in perturbation theory. Even non-renormalizable theories can be regularized. The continuum regulator is then applied to Yang-Mills theory, in conjunction with Zwanziger's gauge fixing. A perturbative expansion of the regulator is incorporated into the diagrammatic method. It is hoped that the techniques discussed in this thesis will contribute to the construction of a renormalized Yang-Mills theory in 3 and 4 dimensions

  15. Gravitational lensing by a regular black hole

    International Nuclear Information System (INIS)

    Eiroa, Ernesto F; Sendra, Carlos M

    2011-01-01

    In this paper, we study a regular Bardeen black hole as a gravitational lens. We find the strong deflection limit for the deflection angle, from which we obtain the positions and magnifications of the relativistic images. As an example, we apply the results to the particular case of the supermassive black hole at the center of our galaxy.

  16. Gravitational lensing by a regular black hole

    Energy Technology Data Exchange (ETDEWEB)

    Eiroa, Ernesto F; Sendra, Carlos M, E-mail: eiroa@iafe.uba.ar, E-mail: cmsendra@iafe.uba.ar [Instituto de Astronomia y Fisica del Espacio, CC 67, Suc. 28, 1428, Buenos Aires (Argentina)

    2011-04-21

    In this paper, we study a regular Bardeen black hole as a gravitational lens. We find the strong deflection limit for the deflection angle, from which we obtain the positions and magnifications of the relativistic images. As an example, we apply the results to the particular case of the supermassive black hole at the center of our galaxy.

  17. Analytic stochastic regularization and gauge invariance

    International Nuclear Information System (INIS)

    Abdalla, E.; Gomes, M.; Lima-Santos, A.

    1986-05-01

    A proof that analytic stochastic regularization breaks gauge invariance is presented. This is done by an explicit one-loop calculation of the vacuum polarization tensor in scalar electrodynamics, which turns out not to be transversal. The counterterm structure, Langevin equations and the construction of composite operators in the general framework of stochastic quantization are also analysed. (Author) [pt

  18. Annotation of regular polysemy and underspecification

    DEFF Research Database (Denmark)

    Martínez Alonso, Héctor; Pedersen, Bolette Sandford; Bel, Núria

    2013-01-01

    We present the result of an annotation task on regular polysemy for a series of semantic classes or dot types in English, Danish and Spanish. This article describes the annotation process, the results in terms of inter-encoder agreement, and the sense distributions obtained with two methods...

  19. Stabilization, pole placement, and regular implementability

    NARCIS (Netherlands)

    Belur, MN; Trentelman, HL

    In this paper, we study control by interconnection of linear differential systems. We give necessary and sufficient conditions for regular implementability of a given linear differential system. We formulate the problems of stabilization and pole placement as problems of finding a suitable,

  20. 12 CFR 725.3 - Regular membership.

    Science.gov (United States)

    2010-01-01

    ... UNION ADMINISTRATION CENTRAL LIQUIDITY FACILITY § 725.3 Regular membership. (a) A natural person credit....5(b) of this part, and forwarding with its completed application funds equal to one-half of this... 1, 1979, is not required to forward these funds to the Facility until October 1, 1979. (3...

  1. Supervised scale-regularized linear convolutionary filters

    DEFF Research Database (Denmark)

    Loog, Marco; Lauze, Francois Bernard

    2017-01-01

    also be solved relatively efficiently. All in all, the idea is to properly control the scale of a trained filter, which we solve by introducing a specific regularization term into the overall objective function. We demonstrate, on an artificial filter learning problem, the capabilities of our basic...

  2. On regular riesz operators | Raubenheimer | Quaestiones ...

    African Journals Online (AJOL)

    The r-asymptotically quasi finite rank operators on Banach lattices are examples of regular Riesz operators. We characterise Riesz elements in a subalgebra of a Banach algebra in terms of Riesz elements in the Banach algebra. This enables us to characterise r-asymptotically quasi finite rank operators in terms of adjoint ...

  3. Regularized Discriminant Analysis: A Large Dimensional Study

    KAUST Repository

    Yang, Xiaoke

    2018-04-28

    In this thesis, we focus on studying the performance of general regularized discriminant analysis (RDA) classifiers. The data used for analysis is assumed to follow a Gaussian mixture model with different means and covariances. RDA offers a rich class of regularization options, covering as special cases the regularized linear discriminant analysis (RLDA) and the regularized quadratic discriminant analysis (RQDA) classifiers. We analyze RDA under the double asymptotic regime where the data dimension and the training size both increase in a proportional way. This double asymptotic regime allows for application of fundamental results from random matrix theory. Under the double asymptotic regime and some mild assumptions, we show that the asymptotic classification error converges to a deterministic quantity that only depends on the data statistical parameters and dimensions. This result not only reveals some mathematical relations between the misclassification error and the class statistics, but also can be leveraged to select the optimal parameters that minimize the classification error, thus yielding the optimal classifier. Validation results on the synthetic data show a good accuracy of our theoretical findings. We also construct a general consistent estimator to approximate the true classification error when the underlying statistics are unknown. We benchmark the performance of our proposed consistent estimator against a classical estimator on synthetic data. The observations demonstrate that the general estimator outperforms others in terms of mean squared error (MSE).

  4. Complexity in union-free regular languages

    Czech Academy of Sciences Publication Activity Database

    Jirásková, G.; Masopust, Tomáš

    2011-01-01

    Roč. 22, č. 7 (2011), s. 1639-1653 ISSN 0129-0541 Institutional research plan: CEZ:AV0Z10190503 Keywords : Union-free regular language * one-cycle-free-path automaton * descriptional complexity Subject RIV: BA - General Mathematics Impact factor: 0.379, year: 2011 http://www.worldscinet.com/ijfcs/22/2207/S0129054111008933.html

  5. Bit-coded regular expression parsing

    DEFF Research Database (Denmark)

    Nielsen, Lasse; Henglein, Fritz

    2011-01-01

    the DFA-based parsing algorithm due to Dubé and Feeley to emit the bits of the bit representation without explicitly materializing the parse tree itself. We furthermore show that Frisch and Cardelli’s greedy regular expression parsing algorithm can be straightforwardly modified to produce bit codings...

  6. Tetravalent one-regular graphs of order 4p^2

    DEFF Research Database (Denmark)

    Feng, Yan-Quan; Kutnar, Klavdija; Marusic, Dragan

    2014-01-01

    A graph is one-regular if its automorphism group acts regularly on the set of its arcs. In this paper, tetravalent one-regular graphs of order 4p^2, where p is a prime, are classified.

  7. Perturbation-Based Regularization for Signal Estimation in Linear Discrete Ill-posed Problems

    KAUST Repository

    Suliman, Mohamed Abdalla Elhag; Ballal, Tarig; Al-Naffouri, Tareq Y.

    2016-01-01

    Estimating the values of unknown parameters from corrupted measured data faces a lot of challenges in ill-posed problems. In such problems, many fundamental estimation methods fail to provide a meaningful stabilized solution. In this work, we propose a new regularization approach and a new regularization parameter selection approach for linear least-squares discrete ill-posed problems. The proposed approach is based on enhancing the singular-value structure of the ill-posed model matrix to acquire a better solution. Unlike many other regularization algorithms that seek to minimize the estimated data error, the proposed approach is developed to minimize the mean-squared error of the estimator which is the objective in many typical estimation scenarios. The performance of the proposed approach is demonstrated by applying it to a large set of real-world discrete ill-posed problems. Simulation results demonstrate that the proposed approach outperforms a set of benchmark regularization methods in most cases. In addition, the approach also enjoys the lowest runtime and offers the highest level of robustness amongst all the tested benchmark regularization methods.

  8. Perturbation-Based Regularization for Signal Estimation in Linear Discrete Ill-posed Problems

    KAUST Repository

    Suliman, Mohamed Abdalla Elhag

    2016-11-29

    Estimating the values of unknown parameters from corrupted measured data faces a lot of challenges in ill-posed problems. In such problems, many fundamental estimation methods fail to provide a meaningful stabilized solution. In this work, we propose a new regularization approach and a new regularization parameter selection approach for linear least-squares discrete ill-posed problems. The proposed approach is based on enhancing the singular-value structure of the ill-posed model matrix to acquire a better solution. Unlike many other regularization algorithms that seek to minimize the estimated data error, the proposed approach is developed to minimize the mean-squared error of the estimator which is the objective in many typical estimation scenarios. The performance of the proposed approach is demonstrated by applying it to a large set of real-world discrete ill-posed problems. Simulation results demonstrate that the proposed approach outperforms a set of benchmark regularization methods in most cases. In addition, the approach also enjoys the lowest runtime and offers the highest level of robustness amongst all the tested benchmark regularization methods.

  9. SAR image regularization with fast approximate discrete minimization.

    Science.gov (United States)

    Denis, Loïc; Tupin, Florence; Darbon, Jérôme; Sigelle, Marc

    2009-07-01

    Synthetic aperture radar (SAR) images, like other coherent imaging modalities, suffer from speckle noise. The presence of this noise makes the automatic interpretation of images a challenging task and noise reduction is often a prerequisite for successful use of classical image processing algorithms. Numerous approaches have been proposed to filter speckle noise. Markov random field (MRF) modelization provides a convenient way to express both data fidelity constraints and desirable properties of the filtered image. In this context, total variation minimization has been extensively used to constrain the oscillations in the regularized image while preserving its edges. Speckle noise follows heavy-tailed distributions, and the MRF formulation leads to a minimization problem involving nonconvex log-likelihood terms. Such a minimization can be performed efficiently by computing minimum cuts on weighted graphs. Due to memory constraints, exact minimization, although theoretically possible, is not achievable on large images required by remote sensing applications. The computational burden of the state-of-the-art algorithm for approximate minimization (namely the alpha-expansion) is too heavy, especially when considering joint regularization of several images. We show that a satisfying solution can be reached, in a few iterations, by performing a graph-cut-based combinatorial exploration of large trial moves. This algorithm is applied to joint regularization of the amplitude and interferometric phase in urban area SAR images.

  10. Nonlocal Regularized Algebraic Reconstruction Techniques for MRI: An Experimental Study

    Directory of Open Access Journals (Sweden)

    Xin Li

    2013-01-01

    Full Text Available We attempt to revitalize researchers' interest in algebraic reconstruction techniques (ART) by expanding their capabilities and demonstrating their potential in speeding up the process of MRI acquisition. Using a continuous-to-discrete model, we experimentally study the application of ART to MRI reconstruction, which unifies previous nonuniform fast Fourier transform (NUFFT)-based and gridding-based approaches. Under the framework of ART, we advocate the use of nonlocal regularization techniques which are leveraged from our previous research on modeling photographic images. It is experimentally shown that nonlocal regularization ART (NR-ART) can often outperform its local counterparts in terms of both subjective and objective qualities of reconstructed images. On one real-world k-space data set, we find that nonlocal regularization can achieve satisfactory reconstruction from as few as one-third of the samples. We also address an issue related to image reconstruction from real-world k-space data but overlooked in the open literature: the consistency of reconstructed images across different resolutions. A resolution-consistent extension of NR-ART is developed and shown to effectively suppress the artifacts arising from frequency extrapolation. Both source code and experimental results of this work are made fully reproducible.
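
    A minimal sketch of the plain ART (Kaczmarz) sweep that the nonlocal-regularized variant builds on; the nonlocal regularization step itself is not reproduced, and the small synthetic system below stands in for the k-space/imaging model.

      # One Kaczmarz projection per measurement row, repeated over several sweeps.
      import numpy as np

      rng = np.random.default_rng(8)
      m, n = 150, 100
      A = rng.normal(size=(m, n))           # measurement model (rows = individual measurements)
      x_true = rng.normal(size=n)
      b = A @ x_true

      def art(A, b, n_sweeps=50, relax=1.0):
          x = np.zeros(A.shape[1])
          row_norm2 = np.einsum('ij,ij->i', A, A)
          for _ in range(n_sweeps):
              for i in range(A.shape[0]):   # project onto the hyperplane of row i
                  x += relax * (b[i] - A[i] @ x) / row_norm2[i] * A[i]
          return x

      x_rec = art(A, b)
      print(np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))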

  11. A Large Dimensional Analysis of Regularized Discriminant Analysis Classifiers

    KAUST Repository

    Elkhalil, Khalil

    2017-11-01

    This article carries out a large dimensional analysis of standard regularized discriminant analysis classifiers designed on the assumption that data arise from a Gaussian mixture model with different means and covariances. The analysis relies on fundamental results from random matrix theory (RMT) when both the number of features and the cardinality of the training data within each class grow large at the same pace. Under mild assumptions, we show that the asymptotic classification error approaches a deterministic quantity that depends only on the means and covariances associated with each class as well as the problem dimensions. Such a result permits a better understanding of the performance of regularized discriminant analysis, in practical large but finite dimensions, and can be used to determine and pre-estimate the optimal regularization parameter that minimizes the misclassification error probability. Despite being theoretically valid only for Gaussian data, our findings are shown to yield a high accuracy in predicting the performances achieved with real data sets drawn from the popular USPS database, thereby making an interesting connection between theory and practice.

  12. Real time QRS complex detection using DFA and regular grammar.

    Science.gov (United States)

    Hamdi, Salah; Ben Abdallah, Asma; Bedoui, Mohamed Hedi

    2017-02-28

    Detection of the QRS complex (the sequence of Q, R, and S peaks) is a crucial procedure in electrocardiogram (ECG) processing and analysis. We propose a novel approach for QRS complex detection based on deterministic finite automata with the addition of some constraints. This paper confirms that regular grammar is useful for extracting QRS complexes and interpreting normalized ECG signals. A QRS is assimilated to a pair of adjacent peaks which meet certain criteria of standard deviation and duration. The proposed method was applied to several kinds of ECG signals from the standard MIT-BIH arrhythmia database. A total of 48 signals were used. For an input signal, several parameters were determined, such as QRS durations, RR distances, and the peaks' amplitudes. The σRR and σQRS parameters were added to quantify the regularity of RR distances and QRS durations, respectively. The sensitivity rate of the suggested method was 99.74% and the specificity rate was 99.86%. Moreover, the variations of the sensitivity and specificity rates with the signal-to-noise ratio were evaluated. Regular grammar with the addition of some constraints and deterministic automata proved functional for ECG signal diagnosis. Compared to statistical methods, the use of grammar provides satisfactory and competitive results and indices that are comparable to or even better than those cited in the literature.
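
    A heavily hedged sketch of the stated peak-pairing idea (a QRS taken as a pair of adjacent peaks meeting duration and amplitude criteria) on a synthetic trace; it is not the authors' automaton or grammar implementation, and the signal, sampling rate and thresholds are illustrative.

      # Pair adjacent candidate peaks that are close enough in time and far enough in amplitude.
      import numpy as np
      from scipy.signal import find_peaks

      fs = 360.0                                           # sampling rate (Hz), as in MIT-BIH
      t = np.arange(0, 10, 1 / fs)
      ecg = np.zeros_like(t)
      for beat in np.arange(0.5, 10, 0.8):                 # crude synthetic R waves every 0.8 s
          ecg += np.exp(-((t - beat) ** 2) / (2 * 0.01 ** 2))
      ecg += 0.05 * np.random.default_rng(9).normal(size=t.size)

      # Candidate peaks on the signal and its negation (R-like and S-like deflections).
      pos, _ = find_peaks(ecg, height=0.4, distance=int(0.2 * fs))
      neg, _ = find_peaks(-ecg, height=0.05, distance=int(0.05 * fs))
      candidates = np.sort(np.concatenate([pos, neg]))

      qrs = []
      for a, b in zip(candidates[:-1], candidates[1:]):
          duration = (b - a) / fs
          if duration < 0.12 and abs(ecg[a] - ecg[b]) > 0.3:   # QRS-like width and deflection
              qrs.append((a, b))
      print(len(qrs), "candidate QRS pairs")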

  13. Enhanced manifold regularization for semi-supervised classification.

    Science.gov (United States)

    Gan, Haitao; Luo, Zhizeng; Fan, Yingle; Sang, Nong

    2016-06-01

    Manifold regularization (MR) has become one of the most widely used approaches in the semi-supervised learning field. It has shown superiority by exploiting the local manifold structure of both labeled and unlabeled data. The manifold structure is modeled by constructing a Laplacian graph and then incorporated in learning through a smoothness regularization term. Hence the labels of labeled and unlabeled data vary smoothly along the geodesics on the manifold. However, MR has ignored the discriminative ability of the labeled and unlabeled data. To address the problem, we propose an enhanced MR framework for semi-supervised classification in which the local discriminative information of the labeled and unlabeled data is explicitly exploited. To make full use of labeled data, we first employ a semi-supervised clustering method to discover the underlying data space structure of the whole dataset. Then we construct a local discrimination graph to model the discriminative information of labeled and unlabeled data according to the discovered intrinsic structure. Therefore, data points that may be from different clusters, though similar on the manifold, are forced to be far away from each other. Finally, the discrimination graph is incorporated into the MR framework. In particular, we utilize semi-supervised fuzzy c-means and Laplacian regularized Kernel minimum squared error for semi-supervised clustering and classification, respectively. Experimental results on several benchmark datasets and face recognition demonstrate the effectiveness of our proposed method.
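
    As background for the MR framework being extended, a simplified Laplacian-regularized least-squares (LapRLS-style) solve is sketched below; scaling constants are omitted and the paper's additional discrimination graph is not included.

```python
import numpy as np

def laprls_train(K, y_labeled, n_labeled, gamma_a, gamma_i, L):
    """Simplified Laplacian-regularized least squares.
    K: (n, n) kernel over labeled+unlabeled points (labeled first),
    L: (n, n) graph Laplacian, y_labeled: targets for the first n_labeled points."""
    n = K.shape[0]
    J = np.zeros((n, n))
    J[:n_labeled, :n_labeled] = np.eye(n_labeled)   # selects the labeled part of the loss
    y = np.zeros(n)
    y[:n_labeled] = y_labeled
    # (J K + gamma_a I + gamma_i L K) alpha = y   (scaling constants omitted)
    alpha = np.linalg.solve(J @ K + gamma_a * np.eye(n) + gamma_i * L @ K, y)
    return alpha   # prediction at a new point x: sum_j alpha_j k(x, x_j)
```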

  14. GLOBAL OPTIMIZATION METHODS FOR GRAVITATIONAL LENS SYSTEMS WITH REGULARIZED SOURCES

    International Nuclear Information System (INIS)

    Rogers, Adam; Fiege, Jason D.

    2012-01-01

    Several approaches exist to model gravitational lens systems. In this study, we apply global optimization methods to find the optimal set of lens parameters using a genetic algorithm. We treat the full optimization procedure as a two-step process: an analytical description of the source plane intensity distribution is used to find an initial approximation to the optimal lens parameters; the second stage of the optimization uses a pixelated source plane with the semilinear method to determine an optimal source. Regularization is handled by means of an iterative method and the generalized cross validation (GCV) and unbiased predictive risk estimator (UPRE) functions that are commonly used in standard image deconvolution problems. This approach simultaneously estimates the optimal regularization parameter and the number of degrees of freedom in the source. Using the GCV and UPRE functions, we are able to justify an estimation of the number of source degrees of freedom found in previous work. We test our approach by applying our code to a subset of the lens systems included in the SLACS survey.
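
    The GCV criterion used above to pick the regularization parameter can be illustrated on a generic Tikhonov-regularized linear problem; the lensing-specific semilinear operator is not reproduced, and `M`, `y`, and the search grid are placeholders.

```python
import numpy as np

def gcv_score(M, y, lam):
    """Generalized cross-validation score for Tikhonov regularization of M x ~ y."""
    n, p = M.shape
    H = M @ np.linalg.solve(M.T @ M + lam * np.eye(p), M.T)   # influence matrix
    resid = y - H @ y
    dof = n - np.trace(H)        # effective residual degrees of freedom
    return n * (resid @ resid) / dof**2

def pick_lambda(M, y, grid):
    """Choose the regularization parameter minimizing the GCV score over a grid."""
    return min(grid, key=lambda lam: gcv_score(M, y, lam))
```

    The trace of the influence matrix plays the role of the number of effective degrees of freedom in the source, which is how this type of criterion links the regularization parameter to source complexity.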

  15. Regularization in global sound equalization based on effort variation

    DEFF Research Database (Denmark)

    Stefanakis, Nick; Sarris, John; Jacobsen, Finn

    2009-01-01

    Sound equalization in closed spaces can be significantly improved by generating propagating waves that are naturally associated with the geometry, as, for example, plane waves in rectangular enclosures. This paper presents a control approach termed effort variation regularization based on this idea. Effort variation equalization involves modifying the conventional cost function in sound equalization, which is based on minimizing least-squares reproduction errors, by adding a term that is proportional to the squared deviations between complex source strengths, calculated independently for the sources...

  16. Analytic semigroups and optimal regularity in parabolic problems

    CERN Document Server

    Lunardi, Alessandra

    2012-01-01

    The book shows how the abstract methods of analytic semigroups and evolution equations in Banach spaces can be fruitfully applied to the study of parabolic problems. Particular attention is paid to optimal regularity results in linear equations. Furthermore, these results are used to study several other problems, especially fully nonlinear ones. Owing to the new unified approach chosen, known theorems are presented from a novel perspective and new results are derived. The book is self-contained. It is addressed to PhD students and researchers interested in abstract evolution equations and in parabolic problems.

  17. Female non-regular workers in Japan: their current status and health.

    Science.gov (United States)

    Inoue, Mariko; Nishikitani, Mariko; Tsurugano, Shinobu

    2016-12-07

    The participation of women in the Japanese labor force is characterized by its M-shaped curve, which reflects decreased employment rates during child-rearing years. Although this M-shaped curve is now improving, the majority of women in employment are likely to fall into the category of non-regular workers. Based on a review of previous Japanese studies of the health of non-regular workers, we found that non-regular female workers experienced greater psychological distress, poorer self-rated health, a higher smoking rate, and less access to preventive medicine than regular workers did. However, despite the large number of non-regular workers, there is limited research regarding their health. In contrast, several studies in Japan concluded that regular workers also had worse health conditions due to the additional responsibility and longer work hours associated with the job, housekeeping, and child rearing. The health of non-regular workers might be threatened by the effects of precarious employment status, lower income, a weaker safety net, outdated social norms regarding non-regular workers, and difficulty in achieving a work-life balance. A sector-wide social approach that considers the life course is needed to protect female workers' health and well-being; promotion of an occupational health program alone is insufficient.

  18. Female non-regular workers in Japan: their current status and health

    Science.gov (United States)

    INOUE, Mariko; NISHIKITANI, Mariko; TSURUGANO, Shinobu

    2016-01-01

    The participation of women in the Japanese labor force is characterized by its M-shaped curve, which reflects decreased employment rates during child-rearing years. Although this M-shaped curve is now improving, the majority of women in employment are likely to fall into the category of non-regular workers. Based on a review of previous Japanese studies of the health of non-regular workers, we found that non-regular female workers experienced greater psychological distress, poorer self-rated health, a higher smoking rate, and less access to preventive medicine than regular workers did. However, despite the large number of non-regular workers, there is limited research regarding their health. In contrast, several studies in Japan concluded that regular workers also had worse health conditions due to the additional responsibility and longer work hours associated with the job, housekeeping, and child rearing. The health of non-regular workers might be threatened by the effects of precarious employment status, lower income, a weaker safety net, outdated social norms regarding non-regular workers, and difficulty in achieving a work-life balance. A sector-wide social approach that considers the life course is needed to protect female workers' health and well-being; promotion of an occupational health program alone is insufficient. PMID:27818453

  19. Using Tikhonov Regularization for Spatial Projections from CSR Regularized Spherical Harmonic GRACE Solutions

    Science.gov (United States)

    Save, H.; Bettadpur, S. V.

    2013-12-01

    It has been demonstrated before that using Tikhonov regularization produces spherical harmonic solutions from GRACE that show very little residual striping while capturing all the signal observed by GRACE within the noise level. This paper demonstrates a two-step process that uses Tikhonov regularization to remove the residual stripes in the CSR regularized spherical harmonic coefficients when computing the spatial projections. We discuss methods to produce mass anomaly grids that have no stripe features while satisfying the necessary condition of capturing all observed signal within the GRACE noise level.

  20. Compressive sensing of electrocardiogram signals by promoting sparsity on the second-order difference and by using dictionary learning.

    Science.gov (United States)

    Pant, Jeevan K; Krishnan, Sridhar

    2014-04-01

    A new algorithm for the reconstruction of electrocardiogram (ECG) signals and a dictionary learning algorithm for the enhancement of its reconstruction performance for a class of signals are proposed. The signal reconstruction algorithm is based on minimizing the lp pseudo-norm of the second-order difference of the signal, referred to as the lp(2d) pseudo-norm. The optimization involved is carried out using a sequential conjugate-gradient algorithm. The dictionary learning algorithm uses an iterative procedure wherein a signal reconstruction step and a dictionary update step are repeated until a convergence criterion is satisfied. The signal reconstruction step is implemented by using the proposed signal reconstruction algorithm, and the dictionary update step is implemented by using the linear least-squares method. Extensive simulation results demonstrate that the proposed algorithm yields improved reconstruction performance for temporally correlated ECG signals relative to the state-of-the-art lp(1d)-regularized least-squares and Bayesian-learning-based algorithms. Also, for a known class of signals, the reconstruction performance of the proposed algorithm can be improved by applying it in conjunction with a dictionary obtained using the proposed dictionary learning algorithm.
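
    A rough sketch of the objective described above (data fidelity plus a smoothed lp pseudo-norm of the second-order difference) is given below, minimized here with plain gradient descent rather than the paper's sequential conjugate-gradient solver; the smoothing constant, step size, and sensing matrix `Phi` are assumptions.

```python
import numpy as np

def second_diff_matrix(n):
    """Second-order difference operator as a dense matrix (for clarity, not efficiency)."""
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    return D

def lp2d_objective_grad(x, Phi, y, D, p=0.5, tau=1e-2, eps=1e-8):
    """Data fidelity plus a smoothed lp pseudo-norm of the second-order difference."""
    r = Phi @ x - y
    d = D @ x
    w = (d * d + eps) ** (p / 2 - 1)           # smoothed |.|^p derivative weights
    obj = r @ r + tau * np.sum((d * d + eps) ** (p / 2))
    grad = 2 * Phi.T @ r + tau * p * D.T @ (w * d)
    return obj, grad

def reconstruct(Phi, y, n, iters=500, step=1e-3):
    """Gradient-descent reconstruction of an n-sample signal from measurements y = Phi x."""
    x = np.zeros(n)
    D = second_diff_matrix(n)
    for _ in range(iters):
        _, g = lp2d_objective_grad(x, Phi, y, D)
        x -= step * g
    return x
```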

  1. Stream Processing Using Grammars and Regular Expressions

    DEFF Research Database (Denmark)

    Rasmussen, Ulrik Terp

    ... disambiguation. The first algorithm operates in two passes in a semi-streaming fashion, using a constant amount of working memory and an auxiliary tape storage which is written in the first pass and consumed by the second. The second algorithm is a single-pass and optimally streaming algorithm which outputs as much of the parse tree as is semantically possible based on the input prefix read so far, and resorts to buffering as many symbols as is required to resolve the next choice. Optimality is obtained by performing a PSPACE-complete pre-analysis on the regular expression. In the second part we present Kleenex, a language for expressing high-performance streaming string processing programs as regular grammars with embedded semantic actions, and its compilation to streaming string transducers with worst-case linear-time performance. Its underlying theory is based on transducer decomposition into oracle...

  2. Describing chaotic attractors: Regular and perpetual points

    Science.gov (United States)

    Dudkowski, Dawid; Prasad, Awadhesh; Kapitaniak, Tomasz

    2018-03-01

    We study the concepts of regular and perpetual points for describing the behavior of chaotic attractors in dynamical systems. The idea of these points, which have been recently introduced to theoretical investigations, is thoroughly discussed and extended into new types of models. We analyze the correlation between regular and perpetual points, as well as their relation with phase space, showing the potential usefulness of both types of points in the qualitative description of co-existing states. The ability of perpetual points to find attractors is indicated, along with its potential cause. The location of chaotic trajectories and of the sets of considered points is investigated, and a study of the stability of the systems is presented. A statistical analysis of observing the desired states is performed. We focus on various types of dynamical systems, i.e., chaotic flows with self-excited and hidden attractors, forced mechanical models, and semiconductor superlattices, exhibiting the universality of appearance of the observed patterns and relations.

  3. Chaos regularization of quantum tunneling rates

    International Nuclear Information System (INIS)

    Pecora, Louis M.; Wu Dongho; Lee, Hoshik; Antonsen, Thomas; Lee, Ming-Jer; Ott, Edward

    2011-01-01

    Quantum tunneling rates through a barrier separating two-dimensional, symmetric, double-well potentials are shown to depend on the classical dynamics of the billiard trajectories in each well and, hence, on the shape of the wells. For shapes that lead to regular (integrable) classical dynamics the tunneling rates fluctuate greatly with the eigenenergies of the states, sometimes by over two orders of magnitude. In contrast, shapes that lead to completely chaotic trajectories yield tunneling rates whose fluctuations are greatly reduced, a phenomenon we call regularization of tunneling rates. We show that a random-plane-wave theory of tunneling accounts for the mean tunneling rates and the small fluctuation variances for the chaotic systems.

  4. Least square regularized regression in sum space.

    Science.gov (United States)

    Xu, Yong-Li; Chen, Di-Rong; Li, Han-Xiong; Liu, Lu

    2013-04-01

    This paper proposes a least square regularized regression algorithm in the sum space of reproducing kernel Hilbert spaces (RKHSs) for nonflat function approximation, and obtains the solution of the algorithm by solving a system of linear equations. This algorithm can approximate the low- and high-frequency components of the target function with large- and small-scale kernels, respectively. The convergence and learning rate are analyzed. We measure the complexity of the sum space by its covering number and demonstrate that the covering number can be bounded by the product of the covering numbers of the basic RKHSs. For a sum space of RKHSs with Gaussian kernels, by choosing appropriate parameters, we trade off the sample error and the regularization error, and obtain a polynomial learning rate, which is better than that in any single RKHS. The utility of this method is illustrated with two simulated data sets and five real-life databases.
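
    One simple way to realize regression in a sum of RKHSs, consistent with the idea above of combining large- and small-scale Gaussian kernels, is to sum the kernels and solve a single regularized least-squares system; this is a sketch of the idea rather than the paper's exact estimator or its error analysis, and the scales and regularization weight are placeholders.

```python
import numpy as np

def gaussian_kernel(X, Z, scale):
    """Gaussian kernel matrix between rows of X and rows of Z."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * scale ** 2))

def sum_space_ridge(X, y, scales=(0.1, 2.0), lam=1e-2):
    """Kernel ridge regression with a sum of Gaussian kernels at small and large scales,
    intended to capture high- and low-frequency parts of the target function."""
    K = sum(gaussian_kernel(X, X, s) for s in scales)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    def predict(Xnew):
        Knew = sum(gaussian_kernel(Xnew, X, s) for s in scales)
        return Knew @ alpha
    return predict
```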

  5. Contour Propagation With Riemannian Elasticity Regularization

    DEFF Research Database (Denmark)

    Bjerre, Troels; Hansen, Mads Fogtmann; Sapru, W.

    2011-01-01

    Purpose/Objective(s): Adaptive techniques allow for correction of spatial changes during the time course of fractionated radiotherapy. Spatial changes include tumor shrinkage and weight loss, causing tissue deformation and residual positional errors even after translational and rotational image registration ... the planning CT onto the rescans and correcting to reflect actual anatomical changes. For deformable registration, a free-form, multi-level, B-spline deformation model with Riemannian elasticity, penalizing non-rigid local deformations and volumetric changes, was used. Regularization parameters were defined ... on the original delineation and tissue deformation in the time course between scans form a better starting point than rigid propagation. There was no significant difference between locally and globally defined regularization. The method used in the present study suggests that deformed contours need to be reviewed...

  6. Thin accretion disk around regular black hole

    Directory of Open Access Journals (Sweden)

    QIU Tianqi

    2014-08-01

    Full Text Available Penrose's cosmic censorship conjecture says that naked singularities do not exist in nature. So it seems reasonable to further conjecture that not even a singularity exists in nature. In this paper, a regular black hole without singularity is studied in detail, especially its thin accretion disk, energy flux, radiation temperature, and accretion efficiency. It is found that the interaction of the regular black hole is stronger than that of the Schwarzschild black hole. Furthermore, the thin accretion disk loses energy more efficiently as the mass of the black hole decreases. These particular properties may be used to distinguish between black holes.

  7. Convex nonnegative matrix factorization with manifold regularization.

    Science.gov (United States)

    Hu, Wenjun; Choi, Kup-Sze; Wang, Peiliang; Jiang, Yunliang; Wang, Shitong

    2015-03-01

    Nonnegative Matrix Factorization (NMF) has been extensively applied in many areas, including computer vision, pattern recognition, text mining, and signal processing. However, nonnegative entries are usually required for the data matrix in NMF, which limits its application. Besides, while the basis and encoding vectors obtained by NMF can represent the original data in low dimension, the representations do not always reflect the intrinsic geometric structure embedded in the data. Motivated by manifold learning and Convex NMF (CNMF), we propose a novel matrix factorization method called Graph Regularized and Convex Nonnegative Matrix Factorization (GCNMF) by introducing a graph regularized term into CNMF. The proposed matrix factorization technique not only inherits the intrinsic low-dimensional manifold structure, but also allows the processing of mixed-sign data matrix. Clustering experiments on nonnegative and mixed-sign real-world data sets are conducted to demonstrate the effectiveness of the proposed method. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. A short proof of increased parabolic regularity

    Directory of Open Access Journals (Sweden)

    Stephen Pankavich

    2015-08-01

    Full Text Available We present a short proof of the increased regularity obtained by solutions to uniformly parabolic partial differential equations. Though this setting is fairly introductory, our new method of proof, which uses a priori estimates and an inductive method, can be extended to prove analogous results for problems with time-dependent coefficients, advection-diffusion or reaction diffusion equations, and nonlinear PDEs even when other tools, such as semigroup methods or the use of explicit fundamental solutions, are unavailable.

  9. Regular black hole in three dimensions

    OpenAIRE

    Myung, Yun Soo; Yoon, Myungseok

    2008-01-01

    We find a new black hole in three dimensional anti-de Sitter space by introducing an anisotropic perfect fluid inspired by the noncommutative black hole. This is a regular black hole with two horizons. We compare thermodynamics of this black hole with that of non-rotating BTZ black hole. The first-law of thermodynamics is not compatible with the Bekenstein-Hawking entropy.

  10. Analytic stochastic regularization and gauge theories

    International Nuclear Information System (INIS)

    Abdalla, E.; Gomes, M.; Lima-Santos, A.

    1987-04-01

    We prove that analytic stochastic regularization breaks gauge invariance. This is done by an explicit one-loop calculation of the two-, three- and four-point vertex functions of the gluon field in scalar chromodynamics, which turn out not to be gauge invariant. We analyse the counterterm structure, Langevin equations and the construction of composite operators in the general framework of stochastic quantization. (author) [pt

  11. Preconditioners for regularized saddle point matrices

    Czech Academy of Sciences Publication Activity Database

    Axelsson, Owe

    2011-01-01

    Vol. 19, No. 2 (2011), pp. 91-112 ISSN 1570-2820 Institutional research plan: CEZ:AV0Z30860518 Keywords: saddle point matrices * preconditioning * regularization * eigenvalue clustering Subject RIV: BA - General Mathematics Impact factor: 0.533, year: 2011 http://www.degruyter.com/view/j/jnma.2011.19.issue-2/jnum.2011.005/jnum.2011.005.xml

  12. Analytic stochastic regularization: gauge and supersymmetry theories

    International Nuclear Information System (INIS)

    Abdalla, M.C.B.

    1988-01-01

    Analytic stochastic regularization for gauge and supersymmetric theories is considered. Gauge invariance in spinor and scalar QCD is verified to break down by an explicit one-loop computation of the two-, three- and four-point vertex functions of the gluon field. As a result, non-gauge-invariant counterterms must be added. However, in the supersymmetric multiplets there is a cancellation rendering the counterterms gauge invariant. The calculation is considered at one-loop order. (author) [pt

  13. Minimal length uncertainty relation and ultraviolet regularization

    Science.gov (United States)

    Kempf, Achim; Mangano, Gianpiero

    1997-06-01

    Studies in string theory and quantum gravity suggest the existence of a finite lower limit Δx0 to the possible resolution of distances, at the latest on the scale of the Planck length of 10-35 m. Within the framework of the Euclidean path integral we explicitly show ultraviolet regularization in field theory through this short distance structure. Both rotation and translation invariance can be preserved. An example is studied in detail.

  14. Regularity and chaos in cavity QED

    International Nuclear Information System (INIS)

    Bastarrachea-Magnani, Miguel Angel; López-del-Carpio, Baldemar; Chávez-Carlos, Jorge; Lerma-Hernández, Sergio; Hirsch, Jorge G

    2017-01-01

    The interaction of a quantized electromagnetic field in a cavity with a set of two-level atoms inside it can be described with algebraic Hamiltonians of increasing complexity, from the Rabi to the Dicke models. Their algebraic character allows, through the use of coherent states, a semiclassical description in phase space, where the non-integrable Dicke model has regions associated with regular and chaotic motion. The appearance of classical chaos can be quantified by calculating the largest Lyapunov exponent over the whole available phase space for a given energy. In the quantum regime, employing efficient diagonalization techniques, we are able to perform a detailed quantitative study of the regular and chaotic regions, where the quantum participation ratio (PR) of coherent states on the eigenenergy basis plays a role equivalent to that of the Lyapunov exponent. It is noted that, in the thermodynamic limit, dividing the participation ratio by the number of atoms leads to a positive value in chaotic regions, while it tends to zero in the regular ones. (paper)

  15. Regularizations: different recipes for identical situations

    International Nuclear Information System (INIS)

    Gambin, E.; Lobo, C.O.; Battistel, O.A.

    2004-03-01

    We present a discussion in which the choice of the regularization procedure and the routing of the internal-line momenta are put at the same level of arbitrariness in the analysis of Ward identities involving simple and well-known problems in QFT: the complex self-interacting scalar field and two simple models where the SVV and AVV processes are pertinent. We show that, in all these problems, the conditions for the preservation of symmetry relations are put in terms of the same combination of divergent Feynman integrals, which are evaluated in the context of a very general calculational strategy concerning the manipulations and calculations involving divergences. Within the adopted strategy, all the arbitrariness intrinsic to the problem is maintained in the final results and, consequently, a perfect map can be obtained with the corresponding results of the traditional regularization techniques. We show that, when we require a universal interpretation of the arbitrariness involved, in order to be consistent with all stated physical constraints, a strong condition is imposed on regularizations which automatically eliminates the ambiguities associated with the routing of the internal-line momenta of loops. The conclusion is clean and sound: the association between ambiguities and unavoidable symmetry violations in Ward identities cannot be maintained if a unique recipe is required for identical situations in the evaluation of divergent physical amplitudes. (author)

  16. Recursive regularization step for high-order lattice Boltzmann methods

    Science.gov (United States)

    Coreixas, Christophe; Wissocq, Gauthier; Puigt, Guillaume; Boussuge, Jean-François; Sagaut, Pierre

    2017-09-01

    A lattice Boltzmann method (LBM) with enhanced stability and accuracy is presented for various Hermite tensor-based lattice structures. The collision operator relies on a regularization step, which is here improved through a recursive computation of nonequilibrium Hermite polynomial coefficients. In addition to the reduced computational cost of this procedure with respect to the standard one, the recursive step makes it possible to considerably enhance the stability and accuracy of the numerical scheme by properly filtering out second- (and higher-) order nonhydrodynamic contributions in under-resolved conditions. This is first shown in the isothermal case, where the simulation of the doubly periodic shear layer is performed with a Reynolds number ranging from 10⁴ to 10⁶, and where a thorough analysis of the case at Re = 3×10⁴ is conducted. In the latter, results obtained using both regularization steps are compared against the Bhatnagar-Gross-Krook LBM for standard (D2Q9) and high-order (D2V17 and D2V37) lattice structures, confirming the tremendous increase of the stability range of the proposed approach. Further comparisons on thermal and fully compressible flows, using the general extension of this procedure, are then conducted through the numerical simulation of Sod shock tubes with the D2V37 lattice. They confirm the stability increase induced by the recursive approach as compared with the standard one.

  17. The impact of comorbid cannabis and methamphetamine use on mental health among regular ecstasy users.

    Science.gov (United States)

    Scott, Laura A; Roxburgh, Amanda; Bruno, Raimondo; Matthews, Allison; Burns, Lucy

    2012-09-01

    Residual effects of ecstasy use induce neurotransmitter changes that make it biologically plausible that extended use of the drug may induce psychological distress. However, there has been only mixed support for this in the literature. The presence of polysubstance use is a confounding factor. The aim of this study was to investigate whether regular cannabis and/or regular methamphetamine use confers additional risk of poor mental health and high levels of psychological distress, beyond regular ecstasy use alone. Three years of data from a yearly, cross-sectional, quantitative survey of Australian regular ecstasy users was examined. Participants were divided into four groups according to whether they regularly (at least monthly) used ecstasy only (n=936), ecstasy and weekly cannabis (n=697), ecstasy and weekly methamphetamine (n=108) or ecstasy, weekly cannabis and weekly methamphetamine (n=180). Self-reported mental health problems and Kessler Psychological Distress Scale (K10) were examined. Approximately one-fifth of participants self-reported at least one mental health problem, most commonly depression and anxiety. The addition of regular cannabis and/or methamphetamine use substantially increases the likelihood of self-reported mental health problems, particularly with regard to paranoia, over regular ecstasy use alone. Regular cannabis use remained significantly associated with self reported mental health problems even when other differences between groups were accounted for. Regular cannabis and methamphetamine use was also associated with earlier initiation to ecstasy use. These findings suggest that patterns of drug use can help identify at risk groups that could benefit from targeted approaches in education and interventions. Given that early initiation to substance use was more common in those with regular cannabis and methamphetamine use and given that this group had a higher likelihood of mental health problems, work around delaying onset of initiation

  18. Temporal regularity of the environment drives time perception

    OpenAIRE

    van Rijn, H; Rhodes, D; Di Luca, M

    2016-01-01

    It’s reasonable to assume that a regularly paced sequence should be perceived as regular, but here we show that perceived regularity depends on the context in which the sequence is embedded. We presented one group of participants with perceptually regularly paced sequences, and another group of participants with mostly irregularly paced sequences (75% irregular, 25% regular). The timing of the final stimulus in each sequence could be varied. In one experiment, we asked whether the last stim...

  19. Channel Bonding in Linux Ethernet Environment using Regular Switching Hub

    Directory of Open Access Journals (Sweden)

    Chih-wen Hsueh

    2004-06-01

    Full Text Available Bandwidth plays an important role for quality of service in most network systems. There are many technologies developed to increase host bandwidth in a LAN environment. Most of them need special hardware support, such as a switching hub that supports the IEEE Link Aggregation standard. In this paper, we propose a Linux solution to increase the bandwidth between hosts with multiple network adapters connected to a regular switching hub. The approach is implemented as two Linux kernel modules in a LAN environment without modification to the hardware and operating systems on host machines. Packets are dispatched to bonding network adapters for transmission. The proposed approach is backward compatible, flexible, and transparent to users, and only one IP address is needed for multiple bonding network adapters. Evaluation experiments on TCP and UDP transmission show bandwidth gains proportional to the number of network adapters. It is suitable for large-scale LAN systems with high bandwidth requirements, such as clustering systems.

  20. Regular-, irregular-, and pseudo-character processing in Chinese: The regularity effect in normal adult readers

    Directory of Open Access Journals (Sweden)

    Dustin Kai Yan Lau

    2014-03-01

    Full Text Available Background Unlike alphabetic languages, Chinese uses a logographic script. However, the phonetic radical of many characters has the same pronunciation as the character as a whole. These are considered regular characters and can be read through a lexical non-semantic route (Weekes & Chen, 1999). Pseudocharacters are another way to study this non-semantic route. A pseudocharacter is the combination of existing semantic and phonetic radicals in their legal positions resulting in a non-existing character (Ho, Chan, Chung, Lee, & Tsang, 2007). Pseudocharacters can be pronounced by direct derivation from the sound of their phonetic radical. Conversely, if the pronunciation of a character does not follow that of the phonetic radical, it is considered irregular and can only be correctly read through the lexical-semantic route. The aim of the current investigation was to examine reading aloud in normal adults. We hypothesized that the regularity effect, previously described for alphabetical scripts and acquired dyslexic patients of Chinese (Weekes & Chen, 1999; Wu, Liu, Sun, Chromik, & Zhang, 2014), would also be present in normal adult Chinese readers. Method Participants. Thirty (50% female) native Hong Kong Cantonese speakers with a mean age of 19.6 years and a mean education of 12.9 years. Stimuli. Sixty regular-, 60 irregular-, and 60 pseudo-characters (with at least 75% name agreement) in Chinese were matched by initial phoneme, number of strokes and family size. Additionally, regular- and irregular-characters were matched by frequency (low) and consistency. Procedure. Each participant was asked to read aloud the stimuli presented on a laptop using the DMDX software. The order of stimulus presentation was randomized. Data analysis. ANOVAs were carried out by participants and items with RTs and errors as dependent variables and type of stimuli (regular-, irregular- and pseudo-character as repeated measures (F1 or between subject

  1. Convergence and fluctuations of Regularized Tyler estimators

    KAUST Repository

    Kammoun, Abla; Couillet, Romain; Pascal, Frederic; Alouini, Mohamed-Slim

    2015-01-01

    This article studies the behavior of regularized Tyler estimators (RTEs) of scatter matrices. The key advantages of these estimators are twofold. First, they guarantee by construction a good conditioning of the estimate and, second, being a derivative of robust Tyler estimators, they inherit their robustness properties, notably their resilience to the presence of outliers. Nevertheless, one major problem posed by the use of RTEs in practice is the question of setting the regularization parameter p. While a high value of p is likely to push all the eigenvalues away from zero, it comes at the cost of a larger bias with respect to the population covariance matrix. A deep understanding of the statistics of RTEs is essential to come up with appropriate choices for the regularization parameter. This is not an easy task and might be out of reach, unless one considers asymptotic regimes wherein the number of observations n and/or their size N increase together. First asymptotic results have recently been obtained under the assumption that N and n are large and commensurable. Interestingly, no results concerning the regime of n going to infinity with N fixed exist, even though the investigation of this assumption has usually predated the analysis of the more difficult case of N and n large. This motivates our work. In particular, we prove in the present paper that the RTEs converge to a deterministic matrix when n → ∞ with N fixed, which is expressed as a function of the theoretical covariance matrix. We also derive the fluctuations of the RTEs around this deterministic matrix and establish that these fluctuations converge in distribution to a multivariate Gaussian distribution with zero mean and a covariance depending on the population covariance and the regularization parameter.
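
    For concreteness, a standard fixed-point iteration for a regularized Tyler estimator (shrinkage toward the identity with parameter `rho`, followed by a trace normalization) is sketched below; the exact normalization convention analyzed in the article may differ.

```python
import numpy as np

def regularized_tyler(X, rho, n_iter=50):
    """Fixed-point iteration for a regularized Tyler scatter estimator.
    X: (n, N) real data matrix (n observations of dimension N), rho in (0, 1]."""
    n, N = X.shape
    Sigma = np.eye(N)
    for _ in range(n_iter):
        inv = np.linalg.inv(Sigma)
        # per-sample weights 1 / (x_i^T Sigma^{-1} x_i)
        q = np.einsum('ij,jk,ik->i', X, inv, X)
        S = (X.T * (1.0 / q)) @ X * (N / n)      # weighted sample scatter
        Sigma = (1 - rho) * S + rho * np.eye(N)  # shrinkage toward the identity
        Sigma = N * Sigma / np.trace(Sigma)      # trace normalization (one common convention)
    return Sigma
```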

  2. Convergence and fluctuations of Regularized Tyler estimators

    KAUST Repository

    Kammoun, Abla

    2015-10-26

    This article studies the behavior of regularized Tyler estimators (RTEs) of scatter matrices. The key advantages of these estimators are twofold. First, they guarantee by construction a good conditioning of the estimate and, second, being a derivative of robust Tyler estimators, they inherit their robustness properties, notably their resilience to the presence of outliers. Nevertheless, one major problem posed by the use of RTEs in practice is the question of setting the regularization parameter p. While a high value of p is likely to push all the eigenvalues away from zero, it comes at the cost of a larger bias with respect to the population covariance matrix. A deep understanding of the statistics of RTEs is essential to come up with appropriate choices for the regularization parameter. This is not an easy task and might be out of reach, unless one considers asymptotic regimes wherein the number of observations n and/or their size N increase together. First asymptotic results have recently been obtained under the assumption that N and n are large and commensurable. Interestingly, no results concerning the regime of n going to infinity with N fixed exist, even though the investigation of this assumption has usually predated the analysis of the more difficult case of N and n large. This motivates our work. In particular, we prove in the present paper that the RTEs converge to a deterministic matrix when n → ∞ with N fixed, which is expressed as a function of the theoretical covariance matrix. We also derive the fluctuations of the RTEs around this deterministic matrix and establish that these fluctuations converge in distribution to a multivariate Gaussian distribution with zero mean and a covariance depending on the population covariance and the regularization parameter.

  3. A Geometric Dictionary Learning Based Approach for Fluorescence Spectroscopy Image Fusion

    OpenAIRE

    Zhiqin Zhu; Guanqiu Qi; Yi Chai; Penghua Li

    2017-01-01

    In recent years, sparse representation approaches have been integrated into multi-focus image fusion methods. Sparse-representation-based image fusion methods show strong performance in the fused images. Constructing an informative dictionary is a key step for sparsity-based image fusion methods. In order to ensure a sufficient number of useful bases for sparse representation in the process of informative dictionary construction, image patches from all source images are classified into different ...

  4. The use of regularization in inferential measurements

    International Nuclear Information System (INIS)

    Hines, J. Wesley; Gribok, Andrei V.; Attieh, Ibrahim; Uhrig, Robert E.

    1999-01-01

    Inferential sensing is the prediction of a plant variable through the use of correlated plant variables. A correct prediction of the variable can be used to monitor sensors for drift or other failures, making periodic instrument calibrations unnecessary. This move from periodic to condition-based maintenance can reduce costs and increase the reliability of the instrument. Having accurate, reliable measurements is important for signals that may impact safety or profitability. This paper investigates how collinearity adversely affects inferential sensing by making the results inconsistent and unrepeatable, and presents regularization as a potential solution (author) (ml)

  5. Regularization ambiguities in loop quantum gravity

    International Nuclear Information System (INIS)

    Perez, Alejandro

    2006-01-01

    One of the main achievements of loop quantum gravity is the consistent quantization of the analog of the Wheeler-DeWitt equation which is free of ultraviolet divergences. However, ambiguities associated with the intermediate regularization procedure lead to an apparently infinite set of possible theories. The absence of a UV problem--the existence of well-behaved regularizations of the constraints--is intimately linked with the ambiguities arising in the quantum theory. Among these ambiguities is the one associated with the SU(2) unitary representation used in the diffeomorphism covariant 'point-splitting' regularization of the nonlinear functionals of the connection. This ambiguity is labeled by a half-integer m and, here, it is referred to as the m ambiguity. The aim of this paper is to investigate the important implications of this ambiguity. We first study 2+1 gravity (and more generally BF theory) quantized in the canonical formulation of loop quantum gravity. Only when the regularization of the quantum constraints is performed in terms of the fundamental representation of the gauge group does one obtain the usual topological quantum field theory as a result. In all other cases unphysical local degrees of freedom arise at the level of the regulated theory that conspire against the existence of the continuum limit. This shows that there is a clear-cut choice in the quantization of the constraints in 2+1 loop quantum gravity. We then analyze the effects of the ambiguity in 3+1 gravity, exhibiting the existence of spurious solutions for higher-representation quantizations of the Hamiltonian constraint. Although the analysis is not complete in 3+1 dimensions - due to the difficulties associated with the definition of the physical inner product - it provides evidence supporting the definition of the quantum dynamics of loop quantum gravity in terms of the fundamental representation of the gauge group as the only consistent possibility. If the gauge group is SO(3) we find

  6. Effort variation regularization in sound field reproduction

    DEFF Research Database (Denmark)

    Stefanakis, Nick; Jacobsen, Finn; Sarris, Ioannis

    2010-01-01

    In this paper, active control is used in order to reproduce a given sound field in an extended spatial region. A method is proposed which minimizes the reproduction error at a number of control positions with the reproduction sources holding a certain relation within their complex strengths ... and adaptive wave field synthesis (AWFS), both under free-field conditions and in reverberant rooms. It is shown that effort variation regularization overcomes the problems associated with small spaces and with a low ratio of direct to reverberant energy, thus improving the reproduction accuracy...

  7. New regularities in mass spectra of hadrons

    International Nuclear Information System (INIS)

    Kajdalov, A.B.

    1989-01-01

    The properties of bosonic and baryonic Regge trajectories for hadrons composed of light quarks are considered. Experimental data agree with the existence of daughter trajectories consistent with string models. It is pointed out that the parity doubling for baryonic trajectories, observed experimentally, is not understood in the existing quark models. The mass spectrum of bosons and baryons indicates an approximate supersymmetry in the mass region M > 1 GeV. These regularities indicate a high degree of symmetry for the dynamics in the confinement region. 8 refs.; 5 figs

  8. Bayesian regularization of diffusion tensor images

    DEFF Research Database (Denmark)

    Frandsen, Jesper; Hobolth, Asger; Østergaard, Leif

    2007-01-01

    Diffusion tensor imaging (DTI) is a powerful tool in the study of the course of nerve fibre bundles in the human brain. Using DTI, the local fibre orientation in each image voxel can be described by a diffusion tensor which is constructed from local measurements of diffusion coefficients along...... several directions. The measured diffusion coefficients and thereby the diffusion tensors are subject to noise, leading to possibly flawed representations of the three dimensional fibre bundles. In this paper we develop a Bayesian procedure for regularizing the diffusion tensor field, fully utilizing...

  9. Indefinite metric and regularization of electrodynamics

    International Nuclear Information System (INIS)

    Gaudin, M.

    1984-06-01

    The invariant regularization of Pauli and Villars in quantum electrodynamics can be considered as deriving from a local and causal Lagrangian theory for spin-1/2 bosons, by introducing an indefinite metric and a condition on the allowed states similar to the Lorentz condition. The consequence is the asymptotic freedom of the photon's propagator. We present a calculation of the effective charge to fourth order in the coupling as a function of the auxiliary masses, the theory avoiding all mass divergences to this order [fr

  10. Strategies for regular segmented reductions on GPU

    DEFF Research Database (Denmark)

    Larsen, Rasmus Wriedt; Henriksen, Troels

    2017-01-01

    We present and evaluate an implementation technique for regular segmented reductions on GPUs. Existing techniques tend to be either consistent in performance but relatively inefficient in absolute terms, or optimised for specific workloads and thereby exhibiting bad performance for certain input...... is in the context of the Futhark compiler, the implementation technique is applicable to any library or language that has a need for segmented reductions. We evaluate the technique on four microbenchmarks, two of which we also compare to implementations in the CUB library for GPU programming, as well as on two...

  11. Two-Way Regularized Fuzzy Clustering of Multiple Correspondence Analysis.

    Science.gov (United States)

    Kim, Sunmee; Choi, Ji Yeh; Hwang, Heungsun

    2017-01-01

    Multiple correspondence analysis (MCA) is a useful tool for investigating the interrelationships among dummy-coded categorical variables. MCA has been combined with clustering methods to examine whether there exist heterogeneous subclusters of a population, which exhibit cluster-level heterogeneity. These combined approaches aim to classify either observations only (one-way clustering of MCA) or both observations and variable categories (two-way clustering of MCA). The latter approach is favored because its solutions are easier to interpret by providing explicitly which subgroup of observations is associated with which subset of variable categories. Nonetheless, the two-way approach has been built on hard classification that assumes observations and/or variable categories to belong to only one cluster. To relax this assumption, we propose two-way fuzzy clustering of MCA. Specifically, we combine MCA with fuzzy k-means simultaneously to classify a subgroup of observations and a subset of variable categories into a common cluster, while allowing both observations and variable categories to belong partially to multiple clusters. Importantly, we adopt regularized fuzzy k-means, thereby enabling us to decide the degree of fuzziness in cluster memberships automatically. We evaluate the performance of the proposed approach through the analysis of simulated and real data, in comparison with existing two-way clustering approaches.
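
    The regularized fuzzy k-means component can be illustrated with an entropy-regularized variant, in which a single parameter controls the degree of fuzziness of the memberships; this is one common regularized formulation and not necessarily the exact regularizer, or its combination with MCA, used in the paper.

```python
import numpy as np

def entropy_regularized_fkm(X, k, lam=1.0, n_iter=100, seed=0):
    """Entropy-regularized fuzzy k-means: memberships are soft-min assignments whose
    fuzziness is controlled by lam; centers are membership-weighted centroids."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)     # squared distances (n, k)
        # numerically stable soft-min memberships
        U = np.exp(-(d2 - d2.min(axis=1, keepdims=True)) / lam)
        U /= U.sum(axis=1, keepdims=True)
        centers = (U.T @ X) / U.sum(axis=0)[:, None]                  # weighted centroids
    return U, centers
```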

  12. Regular Network Class Features Enhancement Using an Evolutionary Synthesis Algorithm

    Directory of Open Access Journals (Sweden)

    O. G. Monahov

    2014-01-01

    Full Text Available This paper investigates a solution of the optimization problem concerning the construction of diameter-optimal regular networks (graphs). Regular networks are of practical interest as graph-theoretical models of reliable communication networks of parallel supercomputer systems, and as a basis of the structure in small-world models in optical and neural networks. It presents a new class of parametrically described regular networks: hypercirculant networks (graphs). An approach that uses evolutionary algorithms for the automatic generation of parametric descriptions of optimal hypercirculant networks is developed. Synthesis of optimal hypercirculant networks is based on optimal circulant networks with a smaller node degree. To construct optimal hypercirculant networks, a template circulant network from the known optimal families of circulant networks with the desired number of nodes and a smaller node degree is used. Thus, a generating set of the circulant network is used as a generating subset of the hypercirculant network, and the missing generators are synthesized by means of the evolutionary algorithm, which carries out minimization of the diameter (average diameter) of the networks. A comparative analysis of the structural characteristics of hypercirculant, toroidal, and circulant networks is conducted. The advantage of hypercirculant networks in structural characteristics such as diameter, average diameter, and bisection width, at comparable numbers of nodes and connections, is demonstrated. Notably, hypercirculant networks of dimension three outperform four-dimensional tori. Thus, the optimization of hypercirculant networks of dimension three is more efficient than the introduction of an additional dimension for the corresponding toroidal structures. The paper also notes the best structural parameters of hypercirculant networks in comparison with iBT-networks previously

  13. Regularized Biot-Savart Laws for Modeling Magnetic Flux Ropes

    Science.gov (United States)

    Titov, Viacheslav; Downs, Cooper; Mikic, Zoran; Torok, Tibor; Linker, Jon A.

    2017-08-01

    Many existing models assume that magnetic flux ropes play a key role in solar flares and coronal mass ejections (CMEs). It is therefore important to develop efficient methods for constructing flux-rope configurations constrained by observed magnetic data and the initial morphology of CMEs. As our new step in this direction, we have derived and implemented a compact analytical form that represents the magnetic field of a thin flux rope with an axis of arbitrary shape and a circular cross-section. This form implies that the flux rope carries axial current I and axial flux F, so that the respective magnetic field is a curl of the sum of toroidal and poloidal vector potentials proportional to I and F, respectively. The vector potentials are expressed in terms of Biot-Savart laws whose kernels are regularized at the rope axis. We regularized them in such a way that for a straight-line axis the form provides a cylindrical force-free flux rope with a parabolic profile of the axial current density. So far, we set the shape of the rope axis by tracking the polarity inversion lines of observed magnetograms and estimating its height and other parameters of the rope from a calculated potential field above these lines. In spite of this heuristic approach, we were able to successfully construct pre-eruption configurations for the 2009 February 13 and 2011 October 1 CME events. These applications demonstrate that our regularized Biot-Savart laws are indeed a very flexible and efficient method for energizing initial configurations in MHD simulations of CMEs. We discuss possible ways of optimizing the axis paths and other extensions of the method in order to make it more useful and robust. Research supported by NSF, NASA's HSR and LWS Programs, and AFOSR.

  14. Group-regularized individual prediction: theory and application to pain.

    Science.gov (United States)

    Lindquist, Martin A; Krishnan, Anjali; López-Solà, Marina; Jepma, Marieke; Woo, Choong-Wan; Koban, Leonie; Roy, Mathieu; Atlas, Lauren Y; Schmidt, Liane; Chang, Luke J; Reynolds Losin, Elizabeth A; Eisenbarth, Hedwig; Ashar, Yoni K; Delk, Elizabeth; Wager, Tor D

    2017-01-15

    Multivariate pattern analysis (MVPA) has become an important tool for identifying brain representations of psychological processes and clinical outcomes using fMRI and related methods. Such methods can be used to predict or 'decode' psychological states in individual subjects. Single-subject MVPA approaches, however, are limited by the amount and quality of individual-subject data. In spite of higher spatial resolution, predictive accuracy from single-subject data often does not exceed what can be accomplished using coarser, group-level maps, because single-subject patterns are trained on limited amounts of often-noisy data. Here, we present a method that combines population-level priors, in the form of biomarker patterns developed on prior samples, with single-subject MVPA maps to improve single-subject prediction. Theoretical results and simulations motivate a weighting based on the relative variances of biomarker-based prediction (based on population-level predictive maps from prior groups) and individual-subject, cross-validated prediction. Empirical results predicting pain using brain activity on a trial-by-trial basis (single-trial prediction) across 6 studies (N=180 participants) confirm the theoretical predictions. Regularization based on a population-level biomarker, in this case the Neurologic Pain Signature (NPS), improved single-subject prediction accuracy compared with idiographic maps based on the individuals' data alone. The regularization scheme that we propose, which we term group-regularized individual prediction (GRIP), can be applied broadly to within-person MVPA-based prediction. We also show how GRIP can be used to evaluate data quality and provide benchmarks for the appropriateness of population-level maps like the NPS for a given individual or study. Copyright © 2015 Elsevier Inc. All rights reserved.
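
    The weighting idea described above, combining a group-level biomarker prediction with an individual's cross-validated prediction according to their relative error variances, reduces to a precision-weighted average; the sketch below is one reading of that scheme, and all inputs are hypothetical.

```python
import numpy as np

def grip_combine(pred_group, pred_indiv, var_group, var_indiv):
    """Precision-weighted blend of a group-level biomarker prediction and an
    individual's cross-validated prediction (one reading of the GRIP weighting)."""
    w_g = 1.0 / var_group
    w_i = 1.0 / var_indiv
    return (w_g * np.asarray(pred_group) + w_i * np.asarray(pred_indiv)) / (w_g + w_i)

# Example: when the individual model is noisy (large error variance),
# the blended prediction leans toward the group-level map.
blend = grip_combine(pred_group=[4.1, 5.0], pred_indiv=[3.0, 6.5],
                     var_group=0.5, var_indiv=2.0)
```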

  15. Emotion regulation deficits in regular marijuana users.

    Science.gov (United States)

    Zimmermann, Kaeli; Walz, Christina; Derckx, Raissa T; Kendrick, Keith M; Weber, Bernd; Dore, Bruce; Ochsner, Kevin N; Hurlemann, René; Becker, Benjamin

    2017-08-01

    Effective regulation of negative affective states has been associated with mental health. Impaired regulation of negative affect represents a risk factor for dysfunctional coping mechanisms such as drug use and thus could contribute to the initiation and development of problematic substance use. This study investigated behavioral and neural indices of emotion regulation in regular marijuana users (n = 23) and demographically matched nonusing controls (n = 20) by means of an fMRI cognitive emotion regulation (reappraisal) paradigm. Relative to nonusing controls, marijuana users demonstrated increased neural activity in a bilateral frontal network comprising precentral, middle cingulate, and supplementary motor regions during reappraisal of negative affect (P ... in marijuana users relative to controls. Together, the present findings could reflect an unsuccessful attempt of compensatory recruitment of additional neural resources in the context of disrupted amygdala-prefrontal interaction during volitional emotion regulation in marijuana users. As such, impaired volitional regulation of negative affect might represent a consequence of, or risk factor for, regular marijuana use. Hum Brain Mapp 38:4270-4279, 2017. © 2017 Wiley Periodicals, Inc.

  16. Efficient multidimensional regularization for Volterra series estimation

    Science.gov (United States)

    Birpoutsoukis, Georgios; Csurcsia, Péter Zoltán; Schoukens, Johan

    2018-05-01

    This paper presents an efficient nonparametric time domain nonlinear system identification method. It is shown how truncated Volterra series models can be efficiently estimated without the need of long, transient-free measurements. The method is a novel extension of the regularization methods that have been developed for impulse response estimates of linear time invariant systems. To avoid the excessive memory needs in case of long measurements or large number of estimated parameters, a practical gradient-based estimation method is also provided, leading to the same numerical results as the proposed Volterra estimation method. Moreover, the transient effects in the simulated output are removed by a special regularization method based on the novel ideas of transient removal for Linear Time-Varying (LTV) systems. Combining the proposed methodologies, the nonparametric Volterra models of the cascaded water tanks benchmark are presented in this paper. The results for different scenarios varying from a simple Finite Impulse Response (FIR) model to a 3rd degree Volterra series with and without transient removal are compared and studied. It is clear that the obtained models capture the system dynamics when tested on a validation dataset, and their performance is comparable with the white-box (physical) models.
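
    The regularization idea carries over from impulse response estimation; as a minimal illustration, the first-order (FIR) special case with a tuned/correlated (TC) kernel prior is sketched below. The full method extends such priors to multidimensional Volterra kernels and adds transient-removal regularization, neither of which is shown here; the kernel hyperparameters are assumptions.

```python
import numpy as np

def tc_kernel(n_taps, c=1.0, alpha=0.9):
    """Tuned/correlated (TC) prior covariance for an impulse response: smooth and decaying."""
    idx = np.arange(n_taps)
    return c * alpha ** np.maximum.outer(idx, idx)

def regularized_fir(u, y, n_taps, sigma2=1e-2, c=1.0, alpha=0.9):
    """Regularized FIR estimate: the first-order special case of regularized Volterra estimation."""
    N = len(y)
    Phi = np.zeros((N, n_taps))
    for k in range(n_taps):            # regressor columns of delayed inputs u(t-k)
        Phi[k:, k] = u[:N - k]
    P = tc_kernel(n_taps, c, alpha)
    theta = np.linalg.solve(Phi.T @ Phi + sigma2 * np.linalg.inv(P), Phi.T @ y)
    return theta
```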

  17. Supporting Regularized Logistic Regression Privately and Efficiently

    Science.gov (United States)

    Li, Wenfa; Liu, Hongzhe; Yang, Peng; Xie, Wei

    2016-01-01

    As one of the most popular statistical and machine learning models, logistic regression with regularization has found wide adoption in biomedicine, social sciences, information technology, and so on. These domains often involve data on human subjects that are contingent upon strict privacy regulations. Concerns over data privacy make it increasingly difficult to coordinate and conduct large-scale collaborative studies, which typically rely on cross-institution data sharing and joint analysis. Our work here focuses on safeguarding regularized logistic regression, a widely used statistical model that has not yet been investigated from a data security and privacy perspective. We consider a common use scenario of multi-institution collaborative studies, such as research consortia or networks as widely seen in genetics, epidemiology, social sciences, etc. To make our privacy-enhancing solution practical, we demonstrate a non-conventional and computationally efficient method leveraging distributed computing and strong cryptography to provide comprehensive protection over individual-level and summary data. Extensive empirical evaluations on several studies validate the privacy guarantee, efficiency and scalability of our proposal. We also discuss the practical implications of our solution for large-scale studies and applications from various disciplines, including genetic and biomedical studies, smart grid, network analysis, etc. PMID:27271738
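
    The underlying statistical model being protected is ordinary regularized logistic regression; a plain (non-private) L2-regularized fit is sketched below for reference. The distributed cryptographic protocol that is the paper's actual contribution is not sketched, and the learning rate and penalty are placeholders.

```python
import numpy as np

def train_logreg_l2(X, y, lam=1.0, lr=0.1, iters=1000):
    """Gradient-descent fit of L2-regularized logistic regression (y in {0, 1})."""
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(iters):
        prob = 1.0 / (1.0 + np.exp(-(X @ w)))        # predicted probabilities
        grad = X.T @ (prob - y) / n + lam * w        # logistic loss gradient + ridge penalty
        w -= lr * grad
    return w
```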

  18. Supporting Regularized Logistic Regression Privately and Efficiently.

    Science.gov (United States)

    Li, Wenfa; Liu, Hongzhe; Yang, Peng; Xie, Wei

    2016-01-01

    As one of the most popular statistical and machine learning models, logistic regression with regularization has found wide adoption in biomedicine, social sciences, information technology, and so on. These domains often involve data on human subjects that are contingent upon strict privacy regulations. Concerns over data privacy make it increasingly difficult to coordinate and conduct large-scale collaborative studies, which typically rely on cross-institution data sharing and joint analysis. Our work here focuses on safeguarding regularized logistic regression, a widely used statistical model that has not yet been investigated from a data security and privacy perspective. We consider a common use scenario of multi-institution collaborative studies, such as research consortia or networks as widely seen in genetics, epidemiology, social sciences, etc. To make our privacy-enhancing solution practical, we demonstrate a non-conventional and computationally efficient method leveraging distributed computing and strong cryptography to provide comprehensive protection over individual-level and summary data. Extensive empirical evaluations on several studies validate the privacy guarantee, efficiency and scalability of our proposal. We also discuss the practical implications of our solution for large-scale studies and applications from various disciplines, including genetic and biomedical studies, smart grid, network analysis, etc.

  19. Multiple graph regularized nonnegative matrix factorization

    KAUST Repository

    Wang, Jim Jing-Yan

    2013-10-01

    Non-negative matrix factorization (NMF) has been widely used as a component-based data representation method. To overcome the disadvantage of NMF in failing to consider the manifold structure of a data set, graph regularized NMF (GrNMF) has been proposed by Cai et al. by constructing an affinity graph and searching for a matrix factorization that respects the graph structure. Selecting a graph model and its corresponding parameters is critical for this strategy. This process is usually carried out by cross-validation or discrete grid search, which are time consuming and prone to overfitting. In this paper, we propose a GrNMF variant, called MultiGrNMF, in which the intrinsic manifold is approximated by a linear combination of several graphs with different models and parameters, inspired by ensemble manifold regularization. The factorization matrices and the linear combination coefficients of the graphs are determined simultaneously within a unified objective function. They are alternately optimized in an iterative algorithm, resulting in a novel data representation algorithm. Extensive experiments on a protein subcellular localization task and an Alzheimer's disease diagnosis task demonstrate the effectiveness of the proposed algorithm. © 2013 Elsevier Ltd. All rights reserved.
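
    A minimal sketch of the single-graph case (GrNMF with multiplicative updates, in the style of Cai et al.) is given below; the MultiGrNMF ensemble of graphs is not reproduced, and the k-NN heat-kernel graph construction and parameter values are assumptions.

```python
import numpy as np

def knn_heat_graph(X, k=5, sigma=1.0):
    """Symmetric k-NN affinity graph with heat-kernel weights (columns of X are samples)."""
    n = X.shape[1]
    d2 = ((X[:, :, None] - X[:, None, :]) ** 2).sum(axis=0)
    W = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(d2[i])[1 : k + 1]
        W[i, idx] = np.exp(-d2[i, idx] / (2 * sigma ** 2))
    return np.maximum(W, W.T)

def grnmf(X, rank, W, lam=1.0, n_iter=200, eps=1e-10):
    """Graph-regularized NMF: min ||X - U V^T||_F^2 + lam * tr(V^T L V), multiplicative updates."""
    m, n = X.shape
    rng = np.random.default_rng(0)
    U = rng.random((m, rank))
    V = rng.random((n, rank))
    D = np.diag(W.sum(axis=1))
    for _ in range(n_iter):
        U *= (X @ V) / (U @ (V.T @ V) + eps)
        V *= (X.T @ U + lam * W @ V) / (V @ (U.T @ U) + lam * D @ V + eps)
    return U, V

# Hypothetical usage on random nonnegative data.
X = np.abs(np.random.default_rng(2).standard_normal((30, 100)))
U, V = grnmf(X, rank=5, W=knn_heat_graph(X, k=5), lam=0.5)
```

    MultiGrNMF would replace the single W above with a weighted combination of several candidate graphs, with the weights optimized jointly with U and V.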

  20. Accelerating Large Data Analysis By Exploiting Regularities

    Science.gov (United States)

    Moran, Patrick J.; Ellsworth, David

    2003-01-01

    We present techniques for discovering and exploiting regularity in large curvilinear data sets. The data can be based on a single mesh or a mesh composed of multiple submeshes (also known as zones). Multi-zone data are typical of Computational Fluid Dynamics (CFD) simulations. Regularities include axis-aligned rectilinear and cylindrical meshes, as well as cases where one zone is equivalent to a rigid-body transformation of another. Our algorithms can also discover rigid-body motion of meshes in time-series data. Next, we describe a data model in which the results of the discovery process can be used to accelerate large data visualizations. Where possible, we replace general curvilinear zones with rectilinear or cylindrical zones. In rigid-body motion cases, we replace a time series of meshes with a transformed mesh object, where a reference mesh is dynamically transformed based on a given time value in order to satisfy geometry requests on demand. The data model enables us to make these substitutions and dynamic transformations transparently with respect to the visualization algorithms. We present results with large data sets where we combine our mesh replacement and transformation techniques with out-of-core paging in order to achieve significant speed-ups in analysis.
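
    One standard way to test whether one zone is a rigid-body copy of another is a Kabsch/Procrustes fit via SVD. The sketch below assumes known vertex correspondences between the two zones; the paper's actual discovery procedure is not described in enough detail here to reproduce.

```python
import numpy as np

def rigid_body_fit(P, Q):
    """Best-fit rotation R and translation t mapping points P onto Q (Kabsch algorithm).
    P, Q: (n, 3) arrays of corresponding mesh vertices."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

def is_rigid_copy(P, Q, tol=1e-6):
    """True if zone Q is (numerically) a rigid-body transformation of zone P."""
    R, t = rigid_body_fit(P, Q)
    return np.max(np.linalg.norm(P @ R.T + t - Q, axis=1)) < tol
```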

  1. Supporting Regularized Logistic Regression Privately and Efficiently.

    Directory of Open Access Journals (Sweden)

    Wenfa Li

    Full Text Available As one of the most popular statistical and machine learning models, logistic regression with regularization has found wide adoption in biomedicine, social sciences, information technology, and so on. These domains often involve data of human subjects that are contingent upon strict privacy regulations. Concerns over data privacy make it increasingly difficult to coordinate and conduct large-scale collaborative studies, which typically rely on cross-institution data sharing and joint analysis. Our work here focuses on safeguarding regularized logistic regression, a widely used statistical model that has not yet been investigated from a data security and privacy perspective. We consider a common use scenario of multi-institution collaborative studies, such as research consortia or networks as widely seen in genetics, epidemiology, social sciences, etc. To make our privacy-enhancing solution practical, we demonstrate a non-conventional and computationally efficient method leveraging distributed computing and strong cryptography to provide comprehensive protection over individual-level and summary data. Extensive empirical evaluations on several studies validate the privacy guarantee, efficiency, and scalability of our proposal. We also discuss the practical implications of our solution for large-scale studies and applications from various disciplines, including genetic and biomedical studies, smart grid, network analysis, etc.

  2. Multiview Hessian regularization for image annotation.

    Science.gov (United States)

    Liu, Weifeng; Tao, Dacheng

    2013-07-01

    The rapid development of computer hardware and Internet technology makes large-scale data-dependent models computationally tractable, and opens a bright avenue for annotating images through innovative machine learning algorithms. Semisupervised learning (SSL) has therefore received intensive attention in recent years and has been successfully deployed in image annotation. One representative work in SSL is Laplacian regularization (LR), which smoothes the conditional distribution for classification along the manifold encoded in the graph Laplacian. However, it has been observed that LR biases the classification function toward a constant function, which possibly results in poor generalization. In addition, LR is developed to handle uniformly distributed data (or single-view data), although instances or objects, such as images and videos, are usually represented by multiview features, such as color, shape, and texture. In this paper, we present multiview Hessian regularization (mHR) to address the above two problems in LR-based image annotation. In particular, mHR optimally combines multiple HR terms, each of which is obtained from a particular view of the instances, and steers the classification function so that it varies linearly along the data manifold. We apply mHR to kernel least squares and support vector machines as two examples for image annotation. Extensive experiments on the PASCAL VOC'07 dataset validate the effectiveness of mHR by comparing it with baseline algorithms, including LR and HR.
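
    For orientation, the sketch below implements the Laplacian-regularized kernel least squares baseline that the abstract critiques, on a single fully labeled view; it is not the paper's multiview Hessian method. Replacing the graph Laplacian L with a Hessian energy matrix, and summing such terms over views, is the direction mHR takes. Kernel choice, graph construction, and the assumption that K is nonsingular are simplifications made for the example.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

def lap_rls(X, y, lam_a=1e-2, lam_i=1e-1, gamma=1.0, k=5):
    """Laplacian-regularized kernel least squares on fully labeled data (single view).
    Minimizes ||y - K a||^2 + lam_a a^T K a + lam_i a^T K L K a."""
    n = X.shape[0]
    K = rbf_kernel(X, X, gamma)
    # k-NN graph Laplacian L = D - W capturing the data manifold.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    W = np.zeros((n, n))
    for i in range(n):
        W[i, np.argsort(d2[i])[1 : k + 1]] = 1.0
    W = np.maximum(W, W.T)
    L = np.diag(W.sum(axis=1)) - W
    # Setting the gradient to zero gives (K + lam_a I + lam_i L K) a = y (K nonsingular assumed).
    alpha = np.linalg.solve(K + lam_a * np.eye(n) + lam_i * L @ K, y)
    return lambda Xnew: rbf_kernel(Xnew, X, gamma) @ alpha
```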

  3. Access to serviced land for the urban poor: the regularization paradox in Mexico

    Directory of Open Access Journals (Sweden)

    Alfonso Iracheta Cenecorta

    2000-01-01

    Full Text Available The insufficient supply of serviced land at affordable prices for the urban poor and the need to regularize the consequent illegal occupations in urban areas are two of the most important issues on the Latin American land policy agenda. Taking a structural/integrated view of the functioning of the urban land market in Latin America, this paper discusses the nexus between the formal and the informal land markets. It thus exposes the perverse feedback effects that curative regularization policies may have on the process by which irregularity is produced in the first place. The paper suggests that a more effective approach to the provision of serviced land for the poor cannot be achieved within the prevailing (curative) regularization programs. These programs should have the capacity to mobilize the resources that do exist into a comprehensive program that links regularization with fiscal policy, including the exploration of value capture mechanisms.

  4. Accretion onto some well-known regular black holes

    International Nuclear Information System (INIS)

    Jawad, Abdul; Shahzad, M.U.

    2016-01-01

    In this work, we discuss the accretion onto static spherically symmetric regular black holes for specific choices of the equation of state parameter. The underlying regular black holes are charged regular black holes using the Fermi-Dirac distribution, logistic distribution, nonlinear electrodynamics, respectively, and Kehagias-Sfetsos asymptotically flat regular black holes. We obtain the critical radius, critical speed, and squared sound speed during the accretion process near the regular black holes. We also study the behavior of radial velocity, energy density, and the rate of change of the mass for each of the regular black holes. (orig.)
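
    For context on the quantities mentioned (critical radius, critical speed, sound speed), the standard Michel-type critical-point relations for steady spherical accretion onto a Schwarzschild black hole of mass M, in geometric units, are sketched below. These are textbook reference relations, not the paper's expressions for the specific regular metrics considered there.

```latex
% Critical-point conditions for relativistic spherical (Michel) accretion
% onto a Schwarzschild black hole of mass M (G = c = 1):
u_c^2 = \frac{M}{2 r_c}, \qquad
c_{s,c}^2 = \frac{u_c^2}{1 - 3 u_c^2},
% so the critical radius follows from the sound speed at the critical point:
r_c = \frac{1 + 3 c_{s,c}^2}{2 c_{s,c}^2}\, M .
```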

  5. Accretion onto some well-known regular black holes

    Energy Technology Data Exchange (ETDEWEB)

    Jawad, Abdul; Shahzad, M.U. [COMSATS Institute of Information Technology, Department of Mathematics, Lahore (Pakistan)

    2016-03-15

    In this work, we discuss the accretion onto static spherically symmetric regular black holes for specific choices of the equation of state parameter. The underlying regular black holes are charged regular black holes using the Fermi-Dirac distribution, logistic distribution, nonlinear electrodynamics, respectively, and Kehagias-Sfetsos asymptotically flat regular black holes. We obtain the critical radius, critical speed, and squared sound speed during the accretion process near the regular black holes. We also study the behavior of radial velocity, energy density, and the rate of change of the mass for each of the regular black holes. (orig.)

  6. Accretion onto some well-known regular black holes

    Science.gov (United States)

    Jawad, Abdul; Shahzad, M. Umair

    2016-03-01

    In this work, we discuss the accretion onto static spherically symmetric regular black holes for specific choices of the equation of state parameter. The underlying regular black holes are charged regular black holes using the Fermi-Dirac distribution, logistic distribution, nonlinear electrodynamics, respectively, and Kehagias-Sfetsos asymptotically flat regular black holes. We obtain the critical radius, critical speed, and squared sound speed during the accretion process near the regular black holes. We also study the behavior of radial velocity, energy density, and the rate of change of the mass for each of the regular black holes.

  7. Regularization iteration imaging algorithm for electrical capacitance tomography

    Science.gov (United States)

    Tong, Guowei; Liu, Shi; Chen, Hongyan; Wang, Xueyao

    2018-03-01

    The image reconstruction method plays a crucial role in real-world applications of the electrical capacitance tomography technique. In this study, a new cost function that simultaneously considers the sparsity and low-rank properties of the imaging targets is proposed to improve the quality of the reconstructed images, converting the image reconstruction task into an optimization problem. Within the framework of the split Bregman algorithm, an iterative scheme that splits the complicated optimization problem into several simpler sub-tasks is developed to solve the proposed cost function efficiently, and the fast iterative shrinkage-thresholding algorithm (FISTA) is introduced to accelerate convergence. Numerical experiments verify the effectiveness of the proposed algorithm in improving reconstruction precision and robustness.
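
    The paper combines sparsity and low-rank terms inside a split Bregman scheme; as a building-block illustration only, the sketch below shows FISTA applied to a plain l1-regularized least-squares sub-problem, with a generic matrix A standing in for the ECT sensitivity (forward) operator.

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista_l1(A, b, lam, n_iter=200):
    """FISTA for min_x 0.5*||A x - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    z, t = x.copy(), 1.0
    for _ in range(n_iter):
        x_new = soft_threshold(z - A.T @ (A @ z - b) / L, lam / L)
        t_new = (1 + np.sqrt(1 + 4 * t ** 2)) / 2
        z = x_new + (t - 1) / t_new * (x_new - x)   # Nesterov-style momentum step
        x, t = x_new, t_new
    return x
```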

  8. Improved resolution and reliability in dynamic PET using Bayesian regularization of MRTM2

    DEFF Research Database (Denmark)

    Agn, Mikael; Svarer, Claus; Frokjaer, Vibe G.

    2014-01-01

    This paper presents a mathematical model that regularizes dynamic PET data by using a Bayesian framework. We base the model on the well-known two-parameter multilinear reference tissue method MRTM2 and regularize on the assumption that spatially close regions have similar parameters. The developed model is compared to the conventional approach of improving the low signal-to-noise ratio of PET data, i.e., spatial filtering of each time frame independently by a Gaussian kernel. We show that the model handles high levels of noise better than the conventional approach, while at the same time...

  9. Stark widths regularities within spectral series of sodium isoelectronic sequence

    Science.gov (United States)

    Trklja, Nora; Tapalaga, Irinel; Dojčinović, Ivan P.; Purić, Jagoš

    2018-02-01

    Stark widths within spectral series of the sodium isoelectronic sequence have been studied. This is a unique approach that includes both neutrals and ions. Two levels of the problem are considered: if the required atomic parameters are known, Stark widths can be calculated by one of the known methods (in the present paper the modified semiempirical formula is used), but if there is a lack of parameters, regularities enable the determination of Stark broadening data. In the framework of the regularity research, the dependence of Stark broadening on environmental conditions and on certain atomic parameters has been investigated. The aim of this work is to give a simple model, with a minimum of required parameters, which can be used to calculate Stark broadening data for any chosen transition within sodium-like emitters. The obtained relations were used to predict Stark widths for transitions that have not yet been measured or calculated. This system enables fast data processing using the proposed theoretical model, and it provides quality control and verification of the obtained results.

  10. Regularized image denoising based on spectral gradient optimization

    International Nuclear Information System (INIS)

    Lukić, Tibor; Lindblad, Joakim; Sladoje, Nataša

    2011-01-01

    Image restoration methods, such as denoising, deblurring, inpainting, etc., are often based on the minimization of an appropriately defined energy function. We consider energy functions for image denoising that combine a quadratic data-fidelity term and a regularization term, where the properties of the latter are determined by the potential function used. Many potential functions have been suggested for different purposes in the literature. We compare the denoising performance achieved by ten different potential functions. Several methods for efficient minimization of regularized energy functions exist; most, however, are only applicable to particular choices of potential functions. To enable a comparison of all the observed potential functions, we propose to minimize the objective function using a spectral gradient approach; spectral gradient methods put very weak restrictions on the potential function used. We present and evaluate the performance of one spectral conjugate gradient and one cyclic spectral gradient algorithm, and conclude from experiments that both are well suited for the task. We compare the performance with three total variation-based state-of-the-art methods for image denoising. From the empirical evaluation, we conclude that denoising using the Huber potential (for images degraded by higher levels of noise; signal-to-noise ratio below 10 dB) and the Geman and McClure potential (for less noisy images), in combination with the spectral conjugate gradient minimization algorithm, shows the overall best performance.
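
    A minimal sketch of the general idea is given below: a quadratic data-fidelity term plus a Huber potential applied to image gradients, minimized with Barzilai-Borwein (spectral) step sizes. It is a plain spectral gradient descent, not the paper's spectral conjugate gradient or cyclic variants, and the step-size safeguards and parameter values are assumptions.

```python
import numpy as np

def huber_grad(t, delta=0.05):
    """Derivative of the Huber potential: linear in the tails, quadratic near zero."""
    return np.clip(t, -delta, delta)

def grad_x(u):
    g = np.zeros_like(u); g[:, :-1] = u[:, 1:] - u[:, :-1]; return g

def grad_y(u):
    g = np.zeros_like(u); g[:-1, :] = u[1:, :] - u[:-1, :]; return g

def div(px, py):
    """Negative adjoint of the forward-difference gradient (valid when last row/col of p is zero)."""
    dx = np.zeros_like(px); dx[:, 0] = px[:, 0]; dx[:, 1:] = px[:, 1:] - px[:, :-1]
    dy = np.zeros_like(py); dy[0, :] = py[0, :]; dy[1:, :] = py[1:, :] - py[:-1, :]
    return dx + dy

def energy_grad(u, f, beta, delta):
    """Gradient of 0.5||u-f||^2 + beta * sum Huber(grad u)."""
    return (u - f) - beta * div(huber_grad(grad_x(u), delta), huber_grad(grad_y(u), delta))

def denoise_spectral(f, beta=2.0, delta=0.05, n_iter=100):
    """Gradient descent with Barzilai-Borwein (spectral) step sizes."""
    u = f.copy(); g = energy_grad(u, f, beta, delta); alpha = 1e-1
    for _ in range(n_iter):
        u_new = u - alpha * g
        g_new = energy_grad(u_new, f, beta, delta)
        s, yv = (u_new - u).ravel(), (g_new - g).ravel()
        alpha = max(min(s @ s / (s @ yv + 1e-12), 1e2), 1e-4)   # safeguarded BB1 step
        u, g = u_new, g_new
    return u
```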

  11. Properties of regular polygons of coupled microring resonators.

    Science.gov (United States)

    Chremmos, Ioannis; Uzunoglu, Nikolaos

    2007-11-01

    The resonant properties of a closed and symmetric cyclic array of N coupled microring resonators (coupled-microring resonator regular N-gon) are for the first time determined analytically by applying the transfer matrix approach and Floquet theorem for periodic propagation in cylindrically symmetric structures. By solving the corresponding eigenvalue problem with the field amplitudes in the rings as eigenvectors, it is shown that, for even or odd N, this photonic molecule possesses 1 + N/2 or (1 + N)/2 resonant frequencies, respectively. The condition for resonances is found to be identical to the familiar dispersion equation of the infinite coupled-microring resonator waveguide with a discrete wave vector. This result reveals the so far latent connection between the two optical structures and is based on the fact that, for a regular polygon, the field transfer matrix over two successive rings is independent of the polygon vertex angle. The properties of the resonant modes are discussed in detail using the illustration of Brillouin band diagrams. Finally, the practical application of a channel-dropping filter based on polygons with an even number of rings is also analyzed.

  12. Regularized inversion of controlled source and earthquake data

    International Nuclear Information System (INIS)

    Ramachandran, Kumar

    2012-01-01

    Estimation of the seismic velocity structure of the Earth's crust and upper mantle from travel-time data has advanced greatly in recent years. Forward modelling trial-and-error methods have been superseded by tomographic methods, which allow more objective analysis of large two-dimensional and three-dimensional refraction and/or reflection data sets. The fundamental purpose of travel-time tomography is to determine the velocity structure of a medium by analysing the time it takes for a wave generated at a source point within the medium to arrive at a distribution of receiver points. Tomographic inversion of first-arrival travel-time data is a nonlinear problem, since both the velocity of the medium and the ray paths in the medium are unknown. The solution for such a problem is typically obtained by repeated application of linearized inversion. Regularization of the nonlinear problem reduces the ill-posedness inherent in the tomographic inversion due to the under-determined nature of the problem and the inconsistencies in the observed data. This paper discusses the theory of regularized inversion for joint inversion of controlled source and earthquake data, and results from synthetic data testing and application to real data. The results obtained from tomographic inversion of synthetic data and real data from the northern Cascadia subduction zone show that the velocity model and hypocentral parameters can be efficiently estimated using this approach. (paper)
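
    As a generic illustration of one linearized, regularized iteration (not the paper's joint controlled-source/earthquake formulation), the sketch below solves a damped and smoothed least-squares update with LSQR. The ray-path sensitivity matrix G, residual vector, and regularization weights are assumed inputs.

```python
import numpy as np
from scipy.sparse import identity, diags, vstack as sp_vstack, csr_matrix
from scipy.sparse.linalg import lsqr

def regularized_update(G, residual, n_cells, eps_damp=1.0, eps_smooth=5.0):
    """One regularized linearized update: solve
       min || G dm - residual ||^2 + eps_damp^2 ||dm||^2 + eps_smooth^2 ||D dm||^2
    by sparse least squares, where D is a first-difference roughening operator."""
    D = diags([-np.ones(n_cells - 1), np.ones(n_cells - 1)], [0, 1],
              shape=(n_cells - 1, n_cells))
    A = sp_vstack([csr_matrix(G),
                   eps_damp * identity(n_cells, format="csr"),
                   eps_smooth * D.tocsr()])
    b = np.concatenate([residual, np.zeros(n_cells), np.zeros(n_cells - 1)])
    return lsqr(A, b)[0]
```

    In practice the weights eps_damp and eps_smooth would be tuned (e.g. by trade-off curves), and the update would be followed by retracing rays and rebuilding G for the next linearized iteration.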

  13. A general framework for regularized, similarity-based image restoration.

    Science.gov (United States)

    Kheradmand, Amin; Milanfar, Peyman

    2014-12-01

    Any image can be represented as a function defined on a weighted graph, in which the underlying structure of the image is encoded in kernel similarity and associated Laplacian matrices. In this paper, we develop an iterative graph-based framework for image restoration based on a new definition of the normalized graph Laplacian. We propose a cost function that consists of a new data-fidelity term and a regularization term derived from the specific definition of the normalized graph Laplacian. The normalizing coefficients used in the definition of the Laplacian and the associated regularization term are obtained using fast symmetry-preserving matrix balancing. This results in desirable spectral properties for the normalized Laplacian, such as being symmetric, positive semidefinite, and returning the zero vector when applied to a constant image. Our algorithm comprises outer and inner iterations, where in each outer iteration the similarity weights are recomputed using the previous estimate and the updated objective function is minimized using inner conjugate gradient iterations. This procedure improves the performance of the algorithm for image deblurring, where we do not have access to a good initial estimate of the underlying image. Moreover, the specific form of the cost function allows us to carry out a spectral analysis of the solutions of the corresponding linear equations. Finally, the proposed approach is general in the sense that we have shown its effectiveness for different restoration problems, including deblurring, denoising, and sharpening. Experimental results verify the effectiveness of the proposed algorithm on both synthetic and real examples.
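
    The paper's Laplacian is built from kernel similarity weights with symmetric matrix balancing; the much simpler sketch below only illustrates the generic inner step of graph-Laplacian-regularized restoration, denoising a graph signal by solving (I + beta*L)z = y with conjugate gradients and the ordinary symmetric normalized Laplacian. Weight construction and parameters are assumptions.

```python
import numpy as np
from scipy.sparse import csr_matrix, diags, identity
from scipy.sparse.linalg import cg

def normalized_laplacian(W):
    """Symmetric normalized graph Laplacian L = I - D^{-1/2} W D^{-1/2}."""
    d = np.asarray(W.sum(axis=1)).ravel()
    D_inv_sqrt = diags(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    return identity(W.shape[0]) - D_inv_sqrt @ W @ D_inv_sqrt

def graph_denoise(y, W, beta=5.0):
    """Solve min_z ||z - y||^2 + beta * z^T L z  <=>  (I + beta L) z = y."""
    L = normalized_laplacian(csr_matrix(W))
    A = identity(len(y)) + beta * L
    z, _ = cg(A, y)
    return z
```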

  14. Laplacian embedded regression for scalable manifold regularization.

    Science.gov (United States)

    Chen, Lin; Tsang, Ivor W; Xu, Dong

    2012-06-01

    Semi-supervised learning (SSL), as a powerful tool to learn from a limited number of labeled data and a large number of unlabeled data, has been attracting increasing attention in the machine learning community. In particular, the manifold regularization framework has laid solid theoretical foundations for a large family of SSL algorithms, such as Laplacian support vector machine (LapSVM) and Laplacian regularized least squares (LapRLS). However, most of these algorithms are limited to small scale problems due to the high computational cost of the matrix inversion operation involved in the optimization problem. In this paper, we propose a novel framework called Laplacian embedded regression by introducing an intermediate decision variable into the manifold regularization framework. By using the ε-insensitive loss, we obtain the Laplacian embedded support vector regression (LapESVR) algorithm, which inherits the sparse solution from SVR. Also, we derive Laplacian embedded RLS (LapERLS) corresponding to RLS under the proposed framework. Both LapESVR and LapERLS possess a simpler form of a transformed kernel, which is the summation of the original kernel and a graph kernel that captures the manifold structure. The benefits of the transformed kernel are two-fold: (1) we can deal with the original kernel matrix and the graph Laplacian matrix in the graph kernel separately and (2) if the graph Laplacian matrix is sparse, we only need to perform the inverse operation for a sparse matrix, which is much more efficient when compared with that for a dense one. Inspired by kernel principal component analysis, we further propose to project the introduced decision variable into a subspace spanned by a few eigenvectors of the graph Laplacian matrix in order to better reflect the data manifold, as well as accelerate the calculation of the graph kernel, allowing our methods to efficiently and effectively cope with large-scale SSL problems. Extensive experiments on both toy and real

  15. Regularities of radiorace formation in yeasts

    International Nuclear Information System (INIS)

    Korogodin, V.I.; Bliznik, K.M.; Kapul'tsevich, Yu.G.; Petin, V.G.; Akademiya Meditsinskikh Nauk SSSR, Obninsk. Nauchno-Issledovatel'skij Inst. Meditsinskoj Radiologii)

    1977-01-01

    Two strains of diploid yeast, namely Saccharomyces ellipsoides, Megri 139-B, isolated under natural conditions, and Saccharomyces cerevisiae 5a x 3Bα, heterozygous for the genes ade 1 and ade 2, were exposed to Co-60 γ-quanta. The content of saltant cells forming colonies with changed morphology, of nonviable cells, of respiration-deficient mutants, and of recombinant cells for the genes ade 1 and ade 2 was determined. A certain regularity has been revealed in the distribution of these four cell types among the colonies: the higher the content of cells of any one of the types, the higher the content of cells carrying other hereditary changes.

  16. Regularization destriping of remote sensing imagery

    Science.gov (United States)

    Basnayake, Ranil; Bollt, Erik; Tufillaro, Nicholas; Sun, Jie; Gierach, Michelle

    2017-07-01

    We illustrate the utility of variational destriping for ocean color images from both multispectral and hyperspectral sensors. In particular, we examine data from a filter spectrometer, the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi National Polar-orbiting Partnership (NPP) orbiter, and an airborne grating spectrometer, the Jet Propulsion Laboratory's (JPL) hyperspectral Portable Remote Imaging Spectrometer (PRISM) sensor. We solve the destriping problem using a variational regularization method, assigning spatial weights to preserve the other features of the image during the destriping process. The target functional penalizes the neighborhood of stripes (strictly, directionally uniform features) while promoting data fidelity, and the functional is minimized by solving the Euler-Lagrange equations with an explicit finite-difference scheme. We demonstrate the accuracy of our method on a benchmark data set representing the sea surface temperature off the coast of Oregon, USA. Technical details, such as how to impose continuity across data gaps using inpainting, are also described.
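
    For comparison only, the sketch below is a very simple moment-matching destriping baseline (per-column offset removal against smoothed column means), not the paper's variational Euler-Lagrange scheme; it assumes vertical stripes and a single-band image, and the smoothing window is an arbitrary choice.

```python
import numpy as np

def destripe_columns(img, smooth=15):
    """Suppress vertical striping by subtracting each column mean's deviation
    from a running-average of the column means (boundary effects ignored)."""
    col_means = np.nanmean(img, axis=0)
    kernel = np.ones(smooth) / smooth
    smooth_means = np.convolve(col_means, kernel, mode="same")
    return img - (col_means - smooth_means)[None, :]
```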

  17. Singular tachyon kinks from regular profiles

    International Nuclear Information System (INIS)

    Copeland, E.J.; Saffin, P.M.; Steer, D.A.

    2003-01-01

    We demonstrate how Sen's singular kink solution of the Born-Infeld tachyon action can be constructed by taking the appropriate limit of initially regular profiles. It is shown that the order in which different limits are taken plays an important role in determining whether or not such a solution is obtained for a wide class of potentials. Indeed, by introducing a small parameter into the action, we are able to circumvent the results of a recent paper which derived two conditions on the asymptotic tachyon potential such that the singular kink could be recovered in the large amplitude limit of periodic solutions. We show that this is explained by the non-commuting nature of two limits, and that Sen's solution is recovered if the order of the limits is chosen appropriately.

  18. Two-pass greedy regular expression parsing

    DEFF Research Database (Denmark)

    Grathwohl, Niels Bjørn Bugge; Henglein, Fritz; Nielsen, Lasse

    2013-01-01

    We present new algorithms for producing greedy parses for regular expressions (REs) in a semi-streaming fashion. Our lean-log algorithm executes in time O(mn) for REs of size m and input strings of size n and outputs a compact bit-coded parse tree representation. It improves on previous algorithms by: operating in only 2 passes; using only O(m) words of random-access memory (independent of n); requiring only kn bits of sequentially written and read log storage, where k ... and not requiring it to be stored at all. Previous RE parsing algorithms do not scale linearly with input size, or require substantially more log storage and employ 3 passes where the first consists of reversing the input, or do not or are not known to produce a greedy parse. The performance of our unoptimized C...

  19. Discriminative Elastic-Net Regularized Linear Regression.

    Science.gov (United States)

    Zhang, Zheng; Lai, Zhihui; Xu, Yong; Shao, Ling; Wu, Jian; Xie, Guo-Sen

    2017-03-01

    In this paper, we aim at learning compact and discriminative linear regression models. Linear regression has been widely used in different problems. However, most of the existing linear regression methods exploit the conventional zero-one matrix as the regression targets, which greatly narrows the flexibility of the regression model. Another major limitation of these methods is that the learned projection matrix fails to precisely project the image features to the target space due to their weak discriminative capability. To this end, we present an elastic-net regularized linear regression (ENLR) framework, and develop two robust linear regression models which possess the following special characteristics. First, our methods exploit two particular strategies to enlarge the margins of different classes by relaxing the strict binary targets into a more feasible variable matrix. Second, a robust elastic-net regularization of singular values is introduced to enhance the compactness and effectiveness of the learned projection matrix. Third, the resulting optimization problem of ENLR has a closed-form solution in each iteration, which can be solved efficiently. Finally, rather than directly exploiting the projection matrix for recognition, our methods employ the transformed features as the new discriminative representations to make the final image classification. Compared with the traditional linear regression model and some of its variants, our method is much more accurate in image classification. Extensive experiments conducted on publicly available data sets well demonstrate that the proposed framework can outperform the state-of-the-art methods. The MATLAB codes of our methods are available at http://www.yongxu.org/lunwen.html.
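
    As a generic reference point (not the authors' discriminative ENLR with relaxed targets and singular-value regularization), the sketch below solves a plain elastic-net regularized least-squares regression by proximal gradient descent; variable names and penalty weights are assumptions.

```python
import numpy as np

def elastic_net(X, y, lam1=0.1, lam2=0.1, n_iter=500):
    """min_w 0.5*||X w - y||^2 + lam1*||w||_1 + 0.5*lam2*||w||^2 via proximal gradient (ISTA)."""
    n, d = X.shape
    L = np.linalg.norm(X, 2) ** 2 + lam2        # Lipschitz constant of the smooth part
    w = np.zeros(d)
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) + lam2 * w
        z = w - grad / L
        w = np.sign(z) * np.maximum(np.abs(z) - lam1 / L, 0.0)   # soft-thresholding prox
    return w
```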

  20. Regularization of Instantaneous Frequency Attribute Computations

    Science.gov (United States)

    Yedlin, M. J.; Margrave, G. F.; Van Vorst, D. G.; Ben Horin, Y.

    2014-12-01

    We compare two different methods for computing a temporally local frequency: (1) a stabilized instantaneous frequency based on the theory of the analytic signal, and (2) a temporally variant centroid (or dominant) frequency estimated from a time-frequency decomposition. The first method derives from Taner et al. (1979) as modified by Fomel (2007) and utilizes the derivative of the instantaneous phase of the analytic signal. The second method computes the power centroid (Cohen, 1995) of the time-frequency spectrum, obtained using either the Gabor or Stockwell transform. Common to both methods is the necessity of division by a diagonal matrix, which requires appropriate regularization. We modify Fomel's (2007) method by explicitly penalizing the roughness of the estimate. Following Farquharson and Oldenburg (2004), we employ both the L-curve and GCV methods to obtain the smoothest model that fits the data in the L2 norm. Using synthetic data, quarry blasts, earthquakes, and the DPRK tests, our results suggest that the optimal method depends on the data. One of the main applications of this work is the discrimination between blast events and earthquakes.

    References: Fomel, Sergey. "Local seismic attributes." Geophysics 72.3 (2007): A29-A33. Cohen, Leon. "Time frequency analysis: theory and applications." USA: Prentice Hall (1995). Farquharson, Colin G., and Douglas W. Oldenburg. "A comparison of automatic techniques for estimating the regularization parameter in non-linear inverse problems." Geophysical Journal International 156.3 (2004): 411-425. Taner, M. Turhan, Fulton Koehler, and R. E. Sheriff. "Complex seismic trace analysis." Geophysics 44.6 (1979): 1041-1063.
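
    The sketch below shows the analytic-signal instantaneous frequency with a simple stabilized (regularized) division, in the spirit of Taner et al. (1979); the roughness-penalized smoothing, L-curve, and GCV machinery of the abstract is not reproduced, and the stabilization constant is an arbitrary choice.

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_frequency(s, dt, eps_frac=1e-3):
    """Stabilized instantaneous frequency (Hz) from the analytic signal a = x + i*y.
    The denominator |a|^2 is regularized by a small fraction of its mean to avoid
    blow-ups where the envelope is weak."""
    a = hilbert(s)
    x, y = a.real, a.imag
    dx, dy = np.gradient(x, dt), np.gradient(y, dt)
    num = x * dy - y * dx                 # = d(phase)/dt * |a|^2
    den = x ** 2 + y ** 2
    return num / (den + eps_frac * den.mean()) / (2 * np.pi)

# Hypothetical usage: a chirp sampled at 1 kHz; the estimate should sweep upward from ~10 Hz.
dt = 1e-3
t = np.arange(0, 1, dt)
s = np.cos(2 * np.pi * (10 * t + 20 * t ** 2))
f_inst = instantaneous_frequency(s, dt)
```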